SimilarityNet: A Deep Neural Network for Similarity Analysis Within Spatio-temporal Ensembles
Abstract
Latent feature spaces of deep neural networks are frequently used to effectively capture semantic characteristics of a given dataset. In the context of spatio-temporal ensemble data, the latent space serves as a similarity space without requiring an explicit definition of a field similarity measure. Commonly, such networks are trained for specific data within a targeted application. We instead propose a general training strategy, in conjunction with a deep neural network architecture, that is readily applicable to any spatio-temporal ensemble data without re-training. The latent-space visualization allows for a comprehensive visual analysis of patterns and temporal evolution within the ensemble. Using SimilarityNet, we can perform similarity analyses on large-scale spatio-temporal ensembles in less than a second on commodity consumer hardware. We qualitatively compare our results to visualizations based on established field similarity measures to document the interpretability of our latent-space visualizations, and we show that they are sufficient for a basic understanding of the underlying temporal evolution of a given ensemble.
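The core idea of the abstract — embedding every time step of every ensemble run into a common low-dimensional similarity space and reading off temporal evolution as trajectories — can be sketched without the trained network. The following minimal Python example uses a PCA projection as an illustrative stand-in for SimilarityNet's learned latent space (the synthetic ensemble, its dimensions, and the choice of PCA are assumptions for illustration only, not the method described in the paper):

```python
import numpy as np

# Toy ensemble: 3 runs, 20 time steps, each a 16x16 scalar field.
# Runs differ only by a phase shift, so their embedded trajectories
# should trace similar curves offset in the similarity space.
rng = np.random.default_rng(0)
runs, steps, h, w = 3, 20, 16, 16
t = np.linspace(0.0, 1.0, steps)[None, :, None, None]
phase = rng.uniform(0.0, 1.0, (runs, 1, 1, 1))
fields = np.sin(2 * np.pi * (t + phase)) \
    + 0.05 * rng.normal(size=(runs, steps, h, w))

# Flatten each time step into one vector. In SimilarityNet this vector
# would be the network's latent code; here PCA is a crude stand-in.
X = fields.reshape(runs * steps, h * w)
X = X - X.mean(axis=0)

# Project onto the two leading principal components via SVD.
_, _, Vt = np.linalg.svd(X, full_matrices=False)
embedding = X @ Vt[:2].T                    # shape: (runs * steps, 2)
paths = embedding.reshape(runs, steps, 2)   # one 2D trajectory per run

# Nearby points correspond to similar fields; each run's trajectory
# visualizes its temporal evolution, with no explicit field
# similarity measure ever defined.
print(paths.shape)
```

Plotting the rows of `paths` as polylines yields the kind of latent-space visualization the abstract refers to: runs with similar dynamics trace similar curves, and divergence between curves marks time spans where the ensemble members differ.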