Implicit Continuous Representations for Visualization of Complex Data
Advanced Scientific Computing Research (ASCR)

Award Number and Duration

DOE DE-SC0023157

September 1, 2022 to August 31, 2025 (Estimated)

PI and Point of Contact

Bei Wang Phillips (PI, University of Utah)
(publishes as Bei Wang)
Associate Professor
Kahlert School of Computing and Scientific Computing and Imaging (SCI) Institute
University of Utah
beiwang AT sci.utah.edu
http://www.sci.utah.edu/~beiwang

Collaborators

Tom Peterka (Lead PI)
Computer Scientist
Argonne National Laboratory
https://web.cels.anl.gov/~tpeterka/

Chaoli Wang (Co-PI)
Professor
University of Notre Dame
https://sites.nd.edu/chaoli-wang/

Hongfeng Yu (Co-PI)
Associate Professor
University of Nebraska-Lincoln
http://cse.unl.edu/~yu/

Hanqi Guo (Senior Personnel)
Associate Professor
The Ohio State University
https://hguo.github.io/

David Lenz (Senior Personnel)
Postdoctoral Scholar
Argonne National Laboratory
https://www.anl.gov/profile/david-lenz

Overview

This project investigates how to accurately and reliably visualize complex data consisting of multiple nonuniform domains and/or data types. Much of the problem stems from having no uniform representation for disparate datasets. To analyze such multimodal data, users face many choices in converting one modality to another, which confounds processing and visualization even for experts. Recent work on alternative data models and representations that are continuous, high-order (nonlinear), and can be queried anywhere (i.e., implicit) suggests that such models can potentially represent multiple data sources in a consistent way.

We hypothesize that implicit continuous data representations can improve the analysis, visualization, and comparison of complex data sources originating in disparate domains, compared with visualizing and analyzing the original data directly. Research in modeling data in such representations, developing visualization algorithms that operate directly on the models, and understanding how implicit continuous models support scientific conclusions can simplify complex data visualization and ensure trusted decisions based on those visualizations.

We will investigate models that are high-order, continuous, differentiable, potentially anti-differentiable (providing integrals in addition to derivatives), and can be evaluated anywhere in a continuous domain. Two classes of such models are functional models and implicit neural networks (also called coordinate networks or neural fields). We will investigate how transforming discrete data into these representations allows multiple data sources to be analyzed and compared in a uniform representation, addressing many of the problems caused by complex, sparse, unstructured, and heterogeneous data.
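As a minimal illustration of the functional-model class (a sketch only, not the project's own software), the example below fits scattered 1D samples with a cubic B-spline using SciPy; the synthetic data are hypothetical. The resulting model can be evaluated anywhere in its continuous domain, differentiated, and anti-differentiated, which is the behavior the representations above are meant to provide.

    # Minimal sketch: a functional (B-spline) model of a discrete data source.
    # The synthetic data and SciPy routines are illustrative, not project code.
    import numpy as np
    from scipy.interpolate import make_interp_spline

    # Hypothetical nonuniform samples of an unknown scalar field.
    rng = np.random.default_rng(0)
    x = np.sort(rng.uniform(0.0, 2.0 * np.pi, 40))
    y = np.sin(x) + 0.01 * rng.normal(size=x.size)

    # High-order (cubic), continuous representation of the discrete samples.
    model = make_interp_spline(x, y, k=3)

    # Query anywhere in the continuous domain, not just at the input samples.
    q = np.linspace(x[0], x[-1], 500)
    values = model(q)

    # Derivatives and antiderivatives come directly from the representation.
    d_model = model.derivative()        # d/dx of the spline
    i_model = model.antiderivative()    # indefinite integral of the spline
    print(d_model(np.pi), i_model(np.pi) - i_model(0.0))

An implicit neural network would play the same role, with the model weights obtained by a training loop rather than spline interpolation, and derivatives obtained by automatic differentiation.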

Research will be organized into three objectives: (i) theory and techniques for modeling of data, (ii) visualization algorithms operating directly on implicit continuous models, and (iii) evaluating and understanding models. Five use cases, drawn from climate/weather, biology, materials science, and fusion science, will serve to evaluate the proposed research.

Publications and Manuscripts

Papers marked with * use alphabetic ordering of authors.
Year 1 (2022 - 2023)
TopoSZ: Preserving Topology in Error-Bounded Lossy Compression.
Lin Yan, Xin Liang, Hanqi Guo, Bei Wang.
Manuscript, 2023.
arXiv:2304.11768.
Interactive Lagrangian-Based Particle Tracing Using Neural Networks.
Mengjiao Han, Sudhanshu Sane, Jixian Li, Shubham Gupta, Bei Wang, Steve Petruzza, Chris R. Johnson.
Manuscript, 2023.

An In-Depth Study into Morse Complex Generation with Topological Losses.
Syed Fahim Ahmed, Jixian Li, Mingzhe Li, Bei Wang.
Manuscript in preparation, 2023.

Importance-Driven Deep Learning Models for Topology Preservation.
Weiran Lyu, Mingzhe Li, Jingyi Shen, Hanqi Guo, Han-Wei Shen, Bei Wang.
Manuscript in preparation, 2023.

Software Downloads

Presentations, Educational Development and Broader Impacts

Year 1 (2022 - 2023)
  1. Bei Wang Keynote Talk (upcoming), TDA Week, Japan, July 21 - August 4, 2023.

  2. Bei Wang Invited Talk, Institute for Mathematical and Statistical Innovation (IMSI), Randomness in Topology and its Applications workshop, March 21, 2023.

  3. Bei Wang Keynote Talk, Hypergraph Co-Optimal Transport, Machine Learning on Higher-Order Structured Data (ML-HOS) Workshop at ICDM 2022, November 28, 2022.

  4. Bei Wang Invited Talk, Mini Symposium on Statistics and Machine Learning in Topological and Geometric Data Analysis at SIAM Conference on Mathematics of Data Science (MDS22), September 29, 2022.

Students

Guanqun Ma (Ph.D., Summer 2023 - present)
Kahlert School of Computing and SCI Institute
University of Utah

Syed Fahim Ahmed (Ph.D., Spring 2023 - present)
Kahlert School of Computing and SCI Institute
University of Utah

Weiran (Nancy) Lyu (Ph.D., Fall 2022 - present)
Kahlert School of Computing and SCI Institute
University of Utah

Acknowledgement

This material is based upon work supported or partially supported by the United States Department of Energy (DOE) under Grant No. DE-SC0023157.

Any opinions, findings, and conclusions or recommendations expressed in this project are those of the author(s) and do not necessarily reflect the views of the DOE.

Web page last update: May 31, 2023.