Developing software tools for science has always been a central vision of the SCI Institute.

Leonid Zhukov

Director of Data Science, Ancestry.com
Professor, Department of Applied Mathematics and Informatics, Moscow, Russia
PhD, Theoretical Physics
Research Scientist at the SCI Institute 1998-2000

Home Page


Leonid Zhukov, a SCI Institute alumnus, currently serves as the director of data science at Ancestry.com. He is also a professor at the School of Applied Mathematics and Information Science at the National Research University Higher School of Economics in Moscow, Russia, where he received the best teacher award in 2011 and again in 2013. Leonid received his diploma from the National Research Nuclear University, formerly the Moscow Engineering and Physics Institute, in 1993. He received his PhD in Physics in 1998 from the University of Utah.

Leonid started working at the SCI Institute in 1996 while pursuing his PhD and was then hired as a computational professional and, from 1998 to 2000, as a research scientist. His research during his SCI tenure focused on developing algorithms for the computational solution of inverse problems and regularization methods in biomedicine. He also participated in the development of the computational steering software SCIRun for large-scale scientific computations and investigated EEG and MEG source localization problems. Leonid's experiences at the SCI Institute led to senior scientist positions at the California Institute of Technology, Overture, and Yahoo!, followed by work as the chief scientist for Jumptap and director of research at Openstat. He later co-founded and served as technical director of the information security start-up Trafica, where he was responsible for developing the company roadmap, hiring the engineering team, and leading the company's product development.

Leonid's current research and teaching interests include data mining, machine learning, information retrieval, and social network analysis and visualization. His work for Ancestry.com involves leading and managing the data science team and working on large-scale machine learning and information retrieval problems. He is currently developing data-driven products and predictive analytics models, as well as performing exploratory analysis of structured and unstructured historical data.







Visualization of the downhill simplex algorithm converging to a dipole source. The simplex is indicated by the gray vectors joined by yellow lines. The true source is indicated in red. The surface potential map on the scalp is due to the forward solution of one of the simplex vertices, whereas the potentials at the electrodes (shown as small spheres) are the "measured" EEG values (potentials due to the true source). SCIRun problem solving environment, SCI Institute, 1999. 
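The optimization shown in the figure maps naturally onto the Nelder-Mead downhill simplex method. Below is a minimal, purely illustrative sketch of dipole localization in that spirit: it assumes an infinite-homogeneous-medium dipole potential as the forward model (SCIRun used realistic head models, and none of these names are SCIRun's API) and uses SciPy's Nelder-Mead implementation to recover the source from simulated electrode potentials.

    import numpy as np
    from scipy.optimize import minimize

    SIGMA = 0.33  # assumed tissue conductivity (S/m); illustrative value

    def forward(dipole, electrodes):
        # Potential of a current dipole in an unbounded homogeneous medium:
        # V(r) = p . (r - r0) / (4 pi sigma |r - r0|^3)
        pos, moment = dipole[:3], dipole[3:]
        r = electrodes - pos
        d = np.linalg.norm(r, axis=1)
        return (r @ moment) / (4 * np.pi * SIGMA * d**3)

    rng = np.random.default_rng(0)
    electrodes = rng.normal(size=(32, 3))
    electrodes /= np.linalg.norm(electrodes, axis=1, keepdims=True)  # unit "scalp"

    true_dipole = np.array([0.2, 0.0, 0.3, 0.0, 0.0, 1.0])  # position + moment
    measured = forward(true_dipole, electrodes)              # "measured" EEG values

    # Misfit between simulated and "measured" potentials, minimized by the
    # downhill simplex; each simplex vertex is a candidate 6-parameter dipole.
    misfit = lambda d: np.sum((forward(d, electrodes) - measured) ** 2)
    fit = minimize(misfit, x0=np.zeros(6), method="Nelder-Mead",
                   options={"maxiter": 5000, "xatol": 1e-10, "fatol": 1e-14})
    print(fit.x[:3])  # recovered position, ideally close to (0.2, 0.0, 0.3)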


Use of Semidefinite Programming (SDP) optimization for high dimensional data layout and graph visualization. We developed a set of interactive visualization tools and used them on music artist ratings data from Yahoo!. The computed layout preserves a natural grouping of the artists and provides visual assistance for browsing large music collections. 
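As one concrete, purely illustrative instance of SDP-based layout, the sketch below applies maximum variance unfolding, a standard SDP embedding, to a small graph with cvxpy. This is not the formulation from the Yahoo! work; it is just a compact example of how a graph layout can be posed as a semidefinite program over the Gram matrix of the node positions.

    import cvxpy as cp
    import networkx as nx
    import numpy as np

    G = nx.petersen_graph()                # small connected stand-in graph
    n = G.number_of_nodes()

    K = cp.Variable((n, n), PSD=True)      # Gram matrix of the node positions
    constraints = [cp.sum(K) == 0]         # center the embedding at the origin
    for i, j in G.edges():                 # fix every edge to unit length
        constraints.append(K[i, i] - 2 * K[i, j] + K[j, j] == 1.0)

    # "Unfold" the graph: maximize total variance subject to the constraints.
    cp.Problem(cp.Maximize(cp.trace(K)), constraints).solve()

    # Recover 2D coordinates from the top two eigenpairs of K.
    w, V = np.linalg.eigh(K.value)
    layout = V[:, -2:] * np.sqrt(np.maximum(w[-2:], 0.0))
    print(layout)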
Many family trees have a complex connectivity structure. We found that for large graphs with more than several thousand nodes, the standard force-directed graph layout algorithm gives good results. This is an example of a medium-sized tree with several thousand nodes. See more at http://blogs.ancestry.com/techroots/visualizing-family-trees.
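For readers who want to try this, the snippet below is a minimal force-directed layout of a roughly 2,000-node stand-in tree using networkx, whose spring_layout implements the Fruchterman-Reingold force-directed algorithm; the balanced binary tree is an assumption standing in for real family-tree data.

    import networkx as nx
    import matplotlib.pyplot as plt

    tree = nx.balanced_tree(2, 10)                    # 2047-node stand-in tree
    pos = nx.spring_layout(tree, iterations=50, seed=42)  # force-directed layout
    nx.draw(tree, pos, node_size=4, width=0.3)
    plt.savefig("tree_layout.png", dpi=200)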
 

Han-Wei Shen


Professor, Computer Science, Ohio State University

PhD, Computer Science
Dissertation Title: High-Performance Visualization Algorithms for Large-Scale Scientific Data
Advisor: Christopher R. Johnson

Home Page
LinkedIn


Han-Wei Shen is among the very first SCIers; he began his doctoral work under Prof. Chris Johnson before the institute was even named. He received his BS degree from the Department of Computer Science and Information Engineering at National Taiwan University in 1988, his MS degree in Computer Science from the State University of New York at Stony Brook in 1992 (under Prof. Arie Kaufman), and his PhD in Computer Science from the University of Utah in 1998. From 1996 to 1999 he worked at NASA Ames Research Center in California as a research scientist in the NASA Advanced Supercomputing (NAS) Division. Dr. Shen is currently a full professor in the Department of Computer Science and Engineering at the Ohio State University, where he heads the Graphics and Visualization Study (GRAVITY) research group. So far, twelve PhD students and many master's students have graduated from the GRAVITY group under his supervision.

Professor Shen's research focus is primarily on scientific visualization and computer graphics. For the past two decades, he has been tackling several fundamental problems in the core area of scientific visualization. His best known works include acceleration of isosurface extraction, steady and unsteady flow visualization techniques, algorithms and data structures for the analysis and visualization of time-varying multivariate data sets, information visualization, and efficient parallel visualization algorithms. More recently he has pioneered work in developing an information-theoretical framework for quality assessment, management, and visualization of data generated from extreme-scale scientific simulations.

Professor Shen is a winner of the National Science Foundation's CAREER award and the US Department of Energy's Early Career Principal Investigator Award. He has also twice won the Outstanding Teaching Award in the Department of Computer Science and Engineering at the Ohio State University. He was an associate editor for IEEE Transactions on Visualization and Computer Graphics, was a papers chair for IEEE Pacific Visualization 2009 and 2010, and currently serves as a papers chair for IEEE SciVis 2013, one of the main conferences under IEEE Visualization 2013. He has directed many federally funded research projects supported by the DOE, NSF, NIH, and NASA. He continues to collaborate with many faculty members at the SCI Institute, including Prof. Chris Johnson, Prof. Chuck Hansen, and Prof. Valerio Pascucci.

Prof. Shen and his students developed a scalable FTLE (finite-time Lyapunov exponent) computation algorithm using a pipelining model. The algorithm breaks the computation into different time groups, where each group independently processes a different time interval. Particles are passed between time groups as they travel in space and time, which improves computational efficiency and reduces I/O overhead. The algorithm demonstrates strong scaling up to 16K processors.

The Madden-Julian oscillation (MJO) plays a significant role in intraseasonal weather variations over the Indian and Pacific Oceans. Prof. Shen and his students developed an integrated analysis and visualization tool for simulated MJO episodes. Using a Web-based interface, the tool lets scientists more easily identify cloud and environmental processes associated with the MJO. By combining domain-knowledge-assisted feature tracking with global data overviews in both space and time, the tool enables climatologists to analyze their data more effectively.
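For reference, a serial FTLE computation on the standard analytic double-gyre test flow looks roughly like the sketch below. The contribution of Prof. Shen's work was making this computation scale across thousands of processors via time-group pipelining, which this single-process NumPy sketch does not attempt to reproduce.

    import numpy as np

    A, EPS, OMEGA = 0.1, 0.25, 2 * np.pi / 10   # standard double-gyre parameters

    def velocity(x, y, t):
        a = EPS * np.sin(OMEGA * t)
        b = 1 - 2 * a
        f = a * x**2 + b * x
        dfdx = 2 * a * x + b
        u = -np.pi * A * np.sin(np.pi * f) * np.cos(np.pi * y)
        v = np.pi * A * np.cos(np.pi * f) * np.sin(np.pi * y) * dfdx
        return u, v

    # Seed a regular grid of particles and advect with RK4 from t = 0 to t = T.
    nx_, ny_, T, steps = 200, 100, 10.0, 200
    xs = np.linspace(0, 2, nx_); ys = np.linspace(0, 1, ny_)
    X, Y = np.meshgrid(xs, ys, indexing="ij")
    dt, t = T / steps, 0.0
    for _ in range(steps):
        k1 = velocity(X, Y, t)
        k2 = velocity(X + 0.5*dt*k1[0], Y + 0.5*dt*k1[1], t + 0.5*dt)
        k3 = velocity(X + 0.5*dt*k2[0], Y + 0.5*dt*k2[1], t + 0.5*dt)
        k4 = velocity(X + dt*k3[0], Y + dt*k3[1], t + dt)
        X = X + dt/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        Y = Y + dt/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
        t += dt

    # Flow-map gradient by finite differences; the largest eigenvalue of the
    # Cauchy-Green tensor C = F^T F gives the FTLE field.
    dXdx = np.gradient(X, xs, axis=0); dXdy = np.gradient(X, ys, axis=1)
    dYdx = np.gradient(Y, xs, axis=0); dYdy = np.gradient(Y, ys, axis=1)
    C11 = dXdx**2 + dYdx**2
    C12 = dXdx*dXdy + dYdx*dYdy
    C22 = dXdy**2 + dYdy**2
    lam = 0.5 * (C11 + C22 + np.sqrt((C11 - C22)**2 + 4 * C12**2))
    ftle = np.log(np.sqrt(lam)) / T
    print(ftle.max())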




Aaron Lefohn


Director of Graphics Research, NVIDIA
PhD, Computer Science
Thesis title: Interactive Computation and Visualization of Level-Set Surfaces: A Streaming Narrow-Band Algorithm
Advisor: Ross Whitaker

LinkedIn
Homepage


Aaron Lefohn, SCI alumnus, recently joined NVIDIA as the Director of Real-Time Graphics Research. Aaron received his PhD from the University of California, Davis in 2006, studying under John Owens. During his PhD program, he was employed as a researcher and graphics software engineer at Pixar Animation Studios, where he worked on a variety of R&D projects focused on interactive rendering tools for artists. He obtained his MS degree from the University of Utah, studying computer graphics and scientific visualization under Ross Whitaker at the SCI Institute. His research focused on creating the first interactive 3D level-set solver for segmentation of MRI volumetric data sets. The challenge was figuring out how to solve dynamic, sparse PDEs in parallel on the GPU and how to directly volume render those sparse representations.
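The core idea of the narrow-band approach can be sketched in a few lines: the level-set function is updated only in a thin band around its zero crossing, which is what made a streaming GPU formulation feasible. The toy CPU version below is illustrative only, not the solver from the thesis; the threshold-based speed function, the synthetic data, and the use of central differences (a real solver would use upwind differencing and periodic reinitialization) are all simplifying assumptions.

    import numpy as np

    def narrow_band_step(phi, image, lo, hi, band=3.0, dt=0.4):
        # One explicit level-set update, restricted to the band |phi| < band.
        mask = np.abs(phi) < band                   # the active narrow band
        # Grow where intensity lies in [lo, hi], shrink elsewhere.
        F = np.where((image >= lo) & (image <= hi), 1.0, -1.0)
        gy, gx = np.gradient(phi)                   # central differences
        grad = np.sqrt(gx**2 + gy**2)
        phi_new = phi.copy()
        phi_new[mask] = phi[mask] - dt * F[mask] * grad[mask]
        return phi_new

    # Synthetic "MRI slice": a bright disk to segment, seeded by a small circle.
    n = 128
    yy, xx = np.mgrid[0:n, 0:n]
    image = np.where((xx - 64)**2 + (yy - 64)**2 < 30**2, 200.0, 50.0)
    phi = np.sqrt((xx - 64.0)**2 + (yy - 64.0)**2) - 5.0  # signed-distance seed

    for _ in range(150):
        phi = narrow_band_step(phi, image, lo=150, hi=255)
    print((phi < 0).sum(), "pixels inside the segmented region")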

In 2006, Aaron joined Neoptica, a startup company creating new graphics programming models for heterogeneous CPU+GPU computer systems such as the PlayStation 3. When Intel acquired Neoptica in 2007, he led Intel's engagement in OpenCL, working closely with Apple and Khronos to define version 1.0. He then returned to rendering research, leading a small research team focused on new shadow rendering algorithms for the Larrabee graphics processor. In November 2010, he became the research lead for the Advanced Rendering Technology team, a research group focused on new real-time rendering algorithms, power-efficient rendering, and CPU-GPU programming systems for visual computing.

The first product announcement based on the research from Aaron's Intel team came at the Game Developers Conference in March 2013: Intel's latest GPU (Haswell) adds new capabilities that make it possible to deliver practical solutions to several long-standing problems in real-time graphics, namely order-independent transparency, volumetric shadows, and anti-aliasing of fine detail such as foliage and hair.

From Aaron's master's thesis: Interactive level-set segmentation of a brain tumor from a 256 × 256 × 198 MRI with volume rendering to give context to the segmented surface. A clipping plane shows the user the source data, the volume rendering, and the segmentation simultaneously. The segmentation and volume rendering parameters are set by the user probing data values on the clipping plane.
From "Adaptive Transparency," voted second best paper at High Performance Graphics 2011. This paper introduced a new algorithm for solving order-independent transparency in real-time rendering. The technique was impractical at the time the paper was published, requiring unbounded memory. However, the research led to a new hardware feature in Intel's latest GPU that enables a practical, fixed-memory implementation of the algorithm. The 2013 GRID2 car racing game from CodeMasters used a derivative of the algorithm to render anti-aliased foliage and self-shadowing smoke.




Sri Priya Ponnapalli


R&D Engineer, Portfolio & Risk Analytics, Bloomberg LP

2010 Ph.D. in Electrical and Computer Engineering, University of Texas at Austin
Dissertation Title: Higher-Order Generalized Singular Value Decomposition – A Comparative Mathematical Framework with Applications to Genomic Signal Processing
Advisor: Orly Alter


Dr. Sri Priya Ponnapalli received her Ph.D. in Electrical and Computer Engineering in 2010 from the University of Texas at Austin, working in the Genomic Signal Processing Lab of Dr. Orly Alter, USTAR Associate Professor of Bioengineering and Human Genetics at the SCI Institute. In her Ph.D. dissertation, Dr. Ponnapalli developed a novel mathematical framework for the comparison of multiple large-scale datasets that are arranged in tables of different row dimensions but the same column dimensions. The number of such datasets, recording different aspects of a single phenomenon, is fast growing in science and medicine. Gaining access to the full information that these datasets store requires mathematical frameworks that can compare and contrast them in order to find the similarities and dissimilarities among them. Until then, only one such framework existed, and it was limited to a comparison of two datasets at a time.
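Structurally, and following the formulation published with this work, the higher-order GSVD factors N datasets D_i, each an m_i x n matrix sharing the same n columns, as

    D_i = U_i \Sigma_i V^T ,    i = 1, ..., N,

where the left basis U_i and the diagonal \Sigma_i are specific to dataset i, while the right basis V is identical across all N factorizations. Because V is shared, patterns common to all datasets and patterns exclusive to one can be identified and separated; for N = 2 the decomposition reduces to a form of the classical GSVD.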

Ponnapalli and Alter, in collaboration with Drs. Charles F. Van Loan of Cornell University and Michael A. Saunders of Stanford University, formulated a novel generalization of the existing framework that enables comparison of more than just two datasets at a time. The team demonstrated the novel framework in comparative modeling of the cellular activities of three evolutionarily disparate organisms – human and budding and fission yeasts. The mathematical model successfully identified and separated cellular events that are common to the human and yeasts from those that are exclusive to only one of the organisms. (To read the 2011 PLoS ONE article, visit http://dx.doi.org/10.1371/journal.pone.0028072.)

Alter's previous comparative modeling of the cellular activities of just two of the organisms – human and budding yeast – led to the computational prediction of a new mode of biological regulation, which she then experimentally verified in collaboration with Dr. John F. X. Diffley of Cancer Research UK. The Genomic Signal Processing Lab's recent comparative modeling of the genomes of just two cell types – normal and brain cancer cells – uncovered a new link between a brain tumor's genome and a patient's prognosis, which offers insights into the cancer's formation and growth, and suggests promising targets for drug therapy. Just as these discoveries were made possible by the ability to compare between two datasets, the mathematical framework formulated by Ponnapalli in her Ph.D. dissertation, which enables – for the first time – a comparison of more than two datasets at a time, promises to lead to discoveries that would have been impossible without it. Although this mathematical framework was developed with applications in biotechnology in mind, it could similarly be used to make discoveries in any of the many areas where large-scale datasets are being accumulated today.

Since graduating in 2010, Dr. Ponnapalli has been applying her mathematical expertise to large-scale financial datasets in her role as an R&D Engineer for Bloomberg LP's Portfolio & Risk Analytics team in New York City.

Ponnapalli's team develops and maintains a comprehensive portfolio risk management tool. Bloomberg's clients use this tool to analyze, evaluate, and reduce potential risks that may be associated with their investment portfolios. A recent add-on to the tool – the Scenario Analysis Function – allows customers to create custom scenarios representative of world events, such as the "Debt Ceiling Crisis" or the "Libyan Oil Shock," and predict the impact of these scenarios on the value of their portfolios. (To read more, visit http://www.bloomberg.com/professional/tools-analytics/portfolio-risk-analytics/.)

Underlying the Scenario Analysis Function are multi-factor models built by Ponnapalli and her team. In a factor model, the return on each security is a linear combination of a small number of common factors plus an asset-specific, or idiosyncratic, return. At present, factor models are built specific to each asset class, such as fixed income or equities. In the future, it may be possible to apply Ponnapalli's comparative framework to multi-asset datasets and discover a common set of factors with which to build a multi-asset factor model.
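As a sketch, with illustrative notation, the standard linear factor model underlying such tools is

    r_i = \alpha_i + \sum_{k=1}^{K} \beta_{ik} f_k + \epsilon_i ,

where r_i is the return of security i, f_1, ..., f_K are a small number of common factor returns, \beta_{ik} is the exposure of security i to factor k, and \epsilon_i is the idiosyncratic return, assumed uncorrelated across securities. A scenario shocks the factors, and the model propagates those shocks to each holding through its exposures.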

Higher-order generalized singular value decomposition (HO GSVD) – a novel mathematical framework – applied to comparative modeling of the cellular activities of three evolutionarily disparate organisms.

The Scenario Analysis Function allows Bloomberg's customers to stress-test their portfolios to see how they are impacted by various scenarios, and to examine the performance of individual holdings within a given scenario.