CS6630 - Scientific Visualization - Project 4

Carlos Eduardo Scheidegger
cscheid@cs.utah.edu

Project 4: Visualizing (scalar) data with direct volume rendering

Part 1: Color Compositing Transfer Functions

In this part of the project, we want to visualize the mummy dataset with direct volume rendering. To do this, we design a transfer function that generates a meaningful visualization. To find good scalar ranges for the transfer function, I experimented with isosurface extraction. The skin surface was hard to isolate; I settled on a value around 70, though it exhibits many artifacts. The bone surface lies somewhere around 150. The isosurface renderings are shown below.


Figure 1: Skin and bone isosurface rendering of the mummy dataset

After finding these values, I designed a transfer function that renders an opaque bone structure and a transparent skin. The opacity transfer function is composed of a tent-like linear spike over the scalar range 35-100, peaking at opacity 0.1 at scalar 72. I chose a small peak opacity because a large portion of the volume falls in this range, and any higher opacity occluded the bone structure. To bring out the bone structure, I used a box over the range 135-165 at full opacity. The color transfer function is simple: it assigns the color (0.6, 0.4, 0.1) to the skin range and (1.0, 1.0, 1.0) to the bone range. The resulting image follows:


Figure 2: Direct volume rendering of the mummy dataset

To me, volume rendering offers a clear advantage over isosurfacing for this dataset. The scalars here are much "fuzzier": there is no single scalar value of interest but rather a range of them, and capturing a range with isosurfaces would require infinitely many surfaces. Direct volume rendering, on the other hand, lets us design transfer functions that span a range of scalars, giving the result a more "complete" look.
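The transfer function described above can be sketched in code. This is a minimal illustration of the tent and box shapes from the text (the function names and NumPy formulation are my own, not from the original renderer):

```python
import numpy as np

def opacity_tf(s):
    """Opacity transfer function from Part 1: a tent over [35, 100]
    peaking at opacity 0.1 at scalar 72, plus a full-opacity box
    over [135, 165]."""
    s = np.asarray(s, dtype=float)
    alpha = np.zeros_like(s)
    # Tent (skin): ramp up to 0.1 at 72, back down to 0 at 100.
    up = (s >= 35) & (s < 72)
    down = (s >= 72) & (s <= 100)
    alpha[up] = 0.1 * (s[up] - 35) / (72 - 35)
    alpha[down] = 0.1 * (100 - s[down]) / (100 - 72)
    # Box (bone): full opacity over [135, 165].
    alpha[(s >= 135) & (s <= 165)] = 1.0
    return alpha

def color_tf(s):
    """Color transfer function: skin color over the tent range,
    white over the bone range, black elsewhere."""
    s = np.asarray(s, dtype=float)
    rgb = np.zeros(s.shape + (3,))
    rgb[(s >= 35) & (s <= 100)] = (0.6, 0.4, 0.1)
    rgb[(s >= 135) & (s <= 165)] = (1.0, 1.0, 1.0)
    return rgb
```

A ray caster would evaluate these at every sample point along each ray and composite the resulting colors and opacities front to back.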

Part 2: Maximum Intensity Projection Rendering

It is also possible to use DVR in a much simpler way: eschew the volume rendering integral and, for each pixel, simply use the maximum scalar encountered along the cast ray. This is called "maximum intensity projection", or MIP for short. The following is a rendering of the mummy dataset with MIP volume rendering. The transfer function is set to be a ramp from 0 to 1 as the scalar goes from 0 to 255.


Figure 3: MIP volume rendering of the mummy dataset

One advantage of MIP rendering is that it is, in software mode, much faster. Unfortunately, it precludes more sophisticated rendering effects such as occlusion, and even more so shading. If the scientist is familiar with x-ray imaging, MIP might offer an "easy way in" to volume rendering, since MIP is very similar, physically speaking, to x-ray imaging.
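The core of MIP is simple enough to sketch. This toy version, an assumption of mine rather than the actual renderer used, projects orthographically along the z axis so that each "ray" is just a voxel column:

```python
import numpy as np

def mip_project(volume, scalar_max=255.0):
    """Orthographic MIP along the z axis of a scalar volume: for each
    pixel, take the maximum scalar along the ray (here, a voxel column),
    then apply the ramp transfer function mapping [0, scalar_max] to
    [0, 1]. A real ray caster would instead sample along arbitrary
    view rays with interpolation."""
    return volume.max(axis=2) / scalar_max
```

Note that no compositing happens at all: only one sample per ray survives, which is why occlusion and shading are unavailable.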

Part 3: Assignment for CS6630 students

  1. Sample distance. I experimented with five different sample distances: 2.0, 1.0, 0.5, 0.25, and 0.1. Rendering time is inversely proportional to the sample distance, and this is easy to explain: each ray has computation time O(n), where n is the number of samples taken, so halving the sample distance doubles the number of samples and should, asymptotically, double the rendering time. The image looks much better the smaller the sample distance is, since we skip less and less of the dataset at a time. Many features are lost when the sample distance is too high; I will highlight the front teeth, which are completely lost at sample distance 2.0 and have severe artifacts at 1.0. At smaller distances, they look much better. There seems to be no significant advantage to going below 0.5: the images are almost indistinguishable, as can be seen by taking successive image differences.

    Figure 4: Changing sample distance in a ray casting volume rendering.

    Figure 5: Successive differences between images.
  2. Interpolation method. When ray casting, we can interpolate the dataset either trilinearly or with a simpler technique such as nearest neighbors. Since nearest neighbors simply picks the dataset vertex closest to the sample point, the picture looks very blocky: every ray step closest to a value within the bone part of the transfer function gets full opacity. The gradient computation also suffers, so shading looks bad. Nearest-neighbor interpolation is a little faster than trilinear, but not by much. The following shows the difference between the two techniques on the same image, with sample distance 0.5:

    Figure 6: Nearest neighbor vs. trilinear interpolation.
  3. Dataset resolution. I found that dataset resolution makes a big difference, especially when trying to get good pictures. All the previous images used the mummy.128.vtk dataset; here I vary the resolution among 50, 80, and 128. Nearest-neighbor interpolation fails to generate a good picture at any resolution. Most interestingly, it takes a high-resolution dataset (at least 80) for the volume rendering to look good at all, even with small steps, because the lower-resolution datasets lack the information needed to resolve the bone structure.
    Figure 7: Dataset resolutions. Top row: linear interpolation. Bottom row: nearest-neighbor interpolation. Columns: datasets mummy.050.vtk, mummy.080.vtk, and mummy.128.vtk.
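The cost argument from item 1 can be made concrete. Under the simple assumption that the ray caster places one sample every `sample_distance` units along a ray:

```python
def samples_along_ray(ray_length, sample_distance):
    """Number of samples taken along a ray of the given length,
    one every sample_distance units. Halving the sample distance
    roughly doubles the count, hence the (roughly) inverse-
    proportional rendering time."""
    return int(ray_length // sample_distance) + 1
```

For a ray of length 128, distances 2.0, 1.0, 0.5, 0.25, and 0.1 give 65, 129, 257, 513, and 1281 samples respectively, matching the observed doubling of rendering time at each halving.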
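The two interpolation schemes compared in item 2 can be sketched as follows. This is an illustrative implementation of mine, assuming sample points given in voxel coordinates strictly inside the grid:

```python
import numpy as np

def nearest_neighbor(vol, p):
    """Nearest-neighbor lookup: snap the sample point to the closest
    voxel. Cheap, but produces the blocky images discussed above."""
    i, j, k = (int(round(c)) for c in p)
    return vol[i, j, k]

def trilinear(vol, p):
    """Trilinear interpolation: blend the 8 voxels surrounding the
    sample point, weighted by the fractional coordinates. Assumes p
    lies strictly inside the grid so all 8 neighbors exist."""
    x, y, z = p
    i, j, k = int(np.floor(x)), int(np.floor(y)), int(np.floor(z))
    fx, fy, fz = x - i, y - j, z - k
    c = 0.0
    for di in (0, 1):
        for dj in (0, 1):
            for dk in (0, 1):
                w = ((fx if di else 1 - fx) *
                     (fy if dj else 1 - fy) *
                     (fz if dk else 1 - fz))
                c += w * vol[i + di, j + dj, k + dk]
    return c
```

The smooth blend is also what makes central-difference gradients, and hence shading, behave better under trilinear interpolation than under nearest neighbors.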