Voxel density datasets?

I'm playing around with voxel rendering and I need some interesting datasets. So far all I've found is a head CT scan of a cadaver. While interesting, it's only 256x256x113 voxels. It's also very convex, so it doesn't really show off shadow effects.
Does anyone know where to find some sample datasets?

Cool renderings: http://imgur.com/Wo0Tv6a
Both renderings are from the same dataset with the same camera and lighting settings, but with different density cutoffs.
Notice how the bandages cast a translucent shadow while being themselves translucent.
The weird lines around the mouth are an issue with the original dataset. I think the subject had lots of metal fillings. That tends to throw off CT scanners.
Impressive, but I'm fresh out of dataset suggestions for now.

Now, I've seen voxels as simple cubes, but I've also seen them as more complex shapes/wedges, like in Space Engineers.

Are you sticking with cubes for now?
I believe if I simply do trilinear interpolation (i.e. linear in three dimensions), that should be enough to render slopes. But yes, for now this is good enough. Linear interpolation is always a hassle because you have to take care to mind your boundaries. It's also going to be slower, because every density query requires looking at eight voxels.
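
Something like this sketch is what I have in mind (CUDA; vol, nx, ny and nz are placeholders for however the volume actually ends up being stored):

__device__ float voxel(const float *vol, int nx, int ny, int nz,
                       int x, int y, int z)
{
    // Clamp to the volume bounds so queries near the edges stay valid.
    x = min(max(x, 0), nx - 1);
    y = min(max(y, 0), ny - 1);
    z = min(max(z, 0), nz - 1);
    return vol[(z * ny + y) * nx + x];
}

__device__ float density(const float *vol, int nx, int ny, int nz,
                         float x, float y, float z)
{
    int x0 = (int)floorf(x), y0 = (int)floorf(y), z0 = (int)floorf(z);
    float fx = x - x0, fy = y - y0, fz = z - z0;

    // Eight corner samples, then three rounds of linear blending.
    float c000 = voxel(vol, nx, ny, nz, x0,     y0,     z0);
    float c100 = voxel(vol, nx, ny, nz, x0 + 1, y0,     z0);
    float c010 = voxel(vol, nx, ny, nz, x0,     y0 + 1, z0);
    float c110 = voxel(vol, nx, ny, nz, x0 + 1, y0 + 1, z0);
    float c001 = voxel(vol, nx, ny, nz, x0,     y0,     z0 + 1);
    float c101 = voxel(vol, nx, ny, nz, x0 + 1, y0,     z0 + 1);
    float c011 = voxel(vol, nx, ny, nz, x0,     y0 + 1, z0 + 1);
    float c111 = voxel(vol, nx, ny, nz, x0 + 1, y0 + 1, z0 + 1);

    float c00 = c000 + fx * (c100 - c000);
    float c10 = c010 + fx * (c110 - c010);
    float c01 = c001 + fx * (c101 - c001);
    float c11 = c011 + fx * (c111 - c011);
    float c0  = c00 + fy * (c10 - c00);
    float c1  = c01 + fy * (c11 - c01);
    return c0 + fz * (c1 - c0);
}
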
Here is a site I liked when I was toying around with volume rendering a while ago. It has quite a few different datasets, actually. Hopefully that helps a bit.

http://lgdv.cs.fau.de/External/vollib/

Edit: I think I have a few more saved in my bookmarks on my home computer. I can check and see if they're still there after I head home from work, if you need some more.
Oh, yeah. I actually found that one. I ended up not using it because I couldn't understand the format used. But I just noticed this bit at the top, so perhaps I'll give it another look:
Just download the V^3 package, unzip it, type "build.sh tools" in a Linux shell and use the pvm2raw utility in the tools folder to extract the raw data.
You can mostly fix those sampling artifacts using pre-integration.
https://pdfs.semanticscholar.org/911f/2033685fa4f67abfdf725da9d6a95c4b257d.pdf
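
The gist, if I remember it right: instead of classifying single samples, you precompute a 2D table indexed by the intensities at the front and back of each ray segment. A rough host-side sketch (tf_alpha and the segment length d are placeholders; a real implementation pre-integrates color the same way):

#include <cmath>

// table[sf][sb] = opacity of a segment of length d whose intensity
// varies linearly from sf to sb (numerical approximation).
void build_preintegration_table(const float tf_alpha[256],
                                float table[256][256], float d)
{
    const int N = 16; // sub-samples per segment
    for (int sf = 0; sf < 256; sf++)
        for (int sb = 0; sb < 256; sb++) {
            float sum = 0.0f;
            for (int i = 0; i < N; i++) {
                float s = sf + (sb - sf) * (i + 0.5f) / N;
                sum += tf_alpha[(int)s];
            }
            // Average extinction over the segment, then convert to opacity.
            table[sf][sb] = 1.0f - std::exp(-(sum / N) * d);
        }
}
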

You should make a transfer function editor as well. It lets you manually map color and opacity to intensity.

Maybe some fluid simulation data would be fun to mess with.
http://turbulence.pha.jhu.edu

What kind of interpolation are you doing now? I assume you are using lookups into a 3D texture in the shader, and the GPU is doing the trilinear interpolation for you. Without the hardware doing the interpolation it will be slower, but it's a very straightforward calculation.

Right now I'm just using nearest neighbor. I'm doing something a bit more sophisticated than simple lookups that lets me render translucent volumes: I treat the voxel value as an alpha channel and look up all voxels in the line of sight of a pixel until the sum of the alphas is at least 1. Additionally, for each of those voxels I compute an illumination value that depends on the sum of the alphas between that voxel and the light source (in this case, at infinity). The value that's displayed in the pixel is a combination of the illumination values of all visible voxels in that line of sight, with farther voxels contributing less.
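
Roughly, in simplified CUDA form (the step size and march bound are placeholders, and density() is just my lookup from before):

__device__ float shade(const float *vol, int nx, int ny, int nz,
                       float3 origin, float3 dir, float3 light_dir)
{
    float alpha_sum = 0.0f;     // opacity accumulated along the view ray
    float pixel     = 0.0f;     // illumination accumulated into the pixel
    const float step = 0.5f;    // march step, in voxel units
    const float far  = 1000.0f; // placeholder march bound

    for (float t = 0.0f; alpha_sum < 1.0f && t < far; t += step) {
        float3 p = make_float3(origin.x + t * dir.x,
                               origin.y + t * dir.y,
                               origin.z + t * dir.z);
        float a = density(vol, nx, ny, nz, p.x, p.y, p.z) * step;
        if (a <= 0.0f)
            continue;

        // Sum the alphas between this voxel and the light at infinity.
        float occlusion = 0.0f;
        for (float s = step; occlusion < 1.0f && s < far; s += step)
            occlusion += density(vol, nx, ny, nz,
                                 p.x + s * light_dir.x,
                                 p.y + s * light_dir.y,
                                 p.z + s * light_dir.z) * step;
        float illumination = 1.0f - fminf(occlusion, 1.0f);

        // Nearer voxels contribute more: weight by remaining transparency.
        pixel     += (1.0f - alpha_sum) * a * illumination;
        alpha_sum += a;
    }
    return pixel;
}
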
Interpolating will help with the artifacts you are getting (the concentric rings).

Usually what is done is that you look up the intensity value from the 3D texture, then use that to compute an index into another texture storing the transfer function (the transfer function maps intensity to color and opacity). The GPU hardware automatically interpolates the lookups. Then you can calculate the lighting.

Basically what you are doing is using a transfer function that is a step function (ignoring small intensities, then treating all others the same). Using a more complicated tf, you can do a lot more, like coloring bone one color, and skin another, or making the skin and muscle semi-transparent and highlighting veins and bones, etc.

In my case the hardware doesn't interpolate automatically because I'm not using a shader. I'm doing this in CUDA.

Using a more complicated tf, you can do a lot more, like coloring bone one color, and skin another, or making the skin and muscle semi-transparent and highlighting veins and bones, etc.
I suppose that could easily be done by, after getting the illumination value, using the alpha to index into an RGBA gradient, multiplying the result by the illumination, and then alpha-blending that into the final color value.
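
Something like this, per sample along the ray (tf being a hypothetical 256-entry RGBA gradient, with the density assumed normalized to [0, 1]):

__device__ void composite(const float4 *tf, float density_value,
                          float illumination, float4 *color)
{
    // Use the density/alpha to index into the RGBA gradient.
    int idx = max(0, min((int)(density_value * 255.0f), 255));
    float4 c = tf[idx];

    // Front-to-back alpha blending: nearer samples dominate.
    float w = (1.0f - color->w) * c.w;
    color->x += w * c.x * illumination;
    color->y += w * c.y * illumination;
    color->z += w * c.z * illumination;
    color->w += w;
}
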
New render: http://imgur.com/JtVmsA3
In addition to trilinear interpolation, I increased the resolution and doubled the depth sampling, while halving the illumination sampling.

I definitely should switch to a 3D texture with hardware-accelerated interpolation. My implementation is horribly slow.
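
If I do, the setup would presumably look something like this (error checking omitted; host_vol and the extents are placeholders):

cudaTextureObject_t make_volume_texture(const float *host_vol,
                                        int nx, int ny, int nz)
{
    // Copy the volume into a CUDA 3D array.
    cudaExtent extent = make_cudaExtent(nx, ny, nz);
    cudaChannelFormatDesc desc = cudaCreateChannelDesc<float>();
    cudaArray_t array;
    cudaMalloc3DArray(&array, &desc, extent);

    cudaMemcpy3DParms copy = {};
    copy.srcPtr = make_cudaPitchedPtr((void *)host_vol,
                                      nx * sizeof(float), nx, ny);
    copy.dstArray = array;
    copy.extent = extent;
    copy.kind = cudaMemcpyHostToDevice;
    cudaMemcpy3D(&copy);

    // Bind a texture object with hardware trilinear filtering.
    cudaResourceDesc res = {};
    res.resType = cudaResourceTypeArray;
    res.res.array.array = array;

    cudaTextureDesc tex = {};
    tex.filterMode = cudaFilterModeLinear; // hardware interpolation
    tex.addressMode[0] = cudaAddressModeClamp;
    tex.addressMode[1] = cudaAddressModeClamp;
    tex.addressMode[2] = cudaAddressModeClamp;
    tex.readMode = cudaReadModeElementType;

    cudaTextureObject_t texObj;
    cudaCreateTextureObject(&texObj, &res, &tex, nullptr);
    return texObj; // in kernels: tex3D<float>(texObj, x, y, z)
}
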

EDIT: Wow. Check out the difference just a simple interpolation makes: http://imgur.com/qRouleD
Topic archived. No new replies allowed.