Dynamic Cross Section Rendering... and associated musings
What's the difference between real 3d objects and digital 3d objects? Real objects exist, while digital ones don't.
Why don't they exist?
They don't exist because they don't have a 3d presence. Mostly because they're being viewed through a static 2d medium - either a TV screen or computer monitor. It's difficult for us to really experience the existence of these digital 3d objects because we're never in the same room as them.
What if we changed that? What if we created a digital 3d object, and then superimposed it onto real 3d?
Let's do a thought experiment:
Look at this picture of a 3d model of a sphere:
Now pick a point in space right in front of you. This point that you just picked is point (0, 0, 0), which happens to also be the center of the sphere pictured above. The sphere's radius is 2 inches.
...
Great.
...
Now, what does it do?
Well, nothing. You can't see it, you can't touch it, you can't sense it whatsoever, but it's there, trust me.
Now let's give you a way to confirm it's there. Pretend you're wearing one of these.
That's a P5 glove - a cyber glove that tracks your hand's 3d location and reports it back to a computer. Let's also modify the glove; now there's an LED attached to the back of the glove, like this:
Get over the crappy image... it's just a thought experiment. : )
The computer knows where the glove is. It also knows where the LED is. If the LED is inside the 3d model of the sphere in front of you, it lights up. Otherwise, it's off. As you wave your hand around, the LED gives you valuable information about this floating orb in front of your face; the LED is a litmus test, letting you know where the sphere starts and ends.
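In code, the computer's decision is just a distance check between the LED and the sphere's center. Here's a minimal Python sketch of the idea - the glove-reading function in the comment is hypothetical, standing in for whatever the P5's driver actually reports:

```python
import math

SPHERE_CENTER = (0.0, 0.0, 0.0)   # the point you picked: (0, 0, 0)
SPHERE_RADIUS = 2.0               # inches

def led_should_light(led_x, led_y, led_z):
    """Return True if the LED's position is inside the virtual sphere."""
    dx = led_x - SPHERE_CENTER[0]
    dy = led_y - SPHERE_CENTER[1]
    dz = led_z - SPHERE_CENTER[2]
    return math.sqrt(dx*dx + dy*dy + dz*dz) <= SPHERE_RADIUS

# Hypothetical update loop -- read_glove_led_position() stands in for
# whatever the glove's driver actually exposes.
# while True:
#     x, y, z = read_glove_led_position()
#     set_led(led_should_light(x, y, z))
```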
Fortunately for us, the P5 Glove reports more than just (x, y, z). Among other things, it also reports pitch, roll, and yaw. If (x, y, z) reports where your hand is, pitch, roll, and yaw report which way your hand is pointing. With this information, we can do some cool stuff. Let's attach some more LEDs to your glove:
This is an 8x8 grid of LEDs on the back of your hand. As your hand passes through the 3d model of the sphere, your LED grid will show something like this:
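Under the hood, the computer would take the glove's position plus its pitch, roll, and yaw, work out where each of the 64 LEDs sits in space, and run the same inside-the-sphere test on every one of them. Here's a rough Python sketch; the rotation convention and LED spacing are assumptions for illustration, not the P5's actual output format:

```python
import numpy as np

SPHERE_CENTER = np.array([0.0, 0.0, 0.0])
SPHERE_RADIUS = 2.0  # inches

def rotation_matrix(pitch, roll, yaw):
    """Rotation from pitch/roll/yaw in radians.
    The axis order here is one common convention; the real glove may differ."""
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    cr, sr = np.cos(roll), np.sin(roll)
    rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])   # pitch about x
    ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])   # yaw about y
    rz = np.array([[cr, -sr, 0], [sr, cr, 0], [0, 0, 1]])   # roll about z
    return rz @ ry @ rx

def led_grid_pattern(hand_pos, pitch, roll, yaw, n=8, spacing=0.25):
    """Return an n x n boolean array: True where an LED is inside the sphere."""
    # Lay the LEDs out on a flat plate centered on the back of the hand...
    offsets = (np.arange(n) - (n - 1) / 2.0) * spacing
    grid = np.array([[x, y, 0.0] for y in offsets for x in offsets])
    # ...then rotate the plate to match the hand's orientation and move it to hand_pos.
    world = grid @ rotation_matrix(pitch, roll, yaw).T + np.asarray(hand_pos)
    inside = np.linalg.norm(world - SPHERE_CENTER, axis=1) <= SPHERE_RADIUS
    return inside.reshape(n, n)

# Example: hand at the sphere's center, palm flat -- every LED lights up.
print(led_grid_pattern([0.0, 0.0, 0.0], 0.0, 0.0, 0.0))
```

Run this once per glove update and you get a new 8x8 light pattern every frame.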
Cool, huh? But why stop at an 8x8 resolution? Let's remove the LEDs from the back of your hand and replace them with an LCD screen, upgrading our resolution to 1024x768. Here's the new animation:
Now we're looking at a true cross section of the sphere - a disc that grows and shrinks as the screen moves through it. With resolutions this high, we have the opportunity to create cross sections of more complicated structures. Additionally, if we replace the LCD screen with a piece of white foam core combined with a projector, we can get an even more compelling image. Check out a sketch I made a few weeks back:
[youtube]zP4YzBYn8B8[/youtube]
This sketch was not built with motion capture, but with a whole mess of post-production video tricks and 3d model animations. Here's what I did:
First, I modeled the box and chair in 3d Studio Max, a 3d modeling program.
Next, I needed to model the cross section - the foam core. To create this, I choreographed a series of movements that would be easy to execute and easy to model in the computer. I filmed myself doing these movements, and then I animated the foam core into the digital 3d environment.
After this, I used a feature in the program called Boolean. This removed everything except the intersection between the foam core and the structures (the box and the chair), which is what I layered on top of the video of myself doing the movements. It mostly matched up, but if it didn't, I would either reanimate or apply some effects to the footage to make it work.

There are some other, more exciting implementations of this idea that were too difficult to sketch with the process I used. What if, instead of a sphere in front of your face, you had a 3d digital model of a human brain? You could use a dynamic cross section renderer to get CAT scan imagery of this brain. Instead of a series of images (like below), you'd be able to get cross sections at any location, from any angle, on the fly.
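To give a feel for what "cross sections at any location, from any angle, on the fly" might look like in software, here's a rough Python sketch that samples an arbitrary oblique slice out of a 3d volume (think of the volume as a stack of scan images). The tiny synthetic volume is made up for the example; real scan data would be loaded instead:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def oblique_slice(volume, origin, normal, size=256, spacing=1.0):
    """Sample a size x size cross-section image from a 3d volume.

    origin: a point (z, y, x) on the cutting plane, in voxel coordinates.
    normal: the plane's normal vector; the slice is perpendicular to it.
    """
    normal = np.asarray(normal, dtype=float)
    normal /= np.linalg.norm(normal)
    # Build two in-plane axes perpendicular to the normal.
    helper = np.array([0.0, 0.0, 1.0])
    if abs(np.dot(helper, normal)) > 0.9:
        helper = np.array([0.0, 1.0, 0.0])
    u = np.cross(normal, helper)
    u /= np.linalg.norm(u)
    v = np.cross(normal, u)
    # Generate a grid of sample points lying on the plane.
    coords = (np.arange(size) - size / 2.0) * spacing
    uu, vv = np.meshgrid(coords, coords)
    points = (np.asarray(origin, dtype=float)[:, None, None]
              + u[:, None, None] * uu + v[:, None, None] * vv)
    # Interpolate the volume at those points (points outside come back as 0).
    return map_coordinates(volume, points, order=1, cval=0.0)

# Tiny synthetic "scan": a bright ball in the middle of a 64^3 volume.
zz, yy, xx = np.mgrid[0:64, 0:64, 0:64]
volume = ((zz - 32)**2 + (yy - 32)**2 + (xx - 32)**2 < 20**2).astype(float)

# A tilted cut through the center -- a new one could be computed every frame
# as the tracked screen moves.
image = oblique_slice(volume, origin=(32, 32, 32), normal=(1, 1, 0), size=64)
print(image.shape, image.max())
```

Swap the synthetic ball for real volumetric data and the sampled image is exactly what you'd project onto the foam core.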
Other cool applications exist for any field that uses 3d modeling. Perhaps we could look inside the model of a running machine, or maybe cutaways of large architectural structures.
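For anything that already exists as a 3d model, the cutting itself is well-trodden ground - most mesh libraries can compute a planar section directly. As a quick sketch, assuming the Python trimesh library and a stand-in box mesh in place of a real machine or building model:

```python
import trimesh

# Stand-in for a real CAD model -- a 2 x 1 x 1 box.
mesh = trimesh.creation.box(extents=[2.0, 1.0, 1.0])

# Cut it with a plane; in the glove/projector setup the plane's origin and
# normal would come from the tracked screen, updated every frame.
section = mesh.section(plane_origin=[0.0, 0.0, 0.0],
                       plane_normal=[0.0, 0.0, 1.0])

if section is not None:
    # Flatten the 3d section curve into 2d, ready to draw or project.
    outline_2d, to_3d = section.to_planar()
    print(outline_2d)  # a closed polygon: the cutaway outline at this plane
```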
How realistic is this technology?
Well, Johnny Chung Lee of Carnegie Mellon (famed for his awesome WiiMote projects) has done some really great work on a dynamic projector calibration system:
[youtube]XgrGjJUBF_I[/youtube]
Also, if you're wealthy enough and do enough paperwork, you can buy a volumetric rendering of a human body from the Visible Human Project.
So I think it's a realistic thing to accomplish in the not-so-distant future.