I just released “Lumarca for Processing” — an easy way to make stuff in the Lumarca with Processing. It uses two really cool and elegant techniques that I wanted to share: distance functions for modeling geometry, and extending Processing’s “Renderer” object to make the code super easy to work with.
On Modeling Geometry…
One of the questions you need to ask if you want to build a renderer for a volumetric display is, “how do I want to model geometry?” In other words, how do I want to store the idea of a 3d sphere inside a computer?
The obvious place to start is by looking at how everybody else does it.
Conventional 3d modelers start by placing a bunch of dots on the surface of the sphere. They then connect the dots to make edges, and the edges to make triangles. Here’s an example of what that looks like:
The point of this exercise is to obtain the triangles. Why? Because faces are all you see when you look at something through a 2d screen. Your brain constructs a sense of volume from what it sees on the faces — how light hits them and how textures deform around them. In other words, to create the illusion of 3d on a 2d display, make a bunch of triangles (2d shapes) and arrange them in a way that makes them appear 3d.
Unfortunately, this methodology doesn’t translate well to 3d volumetric displays.
Using 2d display practices to build stuff for Lumarca feels like using paint brushes to shape clay. In 3d volumetric displays, 2d shapes are boundaries. This is similar to how lines are boundaries on 2d displays. A square on a 2d display is bounded by 4 lines. A cube in a 3d volumetric display is bounded by 6 squares. For the Lumarca, I don’t want face data. I need volume data.
How we did it in the past
So for my first pass, I decided to ignore 2d rendering techniques altogether and build a renderer from scratch. I used a bunch of high-school trig to solve for both a sphere and a cube. It was a weird and manual solution, but it worked. I liked that the spheres weren't just polygon approximations, but true mathematical spheres. With all that trig, though, rendering slowed down considerably once you wanted multiple shapes on screen at once.
The second pass at modeling geometry was inside a library built by my colleague Matt Parker. The library was a huge improvement on the "software" that existed before… which was more a collection of functions than software. This pass used OpenGL, and we ran into all the problems I had run away from in the first place: how do you know where the "inside" of an object is when all you have is triangle data? There were lots of clever workarounds, but there were always a few edge-case bugs that we had to code around instead of fix.
After a few amazing events and installations, Matt and I slowly lost interest over the years and stopped making progress on the software. Eventually Processing released version 2.0, and the Lumarca software became outdated.
Sometime this last year I saw a video on ray marching that just knocked my socks off. I won't go into a full explanation of how ray marching works in this post (maybe later), but if you'd like to know more about it, I'd definitely encourage you to watch the demo.
Ray marching introduced me to the idea of a distance function — which is an algorithm that tells you if a point is inside or outside an object, and by how much. So, say you had a sphere centered at (0, 0, 0) with a radius of 1 unit. Using a distance function, you’d find:
(2, 0, 0) would return 1, meaning that this point is 1 unit outside the surface of the sphere
(0, 0, 1) would return 0, meaning that this point is exactly on the surface of the sphere
(0, .5, .4) would return -.36, meaning that this point is .36 units inside the sphere
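For a sphere centered at the origin, the distance function is just the point's distance from the center minus the radius. Here's a minimal sketch in plain Java that reproduces the three examples above (the class and method names are mine for illustration, not the library's API):

```java
// Signed distance from a point to a sphere centered at the origin.
// Positive means outside, zero means on the surface, negative means inside.
public class SphereDistance {
    static float sphereDist(float x, float y, float z, float radius) {
        float len = (float) Math.sqrt(x * x + y * y + z * z);
        return len - radius;
    }

    public static void main(String[] args) {
        float r = 1.0f;
        System.out.println(sphereDist(2, 0, 0, r));       // 1.0   (outside)
        System.out.println(sphereDist(0, 0, 1, r));       // 0.0   (on the surface)
        System.out.println(sphereDist(0, 0.5f, 0.4f, r)); // about -0.36 (inside)
    }
}
```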
Distance functions are totally amazing. They are compact, crazy fast, and can run parallelized on the graphics card instead of the CPU. They also have a mathematical purity that polygon meshes don't share. And unlike geometry defined with triangles, distance functions tell you important things like proximity and inclusion.
Distance Functions in this Library
This last point is super-important for the Lumarca. One small piece of code tells you if something is inside or outside a shape. Here’s how this is implemented in the Lumarca for Processing library.
When the library is run, it generates a “map” image that looks like this:
A map image is a concise way to define the physical geometry of a Lumarca structure. When a calibrated projector projects this image onto the structure of strings, the color of each pixel describes where it lands: the RGB values describe the pixel's XYZ location. In other words, a pixel with an RGB value of (255, 0, 0), when projected, will hit a string at (x max, y min, z min). Now all I need to do is feed this XYZ location into a distance function, which tells you whether or not you're inside a geometry and by how much.
What's nice about this approach is that you compute all the expensive geometry only once, when the map image is generated. Everything after that is simply reading from this texture and evaluating simple distance functions, which makes it run way faster.
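To make the per-pixel idea concrete, here's a rough sketch in plain Java: decode a pixel's RGB into an XYZ position, then ask a distance function whether that position is inside a shape. The physical bounds and all the names here are my own illustration, not the library's actual API:

```java
// Sketch of the map-image lookup: each pixel's RGB encodes an XYZ position
// on a string; feeding that position into a distance function decides
// whether the pixel lands inside a shape. Bounds and names are illustrative.
public class MapLookup {
    // Physical bounds of the structure (illustrative values).
    static final float MIN = -1.0f, MAX = 1.0f;

    // Map one 0..255 color channel into the MIN..MAX physical range.
    static float channelToCoord(int channel) {
        return MIN + (channel / 255.0f) * (MAX - MIN);
    }

    // Sphere distance function: negative inside, positive outside.
    static float sphereDist(float x, float y, float z, float radius) {
        return (float) Math.sqrt(x * x + y * y + z * z) - radius;
    }

    // Is the string point encoded by this pixel inside the sphere?
    static boolean pixelInsideSphere(int r, int g, int b, float radius) {
        float x = channelToCoord(r);
        float y = channelToCoord(g);
        float z = channelToCoord(b);
        return sphereDist(x, y, z, radius) < 0;
    }

    public static void main(String[] args) {
        // RGB (255, 0, 0) decodes to (x max, y min, z min) = (1, -1, -1),
        // which is well outside a radius-1 sphere at the origin.
        System.out.println(pixelInsideSphere(255, 0, 0, 1.0f));     // false
        // RGB (128, 128, 128) decodes to roughly the center of the volume.
        System.out.println(pixelInsideSphere(128, 128, 128, 1.0f)); // true
    }
}
```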
More significant than the speed, though, was that distance functions helped me break through a problem that was holding me and the Lumarca project back for years. I didn’t need to do crazy trig or to rely on a batch of triangles and intersection calculations. I had found something that was designed to give me answers in a volumetric manner, and so I stitched the distance function into the core of the library.
So how nicely does this all play with Processing?
Now that I had a plan for creating geometry, I just needed to wrap it all up in a Processing library that was easy to use.
To give some context: in past iterations, writing the code to draw a sphere was quite painful. If you wanted to build a sphere with the initial software, you needed to copy and paste around 100 lines of code. If you wanted to build a sphere with the 0.1.1 library, it was orders of magnitude simpler, but still quite complicated:

```java
shape = new ObjFile(this, center, new PVector(1, 0, 0), "sphere.obj", 1.5);
lumarca.drawShape(new PVector(1, 1, 0), shape);
```
I wanted to cut this down and make it easy. How easy? I wanted it to be as easy as the rest of Processing. I wanted to create a sphere by simply calling “sphere(10)”.
I dug around to see how realistic it would be to overwrite / replace elementary Processing functions, things like sphere() and box(). When I looked, I found that while these could be replaced, it would mean replacing the entire renderer, and potentially doing some really really ugly things. I’d also have to do it with one of my least favorite languages: Java. Cue eyeroll. But I really wanted this so I decided to investigate a bit and just survey how painful this would get.
I was dead wrong
While I'm still not a fan of Java : ), I can absolutely see the appeal where I didn't before.
While I was technically correct that I'd have to replace the entire renderer, I hadn't realized that making new renderers is really simple. Processing has, by design, swappable renderers and straightforward ways to build your own. The heavy-handed OOP nature of Java helped me swim through the process and gave me all the guide rails I needed.
The library includes Lumarca.RENDERER. You enable it simply by passing it into the Processing size() function, something like size(1024, 768, Lumarca.RENDERER). You can easily flip back to size(1024, 768, P3D) if you want to see your work in a conventional 3d context, say when you're no longer near a display.
What's really cool about this neat little trick is that sphere(10) means the same thing in the Lumarca.RENDERER as it does in the P3D renderer, just with a different graphical result. It means you can hopefully take any conventional Processing sketch and display it on the Lumarca with a few configuration differences.
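Put together, a sketch that targets the Lumarca can look almost identical to an ordinary Processing sketch. This is a hedged illustration built only from the calls described above (size() with Lumarca.RENDERER, and the standard sphere()); any setup the library needs beyond those is assumed, so check the library's bundled examples for the exact boilerplate:

```java
// Illustrative Processing sketch: the only Lumarca-specific line is the
// renderer passed to size(). Swap Lumarca.RENDERER for P3D to preview
// the same sketch on a conventional 2d screen.
void setup() {
  size(1024, 768, Lumarca.RENDERER);
}

void draw() {
  background(0);
  translate(width / 2, height / 2, 0);
  rotateY(frameCount * 0.01);
  sphere(10); // same call as in P3D, different graphical result
}
```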