Albert Hwang



Lumarca for Processing — 1.0.0 published!



I just released “Lumarca for Processing” — an easy way to make stuff in the Lumarca with Processing. It uses two really cool and elegant techniques that I wanted to share: distance functions for modeling geometry, and extending Processing’s “Renderer” object to make the code super easy to work with.

On Modeling Geometry…

One of the questions you need to ask if you want to build a renderer for a volumetric display is, “how do I want to model geometry?”   In other words, how do I want to store the idea of a 3d sphere inside a computer?

The obvious place to start is by looking at how everybody else does it.

Conventional 3d modelers start by placing a bunch of dots on the surface of the sphere.  They then connect the dots to make edges, and the edges to make triangles.  Here’s an example of what that looks like:

Modeling a sphere in conventional 3d

The point of this exercise is to obtain the triangles.  Why?  Because faces are all you see when you look at something through a 2d screen.  Your brain constructs a sense of volume from what it sees on the faces — how light hits them and how textures deform around them. In other words, to create the illusion of 3d on a 2d display, make a bunch of triangles (2d shapes) and arrange them in a way that makes them appear 3d.

Unfortunately, this methodology doesn’t translate well to 3d volumetric displays.

Using 2d display practices to build stuff for Lumarca feels like using paint brushes to shape clay.  In 3d volumetric displays, 2d shapes are boundaries.  This is similar to how lines are boundaries on 2d displays.  A square on a 2d display is bounded by 4 lines.  A cube in a 3d volumetric display is bounded by 6 squares.  For the Lumarca, I don’t want face data.  I need volume data.

How we did it in the past

So for my first pass, I decided to ignore 2d rendering techniques altogether, and build a renderer from scratch. I used a bunch of high-school trig to solve for both a sphere and a cube. It was a weird and manual solution, but it worked. I liked how spheres weren’t just polygon approximations, but mathematically truly spheres.  With all the trig, though, it took quite a bit of time to render when you wanted multiple shapes on the screen at once.

The second pass at modeling geometry was inside a library built by my colleague Matt Parker.  The library was a huge improvement on the “software” that existed… which was really more a collection of functions than software.  This pass at modeling used OpenGL, and we ran into all those problems I ran away from in the first place — how do you know where the “inside” of an object is when all you have is triangle data?  There were lots of clever workarounds, but there were always a few edge-case bugs that we would just have to code around instead of fix.

After a few amazing events / installations, Matt and I slowly lost interest over the years and stopped making progress on the software. Eventually Processing released version 2.0, and the Lumarca software became outdated.

Distance Functions

Sometime this last year I saw a video on ray marching that just knocked my socks off. I won’t go into a full explanation of how ray marching works in this post (maybe later), but if you’d like to know more about it, I’d definitely encourage you to watch the demo.

Ray marching introduced me to the idea of a distance function — an algorithm that tells you whether a point is inside or outside an object, and by how much. So, say you had a sphere centered at (0, 0, 0) with a radius of 1 unit.  Using a distance function, you’d find:

(2, 0, 0) would return 1, meaning that this point is 1 unit outside the surface of the sphere
(0, 0, 1) would return 0, meaning that this point is exactly on the surface of the sphere
(0, .5, .4) would return about -.36, meaning that this point is .36 units inside the sphere

The code for those specific algorithms, for the curious.
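The sphere case is tiny in code. Here’s a minimal sketch in plain Java (my own illustration, not the library’s actual implementation) of a signed distance function for a sphere at the origin:

```java
// Minimal sketch (not the library's actual code) of a signed distance
// function for a sphere centered at the origin: positive outside the
// surface, zero on it, negative inside.
public class SphereDistance {
    static double sphere(double x, double y, double z, double radius) {
        // Distance from the point to the center, minus the radius.
        return Math.sqrt(x * x + y * y + z * z) - radius;
    }

    public static void main(String[] args) {
        System.out.println(sphere(2, 0, 0, 1));     // 1 unit outside
        System.out.println(sphere(0, 0, 1, 1));     // exactly on the surface
        System.out.println(sphere(0, 0.5, 0.4, 1)); // about -.36, inside
    }
}
```

The same pattern extends to boxes, toruses, and so on — each shape is just another small function.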

Distance functions are totally amazing. They are compact, crazy fast, and can run in parallel on the graphics card instead of the CPU. They also have a mathematical purity that polygon meshes don’t share.  And unlike geometry defined with triangles, distance functions tell you important things like proximity / inclusion.

Distance Functions in this Library

This last point is super-important for the Lumarca.  One small piece of code tells you if something is inside or outside a shape.  Here’s how this is implemented in the Lumarca for Processing library.

When the library is run, it generates a “map” image that looks like this:

Lumarca Texture Map

A map image is a concise way to define the physical geometry of a Lumarca structure. When a calibrated projector projects this image onto the structure of strings, the color of each pixel describes where it lands. The RGB values describe the pixel’s XYZ location. In other words, a pixel with an RGB value of (255, 0, 0), when projected, will hit a string at (x max, y min, z min).  Now all I need to do is feed this XYZ location into a distance function, which will tell you whether or not you’re inside a geometry and by how much.
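To make that concrete, here’s a rough sketch in plain Java (with hypothetical bounds and names — not the library’s actual code) of decoding a map pixel into an XYZ position and testing it against a sphere:

```java
// Rough sketch (hypothetical bounds and names, not the library's actual
// code): decode one map pixel's RGB value into an XYZ position, then
// test that position against a distance function.
public class MapLookup {
    // Assumed bounds of the display volume, in arbitrary units.
    static final double MIN = -1.0, MAX = 1.0;

    // Map one 0-255 color channel onto the [MIN, MAX] axis range.
    static double decode(int channel) {
        return MIN + (channel / 255.0) * (MAX - MIN);
    }

    // Signed distance to a unit sphere at the origin.
    static double sphere(double x, double y, double z) {
        return Math.sqrt(x * x + y * y + z * z) - 1.0;
    }

    public static void main(String[] args) {
        // A pixel of RGB (255, 0, 0) lands at (x max, y min, z min).
        double x = decode(255), y = decode(0), z = decode(0);
        // A negative result would mean that string point is inside the sphere.
        System.out.println(sphere(x, y, z));
    }
}
```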

What’s nice about this approach is that you compute all the expensive geometry only once — when the map image is generated. Everything after that is simply reading from this texture and evaluating simple distance functions, which makes it run way faster.

More significant than the speed, though, was that distance functions helped me break through a problem that was holding me and the Lumarca project back for years. I didn’t need to do crazy trig or to rely on a batch of triangles and intersection calculations. I had found something that was designed to give me answers in a volumetric manner, and so I stitched the distance function into the core of the library.

So how nicely does this all play with Processing?

Now that I had a plan for creating geometry, I just needed to wrap it all up in a Processing library that was easy to use.

To give some context: in past iterations, writing the code to draw a sphere was quite painful.  If you wanted to build a sphere with the initial software, you needed to copy and paste around 100 lines of code.  If you wanted to build a sphere with the 0.1.1 library, it was orders of magnitude simpler, but still quite complicated:

shape = new ObjFile(this, center, new PVector(1, 0, 0), "sphere.obj", 1.5);
lumarca.drawShape(new PVector(1, 1, 0), shape);

I wanted to cut this down and make it easy.  How easy?  I wanted it to be as easy as the rest of Processing.  I wanted to create a sphere by simply calling “sphere(10)”.

I dug around to see how realistic it would be to overwrite / replace elementary Processing functions, things like sphere() and box(). When I looked, I found that while these could be replaced, it would mean replacing the entire renderer, and potentially doing some really really ugly things. I’d also have to do it with one of my least favorite languages: Java. Cue eyeroll. But I really wanted this so I decided to investigate a bit and just survey how painful this would get.

I was dead wrong

While I’m still not a fan of Java : ), I can absolutely see the appeal where I did not before.

While I was technically correct that I’d have to replace the entire renderer, I hadn’t realized that making new renderers was really simple.  Processing has, by design, swappable renderers and simple ways to build your own.  The heavy-handed OOP nature of Java helped me swim through this process and gave me all the guide rails I needed.

The library includes Lumarca.RENDERER.  You enable it simply by passing it into the Processing size() function — something like: size(1024, 768, Lumarca.RENDERER).  You can easily flip back to size(1024, 768, P3D) if you want to see your work in a conventional 3d context when you’re no longer near a display.

What’s really cool about this neat little trick is that sphere(10) means the same thing in the Lumarca.RENDERER as it does in the P3D renderer, just with a different graphical result.  It means you can hopefully take any conventional Processing sketch and display it on the Lumarca with a few configuration differences.


Lumarca spotted in Europe



About five months ago I was contacted by a group of artists out in Tallinn, the capital of Estonia. They were interested in building a version of the Lumarca for an event called Heliodesign.

They did that. They’ve also gone on to build bigger and better designs, and did something that I hadn’t ever tried before — projection on string outdoors. Here’s a video from another festival, Staro Riga:

Super cool! For more information about the organization that did this, Valgusklubi, check out their Facebook page.


Bleepify it!



About two years ago I built a prototype for a web page that would convert any website into music.  As a prototype, it was very ugly and only worked in a few browsers.  I got busy and forgot about it…

Then just a few weeks back I uncovered it.  To my surprise, I found that most modern browsers now support the technology.  So I gave it a face lift and, well, here we are:

Bleepify it!

My favorite pages to try it on are Youtube and Reddit… I guess I like structure…

I also enjoy how the control bar converts itself to music…  Serves as a nice coda to button everything up.

Anyhow, enjoy all the bleeping-blooping!


Progress as a Liquid Dancer



So I’ve been doing a lot of liquid dancing lately. Apart from going to a few raucous parties now and again, I’ve been trying to contribute back to the scene that has given me so much.

I have been developing a tutorial series on the fundamentals of liquid:

I get some really amazing, warm feedback when I post these, from people who are intensely interested in the same stuff that I am. It also helps me strengthen my technique when I force myself to ponder these typically non-linguistic ideas and articulate them for others.

Also, I’ve assisted a friend with a project and co-authored his paper on liquid dance and HCI:

It all started when Diego contacted me with questions about the Lumarca project. We got to talking, hit it off, and I learned that he was really into Laban Movement Analysis, which is a very cool way of trying to capture and describe movement from an academic perspective.

Anyhow, I kept trying to force down his throat this idea I had: whenever the true 3d interfaces of tomorrow make it to our living rooms, the people who study liquid will be the power users. Further, anybody building designs for 3d interaction should be looking at this community, because the fundamental principles that drive this form of dance are perfect principles for designing elegant, efficient, beautiful, and natural 3d interfaces.

Anyhow, one thing led to another, and now this counts as the second master’s thesis I’ve inspired! : D (The first being Matt Parker’s on the Lumarca at ITP).

So yeah, just wanted to share all of that. HCI people, meet liquid people. Liquid people, meet HCI people. I have a feeling y’all will be a match made in heaven someday…


Me + Internet = It’s Complicated



I took a little over a year to rethink my relationship to the internet as an artist. It’s given me some amazing emotional rushes. It has also contributed to some rough times. Here’s what I’ve discovered about me, my work, and the internet.

Being “Right”

Before I get into “me + something else”, I want to first review who I was when I started this new media art journey.

Just outa college, I was motivated by one thing: being right. It wasn’t money, power, social good, raising awareness, or anything like that; for better or for worse, I was mostly only motivated by “rightness”.

To me, being right meant designing things the right way. In terms of process, it meant that I wanted to itch away at a riddle until I unraveled its definitive mechanic. Once I was confident that I’d cut away all the fat and left nothing but the core solution, I used that answer as a foundation to construct the logical hyperbole of that idea. This was a tedious process — as most creative processes are.

To assist in this tedium, I needed some tools. I needed a way to record my journey. I needed a place to workshop my thoughts. I needed a stage to exhibit my finished work.

The internet was my toolbox

The internet fit all these needs. It was my diary, my community, and my gallery. I could vent my frustrations and record my insights. I could contemplate my progress, share my ideas, get feedback, and present to the world.

It was a good toolbox to have… I felt well-equipped to test wildly outlandish ideas because I always had a trail of breadcrumbs if I needed to get back home. So what if my conclusions ended up being wrong? I had a paper trail, knew what I wanted, and had a pretty good sense of where I was coming from.

Then the Traffic Came…


World’s Largest Volumetric Display Ever?



I just came back from a trip to Guadalajara where Matt and I built what I believe to be the world’s largest volumetric display in history.

As of right now, I don’t have much documentation (clean, good photos and videos); that comes later.  I hope the images below give a good enough idea:


To give a better sense of scale, below is an image of the project early in the construction process (only about 10% strung):

It’s big.

The structure itself is 40′ tall, and 30′ wide and deep.  Its volume is 36,000 cubic feet (about the same internal volume as a 747).  The renderable volume is roughly half that.  Still, it’s much bigger than any other volumetric display I’ve ever heard of.


But this post isn’t about the build process or the technical specs or the frustrations or fears.


I just wanted to write this post to express how happy I am right now.

In 2006 I developed a little idea that people seemed to love — the idea of projecting light onto strings.  I made some prototypes, published some stuff on YouTube, and got some attention from random internet people.  That’s about as far as I was initially planning to push this idea.

At the time, I knew that, at least in principle, the idea could scale to fill an entire theater.  But I never really imagined that one day it would actually happen.

In some ways, this last installation was a dream come true.  A huge team of some really talented people rallied together to get this installation up and running.  While there were plenty of frustrations and set-backs, everybody’s heart was in the right place, and everybody just placed their faith in my and Matt’s vision.  A lot of the hired help probably had no idea what was happening while they were helping out, but they were really great, and when things got rolling, we pushed through a whole ton of work really quickly.

And as soon as Matt and I and our whole team started reaching maximum velocity with work, one by one, other artists popped up and started setting up their projects.  I was so neck-deep in Lumarca work that I didn’t really have time to engage with other people’s work or process until just before opening, really.  As soon as I finished the Lumarca work, I looked around at the other work and realized that the whole theater had turned into a digital garden with so many interesting projects.  Only then did I realize that this was going to be a damn fine show and that we all had something to be really proud of.

The most touching moment was when the doors opened and people started flooding in from the streets of Guadalajara.  I was told that 1500 people came to the opening — a number that far outdid anybody’s expectations.  I got the sense that the whole city had been patiently waiting for something like this — as if the city let out a sigh of relief — as if all the closeted media arts geeks of Guadalajara came out of hiding at the same time and realized that there was definitely space for them here in this town.

I am so grateful that I was given the chance to share my work with so many eager minds and bright faces.  I’m really proud of my work and of the potential impact that I’ve had.  I think everybody at the festival (artists, administration, crew, security, volunteers) should be proud of the entire festival.  I really don’t think I’m fantasizing when I say that we’ve helped reshape the cultural landscape of Guadalajara.

Lastly, I wanted to share a few things I’m thankful for:

  • That Matt Parker guy
  • Everybody’s faith that this would all work out in the end
  • Marco’s family for extending the invitation out to Tapalpa
  • Every last staffer at MOD, but especially:
    • Margarita Sierra
    • Marco Antonio Castro
    • Juan Carlos Robertson
    • Janet, Sandra, Susana, Augustine
    • Octavio, Oscar, and the whole crew
  • Everybody at Larva, especially:
    • Alejandra
    • The people behind Cafe Bonito & El Menu Del Dia (Yum!)
    • Security
  • The crew
    • Arturo
    • “Mi Hijo” Christian 2 y sus amigos (Especialmente Christian 1, Michael, Giovanni)
    • The students from CAAV (especially Jorge, Argel)
    • Horacio
  • The 3 years of Spanish I took in high school
  • And those skype calls with Morgan


ofxPortalCam – Viewing your monitor as a window, not a picture



ofxPortalCam is an addon for openFrameworks that allows you to use your monitor as if it were a window through which you could see 3d content.  In other words, it transforms your monitor into a digital diorama using the Kinect for head tracking.

[Embedded video]

Here’s the project in a nutshell:

1) The Big Idea

A computer (that is hooked up to both the Kinect and the screen) gives the viewer an image based on these three different things:

  1. where the viewer’s head is located
  2. where the screen is located
  3. the shape of the digital scene

With this information the computer knows what image to present to give the viewer the illusion that the screen is a portal into a 3d space.
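As a toy illustration of that idea (my own sketch, not the addon’s actual math): put the screen on the plane z = 0, the tracked eye in front of it, and the scene behind it. Each scene point gets drawn wherever the straight line from the eye to that point crosses the screen plane:

```java
// Toy illustration (not the addon's actual math): screen on the plane
// z = 0, eye in front of it (z > 0), scene content behind it (z < 0).
// A scene point is drawn where the eye-to-point line crosses z = 0.
public class WindowProjection {
    // Intersect the eye->point line with the z = 0 plane and return
    // the (x, y) screen location to draw the point at.
    static double[] project(double[] eye, double[] point) {
        double t = eye[2] / (eye[2] - point[2]); // parameter where z hits 0
        return new double[] {
            eye[0] + t * (point[0] - eye[0]),
            eye[1] + t * (point[1] - eye[1]),
        };
    }

    public static void main(String[] args) {
        double[] eye = {0.0, 0.0, 1.0};         // head tracked by the Kinect
        double[] scenePoint = {0.5, 0.0, -1.0}; // a point "inside" the monitor
        double[] onScreen = project(eye, scenePoint);
        // Moving the head changes where the point lands on screen,
        // which is what produces the window illusion.
        System.out.println(onScreen[0] + ", " + onScreen[1]);
    }
}
```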

2) The main problem: Monitor space and Kinect space

So, in isolation, I have all three criteria.  The Kinect gives me the location of the user’s head, openFrameworks defines where the screen is located, and the code I use to build my openFrameworks app defines the shape of the scene.

But there’s one big problem.  The Kinect defines everything in one coordinate space, and openFrameworks tells me everything in another.  In other words, the location of my head according to the Kinect doesn’t correlate to the location of my head in screen space.  So we need to do some calibration so that we can translate back and forth between the two types of space.
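To illustrate what translating between spaces means (a generic sketch with made-up numbers, not how the addon actually calibrates): once a calibration exists, moving a point from Kinect space to screen space is a single matrix transform:

```java
// Generic sketch of translating a point between two coordinate spaces
// with a 4x4 affine transform. The matrix values here are made up for
// illustration; the actual addon derives its calibration differently.
public class SpaceTransform {
    // Apply a row-major 3x4 affine transform to a 3d point (w = 1).
    static double[] apply(double[][] m, double[] p) {
        double[] out = new double[3];
        for (int i = 0; i < 3; i++) {
            out[i] = m[i][0] * p[0] + m[i][1] * p[1] + m[i][2] * p[2] + m[i][3];
        }
        return out;
    }

    public static void main(String[] args) {
        // Example calibration: Kinect space is mirrored on x and offset
        // relative to screen space (illustrative numbers only).
        double[][] kinectToScreen = {
            {-1, 0, 0, 0.5},
            { 0, 1, 0, 0.2},
            { 0, 0, 1, -2.0},
        };
        double[] headInKinect = {0.1, 0.3, 2.5};
        double[] headInScreen = apply(kinectToScreen, headInKinect);
        System.out.println(headInScreen[0] + ", "
            + headInScreen[1] + ", " + headInScreen[2]);
    }
}
```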

3) The solution: The Human Body


Game-based Goal Setting



If you want to watch (and not read):

[Embedded video]

If you want to read the cliffnotes (and not watch):

I’ve got an Excel spreadsheet that graphs my progress on my goals.  A lot of the design is inspired by my experience playing video games.

With video games, rewards are clear.  This is because success and failure are clear.  Additionally, games give you consistent feedback to affirm or discourage behavior.  One of the simplest forms of feedback is visual: the “bar” or “meter” (like a “health bar” or an “experience meter”).

Clear rewards make life better : )

I applied this sort of reward system in my life by creating some goals and visualizing my progress with this excel sheet.


*Note — it’s only tested to be compatible with Microsoft Excel. I’m quite certain it doesn’t work with the Google Docs spreadsheet reader. Some features work with Open Office, and some don’t.

This spreadsheet tells me how far along my goal I’ve come.  It also tells me how far along I should be, according to a pace set by my start and end dates.
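The pace idea is simple enough to sketch (my own illustration, not the spreadsheet’s actual formula): the progress you “should” have made is just the fraction of the start-to-end date range that has already elapsed:

```java
// Sketch of the pace idea (not the spreadsheet's actual formula):
// expected progress on a goal is the fraction of the time between the
// start and end dates that has already elapsed, clamped to [0, 1].
import java.time.LocalDate;
import java.time.temporal.ChronoUnit;

public class GoalPace {
    static double expectedProgress(LocalDate start, LocalDate end, LocalDate today) {
        double total = ChronoUnit.DAYS.between(start, end);
        double elapsed = ChronoUnit.DAYS.between(start, today);
        return Math.max(0.0, Math.min(1.0, elapsed / total));
    }

    public static void main(String[] args) {
        LocalDate start = LocalDate.of(2012, 1, 1);
        LocalDate end = LocalDate.of(2012, 12, 31);
        // Halfway through the year, you should be about 50% done.
        System.out.println(expectedProgress(start, end, LocalDate.of(2012, 7, 1)));
    }
}
```

Comparing that expected fraction to your actual progress is what makes the “meter” affirm or discourage behavior.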

Download, share, innovate.  But if you innovate, send me back a copy!


Spatial Computing 3!



Spatial Computing 3 has finally arrived!

[Embedded video]

First and foremost — thanks goes out to every single person who helped out.

Second, if you’re unfamiliar with the project and want some context, read more about it here.

As you can tell from the title of the video, SC3 is about using the Spatial Computing paradigm as a way to reinvent the home theater system.

While the video does feature a bunch of interesting ideas, the crux of it is the ability to scale what you’re watching with a simple gesture. This gives the user the ability to watch content at any size, whether in front of them or in the space around their body. It’s also a nice way to transition and travel about a digital environment with a few simple gestures.

I’m glad I decided to spend the majority of this video talking about this concept, because it’s deceptively complicated. When I set out to make this video, I wasn’t sure the “aquarium” idea would actually look convincing. I was surprised to find that it ended up working so well.

The most exciting moments of working on the project were when I would try out a new effect and it ended up working. Heh, what a relief. The one I specifically remember was the glowy intersection effect (whenever a 3d model intersected with the physical wall of the room). It took a lot of trial and error, but I’m pretty happy with those little effects — they really add to the visual story that there are two competing 3d environments that need to somehow synthesize.


I’m Speaking at SXSW!



Well, I guess, more accurately, I should say that I’m going to be speaking on a panel at SXSW this year.  The topic of conversation will be “Detached Messages: Immersive and Spatial Systems”.

I’ll be speaking alongside Adam Pruden, a smart dude w/ a background in architecture, who’s involved with the Fly Fire project, which is a design for a volumetric display made of LEDs attached to mini-helicopters:

[Embedded video]

Here’s a brainstorm of some of the things I am considering talking about:

  • Lumarca
  • Spatial Computing
  • Dancing and some of the physical frameworks developed around spatial media
  • The craft behind the magic of making things levitate and why this is relevant to spatial media
  • Why the “digital” is thought of as visually 2d, and how this expectation plays out

In any case, if you wanna hear about it all, come on through and watch me blab for an hour!
