Albert Hwang



MRL VR + Spatial Computing, Day 2



The path to component-ization is becoming clearer.  I can see this becoming a Unity asset to kickstart Vive interactivity.  Just snap some components onto a few things and you’ll be able to plug into a simple interaction pattern.

Some observations:

I’m liking this “focal point” concept more and more as time goes on.  It’s a powerful and simple idea that’s easy to work with both as a developer and user.  Conceptually rock-solid (so far), and the more I lean into it, the more I discover obvious yet still innovative solutions.

The ability to glide an object around against the normal of your focal point was totally accidental in the above code.  This effect will behave differently on non-cube geometry.  Again, though, even where the principles behind this interaction aren’t coded out yet, it’s pretty clear how they’re supposed to work.

Conceptually, whipping the world around the user (instead of pretending to move the body around a world) is much less taxing on the proprioceptors.  No perceptual dissonance, which is nice.

Environment navigation with this method feels way more natural than the “teleportation” pattern of navigation.  After all, in the real world, we always move through spaces via translation, not teleportation.  This method is hardly disorienting.

Running into issues with rotation against two focal points.  I’m just using Unity’s Quaternion.LookRotation, which produces some bad results.  The problem is that two points leave an unwanted rotational degree of freedom (roll about the axis between them).  I plan on building around this by planting multiple focal points per controller.
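The ambiguity is easy to see outside of Unity. Here’s a minimal Python sketch (not the actual project code) of building an orientation frame from two points: the forward axis is fully determined, but you still have to pick an arbitrary “up” hint, and different hints give different frames. That free choice is the unwanted degree of freedom (Quaternion.LookRotation makes the same choice internally via its upwards parameter).

```python
import math

def normalize(v):
    m = math.sqrt(sum(c * c for c in v))
    return tuple(c / m for c in v)

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def frame_from_two_points(p0, p1, up):
    """Build an orthonormal frame whose forward axis points from p0 to p1.

    The 'up' hint is the free parameter: any vector not parallel to
    forward yields a valid (but different) frame.
    """
    forward = normalize(tuple(b - a for a, b in zip(p0, p1)))
    right = normalize(cross(up, forward))
    true_up = cross(forward, right)
    return forward, right, true_up

# Same two focal points, two different 'up' hints:
f1, r1, u1 = frame_from_two_points((0, 0, 0), (0, 0, 2), (0, 1, 0))
f2, r2, u2 = frame_from_two_points((0, 0, 0), (0, 0, 2), (1, 0, 0))
# forward is identical in both, but the frames differ: that's the free roll.
```

A third focal point (or several per controller, as planned above) pins down that roll, because three non-collinear points fully determine an orientation.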

Thanks again to MRL, and also to Dave Tennent for swinging by (and helping w/ a much-needed refactor).

I’m pretty new to Unity w/ regards to source control. Next time I’m in the lab I’ll probably make a proper GitHub repo. For now, though, here’s the main portion of the code.


MRL VR + Spatial Computing, getting started…



I’m doing some work with NYU’s Media Research Lab. I’ve just started in earnest and here’s some of my progress:

Here’s a quick sketch of an implementation of a Spatial Computing pattern in Unity.

And here’s a quick update from the lab on my latest progress…

It feels really really great to finally get back in a creative code mindset again. Also a great feeling to bring my knowledge about my body to the table… Anyhow, more updates to come.


MVP User Stories for Physically Present Virtual Media



For Spatial Computing, I had a lot of thoughts about how interactivity should work in a 3d context. I’m building it out with the Vive. Here’s a working document of some user stories I’m writing to create this. I’m not positive on what’s actually MVP and what’s not.

I’ve been interested in arranging all this stuff in a perceptual hierarchy of needs (“don’t make me sick” being more fundamental than “being able to touch things”)… maybe that reorganization of these ideas is next.


VR Hype and the California Gold Rush… Anything Substantive There? (and a little gushing about



Uhm, is totally amazing. Slick, smart, and oh so purdy. Here’s my profile on it.

Anyhow, here’s a quick bit I wrote on it:

VR and the California Gold Rush: Balancing Hype with Substance

I had wanted to write this a while ago… and feel a little late on publishing it, as VR is pretty much already here, but the hype definitely still exists.

In any case, read up and comment over there if you want. Moving forward, I’m going to have to figure out where my written online content belongs… here or there…


Lumarca for Processing — 1.0.0 published!



I just released “Lumarca for Processing” — an easy way to make stuff in the Lumarca with Processing. It uses two really cool and elegant techniques that I wanted to share: distance functions for modeling geometry, and extending Processing’s “Renderer” object to make the code super easy to work with.

On Modeling Geometry…

One of the questions you need to ask if you want to build a renderer for a volumetric display is, “how do I want to model geometry?”   In other words, how do I want to store the idea of a 3d sphere inside a computer?

The obvious place to start is by looking at how everybody else does it.

Conventional 3d modelers start by placing a bunch of dots on the surface of the sphere.  They then connect the dots to make edges, and the edges to make triangles.  Here’s an example of what that looks like:

Modeling a sphere in conventional 3d

The point of this exercise is to obtain the triangles.  Why?  Because faces are all you see when you look at something through a 2d screen.  Your brain constructs a sense of volume from what it sees on the faces — how light hits them and how textures deform around them. In other words, to create the illusion of 3d on a 2d display, make a bunch of triangles (2d shapes) and arrange them in a way that makes them appear 3d.

Unfortunately, this methodology doesn’t translate well to 3d volumetric displays.

Using 2d display practices to build stuff for Lumarca feels like using paint brushes to shape clay.  In 3d volumetric displays, 2d shapes are boundaries.  This is similar to how lines are boundaries on 2d displays.  A square on a 2d display is bounded by 4 lines.  A cube in a 3d volumetric display is bounded by 6 squares.  For the Lumarca, I don’t want face data.  I need volume data.

How we did it in the past

So for my first pass, I decided to ignore 2d rendering techniques altogether and build a renderer from scratch. I used a bunch of high-school trig to solve for both a sphere and a cube. It was a weird and manual solution, but it worked. I liked how spheres weren’t just polygon approximations, but mathematically true spheres.  With all the trig, though, it took quite a bit of time to render when you wanted multiple shapes on the screen at once.

The second pass at modeling geometry was inside a library built by my colleague Matt Parker.  The library was a huge improvement on the “software” that existed… which was more a collection of functions than software.  This pass at modeling used OpenGL, and we ran into all those problems I ran away from in the first place: how do you know where the “inside” of an object is when all you have is triangle data?  There were lots of clever workarounds, but there were always a few edge-case bugs that we would just have to code around instead of fix.

After a few amazing events and installations, Matt and I slowly lost interest over the years and stopped making progress on the software. Eventually Processing released version 2.0, and the Lumarca software became outdated.

Distance Functions

Sometime this last year I saw a video on ray marching that just knocked my socks off. I won’t go into a full explanation of how ray marching works in this post (maybe later), but if you’d like to know more about it, I’d definitely encourage you to watch the demo.

Ray marching introduced me to the idea of a distance function — which is an algorithm that tells you if a point is inside or outside an object, and by how much. So, say you had a sphere centered at (0, 0, 0) with a radius of 1 unit.  Using a distance function, you’d find:

(2, 0, 0) would return 1, meaning that this point is 1 unit outside the surface of the sphere
(0, 0, 1) would return 0, meaning that this point is exactly on the surface of the sphere
(0, .5, .4) would return -.36, meaning that this point is .36 units inside the sphere

The code for those specific algorithms, for the curious.
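As a sanity check, here’s a minimal sphere distance function in Python (a sketch, not the library’s actual code) that reproduces the three values above:

```python
import math

def sphere_distance(point, center=(0.0, 0.0, 0.0), radius=1.0):
    """Signed distance from a point to a sphere's surface:
    positive outside, zero on the surface, negative inside."""
    d = math.sqrt(sum((p - c) ** 2 for p, c in zip(point, center)))
    return d - radius

print(sphere_distance((2, 0, 0)))    # 1.0 (outside)
print(sphere_distance((0, 0, 1)))    # 0.0 (on the surface)
print(sphere_distance((0, .5, .4)))  # about -0.36 (inside)
```

The whole “model” of the sphere is those few lines: no vertex list, no triangles, just a function you can query at any point in space.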

Distance functions are totally amazing. They are compact, crazy fast, and can run parallelized on the graphics card instead of the CPU. They also have a mathematical purity that polygon meshes don’t share.  And unlike geometry defined with triangles, distance functions tell you important things like proximity and inclusion.

Distance Functions in this Library

This last point is super-important for the Lumarca.  One small piece of code tells you if something is inside or outside a shape.  Here’s how this is implemented in the Lumarca for Processing library.

When the library is run, it generates a “map” image that looks like this:

Lumarca Texture Map

A map image is a concise way to define the physical geometry of a Lumarca structure. When a calibrated projector projects this image out onto the structure of strings, the color of each pixel describes where it lands: the RGB values encode the pixel’s XYZ location. In other words, a pixel with an RGB value of (255, 0, 0), when projected, will hit a string at (x max, y min, z min).  Now all I need to do is feed this XYZ location into a distance function, which tells you whether or not you’re inside a geometry and by how much.
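Sketched in Python, the pipeline per pixel looks something like the following. The bounds and function names here are illustrative assumptions, not the library’s actual values or API:

```python
import math

# Illustrative bounds for the structure; the real library's values differ.
X_MIN, X_MAX = -1.0, 1.0
Y_MIN, Y_MAX = -1.0, 1.0
Z_MIN, Z_MAX = -1.0, 1.0

def pixel_to_xyz(r, g, b):
    """Map an 8-bit RGB map pixel to an XYZ point inside the bounds."""
    return (X_MIN + (r / 255.0) * (X_MAX - X_MIN),
            Y_MIN + (g / 255.0) * (Y_MAX - Y_MIN),
            Z_MIN + (b / 255.0) * (Z_MAX - Z_MIN))

def sphere_distance(point, radius=1.0):
    """Signed distance to a unit sphere at the origin (negative = inside)."""
    return math.sqrt(sum(c * c for c in point)) - radius

# (255, 0, 0) lands at (x max, y min, z min), as described above:
pt = pixel_to_xyz(255, 0, 0)     # (1.0, -1.0, -1.0)
inside = sphere_distance(pt) < 0  # this corner point is outside the sphere
```

Light the pixel when its point falls inside the shape, leave it dark otherwise, and the strings show the volume.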

What’s nice about this approach is that you compute all the expensive geometry only one time — at the generation of the map image. Everything after that is simply reading from this texture and performing simple distance functions, making it run way faster.

More significant than the speed, though, was that distance functions helped me break through a problem that was holding me and the Lumarca project back for years. I didn’t need to do crazy trig or to rely on a batch of triangles and intersection calculations. I had found something that was designed to give me answers in a volumetric manner, and so I stitched the distance function into the core of the library.

So how nicely does this all play with Processing?

Now that I had a plan for creating geometry, I just needed to wrap it all up in a Processing library that was easy to use.

To give some context: in past iterations, writing the code to draw a sphere was quite painful.  If you wanted to build a sphere with the initial software, you needed to copy and paste around 100 lines of code.  If you wanted to build a sphere with the 0.1.1 library, it was orders of magnitude simpler, but still quite complicated:

shape = new ObjFile(this, center, new PVector(1, 0, 0), "sphere.obj", 1.5);
lumarca.drawShape(new PVector(1, 1, 0), shape);

I wanted to cut this down and make it easy.  How easy?  I wanted it to be as easy as the rest of Processing.  I wanted to create a sphere by simply calling “sphere(10)”.

I dug around to see how realistic it would be to overwrite / replace elementary Processing functions, things like sphere() and box(). When I looked, I found that while these could be replaced, it would mean replacing the entire renderer, and potentially doing some really really ugly things. I’d also have to do it with one of my least favorite languages: Java. Cue eyeroll. But I really wanted this so I decided to investigate a bit and just survey how painful this would get.

I was dead wrong

While I’m still not a fan of Java : ), I can absolutely see the appeal where I did not before.

While I was technically correct that I’d have to replace the entire renderer, I hadn’t realized that making new renderers is really simple.  Processing has, by design, swappable renderers and simple ways to build your own.  The heavy-handed OOP nature of Java helped me swim through this process and gave me all the guide rails I needed.

The library includes Lumarca.RENDERER.  You enable it simply by passing it into the Processing size() function, something like: size(1024, 768, Lumarca.RENDERER).  You can easily flip back to size(1024, 768, P3D) if you want to see your work in a conventional 3d context when you’re not near a display.

What’s really cool about this neat little trick is that sphere(10) means the same thing in the Lumarca.RENDERER as it does in the P3D renderer, just with a different graphical result.  It means you can hopefully take any conventional Processing sketch and display it on the Lumarca with a few configuration differences.


Lumarca spotted in Europe



About five months ago I was contacted by a group of artists out in Tallinn, the capital of Estonia. They were interested in building a version of the Lumarca for an event called Heliodesign.

They did that. They’ve also gone on to build bigger and better designs, and did something that I hadn’t ever tried before — projection on string outdoors. Here’s a video from another festival, Staro Riga:

Super cool! For more information about the organization that did this, Valgusklubi, check out their Facebook page.


Bleepify it!



About two years ago I built a prototype for a web page that would convert any website into music.  As a prototype, it was very ugly and only worked in a few browsers.  I got busy and forgot about it…

Then just a few weeks back I uncovered it.  To my surprise, I found that most modern browsers now support the technology.  So I gave it a face lift and, well, here we are:

Bleepify it!

My favorite pages to try it on are YouTube and Reddit… I guess I like structure…

I also enjoy how the control bar will convert itself to music as well…  Serves as a nice coda to button everything up.

Anyhow, enjoy all the bleeping-blooping!


Progress as a Liquid Dancer



So I’ve been doing a lot of liquid dancing lately. Apart from going to a few raucous parties now and again, I’ve been trying to contribute back to the scene that has given me so much.

I have been developing a tutorial series on the fundamentals of liquid:

I get some really amazing, warm feedback on these posts from people who are intensely interested in the same stuff that I am. It also helps me strengthen my technique when I force myself to ponder these typically non-linguistic ideas and articulate them for others.

Also, I’ve assisted a friend with a project and co-authored his paper on liquid dance and HCI:

It all started when Diego contacted me with questions about the Lumarca project. We got to talking and hit it off, and I learned that he was really into Laban Movement Analysis, which is a very cool way of trying to capture and describe movement from an academic perspective.

Anyhow, we hit it off very quickly and I kept trying to force down his throat this idea that I had, which was that whenever the true 3d interfaces of tomorrow make it to our living rooms, the people who study liquid will be the power users. Further, that anybody building designs for 3d interaction should be looking at this community, because the fundamental principles that drive this form of dance are perfect principles for designing elegant, efficient, beautiful, and natural 3d interfaces.

Anyhow, one thing led to another, and now this counts as the second masters thesis I’ve inspired! : D (The first being Matt Parker’s on the Lumarca at ITP).

So yeah, just wanted to share all of that. HCI people, meet liquid people. Liquid people, meet HCI people. I have a feeling y’all will be a match made in heaven someday…


Me + Internet = It’s Complicated



I took a little over a year to rethink my relationship to the internet as an artist. It’s given me some amazing emotional rushes. It has also contributed to some rough times. Here’s what I’ve discovered about me, my work, and the internet.

Being “Right”

Before I get into “me + something else”, I want to first review who I was when I started this new media art journey.

Just outa college, I was motivated by one thing: being right. It wasn’t money, power, social good, raising awareness, or anything like that; for better or for worse, I was mostly only motivated by “rightness”.

To me, being right meant designing things the right way. In terms of process, it meant that I wanted to itch away at a riddle until I unraveled its definitive mechanic. Once I was confident that I’d cut away all the fat and left nothing but the core solution, I used that answer as a foundation to construct the logical hyperbole of that idea. This was a tedious process, as most creative processes are.

To assist in this tedium, I needed some tools. I needed a way to record my journey. I needed a place to workshop my thoughts. I needed a stage to exhibit my finished work.

The internet was my toolbox

The internet fit all these needs. It was my diary, my community, and my gallery. I could vent my frustrations and record my insights. I could contemplate my progress, share my ideas, get feedback, and present to the world.

It was a good toolbox to have… I felt well-equipped to test wildly outlandish ideas because I always had a trail of breadcrumbs if I needed to get back home. So what if my conclusions ended up being wrong? I had a paper trail, knew what I wanted, and had a pretty good sense of where I was coming from.

Then the Traffic Came…


World’s Largest Volumetric Display Ever?



I just came back from a trip to Guadalajara where Matt and I built what I believe to be the world’s largest volumetric display in history.

As of right now, I don’t have much documentation (good, clean photos and videos); that comes later.  I hope the images below give a good enough idea:


To give a better sense of scale, below is an image of the project early in the construction process (only about 10% strung):

It’s big.

The structure itself is 40′ tall and 30′ wide and deep.  Its volume is 36,000 cubic feet (about the same internal volume as a 747).  The volume of the rendering area is roughly half that.  Still, it’s much bigger than any other volumetric display I’ve ever heard of.


But this post isn’t about the build process or the technical specs or the frustrations or fears.


I just wanted to write this post to express how happy I am right now.

In 2006 I developed a little idea that people seemed to love — the idea of projecting light onto strings.  I made some prototypes, published some stuff on youtube, and got some attention from random internet people.  That’s about as far as I was initially planning to push this idea.

At the time, I knew that, at least in principle, the idea could scale to fill up an entire theater.  When I had those ideas, I never really imagined that one day that would actually happen.

In some ways, this last installation was a dream come true.  A huge team of some really talented people rallied together to get this installation up and running.  While there were plenty of frustrations and setbacks, everybody’s heart was in the right place, and everybody just placed their faith in my and Matt’s vision.  A lot of the hired help probably had no idea what was happening while they were helping out, but they were really great, and when things got rolling, we pushed through a whole ton of work really quickly.

And as soon as Matt and I and our whole team started reaching maximum velocity with work, one by one, other artists popped up and started setting up their projects.  I was so neck-deep in Lumarca work that I didn’t really have time to engage with other people’s work or process until just before opening, really.  As soon as I finished the Lumarca work, I looked around at the other work and realized that the whole theater had turned into a digital garden with so many interesting projects.  Only then did I realize that this was going to be a damn fine show and that we all had something to be really proud of.

The most touching moment was when the doors opened and people started flooding in from the streets of Guadalajara.  I was told that 1500 people came to the opening — a number that far outdid anybody’s expectations.  I got the sense that the whole city had been patiently waiting for something like this — as if the city let out a sigh of relief — as if all the closeted media arts geeks of Guadalajara came out of hiding at the same time and realized that there was definitely space for them here in this town.

I am so grateful that I was given the chance to share my work with so many eager minds and bright faces.  I’m really proud of my work and of the potential impact that I’ve had.  I think everybody at the festival (artists, administration, crew, security, volunteers) should be proud of the entire festival.  I really don’t think I’m fantasizing when I say that we’ve helped reshape the cultural landscape of Guadalajara.

Lastly, I wanted to share a few things I’m thankful for:

  • That Matt Parker guy
  • Everybody’s faith that this would all work out in the end
  • Marco’s family for extending the invitation out to Tapalpa
  • Every last staffer at MOD, but especially:
    • Margarita Sierra
    • Marco Antonio Castro
    • Juan Carlos Robertson
    • Janet, Sandra, Susana, Augustine
    • Octavio, Oscar, and the whole crew
  • Everybody at Larva, especially:
    • Alejandra
    • The people behind Cafe Bonito & El Menu Del Dia (Yum!)
    • Security
  • The crew
    • Arturo
    • “Mi Hijo” Christian 2 y sus amigos (Especialmente Christian 1, Michael, Giovanni)
    • The students from CAAV (especially Jorge, Argel)
    • Horacio
  • The 3 years of Spanish I took in high school
  • And those skype calls with Morgan
