Why I built "Focal Point VR" -- and other related thoughts

Context

If you know me personally, you may or may not know that, for better or for worse, I have some very strong and deeply rooted opinions on exactly how I think VR ought to be.

Some of this is based on my dance and theater background (I like to think that my 20 years of illusion-style dance experience gives me a head start on virtual object manipulation) -- but most of my opinions are totally unsubstantiated.  The more I dig into my opinions, the more I realize that they are instincts at best, and at worst, unproven beliefs and the feelings that come along with them.  I guess being a dancer helps me feel like a prima donna now and again too : )

But I think this unsureness is okay.  The more I dig into VR, the more I see that nobody really knows how it’ll shake out.  How could we?  The field is so young, and almost all the ideas are still unproven.  Unsubstantiated opinions do have a place at the table -- not as total solutions, but as sparks of inspiration to build new things...

The Impulse to Build

The good part about having an admittedly high-and-mighty point of view is that it produces a lot of useful creative tension.  If it’s blatantly obvious to me how it should work, it should be easy to prove it, right?  Always easier said than done.

The purpose of this build, then, was to make something materially useful for others -- to ignite something new in others so that they will be inspired to build cool things.  If this resonates with others, maybe that’s proof that these ideas do have traction.

The question I wanted to answer was: what sort of framework is necessary to create VR experiences that incorporate joyful human movement?

The Build

So I built Focal Point.  The work mainly involved bringing the interaction patterns from Spatial Computing into the HTC Vive on the Unity platform.

All in all, I think it’s a pretty solid first crack at the problem.  Here’s the promo clip:

The work is emotionally charged and feels exuberant in a way that I feel separates it from other VR content.  While it is obviously nowhere near as polished as that content, the core mechanics of object manipulation and movement feel really great.  The gestures feel physically expressive and never awkward.  For a more in-depth view of the mechanics of movement, check the Focal Point VR Demo Instructions video:

Apart from the end result, I’m also pleased to see my appreciation for the problem set grow.  This stuff is hard, but my instincts do feel as right as I hoped they would.  Implementing an idea always reveals new things.  Often those new things reveal that the idea doesn't have traction, but sometimes (as in this case) they reveal that you should dig in deeper.

Opportunities for Improvement

Saying that this project is successful doesn’t by any means mean it’s perfect.  The two areas that I think can be improved are the code and the communication of the idea.

The Code

This is my first serious C# project, so I’m very likely coding things in a non-C# way, resulting in code that’s harder for people to read and possibly more end-user headaches (literally) due to slower frame rates.

More on the architecture side of things, I don’t quite understand the proper way to author code that is both extensible and easy for beginners to understand.  As such, there’s a lot of repetition where I feel an experienced C# developer would be able to standardize some of this stuff.  (If this is you, please contact me!)
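To make that concrete, here’s a rough sketch (with made-up names, not the actual Focal Point code) of the kind of consolidation I have in mind: the grab/release plumbing that currently gets repeated in every interactable script could live in one shared base component, with subclasses only filling in the part that actually differs.

```csharp
using UnityEngine;

// Hypothetical sketch of consolidating repeated per-interactable logic into a
// shared base class. Class and method names are illustrative, not the real
// Focal Point API.
public abstract class FocalPointInteractable : MonoBehaviour
{
    protected Transform attachedController;   // controller currently holding this object

    public virtual void OnGrab(Transform controller)
    {
        attachedController = controller;
    }

    public virtual void OnRelease()
    {
        attachedController = null;
    }

    // Subclasses implement only the behavior that actually differs.
    protected abstract void FollowController();

    private void Update()
    {
        if (attachedController != null)
        {
            FollowController();
        }
    }
}
```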

Communication of the Idea

My main frustration at this point, though, isn't the code.  It’s that I’m having a difficult time getting at precisely what it is I want to express -- articulating something that I feel in my body.  The best way I’ve found to describe it so far is that human bodies seem to work very nicely with 3D Cartesian points.  3D points are mechanically reliable, emotionally charged (think tip of a knife, stamen of a flower), and, perhaps most importantly, totally kinesthetically / proprioceptively grok-able.  I believe this concept is central to the future of VR IXD.
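As a rough illustration of what I mean (the component name and offset are mine, just for the sketch): the focal point can be treated as nothing more than a Vector3 riding along with the controller, and everything else in the interaction hangs off that single point.

```csharp
using UnityEngine;

// Minimal sketch of the "3D point" idea: the interaction site is just a point
// in space tracked off the controller, not the controller's whole pose.
// The class name and offset value are illustrative assumptions.
public class FocalPointMarker : MonoBehaviour
{
    public Transform controller;                              // tracked Vive controller
    public Vector3 localOffset = new Vector3(0f, 0f, 0.1f);   // e.g. just past the tip

    public Vector3 WorldPoint { get; private set; }

    private void Update()
    {
        // Re-derive the point every frame; the body reasons about this point,
        // not about the controller itself.
        WorldPoint = controller.TransformPoint(localOffset);
    }
}
```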

This project is perhaps an attempt at expressing this, but for now a lot of the details have been left for others to fill in, and, as I said before, I’m pretty intent on trying to fill them in with my potentially over-opinionated perspective… hopefully for the best.

Summary

Focal Point will now serve as a base camp for helping me whack away at the bigger question: what, precisely, are the rules that govern joyful kinesthetic interactivity?

If you’re interested in jumping in on the fun, download and run the demos and send me your feedback.  This is open source, so if you have ideas and would like to add them, reach out or just send me a pull request on the GitHub repo.

 

Preview of "Focal Point" -- a design framework for Vive

Here are some of the results of my work so far...

 

Obviously, a lot of my process here is based on stuff that I built back with Spatial Computing.  So far, the process has been all about boiling down what was present in Spatial Computing and distilling it into its fundamentals.  Fortunately, the fundamentals are really simple underneath it all.  My task now is to rearrange those fundamentals so that it's easy for new VR developers to grok them ASAP.

The good news is, I have lots and lots of experience teaching sophisticated mental models of movement in 3D space, so I'm not too worried about my ability to boil, distill, and redistribute in a meaningful manner.

Anyhow, jump on the mailing list if you want me to let you know when the Unity asset is live.


Sorta tangentially, I just wanted to add a note about some of the cool folks I've had a chance to work with at MRL:

MRL VR + Spatial Computing, Day 2

 

The path to component-ization is becoming clearer.  I can see this becoming a Unity asset to kickstart Vive interactivity.  Just snap some components onto a few things and you'll be able to plug into a simple interaction pattern.
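As a sketch of what "snap on a component" might look like (hypothetical names, not necessarily how the asset will end up): attach one script to any object, and it follows whatever focal point grabs it.

```csharp
using UnityEngine;

// Hypothetical sketch of a snap-on component: drop this on an object and it
// will follow a focal point while that point is "holding" it. The Grab/Release
// calls would be wired up by whatever script reads the Vive trigger.
public class FollowsFocalPoint : MonoBehaviour
{
    private Transform focalPoint;   // the point currently holding this object
    private Vector3 grabOffset;     // object position relative to the point at grab time

    public void Grab(Transform point)
    {
        focalPoint = point;
        grabOffset = point.InverseTransformPoint(transform.position);
    }

    public void Release()
    {
        focalPoint = null;
    }

    private void Update()
    {
        if (focalPoint != null)
        {
            transform.position = focalPoint.TransformPoint(grabOffset);
        }
    }
}
```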

Some observations:

I'm liking this "focal point" concept more and more as time goes on.  It's a powerful and simple idea that's easy to work with both as a developer and user.  Conceptually rock-solid (so far), and the more I lean into it, the more I discover obvious yet still innovative solutions.

The ability to glide an object around against the normal of your focal point was totally accidental in the above code.  This effect will behave differently on non-cube geometry.  Again, though, the principles behind this interaction aren't coded out yet, but it's pretty clear how they're supposed to work.
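Here's my reading of that principle as a sketch (not the accidental code from the demo): take the controller's motion and project it onto the plane perpendicular to the contact normal, so the object slides along the surface instead of pushing through it.

```csharp
using UnityEngine;

// Rough sketch of the "glide against a normal" behavior, using a plane
// projection. This is my interpretation of the principle, not the code that
// produced the effect in the demo.
public static class GlideHelper
{
    // Move an object by the controller's delta, but only in the plane
    // perpendicular to the surface normal, so it slides along the surface.
    public static Vector3 Glide(Vector3 objectPosition, Vector3 controllerDelta, Vector3 surfaceNormal)
    {
        Vector3 slide = Vector3.ProjectOnPlane(controllerDelta, surfaceNormal.normalized);
        return objectPosition + slide;
    }
}
```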

Conceptually, whipping the world around the user (instead of pretending to move the body around a world) is much less taxing on the proprioceptors.  No perceptual dissonance, which is nice.

Environment navigation with this method feels way more natural than the "transportation" pattern of navigation.  After all, in the real world, we always move through spaces by translation, not teleportation.  This method is hardly disorienting.
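A minimal sketch of that navigation idea, assuming an environmentRoot object that parents all scene geometry (but not the camera rig): while gripping, the world gets dragged by the controller's motion, so the user translates through space without any teleport jump.

```csharp
using UnityEngine;

// Sketch of world-drag navigation under the assumption that everything except
// the camera rig lives under environmentRoot. While gripping, the world is
// pulled by the controller's motion, which reads as the user moving through it.
public class WorldDragNavigation : MonoBehaviour
{
    public Transform environmentRoot;   // assumed parent of all scene geometry
    public Transform controller;        // tracked controller transform

    private Vector3 lastControllerPosition;
    private bool dragging;

    public void BeginDrag()
    {
        dragging = true;
        lastControllerPosition = controller.position;
    }

    public void EndDrag()
    {
        dragging = false;
    }

    private void Update()
    {
        if (!dragging) return;

        // Drag the world by the hand's delta, like pulling a map: pulling your
        // hand back moves the world past you, which reads as moving forward.
        Vector3 delta = controller.position - lastControllerPosition;
        environmentRoot.position += delta;
        lastControllerPosition = controller.position;
    }
}
```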

Running into issues with rotation against two focal points.  I'm just doing a Unity Quaternion.LookRotation, which produces some bad results.  The problem is that two points leave an unwanted rotational degree of freedom.  I plan on working around this by planting multiple focal points per controller.
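For reference, here's a sketch of the degenerate case and one way of pinning it down (this is the plan, not shipped code): LookRotation from a single A-to-B direction leaves the roll around that axis arbitrary, while a third planted point can supply the up vector and remove the extra degree of freedom.

```csharp
using UnityEngine;

// Sketch of the rotation problem: a single forward direction (focal point A to
// focal point B) leaves roll undetermined, so a bare LookRotation can flip
// unpredictably. An extra planted point can supply the up vector.
public static class TwoPointRotation
{
    // Under-constrained: Unity fills in world up as the up hint, so the object
    // can snap its roll as the two points move around.
    public static Quaternion FromTwoPoints(Vector3 pointA, Vector3 pointB)
    {
        return Quaternion.LookRotation(pointB - pointA);
    }

    // Constrained: a third reference point defines the up direction, pinning roll.
    public static Quaternion FromThreePoints(Vector3 pointA, Vector3 pointB, Vector3 upReference)
    {
        Vector3 forward = pointB - pointA;
        Vector3 up = upReference - pointA;
        return Quaternion.LookRotation(forward, up);
    }
}
```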

Thanks again to MRL, and also to Dave Tennent for swinging by (and helping w/ a much-needed refactor).

I'm pretty new to Unity with regard to source control.  Next time I'm in the lab I'll probably make a proper GitHub repo.  For now, though, here's the main portion of the code.

MRL VR + Spatial Computing, getting started...

I'm doing some work with NYU's Media Research Lab (MRL).  I've just started in earnest, and here's some of my progress: a quick sketch of an implementation of a Spatial Computing pattern in Unity.

And here's a quick update from the lab on my latest progress...

It feels really, really great to finally get back into a creative-code mindset again.  It's also a great feeling to bring my knowledge about my body to the table... Anyhow, more updates to come.