Wednesday, August 19, 2015

Code Liberation Foundation: Unity-3D UI with Oculus Rift (Lecture + Workshop) - Sept 5

I am excited to announce that I'll be teaching a class on Unity-3D UI and the Oculus Rift on September 5th for the Code Liberation Foundation.

The Code Liberation Foundation offers free/low-cost development workshops in order to facilitate the creation of video game titles by women. Code Liberation events are trans-inclusive and women-only. Women of all skill levels and walks of life are invited to attend.

If you are interested in VR, identify as a woman, and are in the NY area, I'd love to see you there. Space is limited, so sign up today!

Wednesday, July 29, 2015

Working with the Rift is changing how I dream

The first video-game-influenced dream I remember having was back in the late eighties. I was obsessed with Tetris and had very vivid dreams of Tetris blocks falling on me. So it isn’t surprising to me that playing video games can change the way you dream. That said, the specific effect that working with the Rift has had on my dreams did surprise me.

When using the Rift, you are sitting in place and the world moves around or past you, unlike real life, where you move through the world. I find that in my dreams now, no matter what I am dreaming about, there are two kinds of movement: movement where I dream I am moving through a world, and movement where I am still and the world moves around or past me. Perhaps this kind of dreaming is an attempt by my brain to make Rift movement feel more natural to me? Is anyone else dreaming like this?

Tuesday, June 23, 2015

Unity + Leap: Explicit instruction and hand gestures

I have been experimenting a bit with the Leap and looking at getting objects into the user’s hand. In one of my experiments*, the user holds their hand out flat and a fairy appears. This experiment used explicit instruction written on menus to tell the user what to do.

Explicit instruction worked in that my test users did what I wanted them to - nod and hold their hand out flat. The downside, of course, is that it required them to read instructions, which isn’t very immersive or fun. In future experiments, I want to look at implicit instruction, such as having non-player characters perform actions first.

* This demo is now available from the Leap Motion Gallery.

Notes on getting an object to appear on a user’s hand

Some quick notes on getting the fairy to appear on the user’s hand:

You can find all of the hand models in a scene using:

HandModel[] userHands = handController.GetAllPhysicsHands();

To know if the hand is palm up, you can get the normal vector projecting from the hand relative to the controller, using:

Vector3 handNormal = userHands[0].GetPalmNormal();


To know if the hand is open or closed, you can look at the hand’s grab strength. The strength is zero for an open hand, and blends to 1.0 when a grabbing hand pose is recognized:

float grabStrength = userHands[0].GetLeapHand().GrabStrength;


To know where to place the object, you can get the palm position (relative to the controller), using:

Vector3 palmPosition = userHands[0].GetPalmPosition();

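Putting these pieces together, here is a minimal sketch of how the fairy placement might look. This is my own illustrative version, not the demo's actual code: the SpawnOnOpenPalm class name, the fairy field, and the threshold values are assumptions, and only the HandModel/HandController calls come from the Leap Unity Core Assets API.

```csharp
// Sketch: place an object on an open, palm-up hand.
// Assumes the HandModel API from the Leap Unity Core Assets v2;
// class name, "fairy" field, and thresholds are illustrative only.
using UnityEngine;

public class SpawnOnOpenPalm : MonoBehaviour {
    public HandController handController; // the scene's Leap HandController
    public GameObject fairy;              // the object to place on the palm

    void Update () {
        HandModel[] userHands = handController.GetAllPhysicsHands();
        foreach (HandModel hand in userHands) {
            // Palm up: the palm normal points roughly along world up.
            bool palmUp = Vector3.Dot(hand.GetPalmNormal(), Vector3.up) > 0.8f;
            // Open hand: grab strength near zero.
            bool open = hand.GetLeapHand().GrabStrength < 0.1f;
            if (palmUp && open) {
                fairy.SetActive(true);
                fairy.transform.position = hand.GetPalmPosition();
                return;
            }
        }
        fairy.SetActive(false); // hide when no open, palm-up hand is tracked
    }
}
```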
Wednesday, June 3, 2015

Unity + Leap: Raising Your Hand to Get a Character's Attention

A common interaction in real life is to raise your hand to get someone’s attention. We do it when we are meeting someone in a crowd to help them find us, we do it when we are at school to get the teacher’s attention, and we do it as parents to get our child’s attention so they know that we are there and watching. We also do it when we want to hail a cab or make a bid at an auction. It is a simple enough interaction that babies do it almost instinctively. As simple as raising your hand is, using it as a mechanic in a VR environment brings up some interesting questions. How high should the user raise their hand to trigger the interaction? How long does the user need to have their hand raised? And what should happen if the application loses hand tracking?

To experiment with this interaction, I created a demo consisting of a single character idling, minding his own business. When the user raises their hand, the character waves back and a speech bubble appears saying “Hello, there!”

Let’s take a look at the demo setup and then look at how testing the user experience went.


To create the scene, I used basic 3D objects (planes, cubes) and a directional light to create a simple room. The character in the scene is a rigged human character ("Carl") from the Male Character Pack by Mixamo. The speech bubble is created using a world space canvas (see: Thought Bubbles in a Rift scene). To get my hands in the scene, I used the LeapOVRPlayerController from the Leap Unity Core Assets v2.2.4 (see: Seeing your hands in VR).

For the character animation, I used the Idle and Wave animations from the Raw Mocap data package for Mecanim by Unity Technologies (free animations created from motion capture data), and I created an animation controller for the character to control when he is idling and when he waves back at you. The animation controller has two animation states, Idle and Wave. It also has two triggers that can be used to trigger the transition between each state:

The animation controller for the waving character has two states and two triggers.

And, of course, I wrote a script (wavinghello.cs) to detect when the user has raised their hand. The interesting bit of this script is how you know where the user’s hands are and how you know when a hand has been raised high enough so that you can trigger the appropriate animation. Let's take a look at the script's Update() function:

void Update () {
        // handController, centerEyeAnchor, and anim (the character's
        // Animator) are fields on the script; the trigger names here
        // are illustrative.
        HandModel[] userHands = handController.GetAllPhysicsHands();
        if (userHands.Length > 0){
          foreach (HandModel models in userHands){
            // Is the palm above eye level?
            if (models.GetPalmPosition().y >= centerEyeAnchor.transform.position.y){
              anim.SetTrigger("Wave");
            } else {
              anim.SetTrigger("Idle");
            }
          }
        } else {
          // Hand tracking lost: treat it as though all hands are below eye level.
          anim.SetTrigger("Idle");
        }
}
To get all of the hands in the scene, the script uses GetAllPhysicsHands() from HandController.cs:

  HandModel[] userHands = handController.GetAllPhysicsHands();

GetAllPhysicsHands() returns an array of all Leap physics HandModels for the specified HandController. To get each hand's position, the script uses GetPalmPosition(), which returns the Vector3 position of the HandModel relative to the HandController. The HandController is located at 0, 0, 0 relative to its parent object, the CenterEyeAnchor.

The HandController is a child of the CenterEyeAnchor,
located at 0, 0, 0 relative to its parent.

The CenterEyeAnchor object is used by the Oculus Rift integration scripts to maintain a position directly between the two eye cameras. As the cameras are the user’s eyes, if the Y value of a HandModel object's position is greater than the Y value of the centerEyeAnchor, we know the user's hand has been raised above eye level.

The user experience

When testing this demo I was looking at how high the user should raise their hand to trigger the interaction, how long the user should have their hand raised, and what the application should do when it loses hand tracking. Initially, I went with what seemed comfortable for me. I required users to raise their hand (measured from the center of their palm) to above eye level, and I did not require the user's hand to be raised for any specific amount of time. If the Leap lost hand tracking, the application treated it as though all hands were below eye level.

I then grabbed three people to do some testing. The only instruction I gave them was to “raise your hand to get the guy’s attention.” For my first user, the demo worked quite well. He raised his hand and the character waved back as expected. Great so far. My second user was resistant to raising his hand any higher than his nose. He quickly got frustrated as he could not get the guy’s attention. My third user raised his hand and then waved it wildly around so much so that the speech bubble flickered and was unreadable. Quite a range of results for only three users.

For my next iteration, I set the threshold for raising one’s hand a few centimeters below eye level:

models.GetPalmPosition().y >= centerEyeAnchor.transform.position.y - 0.03f

This worked for my second user as it was low enough that he would trigger the interaction, but not so low that he would accidentally trigger it.

I haven’t done anything to address the third user yet, but whatever I do, waving my hands like a maniac is now part of my own testing checklist.

I’d love to hear if anyone else is using this type of mechanic and what their experiences are.

Friday, May 15, 2015

Oculus SDK 0.6.0 and Oculus Rift in Action

Oculus publicly released version 0.6 of their SDK today.  This version represents the biggest change to the API since the first release of the C API in version 0.3, slightly over a year ago.  The new version simplifies the API in a number of ways:
  • Removes the distinction between direct and extended modes from the developer's perspective
  • Removes support for client-side distortion
  • Moves the management of the textures used for offscreen rendering from the client application to the SDK
  • Fixes a number of OpenGL limitations
  • And more...

So those who have purchased or intend to purchase our book may be asking themselves what version we'll be covering exactly.

The short answer is that currently the book covers version 0.5 of the SDK (which itself is almost identical to version 0.4.4).  That isn't likely to change before print.  At this point we're well into the final stages of publication, doing final editorial passes for proofreading.  If we were to update to 0.6 we would have to go back and revise quite a few chapters and start much of that work over again.  That said, when we found out what a big change 0.6 was, there was much discussion, consideration and consternation.

However, with the public release of the SDK, there came a more compelling reason for us to stay with the version we're currently targeting.  In a recent blog post on the Oculus website, Atman Binstock stated...
Our development for OS X and Linux has been paused in order to focus on delivering a high quality consumer-level VR experience at launch across hardware, software, and content on Windows. We want to get back to development for OS X and Linux but we don’t have a timeline.
Since we started writing the book, we've felt very strongly about keeping a cross-platform focus.  While it's likely that most of the initial consumer demand for the Rift will be driven by gamers, we the authors feel that VR has a promising future beyond just the realms of gaming.  Two of the core components that make the Rift possible are powerful consumer graphics processors and cheap, accurate inertial measurement units (IMUs).  Much of the initial consumer demand for both of these technologies was driven by gaming, but in both cases the use of the technologies is spreading far beyond that.

We believe that VR will also grow beyond the confines of gaming, probably in ways we can't even imagine right now.  However, to do that, we need to make sure as many people as possible have the opportunity to try new ideas.  And while Macs and Linux machines may hold an almost insignificant market share when it comes to gaming, we believe that new ideas can be found anywhere and that innovators probably aren't divided proportionately in the same way that operating systems are.

So we were faced with a choice.  We could stick with the 0.5 SDK, which still functions with the new Oculus runtime (at least for the DK1 and DK2).  We could abandon Linux and Mac developers and switch to the 0.6 SDK.  Or we could try to cover both 0.6 and 0.5 in the book.  We chose the first option.

Now, if you're a Windows developer and you want to move to 0.6, we still want to support you.  To that end, we will have a branch of our Github examples that will keep up to date with the latest version of the SDK.

Additionally, we will try to cover the gaps between what the book teaches and the current version of the SDK with articles on this blog.  I'm not talking about errata, but something more like whole updated chapter sections.

We will also be working to create examples focused on other mechanisms of working with VR devices, such as the Razer OSVR and the HTC Vive, using the appropriate SDKs.

We also feel that there is a great deal of value in the book as it stands now that is outside of the low level SDK details.  In fact, to be honest, the bulk of the changes to the book would probably be in chapters 4 and 5, and there are a dozen chapters.

The point is, whatever platform you're on, whatever hardware you're working with, if you want to create a world, we want to help.  The forthcoming edition of Oculus Rift in Action is only the first step.

Monday, May 11, 2015

Book Update!

We are heading into the final stretch with our book Oculus Rift in Action.  All chapters are now complete and we are just about half-way through the final editing process. The book is available for pre-order from Amazon. If you can't wait to get your hands on it, you can order directly from Manning Publications and get access to all chapters now through the Manning Early Access Program.