Wednesday, July 29, 2015

Working with the Rift is changing how I dream

The first video-game-influenced dream I remember having was back in the late eighties. I was obsessed with Tetris and had very vivid dreams of Tetris blocks falling on me. So, it isn’t surprising to me that playing video games can change the way you dream. That said, the specific effect that working with the Rift has had on my dreams did surprise me.

When using the Rift, you sit in place and the world moves around or past you, rather than you moving through the world as in real life. I find that in my dreams now, no matter what I am dreaming about, there are two kinds of movement: movement where I dream I am moving through a world, and movement where I am still and the world moves around or past me. Perhaps this kind of dreaming is an attempt by my brain to make Rift movement feel more natural to me? Anyone else dreaming like this?

Tuesday, June 23, 2015

Unity + Leap: Explicit instruction and hand gestures

I have been experimenting a bit with the Leap and looking at getting objects into the user’s hand. In one of my experiments*, the user holds their hand out flat and a fairy appears. This experiment used explicit instruction written on menus to tell the user what to do.




Explicit instruction worked in that my test users did what I wanted them to - nod and hold their hand out flat. The downside, of course, is that it required them to read instructions which isn’t very immersive or fun. In future experiments, I want to look at implicit instruction, such as having non-player characters perform actions first.

* This demo is now available from the Leap Motion Gallery.

Notes on getting an object to appear on a user’s hand

Some quick notes on getting the fairy to appear on the user’s hand:

You can find all of the hand models in a scene using:

HandModel[] userHands = handController.GetAllPhysicsHands();

To know if the hand is palm up, you can get the normal vector projecting from the hand relative to the controller, using:

userHands[0].GetPalmNormal()

To know if the hand is open or closed, you can look at the hand’s grab strength. The strength is zero for an open hand, and blends to 1.0 when a grabbing hand pose is recognized.

userHands[0].GetLeapHand().GrabStrength

To know where to place the object, you can get the palm position (relative to the controller), using:

userHands[0].GetPalmPosition()
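Putting those pieces together, here is a minimal sketch of how the fairy logic could work (this is not the demo's exact code: the class name, the fairy object reference, the thresholds, the placement offset, and the comparison of the palm normal against world up are all simplifying assumptions):

using UnityEngine;

public class FairyOnPalm : MonoBehaviour {

    public HandController handController;   // the Leap HandController in the scene
    public GameObject fairy;                 // hypothetical fairy object shown on the palm

    void Update () {
        HandModel[] userHands = handController.GetAllPhysicsHands();
        foreach (HandModel hand in userHands) {
            // Palm up: the palm normal points roughly upward (the 0.8 cutoff is an assumption).
            bool palmUp = Vector3.Dot(hand.GetPalmNormal(), Vector3.up) > 0.8f;
            // Open hand: grab strength is 0 for an open hand and blends toward 1 for a grab.
            bool handOpen = hand.GetLeapHand().GrabStrength < 0.2f;

            if (palmUp && handOpen) {
                fairy.SetActive(true);
                // Place the fairy just above the palm (the offset is an assumption).
                fairy.transform.position = hand.GetPalmPosition() + Vector3.up * 0.05f;
                return;
            }
        }
        // No open, palm-up hand found this frame: hide the fairy.
        fairy.SetActive(false);
    }
}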

Wednesday, June 3, 2015

Unity + Leap: Raising Your Hand to Get a Character's Attention

A common interaction in real life is to raise your hand to get someone’s attention. We do it when we are meeting someone in a crowd to help them find us, we do it when we are at school to get the teacher’s attention, and we do it as parents to get our child’s attention so they know that we are there and watching. We also do it when we want to hail a cab or make a bid at an auction. It is a simple enough interaction that babies do it almost instinctively. As simple as raising your hand is, using it as a mechanic in a VR environment brings up some interesting questions. How high should the user raise their hand to trigger the interaction? How long does the user need to have their hand raised? And, what should happen if the application loses hand tracking?

To experiment with this interaction, I created a demo consisting of a single character idling, minding his own business. When the user raises their hand, the character waves back and a speech bubble appears saying “Hello, there!”



Let’s take a look at the demo setup and then look at how testing the user experience went.

Setup

To create the scene I used basic 3D objects (planes, cubes) and a directional light to create a simple room. The character in the scene is a rigged human character ("Carl") from the Male Character Pack by Mixamo. The speech bubble is created using a world space canvas (see: Thought Bubbles in a Rift scene). To get my hands in the scene, I used the LeapOVRPlayerController from the Leap Unity Core Assets v2.2.4 (see: Seeing your hands in VR).

For the character animation, I used the Idle and Wave animations from the Raw Mocap data package for Mecanim by Unity Technologies (free animations created from motion capture data), and I created an animation controller for the character to control when he is idling and when he waves back at you. The animation controller has two animation states, Idle and Wave. It also has two triggers that are used to transition between the states:


The animation controller for the waving character has two states and two triggers.

And, of course, I wrote a script (wavinghello.cs) to detect when the user has raised their hand. The interesting bit of this script is how you know where the user’s hands are and when a hand has been raised high enough to trigger the appropriate animation. Let's take a look at the script's Update() function:

void Update () {
    // Get all of the Leap physics hands currently tracked by the HandController.
    HandModel[] userHands = handController.GetAllPhysicsHands();
    if (userHands.Length > 0) {
        foreach (HandModel models in userHands) {
            // A hand raised above eye level (the CenterEyeAnchor) triggers the wave.
            if (models.GetPalmPosition().y >= centerEyeAnchor.transform.position.y) {
                anim.SetTrigger("Wave");
                changeMenuDisplay(speechbubble1);
            } else {
                anim.SetTrigger("Idle");
                changeMenuDisplay(speechbubble0);
            }
        }
    } else {
        // No hands tracked: treat it the same as all hands below eye level.
        anim.SetTrigger("Idle");
        changeMenuDisplay(speechbubble0);
    }
}


To get all of the hands in the scene, the script uses GetAllPhysicsHands() from HandController.cs:

  HandModel[] userHands = handController.GetAllPhysicsHands();

GetAllPhysicsHands() returns an array of all Leap physics HandModels for the specified HandController. To get each hand's position, the script uses GetPalmPosition(), which returns the Vector3 position of the HandModel relative to the HandController. The HandController is located at 0, 0, 0 relative to its parent object, the CenterEyeAnchor.

The HandController is a child of the CenterEyeAnchor and is located at 0, 0, 0 relative to it.

The CenterEyeAnchor object is used by the Oculus Rift integration scripts to maintain a position directly between the two eye cameras.  As the cameras are the user’s eyes, if the Y value of a HandModel object's position is greater than the Y value of the centerEyeAnchor, we know the user's hand has been raised above eye level.
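For context, the Update() function above relies on a few fields and a helper that aren't shown in the listing. Sketched out, the rest of the script looks roughly like this (the changeMenuDisplay() implementation and the Inspector wiring shown here are illustrative rather than exact):

using UnityEngine;

public class wavinghello : MonoBehaviour {

    public HandController handController;   // the Leap HandController, a child of the CenterEyeAnchor
    public GameObject centerEyeAnchor;       // the CenterEyeAnchor from the OVR camera rig
    public Animator anim;                    // the character's Animator with the Idle and Wave triggers
    public GameObject speechbubble0;         // speech bubble state shown while idling (assumed empty)
    public GameObject speechbubble1;         // the "Hello, there!" speech bubble

    // Illustrative helper: show the requested speech bubble and hide the other.
    void changeMenuDisplay (GameObject bubble) {
        speechbubble0.SetActive(bubble == speechbubble0);
        speechbubble1.SetActive(bubble == speechbubble1);
    }

    // The Update() function shown above goes here.
}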

The user experience

When testing this demo I was looking at how high the user should raise their hand to trigger the interaction, how long the user should have their hand raised, and what the application should do when it loses hand tracking. Initially, I went with what seemed comfortable for me. I required users to raise their hand (measured from the center of the palm) above eye level, and I did not require the hand to be raised for any specific amount of time. If the Leap lost hand tracking, the application treated it as though all hands were below eye level.

I then grabbed three people to do some testing. The only instruction I gave them was to “raise your hand to get the guy’s attention.” For my first user, the demo worked quite well. He raised his hand and the character waved back as expected. Great so far. My second user was resistant to raising his hand any higher than his nose. He quickly got frustrated as he could not get the guy’s attention. My third user raised his hand and then waved it wildly around so much so that the speech bubble flickered and was unreadable. Quite a range of results for only three users.

For my next iteration, I set the threshold for raising one’s hand to a few centimeters below eye level.

models.GetPalmPosition().y >= centerEyeAnchor.transform.position.y - 0.03f


This worked for my second user: the threshold was low enough that he could trigger the interaction without raising his hand above his nose, but not so low that he would trigger it accidentally.

I haven’t done anything to address the third user yet, but whatever I do, waving my hands like a maniac is now part of my own testing checklist.
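One option would be to require the hand to stay on one side of the threshold for a short hold time before switching states, so quick flailing can't toggle the speech bubble every frame. Here is a rough sketch of that idea (the UpdateRaisedState() helper and the half-second hold time are hypothetical, not code from the demo; Update() would compute a single above-or-below result across all hands and call this once per frame):

    public float holdTime = 0.5f;    // how long the hand must stay raised (or lowered) before switching
    private float stateTimer = 0f;
    private bool handRaised = false;

    void UpdateRaisedState (bool aboveThreshold) {
        if (aboveThreshold != handRaised) {
            // The hand has crossed the threshold; only switch once it stays there for holdTime seconds.
            stateTimer += Time.deltaTime;
            if (stateTimer >= holdTime) {
                handRaised = aboveThreshold;
                stateTimer = 0f;
                anim.SetTrigger(handRaised ? "Wave" : "Idle");
                changeMenuDisplay(handRaised ? speechbubble1 : speechbubble0);
            }
        } else {
            // Still on the current side of the threshold, so reset the timer.
            stateTimer = 0f;
        }
    }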

I’d love to hear if anyone else is using this type of mechanic and what their experiences have been.

Friday, May 15, 2015

Oculus SDK 0.6.0 and Oculus Rift in Action

Oculus publicly released version 0.6 of their SDK today.  This version represents the biggest change to the API since the first release of the C API in version 0.3, slightly over a year ago.  The new version simplifies the API in a number of ways:
  • Removes the distinction between direct and extended modes from the developer's perspective
  • Removes support for client-side distortion
  • Moves the management of the textures used for offscreen rendering from the client application to the SDK
  • Fixes a number of OpenGL limitations
  • ...and more

So those who have purchased or intend to purchase our book may be asking themselves what version we'll be covering exactly.

The short answer is that currently the book covers version 0.5 of the SDK (which itself is almost identical to version 0.4.4).  That isn't likely to change before print.  At this point we're well into the final stages of publication, doing final editorial passes for proofreading.  If we were to update to 0.6 we would have to go back and revise quite a few chapters and start much of that work over again.  That said, when we found out what a big change 0.6 was, there was much discussion, consideration and consternation.

However, with the public release of the SDK, there came a more compelling reason for us to stay with the version we're currently targeting.  In a recent blog post on the Oculus website, Atman Binstock stated...
Our development for OS X and Linux has been paused in order to focus on delivering a high quality consumer-level VR experience at launch across hardware, software, and content on Windows. We want to get back to development for OS X and Linux but we don’t have a timeline.
Since we started writing the book, we've felt very strongly about keeping a cross-platform focus.   While it's likely that most of the initial consumer demand for the Rift will be driven by gamers, we the authors feel that VR has a promising future beyond just the realms of gaming.  Two of the core components that make the Rift possible are powerful consumer graphics processors and cheap, accurate inertial measurement units (IMUs).  Much of the initial consumer demand for both of these technologies was driven by gaming, but in both cases the use of the technologies is spreading far beyond that.

We believe that VR will also grow beyond the confines of gaming, probably in ways we can't even imagine right now.  However, to do that, we need to make sure as many people as possible have the opportunity to try new ideas.  And while Macs and Linux machines may hold an almost insignificant market share when it comes to gaming, we believe that new ideas can be found anywhere and that innovators probably aren't divided proportionately in the same way that operating systems are.

So we were faced with a choice.  We could stick with the 0.5 SDK, which still functions with the new Oculus runtime (at least for the DK1 and DK2).  We could abandon Linux and Mac developers and switch to the 0.6 SDK.  Or we could try to cover both 0.6 and 0.5 in the book.  We chose the first option.

Now, if you're a Windows developer and you want to move to 0.6, we still want to support you.  To that end, we will maintain a branch of our GitHub examples that will be kept up to date with the latest version of the SDK.

Additionally, we will try to cover the gaps between what the book teaches and the current version of the SDK with articles on this blog.  I'm not talking about errata, but something more like whole updated chapter sections.

We will also be working to create examples focused on other mechanisms of working with VR devices, such as the Razer OSVR and the HTC Vive, using the appropriate SDKs.

We also feel that there is a great deal of value in the book as it stands that lies outside of the low-level SDK details.  In fact, to be honest, the bulk of the changes to the book would probably be in chapters 4 and 5, and there are a dozen chapters.

The point is, whatever platform you're on, whatever hardware you're working with, if you want to create a world, we want to help.  The forthcoming edition of Oculus Rift in Action is only the first step.

Monday, May 11, 2015

Book Update!

We are heading into the final stretch with our book Oculus Rift in Action.  All chapters are now complete and we are just about half-way through the final editing process. The book is available for pre-order from Amazon. If you can't wait to get your hands on it, you can order directly from Manning Publications and get access to all chapters now through the Manning Early Access Program.


Thursday, April 2, 2015

Unity + Leap: Hand Selection UI Prototype

Immersion is definitely affected by how closely your avatar's hands resemble your own.  In the demo I am working on, I want the user to be able to select the hands they will have in the game before entering it.  A prototype in-game UI for hand selection is shown in the video below.


To create this UI, I created a world space canvas and added buttons for each of the available hands. To each button, I added a box collider as a child object. A script attached to the box collider detects when a hand has collided with it.  To detect a hand, I used the Leap libraries and then checked whether the colliding object is part of a Leap HandModel.
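That collider script looks something like the sketch below (treat this as illustrative rather than the exact script: the class name and the UnityEvent hook are placeholders, and it assumes the box collider is marked as a trigger and that the Leap physics hands carry Rigidbody components):

using UnityEngine;
using UnityEngine.Events;

public class HandButtonTrigger : MonoBehaviour {

    public UnityEvent onHandTouch;   // wired up in the Inspector to select the corresponding hand set

    void OnTriggerEnter (Collider other) {
        // The finger and palm colliders are children of a HandModel, so walk up the hierarchy.
        HandModel hand = other.GetComponentInParent<HandModel>();
        if (hand != null) {
            onHandTouch.Invoke();
        }
    }
}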

In this prototype UI, I am using large buttons for two reasons. First, reading small text in the Rift can be difficult, and second, while using the Leap allows me to see my hands, in my experience it does not track finger motion well enough for detailed interactions to be effective. In several of the tests I ran, the user's hand was generally in the right place, but more often than not the fingers were at different angles than those of the user's actual hand. The effect was that my users seemed to have the fine motor skills of a toddler - they could reach out and touch everything but they didn't have a lot of control. On the positive side, when the user has hands in the game, it appears to be very natural for users to try to touch items with their hands. Even when users don't have visible hands in the scene, you'll often see them reaching out to try to touch things. While I have the start button say "Touch to Start," once users know their hands can affect the scene they get it right away and don't need prompting or other instruction.

Leap Motion has just released a "Best Practices Guide" and I'll be looking at incorporating many of the ideas documented there in future prototypes.

Tuesday, March 24, 2015

Unity: Mac Direct to Rift Plugin by AltspaceVR

When using the Rift on a Mac, Oculus recommends using extended mode for the monitor configuration as it provides better performance.  And while performance is indeed much better in extended mode (running the Tuscany demo in extended mode I get 75 FPS, while in mirrored mode I get 46 FPS with the refresh rate set to 75 hertz and 60 FPS with it set to 60), getting the app to run on the extended portion of the desktop can be a bit of a pain.

To help provide a better Mac user experience, the folks at AltspaceVR have created a plugin that can be integrated into your Unity project so that your application can be launched seamlessly onto the Rift when using a Mac. They have kindly made this project available on GitHub.