Tuesday, July 1, 2014

Unity 4: Rift UI experiments

I have been experimenting with creating UIs for Rift applications using Unity 4. I figured it might save people some time to see the mistakes I've made so they can avoid them.

I started with a basic scene and a script that used the UnityGUI controls (OnGUI with GUI.Box and GUI.Button) to create a simple level loader. Here is what the scene looked like displayed on a typical monitor:




As a quick test to see how well this GUI would translate to the Rift, I used the OVRPlayerController prefab from the Oculus Unity Pro 4 Integration package to get the scene on the Rift. And, well, as you can see, it doesn’t work at all.



The problem is that UnityGUI creates the GUI as a 2D overlay. Because the GUI isn't in 3D space, it doesn't get rendered properly for the Rift's stereo views and therefore can't be viewed properly on the Rift. To create the same GUI in 3D space, I used the VRGUI package posted by boone188 on the Oculus forums. This package creates a plane in 3D space onto which the GUI is rendered. Following the examples in that package, I created the same basic menu as before, but now in 3D space. It looked like this:



This GUI works, but it doesn't feel right. Having a GUI plane in between me and the world I created just isn’t very immersive. For an immersive experience, you need to integrate the UI into the world you are building. As this is a level-selection menu and I’ve built it around an elevator, making the elevator buttons into the level-selection buttons is a natural choice. Here’s the concept for the scene (and if this were drawn by a competent artist, I think it could look good):



To select a level, the user just needs to look at it (raycasting from the point between the two cameras is used to determine where the user is looking). The button then turns green to show it has been selected, and the user can confirm the selection using the gamepad. 
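The gaze test itself is just a ray-geometry intersection. Here's a rough, engine-agnostic sketch of the idea (the GazePicker and Box names are mine for illustration, not from Unity or the Oculus SDK): cast a ray from the midpoint between the two eye cameras along the view direction and select the nearest button bounding box it crosses.

```java
// Sketch of gaze-based selection: a ray from the midpoint between the two
// cameras is tested against each button's axis-aligned bounding box using
// the slab method; the closest hit is the "looked at" button.
class GazePicker {
    /** Axis-aligned bounding box for a selectable button. */
    static class Box {
        final double[] min, max; // box corners as {x, y, z}
        Box(double[] min, double[] max) { this.min = min; this.max = max; }

        /** Distance along the ray to the box, or +infinity on a miss. */
        double intersect(double[] origin, double[] dir) {
            double tNear = Double.NEGATIVE_INFINITY;
            double tFar = Double.POSITIVE_INFINITY;
            for (int i = 0; i < 3; i++) {
                if (dir[i] == 0.0) {
                    // Ray parallel to this slab: must start inside it.
                    if (origin[i] < min[i] || origin[i] > max[i]) {
                        return Double.POSITIVE_INFINITY;
                    }
                    continue;
                }
                double t1 = (min[i] - origin[i]) / dir[i];
                double t2 = (max[i] - origin[i]) / dir[i];
                tNear = Math.max(tNear, Math.min(t1, t2));
                tFar = Math.min(tFar, Math.max(t1, t2));
            }
            return (tNear <= tFar && tFar >= 0)
                ? Math.max(tNear, 0) : Double.POSITIVE_INFINITY;
        }
    }

    /** Returns the index of the closest box hit by the gaze ray, or -1. */
    static int pick(double[] eyeMidpoint, double[] lookDir, Box[] buttons) {
        int best = -1;
        double bestT = Double.POSITIVE_INFINITY;
        for (int i = 0; i < buttons.length; i++) {
            double t = buttons[i].intersect(eyeMidpoint, lookDir);
            if (t < bestT) { bestT = t; best = i; }
        }
        return best;
    }
}
```

In Unity the engine's built-in physics raycast does this for you; the sketch just shows the math the gaze check boils down to.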

Thursday, June 26, 2014

Unity 4: Using the player eye height from the user's profile

I had been planning on writing a post on setting the default character eye height in Unity a while ago, but I got sidetracked, and then Oculus put out the preview version of the 0.3 SDK and Unity Integration package. The first preview version didn’t work on the Mac, so although I had read there were some significant changes, I wasn’t able to test them out. Now that it is available on the Mac and I’ve had some time to play with it, I wanted to come back to the character default height question.

The big change for character height from version 0.2 to 0.3 is that the player eye height set in the Oculus Profile tool is now used by default by the OVRCameraController script to set the character eye height.

Previously, if you wanted to use the player's height as set in the user profile, you needed to go into the Inspector for the OVRCameraController script (attached to the OVRCameraController prefab) and check the box for Use Player Eye Height. As of version 0.3, this box is checked by default.



Assuming that the player has created a user profile, this makes it easy to have a character of the player’s actual height, and even if a profile isn’t found, the SDK provides values for a default profile. But as we looked at in another post, Working around the SDK default profile setting for gender, there are issues with those default values, as they are for what Oculus terms the "common user" (a.k.a. an average adult male) and not for the average human.

The solutions we looked at there for the case where the default profile is in use included documentation, changing the SDK values, and giving the user an option to set a profile. As when working directly with the SDK, when developing with Unity, emphasizing in the documentation that a profile needs to be set is still a big part of the solution (and a good idea anyway). And, as when working directly with the SDK, changing the default values is still not a good solution.

But unlike working directly with the SDK, when working with Unity, checking which profile is in use is problematic. When using the SDK directly, there is a call to get the name of the profile being used:

ovrHmd_GetString(hmd, OVR_KEY_USER, "")

Unfortunately, the Unity integration package does not appear to include a similar call for getting the user profile name. The OVRDevice script provides the following functions to get profile data:

GetPlayerEyeHeight(ref float eyeHeight);
GetIPD(ref float IPD);
IsMagCalibrated();
IsYawCorrectionEnabled();


As you can see, there isn't a call to get the name of the profile in use. So, what to do? One suggestion I saw on the Oculus forums was to at least warn the user that the headset has not been calibrated. You can do that by checking whether OVRDevice.IsMagCalibrated() and OVRDevice.IsYawCorrectionEnabled() return false.

While this is a good start, it would be nice if there were an easy way to let users see which profile is in use and switch between available profiles.

Sunday, June 8, 2014

Creating JOVR

When the Rift DK1 came out, the supplied software supported nothing but C++ on Windows; there was no Linux, OSX, or Java support. While Linux and OSX support were eventually added, and the 0.3.x versions of the SDK added support for straight C as well, if you were a Java developer who wanted to work with the Rift you were out of luck.

Early on I set about digging into the SDK's inner workings to see how easily it could be ported to Java.  Eventually I succeeded in replicating both the head tracker reading code and some of the sensor fusion code in a Java project I called Jocular.  It was basically functional, but it didn't have all the same features as the C++ SDK.  In particular I never spent a lot of effort on magnetic yaw correction or prediction.  Largely I lost interest in it because I didn't want to make myself responsible for constantly porting stuff in order to maintain feature parity with the official codebase.

When the 0.3.x SDK came out, it included a simplified C API that provided not only access to the sensors but also an implementation of the distortion rendering inside the SDK itself.  This was something of a boon to developers, because implementing the distortion is non-trivial.  It was also a boon to Java developers, because the tooling for creating Java bindings to C is quite mature.  That's not to say that binding to the C++ code would have been impossible, but the C API is much simpler, requiring fewer hurdles to jump to get the same results.  So I bumped the Jocular version from 1 to 2 and set about producing just such a binding.

JNA vs JNI

Binding from Java to C can be done in a couple of ways, typically either through JNI (Java Native Interface) or through JNA (Java Native Access).

JNI functionality requires that you write C code specifically designed for Java to call.  Within this C code you can then call other native C libraries, or C++ code for that matter.  This typically produces the fastest implementation, but it's a bit of overhead I wanted to avoid.

JNA, on the other hand, can be used from pure Java.  The magic of loading native libraries, locating the appropriate functions within them, and converting parameters is all baked into the JNA library, which internally uses JNI.  So really, there's only JNI for accessing C code, but the JNA library makes it easy to do in a library-agnostic way.

JNA tends to be slower than JNI, but the difference largely reflects how much information you're passing through parameters.  The Rift API is simple and doesn't require much information to be passed across it: the head tracking data and timing information are trivial in size, while the OpenGL subsystem actually holds the bulk of the information that is used for distortion.  So despite the ostensible speed difference, the ease of use of JNA wins out here.

Building the binding

Actually creating the Java classes to map to the Oculus SDK C API was the next task, and not one I relished.  While the functions and structures I wanted to map weren't complex, there are a couple dozen of them, and going through them would have been tedious.  Fortunately, there exist tools to do this for me.  In particular I found a tool called JNAerator, which would accept as input a C header file and produce as output Java classes.  This was suitable for producing a good first pass implementation and was essentially what I used for the first release of Jocular 2.

JNAerator is kind of a finicky tool.  There's both a command line interface and a GUI of sorts, but neither is exactly what you'd think of as polished.  It's possible there are better tools out there, but I started working with this one, and it was suitable for doing the bulk of the work I needed done: producing output classes for all the required structures, constants for all the required enums, and static methods on a library wrapper for all the functions.

The mapping was imperfect.  For instance, the C API header declares the structures with underscores in their names and uses typedefs to produce non-underscored names for use.  The generated code didn't recognize the typedefs as important, so the generated types include the underscores.

All of the types also include ovr in the structure name, since C has no concept of namespacing, so all types are always in the global namespace.  Java has strong namespacing, so the naming verbosity is unnecessary.

So for instance, the OVR C API has a raw type name ovrSensorState_.  The Java name should be SensorState.  Fortunately, JNA doesn't care about the type names, just the signatures, so it was a simple matter of going through the API and using Eclipse to refactor the names.

The next problem was the access pattern for using the ovrHmd handle provided by the SDK.  This handle is essentially a pointer to a class in the C API implementation (which is written in C++), and most of the functions in the C API take the handle as their first parameter.  This pattern indicates that the best mapping for the type was to create a class that wrapped the SDK functions internally.  Fortunately the generator for the Java code was smart enough to recognize this pattern and create members on the Hmd type that wrapped calls to the static members in the library.  However, some of the calls were ripe for some improvement.

Some of the functions needed to return complex data but also indicate an error state.  In these cases the C API provides a function parameter that is used for output, and the function's return value itself is a flag, where a non-zero value indicates success.  In Java, it's preferable to use the language's exception handling to indicate error conditions.  So I modified the wrappers for these functions to no longer require the user to pass in output parameters, and to raise an exception in the case of failure.

Consider the case of the rendering configuration function.  In the C API it's declared like this:

ovrBool ovrHmd_ConfigureRendering(
  ovrHmd hmd,
  const ovrRenderAPIConfig* apiConfig,
  unsigned int distortionCaps,
  const ovrFovPort eyeFovIn[2],
  ovrEyeRenderDesc eyeRenderDescOut[2]);
The generated code produces this equivalent (after my refactoring of the naming conventions):
byte ovrHmd_ConfigureRendering(
  Hmd hmd,
  RenderAPIConfig apiConfig,
  int distortionCaps,
  FovPort eyeFovIn[],
  EyeRenderDesc eyeRenderDescOut[]);
This is made much more user friendly in the Hmd wrapper class like so:
public EyeRenderDesc[] configureRendering(
    RenderAPIConfig apiConfig,
    int distortionCaps,
    FovPort eyeFovIn[]) {
  EyeRenderDesc eyeRenderDescs[] =
      (EyeRenderDesc[]) new EyeRenderDesc().toArray(2);
  if (0 == OvrLibrary.INSTANCE.ovrHmd_ConfigureRendering(
      this, apiConfig, distortionCaps,
      eyeFovIn, eyeRenderDescs)) {
    throw new IllegalStateException(
        "Unable to configure rendering");
  }
  configuredRendering = true;
  return eyeRenderDescs;
}
The end user no longer needs to allocate the EyeRenderDesc array and pass it in.  The wrapper function handles this and simply returns the results, or throws an exception in the case of failure.

Finally, in order to make the API more completely accessible through the Hmd class alone, I created static methods in the Hmd type to wrap the corresponding methods in the OvrLibrary interface.  
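The overall shape of the wrapper can be sketched in isolation. This is not the actual JOVR code; FakeOvrLibrary below is a stand-in for the generated OvrLibrary binding so the example is self-contained, and the handle is a plain long rather than a JNA pointer. But it shows the conventions described above: handle-taking library functions become instance methods, handle-free ones become static methods, and status flags become exceptions.

```java
// Sketch of the Hmd wrapping pattern: users only ever touch the Hmd
// class; the library-style static binding stays an internal detail.
class Hmd {
    private final long handle; // stands in for the native ovrHmd pointer

    private Hmd(long handle) { this.handle = handle; }

    /** Wraps a handle-free library call as a static method on Hmd. */
    static Hmd create(int index) {
        long h = FakeOvrLibrary.ovrHmd_Create(index);
        if (h == 0) {
            throw new IllegalStateException("Unable to open HMD " + index);
        }
        return new Hmd(h);
    }

    /** Wraps a handle-taking call; the status byte becomes an exception. */
    float[] getSensorOrientation() {
        float[] out = new float[4]; // output parameter, C style
        if (0 == FakeOvrLibrary.ovrHmd_GetOrientation(handle, out)) {
            throw new IllegalStateException("Sensor read failed");
        }
        return out; // caller never sees the output-parameter idiom
    }

    /** Stand-in for the generated static library binding. */
    static class FakeOvrLibrary {
        static long ovrHmd_Create(int index) { return index >= 0 ? 1 : 0; }
        static byte ovrHmd_GetOrientation(long hmd, float[] out) {
            out[3] = 1.0f; // identity quaternion (x, y, z, w)
            return 1;      // non-zero means success
        }
    }
}
```

A caller then works entirely in idiomatic Java: `Hmd hmd = Hmd.create(0);` followed by ordinary method calls, with failures surfacing as exceptions rather than status codes.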

PAX Dev 2014 panel

We'll be giving a presentation at PAX Dev 2014 on topics culled from chapters 7 and 8 of our book, titled Pitfalls & Perils of VR Development: How to Avoid Them.

Wednesday, June 4, 2014

Working around the SDK default profile setting for gender

Here’s how the Oculus SDK default profile handles gender:

#define OVR_DEFAULT_GENDER                   "Male"

According to Oculus, the purpose of this value is for avatar selection. This makes the logic behind the “male” assumption just plain odd. If a user doesn’t specify their gender, they are “male,” but if they create a profile they can explicitly set their gender to “unspecified.” It would be much better if the data matched reality. If a user’s gender is actually unspecified, it should be set to “unspecified.” Users can choose to leave it that way, or they can choose to specify male or female. I really don’t like having to wonder whether the data I’m using is real.

But more than bad data, what those default values say to me is that Oculus believes that by default men should have a good experience with the Rift, but for the same level of experience, women should be required to create a profile. This assumption sets up an artificial barrier to entry for women in VR. Not cool.

When I asked Oculus Support about this assumption, I was told that these values are for the "common user" and "if you are worried about your users having a non-optimal experience, then you should make sure they understand they need to configure the device before usage.”

While I agree that it is absolutely correct that users need to set up a profile for the best experience, that does not change the sexist implication of the current SDK default gender value, nor does it change the problem of responding to false data. But just because the SDK default is sexist doesn't mean my software has to be. So, what can I do to work around the SDK default profile setting for gender? I'm not sure yet what the best solution is, but some ideas are:

Stress the need to create a profile in the product documentation.  This is a  good start, but relying on users to read and follow the documentation, something we all know many users will not do, isn't a complete solution.

Make sure the user has at least been offered the chance to create a personal profile. It is certainly possible to prompt the user with something like “This is the profile being used, do you want to use it or select/create another?” This approach will get people who didn't read and follow the documentation to create a profile, something we want to do anyway, but it also means sticking the assumption that the user is male in the face of every woman who uses the Rift. I don’t want to be that rude.

Tinker with the SDK and change the defaults. This is an option. But, this approach creates a maintenance issue that will need to be addressed for each update of the SDK.


Wednesday, May 21, 2014

Rift First Reactions [DK 1]

Everyone who knows I own a Rift wants to try it out. In fact, I can’t count how many demos I’ve given since getting it last October. Aside from being really fun to see that "wow" moment when someone first moves their head, it has also given me some good ideas for VR usability. Unfortunately, with all the demos I did, I wasn't writing anything down. Time to stop screwing around and do some science.

About

These notes include reactions from 26 first time Rift users (14 women, 12 men), ranging in age from 11 to 84.

The demos tried were Tuscany, Don’t Let Go, Meld Media GigantosaurusVR, Chicken Walk, Dino Town, Proton Pulse, TNG Engineering, Enterprise, and Fortune Teller (my own work-in-progress). The PS3 controller was used with Tuscany, Chicken Walk, and Enterprise; the keyboard with Don’t Let Go, Tuscany, and TNG Engineering. Nothing is required for Meld Media GigantosaurusVR and Proton Pulse. Time spent using the Rift was between 5 and 10 minutes per person.

Of the 26:
  • 6 would consider themselves to be gamers. 
  • 8 said they get motion sickness easily.

Observations

  • 6 did not look around during the demo until prompted. 
  • 14 felt symptoms of motion sickness.
  • 7 of the 8 who had said they get motion sickness easily reported symptoms while using the Rift. 
  • Everyone who used the keyboard during a demo needed help finding the keys more than once. 
  • 2 gamepad users had trouble using the gamepad. All others had no issues.
  • 12 commented on having an avatar or lack of avatar (depending on the demo used).

User comments

On motion sickness
  • “Felt motion sickness when walking along the curved wall [of the Enterprise Bridge] but not elsewhere.” 
  • “Down the stairs was nauseating.” 
  • “Turning around was very nauseating.” 
  • “Felt motion sick all of a sudden. I thought my body should have hit something going through the door but it didn’t.” [Note: this was a very large/wide man.]
  • “Looking in one direction while moving in another direction was the most nauseating.”
On having an avatar/ lack of avatar
  • “I felt comfortable being an orb in space but it was like I was viewing video from a hovercraft. In Don’t Let Go the body was very convincing.” 
  • “I have a shoulder!”  Said by a smallish woman. This was after the knives in Don’t Let Go. 
  • “Where are my legs? This is weird!”
  • "Found my shirt!"
  • “Very visceral.” “That is some messed up stuff.” [Said in regards to the spider.]
  • “I have no body! This is so funny!"
  • “Cool! I can see myself!”

 On what they loved
  • “I love the butterflies!” (3) 
  • “Oh, are those birds? How cool!”
  •  “I felt totally immersed.”
Other
  • “I didn’t think to look around when looking at a screen.” 
  • “I wish I could look at things closely.”
  • "Felt like I wanted to reach out and touch things”
  • “The only way to play is in a swivel chair.”


Note: This post was updated on June 23, 2014 to include the reactions of 6 additional test users. This will be the last of the test results for the DK 1.

Thursday, April 17, 2014

Unity Pro 4: Using the OVRCameraController prefab

In a previous post, I looked at getting started with Unity Pro 4 using the OVRPlayerController prefab included with the Oculus Unity Pro 4 Integration package. In this post, I want to look at using the OVRCameraController prefab with a custom player controller. Getting started requires the same basic setup as using the OVRPlayerController prefab: creating a scene and then adding the Oculus integration package to the project's assets. To save time, I used the same scene I created for the last post, a beach with palm trees, for this test as well. As I planned to use the Unity first person player controller as my character controller, I also added the Unity Character Controller standard assets to my project.


The OVRCameraController prefab included in the Oculus Integration package contains the stereo cameras used for Rift integration. To use this prefab, you need to attach it to a moving object in your scene. For the moving object in my scene I used a simple player controller. To create the player controller, I first added an empty GameObject (GameObject > Create Empty) and, to make it easier to keep track of, renamed it "Player". I then added a Character Controller (Component > Physics > Character Controller) to my Player object and attached the first person player controller (FPSInputController.js) and mouse look (MouseLook.cs) scripts found in the Character Controller standard assets to the Player. After I positioned my Player in the scene so that it wasn't colliding with the plane, this is what it looked like expanded in the Inspector.


I then grabbed the OVRCameraController prefab from Assets > Prefabs in the Project menu, and simply added it as a child of the Player.


And gave it a test run.


With my Rift connected and active, the scene was displayed with left and right images in the Rift’s oval views. If my Rift was not connected and active, the left and right views would simply be rectangles.

I was able to move the character around, but MouseLook did not work. To get it functioning, I made the following changes:
  1. In the Inspector for OVRCameraController, I set Follow Orientation to OVRCameraController
  2. I then edited the FPSInputController.js script so that it would recognize OVRCameraController. 
    • At the top, I added a new public variable called ovrCamera of type GameObject:  public var ovrCamera : GameObject; 
    • I changed this line:
       motor.inputMoveDirection = transform.rotation * directionVector; 
      to this, so that the transform used would be from the object stored in ovrCamera:
      motor.inputMoveDirection = ovrCamera.transform.rotation * directionVector; 
    • And in the Player Inspector for the First Person Controller, I set the value of ovrCamera to CameraRight so that the transform used would be that of CameraRight. 
  3. In the Player Inspector for the MouseLook script, I set the Axes for my testing environment. I was testing with a Rift attached, so I set Axes to MouseX as I would be using the Rift to look in all directions. If testing without a Rift, I would have set it to MouseXAndY so that I could use the mouse to look in all directions. 
With those changes complete, I gave it another test run. I was able to navigate the scene and turn with the mouse.

After testing out both the OVRPlayerController and the standard Unity player controller with the Rift, there were two things that I noticed right away. First, the default player speeds for the OVRPlayerController are much slower than those in the standard Unity player controller. Player speed plays an important part in how comfortable the VR environment is, and the faster speed of the standard controller was a bit uncomfortable for me. The Oculus Best Practices guide lists a walking rate of 1.4 m/s and a jogging rate of 3 m/s as most comfortable.

Second, it was clear I needed to set the default player eye height to something closer to my own height for it to be a comfortable experience. In a future post I will take a look at how to set the default player eye height.