Friday, August 22, 2014

Using basic statistical analysis to discover whether or not the Oculus Rift headset is being worn

As we were getting ready for our talk next week at PAX Dev 2014, entitled "Pitfalls & Perils of VR Development: How to Avoid Them", an interesting question came up: how can you tell if the Rift is actually on the user's head, instead of on their desk?  It's a pretty common (and annoying) scenario right now--you double-click to launch a cool new game, and immediately you can hear intro music and cutscene dialog but the Rift's still on a table.  I hate feeling like, ack!, I have to scramble to get the Rift on my face to see the intro.

Valve's SteamVR will help with this a lot, I expect; if I launch a game when I'm already wearing the Rift, there'll be no jarring switch.  But I'm leery--half the Rift demos I download today start by popping up a Unity dialog on my desktop before they switch to fullscreen VR, and that's going to be an even worse experience if I'm using Steam.

So I was mulling over how to figure out programmatically whether or not the Rift is on the user's head.  I figure that we can't just look at the position data from the tracker camera, because "camera can see Rift" isn't a firm indicator of "Rift is being used".  (Maybe the Rift is sitting on my desk, in view of the camera.)  Instead, we need to look at the noise of its position.

I recorded the eye pose at each frame, taking an average of all eye poses recorded every tenth of a second.  At 60FPS that's about six positions per decisecond.  The Rift's positional sensors are pretty freaking sensitive; when the Rift is sitting on my desk, the difference in position from one decisecond to the next from ambient vibration is on the order of a hundredth of a millimeter.  Pick it up, though, and those differences spike.

I plotted the standard deviation of the Rift's position, in a rolling window of ten samples for the past second, versus time:


This is a graph of log(standard deviation(average change in position per decisecond)) over time.  The units on the left are in log scale.  I found that when the Rift was inert on my desk, casual vibration kept log(σ) < -10.5; as I picked it up log(σ) spiked, and then while worn would generally hover between -4.5 and -10.5.  When the Rift was being put on or taken off, log(σ) climbed as high as -2, but only very briefly.

I found that distinguishing a Rift that was being put on or taken off from one being worn normally was pretty hard with this method, but the distinction between "not in human hands at all" and "in use" was clear.  So this demonstrates a method for programmatically determining whether the Rift is in active use.  I hope it's useful.
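The decision rule is easy to sketch. Below is a simplified, self-contained Java version; the class name and one-axis input are my own illustration, and the -10.5 threshold is the measured value from above (the real demo in the repo works from full eye poses):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Classifies the Rift as "in use" when the log of the standard deviation
// of recent per-decisecond position deltas rises above an empirical threshold.
public class RiftActivityDetector {
    // log(sigma) stayed below -10.5 when the Rift sat untouched on my desk.
    static final double LOG_SIGMA_IDLE_THRESHOLD = -10.5;
    static final int WINDOW = 10; // rolling window of ten decisecond samples

    private final Deque<Double> deltas = new ArrayDeque<>();
    private double lastPosition = Double.NaN;

    // Feed one averaged position sample (one axis, in meters) per decisecond.
    public void addSample(double position) {
        if (!Double.isNaN(lastPosition)) {
            deltas.addLast(position - lastPosition);
            if (deltas.size() > WINDOW) {
                deltas.removeFirst();
            }
        }
        lastPosition = position;
    }

    // Natural log of the standard deviation of the deltas in the window.
    public double logSigma() {
        int n = deltas.size();
        if (n < 2) return Double.NEGATIVE_INFINITY;
        double mean = 0;
        for (double d : deltas) mean += d;
        mean /= n;
        double var = 0;
        for (double d : deltas) var += (d - mean) * (d - mean);
        var /= n;
        return Math.log(Math.sqrt(var));
    }

    public boolean inUse() {
        return logSigma() > LOG_SIGMA_IDLE_THRESHOLD;
    }
}
```

Desk-level jitter (deltas on the order of a hundredth of a millimeter) keeps logSigma() near -11.5, below the threshold; millimeter-scale motion from handling pushes it well above.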

Sample code was written in Java, and is available on the book's github repo.  (File "HeadMotionStatsDemo.java".)

Advanced uses of Timewarp II - When you're running late

[This is post three of three on Timewarp, a new technology available on the Oculus Rift. This is a draft of work in progress of Chapter 5.7 from our upcoming book, "Oculus Rift in Action", Manning Press. By posting this draft on the blog, we're looking for feedback and comments: is this useful, and is it intelligible?]


5.7.2 When you're running late

Of course, when the flak really starts to fly, odds are that you won’t be rendering frames ahead of the clock—it’s a lot more likely that you’ll be scrambling to catch up.  Sometimes rendering a single frame costs you more than the number of milliseconds your target framerate allows.  But timewarp can be useful here too.

Say your engine realizes that it’s going to be running late.  Instead of continuing to render the current frame, you can send the previous frame to the Rift and let the Rift apply timewarp to the images generated a dozen milliseconds ago.  (Figure 5.12.)  Sure, they won’t be quite right—but if it buys you enough time to get back on top of your rendering load, it’ll be worth it, and no human eye will catch it when you occasionally drop one frame out of 75.  Far more importantly, the image sent to the Rift will continue to respond to the user’s head motions with absolute fidelity; low latency means responsive software, even with the occasional lost frame.

Remember, timewarp can distort any frame, so long as it’s clear when that frame was originally generated so that the Rift knows how much distortion to apply.

Figure 5.12: If you’re squeezed for rendering time, you can occasionally save a few cycles by dropping a frame and re-rendering the previous frame through timewarp.
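The skip-or-render decision itself can be sketched in a few lines of Java. The class and method names here are mine, and the per-frame cost estimate is assumed to come from your own instrumentation, not from any SDK call:

```java
// Sketch of the "drop a frame and re-warp the old one" decision.
// estimatedRenderMs would come from your own timing instrumentation.
public class FrameDropPolicy {
    static final double FRAME_BUDGET_MS = 1000.0 / 75.0; // 75 Hz refresh

    // Decide whether to skip rendering this frame, given how much of the
    // current frame budget is already spent and what rendering will cost.
    public static boolean shouldDropFrame(double elapsedThisFrameMs,
                                          double estimatedRenderMs) {
        double remainingMs = FRAME_BUDGET_MS - elapsedThisFrameMs;
        // If the estimated render cost won't fit in what's left of the
        // budget, hand the previous frame back to timewarp instead.
        return estimatedRenderMs > remainingMs;
    }
}
```

When shouldDropFrame returns true, you would resubmit the previous frame's eye textures and let timewarp re-project them for the current head pose.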

The assumption here is that your code is sufficiently instrumented and capable of self-analysis that you do more than just render a frame and hope it was fast enough.  Carefully instrumented timing code isn’t hard to add, especially with display-bound timing methods such as ovrHmd_GetFrameTiming, but it does mean more complexity in the rendering loop.  If you’re using a commercial graphics engine, it may already have this support baked in.  This is the sort of monitoring that any 3D engine that handles large, complicated, variable-density scenes will hopefully be capable of performing.

Dropping frames with timewarp is an advanced technique, and probably not worth investing engineering resources into early in a project.  This is something that you should only build when your scene has grown so complicated that you anticipate having spikes of rendering time.  But if that’s you, then timewarp will help.

Tuesday, August 19, 2014

Advanced Uses of Timewarp I - When you're running early

[This is post two of three on Timewarp, a new technology available on the Oculus Rift. This is a draft of work in progress of Chapter 5.7 from our upcoming book, "Oculus Rift in Action", Manning Press. By posting this draft on the blog, we're looking for feedback and comments: is this useful, and is it intelligible?]


5.7.1 When you're running early

One obvious use of timewarp is to fit in extra processing, when you know that you can afford it.  The Rift SDK provides access to its timing data through several API functions:
  • ovrHmd_BeginFrame       // Typically used in the render loop
  • ovrHmd_GetFrameTiming   // Typically used for custom timing and optimization
  • ovrHmd_BeginFrameTiming // Typically used when doing client-side distortion
These methods return an instance of the ovrFrameTiming structure, which stores the absolute time values associated with the frame. The Rift uses system time as an absolute time marker, instead of computing a series of differences from one frame to the next, because doing so reduces the gradual build-up of incremental error. These times are stored as doubles, which is a blessing after all the cross-platform confusion over how to count milliseconds.

ovrFrameTiming includes:
  • float DeltaSeconds
    The amount of time that has passed since the previous frame returned its BeginFrameSeconds value; usable for movement scaling. This will be clamped to no more than 0.1 seconds to prevent excessive movement after pauses for loading or initialization.
  • double ThisFrameSeconds
    Absolute time value of when rendering of this frame began or is expected to begin; generally equal to NextFrameSeconds of the previous frame. Can be used for animation timing.
  • double TimewarpPointSeconds
    Absolute point when the IMU is expected to be sampled for this frame's timewarp.
  • double NextFrameSeconds
    Absolute time when frame Present + GPU Flush will finish, and the next frame starts.
  • double ScanoutMidpointSeconds
    Time when half of the screen will be scanned out. Can be passed as a prediction value to ovrHmd_GetSensorState() to get general orientation.
  • double EyeScanoutSeconds[2]
    Timing points when each eye will be scanned out to display. Used for rendering each eye.

Generally speaking, it is expected that the following should hold:

ThisFrameSeconds
    < TimewarpPointSeconds
        < NextFrameSeconds
            < EyeScanoutSeconds[EyeOrder[0]]
                <= ScanoutMidpointSeconds
                    <= EyeScanoutSeconds[EyeOrder[1]]

…although actual results may vary during execution.

Knowing when the Rift is going to reach TimewarpPointSeconds and ScanoutMidpointSeconds gives us a lot of flexibility if we happen to be rendering faster than necessary. There are some interesting possibilities here: if we know that our code will finish generating the current frame before the clock hits TimewarpPointSeconds, then we effectively have ‘empty time’ to play with in the frame. You could use that time to do almost anything (provided it’s quick)—send data to the GPU to prepare for the next frame, compute another million particle positions, prove the Riemann Hypothesis—whatever, really (Figure 5.11.)


Figure 5.11: Timewarp means you’ve got a chance to do extra processing for ‘free’ if you know when you’re idle.

Keep this in mind when using timewarp. It effectively gives your app free license to scale its scene density, graphics level, and just plain awesomeness up or down dynamically as a function of current performance, measured and decided right down to the individual frame.
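Here is a minimal sketch of that budget check in Java. The names are stand-ins: in real code, nowSeconds would come from something like ovr_GetTimeInSeconds() and timewarpPointSeconds from the ovrFrameTiming struct described above, and the 2ms safety margin is an arbitrary illustrative choice:

```java
// Sketch: treat the gap between "now" and the timewarp sample point as a
// budget for optional extra work (prefetching, particles, LOD boosts...).
public class SpareTimeBudget {
    // Safety margin so optional work can't push us past the timewarp
    // sample point even if it runs slightly long. Tune to taste.
    static final double SAFETY_MARGIN_SECONDS = 0.002;

    // Seconds of 'empty time' left before the SDK wants to sample the IMU.
    public static double spareSeconds(double nowSeconds,
                                      double timewarpPointSeconds) {
        return timewarpPointSeconds - nowSeconds - SAFETY_MARGIN_SECONDS;
    }

    // True if an optional task with the given estimated cost fits.
    public static boolean canAfford(double nowSeconds,
                                    double timewarpPointSeconds,
                                    double estimatedTaskSeconds) {
        return estimatedTaskSeconds <= spareSeconds(nowSeconds, timewarpPointSeconds);
    }
}
```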

But it’s not a free pass! Remember that there are nasty consequences to overrunning your available frame time: a dropped frame. And if you don’t adjust your own timing, you risk the SDK busy-waiting for almost all of the following frame and reusing past data for the next image, which consumes valuable CPU time. So you’ve got a powerful weapon here, but you must be careful not to shoot yourself in the foot with it.

[Next post: Chapter 5.7, "Advanced uses of timewarp", part 2]

Monday, August 18, 2014

Using Timewarp on the Oculus Rift

[This is post one of three on Timewarp, a new technology available on the Oculus Rift. This is a draft of work in progress of Chapter 5.6 from our upcoming book, "Oculus Rift in Action", Manning Press. By posting this draft on the blog, we're looking for feedback and comments: is this useful, and is it intelligible?]


5.6 Using Timewarp: catching up to the user

In a Rift application, head pose information is captured before you render the image for each eye. However, rendering is not an instantaneous operation; processing time and vertical sync (“vsync”) mean that every frame can take a dozen milliseconds to get to the screen. This presents a problem, because the head pose information at the start of the frame probably won’t match where the head actually is when the frame is rendered. So, the head pose information has to be predicted for some point in the future; but the prediction is necessarily imperfect. During the rendering of eye views the user could change the speed or direction they’re turning their head, or start moving from a still position, or otherwise change their current motion vector. (Figure 5.8.)


Figure 5.8: There will be a gap between when you sample the predicted orientation of the Rift for each eye and when you actually display the rendered and distorted frame. In that time the user could change their head movement direction or speed. This is usually perceived as latency.


THE PROBLEM: POSE AND PREDICTION DIVERGE

As a consequence, the predicted head pose used to render an eye view will rarely exactly match the actual pose the head has when the image is displayed on the screen. Even though the gap is typically less than 13ms, and the amount of error is very small in the grand scheme of things, the human visual cortex has millions of years of evolution behind it and is perfectly capable of perceiving the discrepancy, even if it can’t necessarily nail down what exactly is wrong. Users will perceive it as latency—or worse, they won’t be able to say what it is that they perceive, but they’ll declare the whole experience “un-immersive”. You could even make them ill (see Chapter 8 for the relationship between latency and simulation sickness).

THE SOLUTION: TIMEWARP

To attempt to correct for these issues, Timewarp was introduced in the 0.3.x versions of the Oculus SDK. The SDK can’t actually use time travel to send back head pose information from the future[1], so it does the next best thing. Immediately before putting the image on the screen it samples the predicted head pose again. Because this prediction occurs so close to the time at which the images will be displayed on the screen it’s much more accurate than the earlier poses. The SDK can look at the difference between the timewarp head pose and the original predicted head pose and shift the image slightly to compensate for the difference. 

5.6.1 Using timewarp in your code

Because the functionality and implementation of timewarp is part of the overall distortion mechanism inside the SDK, all you need to do to use it (assuming you’re using SDK-side distortion) is to pass the relevant flag into the SDK during distortion setup:

    int configResult = ovrHmd_ConfigureRendering(
        hmd, 
        &cfg, 
        ovrDistortionCap_TimeWarp, 
        hmdDesc.DefaultEyeFov, 
        eyeRenderDescs);

That’s all there is to it.

5.6.2 How timewarp works

Consider an application running at 75 frames per second. It has 13.33 milliseconds to render each frame (not to mention do everything else it has to do for each frame). Suppose your ‘update state’ code takes 1 millisecond, each eye render takes 4 milliseconds, and the distortion (handled by the SDK) takes 1 millisecond. Assuming you start your rendering loop immediately after the previous refresh, the sequence of events would look something like Figure 5.9.

Figure 5.9: A simple timeline for a single frame showing the points at which the (predicted) head pose is fetched. By capturing the head orientation a third time immediately before ending the frame, it’s possible to warp the image to adjust for the differences between the predicted and actual orientations. Only a few milliseconds—probably less than ten—have passed since the original orientations were captured, but this final update can still strongly improve the perception of responsiveness in the Rift.

  1. Immediately after the previous screen refresh, we begin the game loop, starting by updating any game state.
  2. Having updated the game state, we grab the predicted head pose and start rendering the first eye. ~12ms remain until the screen refresh, so the predicted head pose is for 12ms in the future.
  3. We’ve finished with the first eye, so we grab the predicted head pose again and start rendering the second eye. This pose is for ~8ms in the future, so it’s likely more accurate than the first eye pose, but still imperfect.
  4. After rendering has completed for each eye, we pass the rendered offscreen images to the SDK. ~4ms remain until the screen refresh.
  5. The SDK wants to fetch the most accurate head pose it can for timewarp, so it waits until the last possible moment to perform the distortion.
  6. With just enough time to perform the distortion, the SDK fetches the head pose one last time. This pose is predicted only about 1ms into the future, so it’s much more accurate than either of the per-eye render predictions. The difference between each per-eye pose and this final pose is computed and sent into the distortion mechanism so that it can correct the rendered image position by rotating it slightly, as if on the inner surface of a sphere centered on the user.
  7. The distorted points of view are displayed on the Rift screen.
By capturing the head pose a third time, so close to rendering, the Rift can ‘catch up’ to unexpected motion. When the user’s head rotates, the point where the image is projected can be shifted, appearing where it would have been rendered if the Rift could have known where the head pose was going to be.
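The arithmetic in that timeline is easy to check. This little Java sketch (names mine) reproduces the approximate prediction horizons from the steps above, given the per-stage costs assumed in the text:

```java
// Walk through the frame timeline from the text: a 13.33ms budget at
// 75Hz, with a state update, two eye renders, and SDK distortion.
public class FrameTimeline {
    static final double FRAME_MS = 1000.0 / 75.0; // ~13.33 ms

    // Milliseconds remaining until refresh at each pose-capture point:
    // first eye pose, second eye pose, and the final timewarp pose.
    public static double[] predictionHorizons(double updateMs,
                                              double perEyeMs,
                                              double distortionMs) {
        double firstEyePose  = FRAME_MS - updateMs;            // ~12 ms out
        double secondEyePose = FRAME_MS - updateMs - perEyeMs; // ~8 ms out
        double timewarpPose  = distortionMs;                   // ~1 ms out
        return new double[] { firstEyePose, secondEyePose, timewarpPose };
    }
}
```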

The exact nature of the shifting is as if you took the image, painted it on the interior of a sphere centered on your eye, and then slightly rotated the sphere. So, for instance, if the predicted head pose was incorrect and you ended up turning your head further to the right than predicted, the timewarp mechanism would compensate by rotating the image to the left (Figure 5.10).

Figure 5.10: The rendered image is shifted to compensate for the difference between the predicted head pose at eye render time and the actual head pose at distortion time.
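In one axis, the corrective rotation reduces to a subtraction. This toy Java sketch uses plain yaw angles instead of the SDK's quaternions (all names mine) just to show the sign convention: turning further right than predicted yields a leftward image swivel:

```java
// Sketch of the timewarp correction idea in one axis (yaw), using plain
// angles instead of quaternions. The SDK performs the full 3D version.
public class TimewarpCorrection {
    // Degrees to swivel the rendered image: negative = left, positive = right.
    // predictedYaw is the pose used at eye-render time; actualYaw is the
    // near-final pose sampled just before distortion.
    public static double correctiveYawDegrees(double predictedYaw,
                                              double actualYaw) {
        return predictedYaw - actualYaw;
    }
}
```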

One caveat: when you add extra swivel to an image, there will be some pixels at the edge of the frame that weren’t rendered before and now need to be filled in. How best to handle these un-computed pixels is a topic of ongoing study, although initial research from Valve and Oculus suggests that simply coloring them black is fine.

5.6.3 Limitations of timewarp 

Timewarp isn’t a latency panacea. This ‘rotation on rails’ works fine if the user’s point of view only rotates, but in real life our anatomy isn’t so simple. When you turn your head your eyes translate as well, swinging around the axis of your neck, producing parallax. So for instance, if you’re looking at a soda can on your desk, and you turn your head to the right, you’d expect a little bit more of the desk behind the right side of the can to be visible, because by turning your head you’ve also moved your eyes a little bit in 3D space. The timewarped view can’t do that, because it can’t manufacture those previously hidden pixels out of nothing. For that matter, the timewarp code doesn’t know where new pixels should appear, because by the time you’re doing timewarp, the scene is simply a flat 2D image, devoid of any information about the distance from the eye to a given pixel. 
 
This is especially visible in motion with a strong translation component, but (perhaps fortunately) the human head’s range and rate of motion in rotation is much greater than in translation. Translation generally involves large, coarse motions of the upper body, which are easier for the hardware to predict and hard to execute faster than the Rift can anticipate.

Oculus recognizes that the lack of parallax in timewarped images is an issue, and they’re actively researching the topic. But all the evidence so far has been that, basically, users just don’t notice. It seems probable that parallax-aware timewarp would further reduce perceived latency if it were easy to build, but without it we still get real and significant improvements from rotation alone.

[Next post: Chapter 5.7, "Advanced uses of timewarp", part 1]

______________________
[1]
It's very difficult to accelerate a Rift up to 88 miles per hour

Monday, August 4, 2014

0.4 SDK: Users of unknown gender now have a gender of “Unknown”

Oculus recently released the 0.4 version of the SDK.  I am primarily a Mac developer, and the fact that it isn’t available for the Mac yet is a big disappointment. So while I impatiently wait for the Mac version, I’ll poke around the Windows version. As the default profile settings were an irritation to me previously, I checked to see whether Oculus had changed them, and I am pleased to see that they have:

#define OVR_DEFAULT_GENDER                  "Unknown"

That’s right. Users who have not yet created a profile are no longer assumed to be “male”. Instead the default matches reality - when the user’s gender is unknown because the user hasn’t specified a gender, the SDK now returns a gender of “Unknown”.

This is a good step, as you will no longer get false data from the SDK. Let's take a closer look at the profile default values:

// Default measurements empirically determined at Oculus to make us happy
// The neck model numbers were derived as an average of the male and female averages from ANSUR-88
// NECK_TO_EYE_HORIZONTAL = H22 - H43 = INFRAORBITALE_BACK_OF_HEAD - TRAGION_BACK_OF_HEAD
// NECK_TO_EYE_VERTICAL = H21 - H15 = GONION_TOP_OF_HEAD - ECTOORBITALE_TOP_OF_HEAD
// These were determined to be the best in a small user study, clearly beating out the previous default values
#define OVR_DEFAULT_GENDER                  "Unknown"
#define OVR_DEFAULT_PLAYER_HEIGHT           1.778f
#define OVR_DEFAULT_EYE_HEIGHT              1.675f
#define OVR_DEFAULT_IPD                     0.064f
#define OVR_DEFAULT_NECK_TO_EYE_HORIZONTAL  0.0805f
#define OVR_DEFAULT_NECK_TO_EYE_VERTICAL    0.075f

#define OVR_DEFAULT_EYE_RELIEF_DIAL         3


It is also good to see that the "neck model numbers were derived as an average of the male and female averages." However, the height here remains, as in past SDK versions, 1.778f - the average height of an adult male in the US. I'm a little wary of this value, as Oculus doesn't indicate how varied the user pool used to determine it was. They simply say "a small user study." Was the pool composed equally of men and women? How varied were the heights of those in it? Without that information or further study, I can't be sure these values don't introduce bias, and it is something I will keep track of in my own user tests.
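If you want to guard against that bias in your own app, one hedge is to detect when the SDK is probably handing you the default profile and warn the user. Here is a sketch, assuming you can fetch the gender string and player height through your binding (the class and method names are mine, not SDK calls; in C you'd query via ovrHmd_GetString and ovrHmd_GetFloat):

```java
// Sketch: detect a likely-unconfigured profile by checking whether the
// values match the SDK defaults shown above, then prompt for a real profile.
public class ProfileCheck {
    // Mirrors OVR_DEFAULT_GENDER and OVR_DEFAULT_PLAYER_HEIGHT from the SDK.
    static final String DEFAULT_GENDER = "Unknown";
    static final float DEFAULT_PLAYER_HEIGHT = 1.778f;

    public static boolean looksLikeDefaultProfile(String gender, float height) {
        return DEFAULT_GENDER.equals(gender)
            && Math.abs(height - DEFAULT_PLAYER_HEIGHT) < 1e-6f;
    }
}
```

A "height exactly equals the default" check can false-positive on a real 1.778m user, so treat this as a nudge to create a profile, not a hard gate.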

I've been keeping such a close eye on this issue because I feel that what the writer Octavia Butler said about science fiction applies here and now to VR.

 "There are no real walls around science fiction. We can build them, but they’re not there naturally."
-- Octavia Butler

 There are no real walls around VR. Let's do what we can to not build those walls. VR is for everyone.


Tuesday, July 1, 2014

Unity 4: Rift UI experiments

I have been experimenting with creating UIs for Rift applications using Unity 4. I figured it might save people some time to see the mistakes I've made so they can avoid them.

I started with a basic scene with a script that used the UnityGUI controls (OnGUI with GUI.Box and GUI.Button) to create a simple level loader. Here is what the scene looked like displayed on a typical monitor:




As a quick test to see how well this GUI would translate to the Rift, I used the OVRPlayerController prefab from the Oculus Unity Pro 4 Integration package  to get the scene on the Rift. And, well, as you can see, it doesn’t work at all.



The problem is that UnityGUI creates the GUI as a 2D overlay. Because the GUI isn't in 3D space, it doesn't pass through the Rift's per-eye stereo rendering and distortion, so it can't be viewed properly on the Rift. To create the same GUI in 3D space, I used the VRGUI package posted by boone188 on the Oculus forums. This package creates a plane in 3D space onto which the GUI is rendered. Following the examples in that package, I created the same basic menu as before, but now in 3D space. It looked like this:



This GUI works, but it doesn't feel right.  Having a GUI plane in between me and the world I created just isn’t very immersive. For an immersive experience, you need to integrate the UI into the world you are building. As this is a level selection menu and I’ve built it around an elevator, making the elevator buttons into the level selection buttons is a natural choice. Here’s the concept for the scene (and if this were drawn by a competent artist, I think it could look good.)



To select a level, the user just needs to look at it. (Raycasting from the point between the two cameras determines where the user is looking.) The button then turns green to show it has been selected, and the user can confirm the selection using the gamepad.
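The gaze test itself is just a ray-plane intersection. In Unity you would lean on Physics.Raycast; this standalone Java sketch (all names mine) shows the underlying geometry for a rectangular button on a flat wall:

```java
// Sketch of gaze selection: a button is "looked at" if the gaze ray from
// between the eyes hits its rectangular bounds on a wall plane at z = wallZ.
public class GazeSelect {
    // origin = eye position {x, y, z}; dir = gaze direction {x, y, z}.
    // Returns true if the ray hits [minX,maxX] x [minY,maxY] on plane z = wallZ.
    public static boolean hitsButton(double[] origin, double[] dir,
                                     double wallZ,
                                     double minX, double maxX,
                                     double minY, double maxY) {
        if (dir[2] == 0) return false;            // gazing parallel to the wall
        double t = (wallZ - origin[2]) / dir[2];  // ray parameter at the wall
        if (t <= 0) return false;                 // wall is behind the user
        double x = origin[0] + t * dir[0];
        double y = origin[1] + t * dir[1];
        return x >= minX && x <= maxX && y >= minY && y <= maxY;
    }
}
```

In the elevator scene, each button would run this test every frame; a hit starts the highlight, and the gamepad press confirms.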

Thursday, June 26, 2014

Unity 4: Using the player eye height from the user's profile

I had been planning to write a post on setting the default character eye height in Unity a while ago, but I got sidetracked, and then Oculus put out the preview version of the 0.3 SDK and Unity Integration package. The first preview version didn’t work on the Mac, so although I had read there were some significant changes, I wasn’t able to test them. Now that it is available on the Mac and I’ve had some time to play with it, I wanted to come back to the character default height question.

The big change for character height from version 0.2 to 0.3 is that the player eye height set in the Oculus Profile tool is now used by default by the OVRCameraController script to set the character eye height.

Previously, if you wanted to use the player's height as set in the user profile, you needed to go into the Inspector for the OVRCameraController script (attached to the OVRCameraController prefab) and check the box for Use Player Eye Hght. As of version 0.3, this box is checked by default.



Assuming the player has created a user profile, this makes it easy to give the character the player’s actual height; even if a profile isn’t found, the SDK provides values from a default profile. But as we looked at in another post, Working around the SDK default profile setting for gender, there are issues with those default values, as they are for what Oculus terms the "common user" (a.k.a. the average adult male) and not for the average human.

If the profile is currently set to the default, the solutions we looked at there included documentation, changing the SDK values, and giving the user an option to set a profile. Like working directly with the SDK, when developing with Unity, emphasizing in the documentation that a profile needs to be set is still a big part of the solution (and a good idea anyway). And, like working directly with the SDK, changing the default values is still not a good solution.

But unlike working directly with the SDK, when working with Unity, checking for the profile used is problematic. When using the SDK directly, there is a call to get the name of the profile being used.

ovrHmd_GetString(hmd, OVR_KEY_USER, "")

Unfortunately, the Unity integration package does not appear to include a similar call for getting the user profile name. The OVRDevice script provides the following functions to get profile data:

GetPlayerEyeHeight(ref float eyeHeight);
GetIPD(ref float IPD);
IsMagCalibrated();
IsYawCorrectionEnabled();


As you can see, there isn't a call to get the name of the profile in use. So, what to do? One suggestion I saw on the Oculus forums was to at least warn the user that the headset has not been calibrated. You can do that by checking whether magnetic calibration and yaw correction are false using OVRDevice.IsMagCalibrated() and OVRDevice.IsYawCorrectionEnabled().

While this is a good start, it would be nice if there were an easy way to let users see which profile is in use and switch between available profiles.