
Thursday, April 2, 2015

Unity + Leap: Hand Selection UI Prototype

Immersion is definitely affected by how closely your avatar's hands resemble your own. In the demo I am working on, I want the user to be able to select which hands they will have before entering the game. A prototype in-game UI for hand selection is shown in the video below.


To build this UI, I created a world space canvas and added a button for each of the available hands. To each button, I added a box collider as a child object. A script attached to the box collider detects when a hand has collided with it. To identify a hand, I used the Leap libraries and checked whether the colliding object is a Leap HandModel.
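As a rough illustration of that collision check, here is a minimal sketch (the class name HandTouchDetector and the onHandTouched event are my own placeholders; HandModel comes from the Leap Motion Unity assets, and depending on your SDK version you may need a using directive for its namespace):

```csharp
using UnityEngine;
using UnityEngine.Events;

// Minimal sketch: attach to the box collider child of a hand-selection button.
// Assumes the Leap Motion Unity assets, where each rigged hand has a HandModel
// component on (or above) its colliders, and that the collider is set up as a
// trigger with a kinematic Rigidbody somewhere in the hierarchy.
public class HandTouchDetector : MonoBehaviour
{
    // Hooked up in the inspector; fired when a Leap hand touches this button.
    public UnityEvent onHandTouched;

    void OnTriggerEnter(Collider other)
    {
        // Walk up from the collider that entered to see if it belongs to a Leap hand.
        HandModel hand = other.GetComponentInParent<HandModel>();
        if (hand != null)
        {
            onHandTouched.Invoke();
        }
    }
}
```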

In this prototype UI, I am using large buttons for two reasons. First, reading small text in the Rift can be difficult. Second, while the Leap allows me to see my hands, in my experience it does not track finger motion well enough for detailed interactions to be effective. In several of the tests I ran, the user's hand was generally in the right place, but the fingers were more often than not at different angles than the user's actual fingers. The effect was that my users seemed to have the fine motor skills of a toddler - they could reach out and touch everything, but they didn't have a lot of control. On the positive side, when users have hands in the game, it appears to be very natural for them to try to touch items with their hands. Even when users don't have visible hands in the scene, you'll often see them reaching out to try to touch things. While I did label the start button "Touch to Start," once users know that their hands can affect the scene they get it right away and don't need prompting or other instruction.

Leap Motion has just released a "Best Practices Guide" and I'll be looking at incorporating many of the ideas documented there in future prototypes.

Friday, January 9, 2015

Unity 4.6: Creating a look-based GUI for VR

In a previous post, I talked about creating GUIs for VR using world space canvases. In that example, the GUI only displayed text - it didn't have any input components (buttons, sliders, etc.). I wanted to add a button above each thought bubble that the user could click to hear the text read aloud.



As I had used a look-based interaction to toggle the visibility of the GUI, this raised an obvious question: how do I use a similar interaction for GUI input? And, importantly, how do I do it in a way that takes advantage of Unity's GUI EventSystem?

It turns out that what's needed is a custom input module that detects where the user is looking. An excellent tutorial posted on the Oculus forums by css is a great place to start. That tutorial includes the code for a sample input module and walks you through the process of setting up the GUI event camera. (You need to assign an event camera to each canvas, and one twist is that the OVR cameras don't seem to work with the GUI.) By following that tutorial, I was able to get look-based input working very quickly.
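To give a feel for what such a module does, here is a simplified sketch (this is not css's code; the class name GazeInputModule and the use of "Fire1" as the click button are my own placeholders, and it assumes the canvas has a GraphicRaycaster and an event camera that follows the user's head):

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.EventSystems;

// Simplified sketch of a gaze-based input module for the Unity 4.6 UI:
// the pointer is pinned to the centre of the view, so whatever UI element the
// head-tracked event camera is pointing at receives enter/exit events, and a
// button press "clicks" it.
public class GazeInputModule : PointerInputModule
{
    private PointerEventData pointerData;

    public override void Process()
    {
        if (pointerData == null)
            pointerData = new PointerEventData(eventSystem);

        // The centre of the screen is where the user is looking, because the
        // event camera follows the head.
        pointerData.Reset();
        pointerData.position = new Vector2(Screen.width * 0.5f, Screen.height * 0.5f);

        // Raycast against the UI and keep the closest hit.
        var results = new List<RaycastResult>();
        eventSystem.RaycastAll(pointerData, results);
        pointerData.pointerCurrentRaycast = FindFirstRaycast(results);

        GameObject target = pointerData.pointerCurrentRaycast.gameObject;
        HandlePointerExitAndEnter(pointerData, target);

        // Treat a button press (gamepad or keyboard) as a click on the gazed target.
        if (target != null && Input.GetButtonDown("Fire1"))
        {
            ExecuteEvents.ExecuteHierarchy(target, pointerData, ExecuteEvents.submitHandler);
        }
    }
}
```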

While look-based interactions are immersive and fairly intuitive to use, it is worth keeping in mind that look-based input won't work in all situations. For example, if you have attached the GUI to CenterEyeCamera to ensure that the user always sees it, the GUI will follow the user's view, meaning the user won't be able to look at any one specific option.

Friday, August 22, 2014

Using basic statistical analysis to discover whether or not the Oculus Rift headset is being worn

As we were getting ready for our talk next week at PAX Dev 2014, entitled "Pitfalls & Perils of VR Development: How to Avoid Them", an interesting question came up: how can you tell if the Rift is actually on the user's head, instead of on their desk?  It's a pretty common (and annoying) scenario right now--you double-click to launch a cool new game, and immediately you can hear intro music and cutscene dialog but the Rift's still on a table.  I hate feeling like, ack!, I have to scramble to get the Rift on my face to see the intro.

Valve's SteamVR will help with this a lot, I expect; if I launch a game when I'm already wearing the Rift, there'll be no jarring switch.  But I'm leery--half the Rift demos I download today start by popping up a Unity dialog on my desktop before they switch to fullscreen VR, and that's going to be an even worse experience if I'm using Steam.

So I was mulling over how to figure out programmatically whether or not the Rift is on the user's head.  I figure that we can't just look at the position data from the tracker camera, because "camera can see Rift" isn't a firm indicator of "Rift is being used".  (Maybe the Rift is sitting on my desk, in view of the camera.)  Instead, we need to look at the noise of its position.

I recorded the eye pose at each frame, taking an average of all eye poses recorded every tenth of a second.  At 60FPS that's about six positions per decisecond.  The Rift's positional sensors are pretty freaking sensitive; when the Rift is sitting on my desk, the difference in position from one decisecond to the next from ambient vibration is on the order of a hundredth of a millimeter.  Pick it up, though, and those differences spike.

I plotted the standard deviation of the Rift's position, in a rolling window of ten samples for the past second, versus time:


This is a graph of log(standard deviation(average change in position per decisecond)) over time. The values on the left are on a log scale. I found that when the Rift was inert on my desk, casual vibration kept log(σ) < -10.5; as I picked it up, log(σ) spiked, and while the Rift was worn it generally hovered between -10.5 and -4.5. When the Rift was being put on or taken off, log(σ) climbed as high as -2, but only very briefly.

I found that distinguishing a Rift that was being put on or taken off from a Rift that was being worn normally was pretty hard with this method, but the distinction between "not in human hands at all" and "in use" was clear. So this demonstrates a method for programmatically determining whether the Rift is in active use or not. I hope it's useful.

The sample code was written in Java and is available in the book's GitHub repo (file "HeadMotionStatsDemo.java").
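For the gist of the calculation without digging through the repo, here is a rough C# sketch of the same rolling-window idea (names like WornDetector and IsProbablyWorn are my own, the head position is assumed to come from wherever you read the eye pose each frame, and the -10.5 cutoff is simply the value I observed on my setup):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using UnityEngine;

// Rough sketch of the "is the Rift being worn?" heuristic described above.
// Feed it the head position every frame; it averages positions per tenth of a
// second, keeps the last ten averages (one second), and compares the log of
// the standard deviation of the deciseond-to-decisecond movement to a cutoff.
public class WornDetector
{
    private readonly Queue<Vector3> decisecondAverages = new Queue<Vector3>();
    private readonly List<Vector3> currentBucket = new List<Vector3>();
    private float bucketStartTime;

    // log(sigma) stayed below roughly -10.5 when the Rift sat untouched on my
    // desk; treat anything above that as "probably being worn or handled".
    private const double LogSigmaThreshold = -10.5;

    public void AddSample(Vector3 headPosition, float time)
    {
        if (currentBucket.Count == 0) bucketStartTime = time;
        currentBucket.Add(headPosition);

        if (time - bucketStartTime >= 0.1f)
        {
            // Close out this decisecond: store the average position.
            Vector3 sum = Vector3.zero;
            foreach (var p in currentBucket) sum += p;
            decisecondAverages.Enqueue(sum / currentBucket.Count);
            currentBucket.Clear();

            // Keep only the last ten deciseconds (a one-second rolling window).
            while (decisecondAverages.Count > 10) decisecondAverages.Dequeue();
        }
    }

    public bool IsProbablyWorn()
    {
        if (decisecondAverages.Count < 10) return false;

        // Magnitude of the change in average position between deciseconds.
        var positions = decisecondAverages.ToArray();
        var deltas = new List<double>();
        for (int i = 1; i < positions.Length; i++)
            deltas.Add((positions[i] - positions[i - 1]).magnitude);

        double mean = deltas.Average();
        double variance = deltas.Select(d => (d - mean) * (d - mean)).Average();
        double logSigma = Math.Log(Math.Sqrt(variance));

        return logSigma > LogSigmaThreshold;
    }
}
```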