Tuesday, March 24, 2015

Unity: Mac Direct to Rift Plugin by AltspaceVR

When using the Rift on a Mac, Oculus recommends using extended mode for the monitor configuration as it provides better performance. And performance is indeed much better in extended mode: running the Tuscany demo in extended mode I get 75 FPS, while in mirrored mode I get 46 FPS with the refresh rate set to 75 Hz and 60 FPS with the refresh rate set to 60 Hz. However, getting the app to run on the extended portion of the desktop can be a bit of a pain.

To help provide a better Mac user experience, the folks at AltspaceVR have created a plugin that can be integrated into your Unity project so that your application can be launched seamlessly onto the Rift when using a Mac. They have kindly made this project available on GitHub.

Friday, March 6, 2015

Quick Look: Unity 5 and the Oculus Unity 4 Integration Package 0.4.4

I downloaded Unity 5 yesterday and gave it a quick trial run with the Oculus Integration Package 0.4.4 for the DK2. To test it out, I first built a quick sample scene using assets found in the Unity standard asset packages. Using that scene, I then tried two methods for getting the scene onto the Rift:
  • Using the OVRPlayerController prefab 
  • Using the First Person Controller prefabs and scripts found in the Unity Standard Assets with the OVRCameraRig prefab
Here’s how those experiments went.

Creating the sample scene

I created a similar sample scene to the one I've been using for previous tests - a beach scene using only Unity standard assets. Unity 5 includes a significant refresh of the standard asset packages, which is very cool. And nicely for me, they still include palm trees and a sand texture. One change of note is that skyboxes are now set in Window -> Lighting instead of Edit -> Render Settings. Unity 5 comes with a single default skybox, which is what I used in this scene. Unity 5 doesn't include a skyboxes standard asset package, at least not that I found. I did try using the SunnySky box material from the skyboxes package in 4.6, but it did not render nicely.
Beach scene created using Unity 5 standard assets 
Notice how much prettier the palm trees are compared to the 4.6 assets.


Now to get the scene running in the Rift.

Using OVRPlayerController

After downloading and importing the Unity 4 Integration Package 0.4.4, the first thing I tried was simply dropping the OVRPlayerController prefab into the scene. The OVRPlayerController character height is 2, so when placing the prefab in the scene I made sure to set the Y value to 1 so that it was not colliding with the beach plane. And unlike in 4.6, the palm tree assets have colliders attached, so I also made sure the player was placed so that it was not colliding with a palm tree.*

However, before I could build the scene, I needed to address the two errors I was getting:

Assets/OVR/Scripts/Util/OVRMainMenu.cs(250,43): error CS0117: `UnityEngine.RenderMode' does not contain a definition for `World'
Assets/OVR/Scripts/Util/OVRMainMenu.cs(969,43): error CS0117: `UnityEngine.RenderMode' does not contain a definition for `World'

To get the scene to build, I edited OVRMainMenu.cs and changed:

c.renderMode = RenderMode.World;
to

c.renderMode = RenderMode.WorldSpace;

in the two places where that line occurs. With that done, I was able to build and run the scene on the DK2.

Beach scene on the Rift

Running this on a MacBook Pro in mirrored mode I was seeing 60 or so FPS, and in extended mode around 75.

*Actually, I didn't make sure of that on the first test; at the start of the scene, the collision caused the scene to jitter around, and it was very unpleasant.

Using the OVRCameraRig with the first person character controller prefabs from the standard assets

My next test was to try using the OVRCameraRig prefab with the first person character controller from the standard assets. This did not go as well. With 5.0, there are two first person controller prefabs: FPSController and RigidBodyFPSController.


The FPSController prefab

The FPSController prefab uses the FirstPersonController.cs script. This script has a number of options, including footstep sounds, head bob, and FOV Kick. These options can be great in traditional games, but for VR they can be rather problematic. Head bob and FOV Kick are particularly concerning, as these types of motion can be severe motion sickness triggers for some users. Based on that, I didn't want to spend too much time trying to adapt this script. Instead, I looked at the RigidBodyFPSController.

RigidBodyFPSController

The RigidBodyFPSController prefab consists of the RigidBodyFPSController object with the MainCamera as a child object.



Looking at the RigidBodyFPSController object, you can see that it has a Rigidbody, a Capsule Collider, and the Rigidbody First Person Controller script.




To adapt this prefab for use in VR, I first deleted the MainCamera child object and then added the OVRCameraRig in its place.




Note: The MainCamera had a headbob.cs script attached to it. Head bob isn’t something I want in my VR application, and the documentation says that script can be safely disabled or removed.

The Rigidbody First Person Controller script’s Cam variable had been set to the MainCamera. With the MainCamera removed, in the inspector for the script I set it to LeftEyeAnchor.



I then gave it a test run.

I was seeing similar FPS to the OVRPlayerController test, but the scene was noticeably more jittery. This may be due to using LeftEyeAnchor as the camera, but it would require more research to know what is really going on.


Update: March 30, 2015
The build errors appear to be fixed in the 0.5.0.1 Beta version of the Integration Package. When using 0.5.0.1, you need to make sure you have also updated to the 0.5.0.1 Beta version of the Runtime Package for it to work. I was not able to build my project until I had updated the Runtime Package as well.

Friday, February 13, 2015

A look at the Leap Motion: Seeing your hands in VR

In many VR demos you are just a floating head in space. For me, this breaks the immersion as it makes me feel like I am not really part of the virtual world. Demos that include a body feel more immersive, but they are also a bit frustrating. I want my avatar’s hands to move when my hands do. To experiment with getting my hands into the scene, I got a Leap Motion controller.

When using the Leap with the Rift, you need to mount it on the Rift itself using a small plastic bracket. You can purchase the bracket from Leap, but they also make the model available on Thingiverse so you can print one out yourself should you have a 3D printer. (I do, and I thought that was very cool. I really felt like I was living in the future printing out a part for my VR system.)

Once I got the mount printed and attached to my Rift and had completed the Leap setup instructions, I gave some of the available VR demos a try. Seeing hands in the scene really made it feel a lot more immersive, but what really upped the immersion was seeing hands that looked almost like mine. The Leap development package includes a nice variety of hand models (by their naming conventions, I'm a light salt), and that variety is greatly appreciated.

When running the demos, the biggest problems I had with the Leap were false-positive hands (extra hands) in the scene, having my hands disappear rather suddenly, and poor tracking of my fingers. Two things that helped were making sure the Rift cables were not in front of the Leap controller and removing or covering reflective surfaces in my office (particularly the arm rest on my chair). Even with those changes, having the perfect office setup for the Leap is still a work in progress.

I’ve downloaded the Unity core assets and I’ll be talking more about developing for the Leap using Unity in future posts. Here’s a preview of what I am working on:

Wednesday, February 4, 2015

Unity 4.6: Silent conversation - Detecting head gestures for yes and no

One of the demos that I have really enjoyed is “Trial of the Rift Drifter” by Aldin Dynamics. In this demo you answer questions by nodding or shaking your head for yes or no. This is a great use of the head tracker data beyond changing the user's point of view, and it is a mechanic that I would like to add to my own applications as it really adds to the immersive feel.

As an example, I updated the thought bubbles scene I created earlier to allow a silent conversation with one of the people in the scene and this blog post will cover exactly what I did.



In my scene, I used a world-space canvas to create the thought bubble. This canvas contains a canvas group (ThoughtBubble) which contains an image UI object and a text UI object.

Hierarchy of the world space canvas  
I wanted the text in this canvas to change in response to the user shaking their head yes or no. I looked at a couple of different ways of detecting nods and head shakes, but ultimately went with a solution based on this project by Katsuomi Kobayashi.

To use the gesture recognition solution from this project in my own project, I first added the two Rift Gesture files (RiftGesture.cs and MyMath.cs) to my project and then attached the RiftGesture.cs script to the ThoughtBubble.

When you look at RiftGesture.cs, there are two things to take note of. First, you’ll see that to get the head orientation data, it uses:

OVRPose pose = OVRManager.display.GetHeadPose();
Quaternion q = pose.orientation;


This gets the head pose data from the Rift independent of any other input. When I first looked at adding head gestures, I tried using the transform from one of the cameras, on the logic that the camera transform follows the head pose. Using the camera transform turned out to be problematic because the transform can also be affected by input from devices other than the headset (keyboard, mouse, gamepad), resulting in a headshake being detected when the user rotated the avatar using the mouse rather than shaking their head. Using OVRManager.display.GetHeadPose() ensures you are only evaluating data from the headset itself.

Second, you will also notice that it uses SendMessage in DetectNod() when a nod has been detected:

SendMessage("TriggerYes", SendMessageOptions.DontRequireReceiver);

and in DetectHeadshake() when a headshake has been detected:

SendMessage("TriggerNo", SendMessageOptions.DontRequireReceiver);

The next step I took was to create a new script (conversation.cs) to handle the conversation. This script contains a bit of setup to get and update the text in the canvas and to make sure that the dialog is visible to the user before it changes. (The canvas group's visibility is set by the canvas group's alpha property.) Most importantly, though, this script contains the TriggerYes() and TriggerNo() functions that receive the messages sent from RiftGesture.cs. These functions simply update the text when a nod or headshake message has been received. I attached the conversation.cs script to the ThoughtBubble object and dragged the text object from the canvas to the questionholder so that the script would know which text to update.
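
For reference, here is a stripped-down sketch of what such a script could look like. The questionholder field matches the one described above, but the dialog strings are placeholders and the visibility check is omitted:

using UnityEngine;
using UnityEngine.UI;

// Simplified sketch of a conversation script - the dialog text is placeholder only.
public class conversation : MonoBehaviour {
    public Text questionholder;   // drag the canvas Text object here in the Inspector

    void Start() {
        questionholder.text = "Can you hear my thoughts?";
    }

    // Called via SendMessage from RiftGesture.cs when a nod is detected
    void TriggerYes() {
        questionholder.text = "I knew it! Telepathy is real.";
    }

    // Called via SendMessage from RiftGesture.cs when a headshake is detected
    void TriggerNo() {
        questionholder.text = "Ah well, maybe next time.";
    }
}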

Scripts attached to the ThoughtBubble canvas group

At this point I was able to build and test my scene and have a quick telepathic conversation with one of the characters.


Friday, January 9, 2015

Unity 4.6: Creating a look-based GUI for VR

In a previous post, I talked about creating GUIs for VR using world space canvases. In that example, the GUI only displayed text - it didn't have any input components (buttons, sliders, etc.). I wanted to add a button above each thought bubble that the user could click to hear the text read aloud.



As I had used a look-based interaction to toggle the visibility of the GUI, this brought up the obvious question: how do I use a similar interaction for GUI input? And, importantly, how do I do it in a way that takes advantage of Unity's GUI EventSystem?

It turns out that what's needed is a custom input module that detects where the user is looking. There is an excellent tutorial posted on the Oculus forums by css that is a great place to start. That tutorial includes the code for a sample input module and walks you through the process of setting up the GUI event camera. (You need to assign an event camera to each canvas; one twist is that the OVR cameras don't seem to work with the GUI.) By following that tutorial, I was able to get look-based input working very quickly.
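
To give a feel for what gaze-driven GUI input involves, here is a very rough sketch of the idea - it is not the tutorial's input module. It casts a UI ray from the centre of the screen and fires a Submit event on whatever control the user has dwelled on for a couple of seconds. It assumes each canvas has a working event camera assigned, and it glosses over the stereo-rendering details that a proper input module handles:

using System.Collections.Generic;
using UnityEngine;
using UnityEngine.EventSystems;

// Rough gaze-"click" sketch - a stand-in illustration, not the tutorial's input module.
public class GazeClick : MonoBehaviour {
    public float dwellTime = 2f;          // seconds the user must keep looking at a control
    private float gazeTimer;
    private GameObject currentTarget;

    void Update() {
        // Build pointer data at the centre of the screen (where the user is "looking").
        var pointerData = new PointerEventData(EventSystem.current) {
            position = new Vector2(Screen.width / 2f, Screen.height / 2f)
        };

        // Ask the event system which UI graphics are under that point.
        var results = new List<RaycastResult>();
        EventSystem.current.RaycastAll(pointerData, results);
        GameObject target = results.Count > 0 ? results[0].gameObject : null;

        if (target != null && target == currentTarget) {
            gazeTimer += Time.deltaTime;
            if (gazeTimer >= dwellTime) {
                // Fire Submit on the control (or its parent Button) being looked at.
                ExecuteEvents.ExecuteHierarchy(target, pointerData, ExecuteEvents.submitHandler);
                gazeTimer = 0f;   // will fire again if the user keeps staring
            }
        } else {
            currentTarget = target;
            gazeTimer = 0f;
        }
    }
}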

Note that while look-based interactions are immersive and fairly intuitive to use, it is worth keeping in mind that look-based input won't work in all situations. For example, if you have attached the GUI to the CenterEyeAnchor to ensure that the user always sees the GUI, the GUI will follow the user's view, meaning the user won't be able to look at any one specific option.

Friday, December 12, 2014

Unity 4.6: Thought bubbles in a Rift scene using world space canvases

I'm really liking the new GUI system in 4.6. I had been wanting to play a bit with a comic-book style VR environment, and with world space canvases, now is the time.


 

Here's a quick rundown of how I created the character thought bubbles in this scene using world space canvases.

Creating world space canvases

Canvases are the root object for all Unity GUI elements. By default they render to screen space, but you also have the option of rendering the canvas in world space, which is exactly what you need for the Rift. To create a canvas, from the Hierarchy menu, select Create > UI > Canvas. When you create a canvas, both a Canvas object and an EventSystem object are added to your project. All UI elements need to be added as children of a Canvas. Each thought bubble consists of a world-space Canvas and two UI elements - an image and a text box. For organization, I put the UI elements in an empty gameObject called ThoughtBubble.





Note: Hierarchy order is important, as UI objects are rendered in the order that they appear in the hierarchy.

To have the canvas render as part of the 3D scene, set the Render Mode to World Space in the Inspector for the Canvas.




When you change the render mode to world space, you'll note that the Rect Transform for the canvas becomes editable. Screen space canvases default to the size of the screen; for world space canvases, however, you need to set the size manually to something appropriate for the scene.

Setting canvas position, size, and resolution

By default the canvas is huge. If you look in the Inspector, you'll see that it has Width and Height properties as well as Scale properties. The Width and Height properties are used to control the resolution of the GUI. (In this scene the Width and Height are set to 400 x 400. The thought bubble image is a 200 x 200 px image and the font used for the Text is 24pt Arial.) To change the size of the canvas you need to set the Scale properties.



To give you an idea of the proportions, the characters in the scene are all just under 2 units high, and the scale of each canvas is set to 0.005 in all directions. With the canvas at a reasonable size, I positioned each canvas just above the character.
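
If you prefer to set this up from script rather than in the Inspector, a small sketch along these lines reproduces the same settings (the class name and the 2.2-unit offset above the character's head are made-up values for illustration):

using UnityEngine;

// Sketch: configure a world-space thought-bubble canvas from code.
public class ThoughtBubbleSetup : MonoBehaviour {
    void Start() {
        Canvas canvas = GetComponent<Canvas>();
        canvas.renderMode = RenderMode.WorldSpace;

        RectTransform rect = canvas.GetComponent<RectTransform>();
        rect.sizeDelta = new Vector2(400f, 400f);          // GUI resolution in canvas units
        rect.localScale = Vector3.one * 0.005f;            // shrink to scene proportions (~2-unit-tall characters)
        rect.localPosition = new Vector3(0f, 2.2f, 0f);    // hypothetical offset, assuming the canvas is a child of the character
    }
}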

Rotating the canvas with the player's view

For the thought bubble to be readable from any direction, I attached a script to the Canvas that sets the canvas transform to look at the player.

using UnityEngine;
using System.Collections;

public class lookatplayer : MonoBehaviour {
    public Transform target;    // the player (set in the Inspector)

    // Rotate the canvas each frame so it always faces the player
    void Update() {
        transform.LookAt(target);
    }
}


Toggling canvas visibility

When you look at a character, the thought bubble appears. The thought bubble remains visible until you look at another character. I looked at two ways of toggling the menu visibility - setting the active state of the UI container gameObject (ThoughtBubble), or adding a Canvas Group component to the UI container gameObject and setting the Canvas Group's alpha property. Changing the alpha property seemed easier as I would not need to keep track of inactive gameObjects, so I went with that method. There is a canvas attached to each character in the scene. The script below is attached to the CenterEyeAnchor (part of the OVRCameraRig prefab in the Oculus Integration package v. 0.4.4). It uses raycasting to detect which person the user is looking at and then changes the alpha value of the character's attached GUI canvas to toggle the canvas visibility.

using UnityEngine;
using System.Collections;

public class lookatthoughts : MonoBehaviour {

    private GameObject displayedObject = null;
    private GameObject lookedatObject  = null;

    // Use raycasting to see if a person is being looked
    // at and, if so, display the person's attached GUI canvas
    void Update () {
        Ray ray = new Ray(transform.position, transform.forward);
        RaycastHit hit;

        if (Physics.Raycast(ray, out hit, 100)) {
            if (hit.collider.gameObject.tag == "person") {
                lookedatObject = hit.collider.gameObject;
                if (displayedObject == null) {
                    // Nothing displayed yet - show this person's canvas
                    displayedObject = lookedatObject;
                    changeMenuDisplay(displayedObject, 1);
                } else if (displayedObject == lookedatObject) {
                    // Already displaying this person's canvas - do nothing
                } else {
                    // Looking at a new person - hide the old canvas, show the new one
                    changeMenuDisplay(displayedObject, 0);
                    displayedObject = lookedatObject;
                    changeMenuDisplay(displayedObject, 1);
                }
            }
        }
    }

    // Toggle the menu display by setting the alpha value
    // of the canvas group
    void changeMenuDisplay(GameObject menu, float alphavalue) {

        Transform tempcanvas = FindTransform(menu.transform, "ThoughtBubble");

        if (tempcanvas != null) {
            CanvasGroup[] cg;
            cg = tempcanvas.gameObject.GetComponents<CanvasGroup>();
            if (cg != null) {
                foreach (CanvasGroup cgs in cg) {
                    cgs.alpha = alphavalue;
                }
            }
        }
    }

    // Find a child transform by name
    public static Transform FindTransform(Transform parent, string name)
    {
        if (parent.name.Equals(name)) return parent;
        foreach (Transform child in parent)
        {
            Transform result = FindTransform(child, name);
            if (result != null) return result;
        }
        return null;
    }
}

Wednesday, November 12, 2014

Unity 4: Knowing which user profile is in use

Previous versions of the Unity Integration package did not include a call for getting the user profile name. As of 0.4.3, it is now possible to get the user profile name. To know which profile is being used, you can use GetString(), found in the OVRManager.cs script.

public string GetString(string propertyName, string defaultVal = null)

Below is a simple example script (report.cs) that uses this method to print out the name of the current user profile to the console. To use this script, attach it to an empty game object in a scene that is using the OVRCameraRig or OVRPlayerController prefab. With the Rift connected and powered on, run the scene in the Unity Editor. If default is returned, no user profile has been found.


using UnityEngine;
using System.Collections;
using Ovr;

public class report : MonoBehaviour {
    void Start () {
        Debug.Log(OVRManager.capiHmd.GetString(Hmd.OVR_KEY_USER, ""));
    }
}


The GetString() method found in the OVRManager.cs script is used to get the profile values for the current HMD. The OVRManager.cs script gets a reference to the current HMD, capiHmd. The Hmd class, defined in OvrCapi.cs, provides a number of constants that you can use to get user profile information for the current HMD. In this example, I used OVR_KEY_USER to get the profile name. You could also get the user's height (OVR_KEY_PLAYER_HEIGHT), IPD (OVR_KEY_IPD), or gender (OVR_KEY_GENDER), for example.
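
As a further illustration, a sketch along these lines could dump several profile values at once. The string reads use the same GetString() call as above; the height and IPD reads assume the Hmd class also exposes a matching GetFloat() wrapper, and the fallback values here are just examples:

using UnityEngine;
using System.Collections;
using Ovr;

public class profileReport : MonoBehaviour {
    void Start () {
        Hmd hmd = OVRManager.capiHmd;
        Debug.Log("User:   " + hmd.GetString(Hmd.OVR_KEY_USER, "default"));
        Debug.Log("Gender: " + hmd.GetString(Hmd.OVR_KEY_GENDER, "Unspecified"));
        // Assumes a GetFloat() wrapper exists alongside GetString() in OvrCapi.cs.
        Debug.Log("Height: " + hmd.GetFloat(Hmd.OVR_KEY_PLAYER_HEIGHT, 1.778f));
        Debug.Log("IPD:    " + hmd.GetFloat(Hmd.OVR_KEY_IPD, 0.064f));
    }
}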