Bring your Unity VR app to a fully immersive space

The next step Apple suggests is to learn about Unity, so let's check out when and where we should use it. Here we learn about fully immersive spaces and Unity.

“Discover how you can bring your existing Unity VR apps and games to visionOS. We’ll explore workflows that can help you get started and show you how to build for eyes and hands in your apps and games with the Unity Input System. Learn about Unity’s XR Interaction Toolkit, tips for foveated rendering, and best practices.” -Apple

Unity has brought their engine and XR ecosystem to this new platform, making it simple for a Unity developer like yourself to bring your project over.
  • You’ll start by creating an Immersive Space with the Full immersion style. This allows your app to hide passthrough and transport someone to another world.
    • In a fully immersive experience, Unity uses Compositor Services, which gives your app access to Metal rendering capabilities.
    • Unity also takes advantage of ARKit to recognize your body position and the surroundings, including skeletal hand tracking.
    • Unity builds upon these technologies to provide the same services in the Unity Engine.
  • There are two main approaches for creating immersive experiences on this platform with Unity.
    • You can bring a fully immersive Unity experience to this platform, replacing the player’s surroundings with your own environments.
    • Or you can mix your content with passthrough to create immersive experiences that blend in with their surroundings.
  • First I’ll cover the workflow you’ll use to deploy content from Unity to the device. Then there are a few things you’ll need to keep in mind related to graphics on this platform. And finally, I’ll talk about how to adapt controller input to hand input, and some of the tools that Unity provides to help with this transition.

Build and run workflow

  • To start, there’s a build and run workflow that you should already be familiar with. We’ve built full support for this platform into Unity, so you can see your projects running on this device in just a few steps.
    • The first is to select the build target for this platform. Then, like you would for any other VR platform, enable the XR Plug-in.
    • If your app relies on native plug-ins, they’ll need to be recompiled for this platform.
    • On the other hand, if you are using raw source code or .mm files, you’re already good to go. Building from Unity will now generate an Xcode project, just like it does for an iOS, Mac, or Apple TV target.
    • Then, from within Xcode, you can build and run to either the device or the device simulator for faster iteration.
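    • As a rough sketch of scripting that step (place the file in an Editor folder), the example below builds the Xcode project from a menu item. It assumes Unity 2022.3 LTS or later with visionOS build support installed; the BuildTarget value and output path are illustrative and may differ for your project and Unity version.
       // A minimal sketch: build the visionOS player from an editor menu item.
       // Assumes Unity 2022.3 LTS or later with visionOS build support installed;
       // the BuildTarget value and output path below are illustrative.
       using System.Linq;
       using UnityEditor;

       public static class VisionOSBuild
       {
           [MenuItem("Build/Build visionOS Xcode Project")]
           public static void Build()
           {
               // Gather the scenes enabled in Build Settings.
               var scenes = EditorBuildSettings.scenes
                   .Where(scene => scene.enabled)
                   .Select(scene => scene.path)
                   .ToArray();

               // The build output is an Xcode project, just like iOS or tvOS;
               // open it in Xcode to run on the device or the simulator.
               BuildPipeline.BuildPlayer(scenes, "Builds/visionOS", BuildTarget.VisionOS, BuildOptions.None);
           }
       }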

Prepare your graphics

  • The graphics pipeline you’ll use to transform someone’s surroundings into a fully immersive experience is likely to be familiar to you as well.
    • But there are a few new concepts that are important to understand.
    • One choice every project makes right in the beginning is which rendering pipeline to use.
    • The Universal Render Pipeline is an ideal choice. It enables a special feature unique to this platform called Foveated Rendering.

      • Foveated Rendering is a technique that concentrates more pixel density in the center of each lens, where the eyes are likely to be focused, and less detail in the periphery, where the eyes are less sensitive to detail. This results in a much higher-quality experience for the person using the device.
        • When you use the Universal Render Pipeline, Static Foveated Rendering is applied throughout the entire pipeline.
        • And it works with all URP features, including post-processing, camera stacking, HDR, and more.
        • If you have custom render passes that would benefit from Foveated Rendering, there are new APIs in Unity 2022 that can take advantage of this technology.
        • Since rendering now happens in a nonlinear space, there are also shader macros to handle that remapping.
        • Taking advantage of Static Foveated Rendering means you’ll spend resources on the pixels that matter and produce a higher-quality visual experience.
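        • As a rough sketch of those runtime controls, the example below queries the active display subsystem and raises the foveation level. It assumes Unity 2022.2 or later, where XRDisplaySubsystem exposes a foveatedRenderingLevel property; check the docs for your Unity version before relying on it.
           // A minimal sketch: tune the foveation level on the active XR display.
           // Assumes Unity 2022.2+ where XRDisplaySubsystem exposes foveatedRenderingLevel;
           // verify the property against your Unity version's documentation.
           using System.Collections.Generic;
           using UnityEngine;
           using UnityEngine.XR;

           public class FoveationSetup : MonoBehaviour
           {
               void Start()
               {
                   var displays = new List<XRDisplaySubsystem>();
                   SubsystemManager.GetSubsystems(displays);

                   foreach (var display in displays)
                   {
                       // 0 = no foveation, 1 = strongest foveation.
                       display.foveatedRenderingLevel = 1.0f;
                   }
               }
           }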
    • Another way to optimize your graphics on this platform is by using Single-Pass Instanced Rendering.
      • In Unity, Single-Pass Instanced Rendering now supports the Metal graphics API, and it will be enabled by default. With Single-Pass Instanced Rendering, the engine submits only one draw call for both eyes, and reduces the overhead of certain parts of the rendering pipeline like culling and shadows.
        • This reduces the CPU overhead of rendering your scenes in stereo. The good news is, if your app already renders correctly on other VR platforms using Single-Pass Instanced Rendering, shader macros ensure it should work here as well.
    • There’s one last thing to consider. Make sure your app is writing to the depth buffer for every pixel correctly.
      • The system compositor uses the depth buffer for reprojection. Wherever the depth information is missing, the system will render an error color as an indication.
      • One example is the skybox, which is normally infinitely far away from the user, so it writes a depth of zero with reverse Z.
        • This requires modification to appear on the device. We’ve fixed all of Unity’s shaders to write correct values to the depth buffer, but if you have any custom effects such as a custom skybox, or perhaps a water effect or transparency effects, ensure that some value is written to depth for each pixel.

Input options

  • Now that you’ve rendered your graphics to the device, it’s time to make them interactive. Interaction on this device is unique.
  • People will use their hands and their eyes to interact with content. There are a few ways you will be able to add interaction to your Unity apps on this platform.
  • The XR Interaction Toolkit adds hand tracking to make it easier for you to adapt existing projects. You can also react to the built-in system gestures with the Unity Input System.
  • And you can access the raw hand joint data for custom interactions with the Unity Hands Package.
  • The XR Interaction Toolkit, also known as XRI, provides a high-level interaction system.
    • The toolkit is designed to make it easy to translate input into interactions. It works with both 3D and UI objects.
    • XRI abstracts away the type of input, like hand tracking, and translates that input into actions that your app can respond to.
    • This means your input code can work across platforms that accept different types of input.
    • XRI makes it easy to respond to common interactions like hover, grab, and select, both in 3D space and in the UI for 3D spatial worlds.
    • The toolkit also includes a locomotion system so people can travel through a fully immersive space more comfortably.
    • As people interact with your world, visual feedback is important for immersion. XRI enables you to define the visual reactions for each input constraint.
  • The core of XRI is a set of base Interactable and Interactor components.
    • Interactables are objects in your scene that can receive input. You define Interactors, which specify how people can interact with your Interactables.
    • The Interaction Manager ties these components together.
    • The first step is to decide which objects in the scene can be interacted with, and how to react when those interactions occur.
      • We do this by adding an Interactable component to the object. There are three built-in types.
        • Simple marks the object as receiving interactions. You can subscribe to events like SelectEntered or SelectExited with this component.
        • With Grab, when the object is selected or grabbed, it will follow the Interactor around and inherit its velocity when released.
        • Teleport interactables like TeleportArea and TeleportAnchor enable you to define areas or points for the player to teleport to.
        • And you can create your own custom Interactables.
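      • As a minimal sketch (assuming XRI 2.x from the com.unity.xr.interaction.toolkit package), the component below subscribes to an Interactable’s select events and logs them:
         // A minimal sketch assuming XRI 2.x (com.unity.xr.interaction.toolkit).
         // Attach next to an XRSimpleInteractable or XRGrabInteractable; it logs
         // when the object is selected and released.
         using UnityEngine;
         using UnityEngine.XR.Interaction.Toolkit;

         [RequireComponent(typeof(XRBaseInteractable))]
         public class SelectLogger : MonoBehaviour
         {
             XRBaseInteractable interactable;

             void OnEnable()
             {
                 interactable = GetComponent<XRBaseInteractable>();
                 interactable.selectEntered.AddListener(OnSelectEntered);
                 interactable.selectExited.AddListener(OnSelectExited);
             }

             void OnDisable()
             {
                 interactable.selectEntered.RemoveListener(OnSelectEntered);
                 interactable.selectExited.RemoveListener(OnSelectExited);
             }

             void OnSelectEntered(SelectEnterEventArgs args) =>
                 Debug.Log($"{name} selected by {args.interactorObject.transform.name}");

             void OnSelectExited(SelectExitEventArgs args) =>
                 Debug.Log($"{name} released");
         }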
    • Interactors are responsible for selecting or interacting with the objects you’ve tagged as Interactable.
    • They define a list of Interactables that they could potentially hover over or select each frame. There are several types of Interactors.
      • Direct Interactors select Interactables that are touching them. You would use one of these when you want to know when a person’s hands touch an interactable object, or when they are close to interactable objects.
      • Ray Interactors are used for interacting from far away. This Interactor is highly configurable with curved and straight lines, and customizable visualizations to help you adapt it to the visual style of your project. Once the user starts the interaction, you have options on how that interaction works. For example, if it’s a grab interaction, you may want to move the object to the user’s hand. And the Ray Interactor makes it possible to limit the degrees of freedom for the grab in order to match your gameplay needs.
        • A common interaction in a fully immersive experience is grabbing an object and placing it somewhere contextual to that object. For example, placing a battery in a socket.
      • The Socket Interactor shows the player that a certain area can accept an object. These Interactors are not attached to the hands; instead they live somewhere in the world.
      • With hand tracking or even controllers, a common type of interaction that people naturally want to perform is the poke. The Poke Interactor is similar to a Direct Interactor, except that it includes direction filtering, so the correct motion must be performed in order to trigger an interaction.
      • If you want people to interact by looking, the Gaze Interactor provides some extensions to the Ray Interactor to make gaze a bit easier to deal with.
        • For example, Gaze Interactors can automatically make the colliders larger for Interactables so that they’re easier to select.
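      • These Interactors are normally configured in the Inspector, but the same options are also available from code. A minimal sketch, assuming XRI 2.x:
         // A minimal sketch assuming XRI 2.x: configure a Ray Interactor from code.
         // The same options are normally set in the Inspector.
         using UnityEngine;
         using UnityEngine.XR.Interaction.Toolkit;

         [RequireComponent(typeof(XRRayInteractor))]
         public class RayInteractorSetup : MonoBehaviour
         {
             void Start()
             {
                 var ray = GetComponent<XRRayInteractor>();

                 // Use a curved (projectile) line, e.g. for teleport aiming.
                 ray.lineType = XRRayInteractor.LineType.ProjectileCurve;

                 // Limit how far away Interactables can be selected from.
                 ray.maxRaycastDistance = 10f;
             }
         }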
    • To bring it all together, the Interaction Manager serves as a middleman between the Interactors and Interactables, facilitating the exchange of interactions.
      • Its primary role is to initiate changes in the interaction state within a designated group of registered Interactors and Interactables.
      • Usually, a single Interaction Manager is established to enable the possibility of all Interactors affecting all Interactables.
      • Alternatively, multiple complementary Interaction Managers can be utilized, each with their own unique assortment of Interactors and Interactables.
      • These managers can be activated or deactivated to enable or disable specific sets of interactions.
        • For example, you may have a different set of Interactables per scene, or in your menus.
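      • As a rough sketch (assuming XRI 2.x), the component below routes an Interactable to a specific Interaction Manager and toggles managers to switch between sets of interactions:
         // A minimal sketch assuming XRI 2.x: assign an Interactable to a specific
         // Interaction Manager and toggle managers to enable/disable interaction sets.
         using UnityEngine;
         using UnityEngine.XR.Interaction.Toolkit;

         public class MenuInteractionToggle : MonoBehaviour
         {
             [SerializeField] XRInteractionManager menuManager;
             [SerializeField] XRInteractionManager worldManager;
             [SerializeField] XRBaseInteractable menuButton;

             void Start()
             {
                 // Interactors and Interactables register with whichever manager
                 // is assigned to their interactionManager property.
                 menuButton.interactionManager = menuManager;
             }

             public void ShowMenu(bool show)
             {
                 // Disabling a manager disables the interactions it coordinates.
                 menuManager.enabled = show;
                 worldManager.enabled = !show;
             }
         }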
    • Finally, the XR Controller component helps you make sense of the input data you’ll receive. It takes input actions from the hands or a tracked device and passes them to the Interactors so that they can decide whether to select or activate something based on that input.

      • You will need to bind Input Action References for each of the XR Interaction States, such as Select. You’re not limited to just one XR Controller component per hand or controller, which gives us the flexibility to support both hands and controllers independently.
        • Sample code that is bundled with XRI shows you how you can do this.
        • In addition to the advanced features of XRI, you’ve also got the option of simply using the system gesture inputs directly from the Unity Input System.
        • You can then map the platform’s built-in interactions, like tap gestures, to your own interaction system. You can use the binding paths from the Unity Input System to access and respond to these system gestures.
        • The pinch gesture, for example, comes through as a value when active, with position and rotation.
        • These can be bound to input actions. Where the person is directing their gaze comes through on the same frame as the pinch gesture, also with position and rotation.
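        • As a rough sketch, the component below reads that data through the Unity Input System. The three actions are placeholders for illustration; you would bind them to the platform’s pinch bindings in your Input Actions asset:
           // A minimal sketch: read pinch data through the Unity Input System.
           // The three actions below are illustrative placeholders; bind them to the
           // platform's pinch bindings in your Input Actions asset.
           using UnityEngine;
           using UnityEngine.InputSystem;

           public class PinchReader : MonoBehaviour
           {
               [SerializeField] InputActionProperty pinch;          // button-style action
               [SerializeField] InputActionProperty pinchPosition;  // Vector3
               [SerializeField] InputActionProperty pinchRotation;  // Quaternion

               void OnEnable()
               {
                   pinch.action.Enable();
                   pinchPosition.action.Enable();
                   pinchRotation.action.Enable();
               }

               void OnDisable()
               {
                   pinch.action.Disable();
                   pinchPosition.action.Disable();
                   pinchRotation.action.Disable();
               }

               void Update()
               {
                   if (pinch.action.IsPressed())
                   {
                       var position = pinchPosition.action.ReadValue<Vector3>();
                       var rotation = pinchRotation.action.ReadValue<Quaternion>();
                       Debug.Log($"Pinch at {position}, rotation {rotation}");
                   }
               }
           }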
      • For even more flexibility, you can use the Unity Hands Subsystem to access all of the raw hand joint data from the system through the Unity Hands Package.
        • The Unity Hands Package provides access to low-level hand joint data that are consistent across platforms.
        • For example, you can write code to look at each joint and determine how close the pose is to a certain gesture, like a thumbs up or a pointing index finger, and translate those into gameplay actions.
        • This is powerful but can be challenging to get right, since everyone’s hands are a different size and people have different ranges of motion.
        • This code defines a method which tells you if the index finger is extended.
           // Translate raw joints into gameplay actions
          
           static bool IsIndexExtended(XRHand hand)
           {
               if (!(hand.GetJoint(XRHandJointID.Wrist).TryGetPose(out var wristPose) &&
                     hand.GetJoint(XRHandJointID.IndexTip).TryGetPose(out var tipPose) &&
                     hand.GetJoint(XRHandJointID.IndexIntermediate).TryGetPose(out var intermediatePose)))
               {
                   return false;
               }
          
               var wristToTip = tipPose.position - wristPose.position;
               var wristToIntermediate = intermediatePose.position - wristPose.position;
               return wristToTip.sqrMagnitude > wristToIntermediate.sqrMagnitude;
           }
          
        • You can call this method from the OnHandUpdate event and pass in one of the hands.
          • First, get a few specific joints to check if the index finger is extended. If any of them are invalid, it will return false.
          • If all joints are valid, do a simple check to make sure that the index finger isn’t curled.
          • You can extend this logic to other fingers to start to implement some basic gesture detections.
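        • As a rough sketch of wiring that up (assuming the Unity XR Hands package, com.unity.xr.hands), the component below subscribes to the hand subsystem’s updatedHands event and runs the check on the right hand each update:
           // A minimal sketch assuming the Unity XR Hands package (com.unity.xr.hands):
           // subscribe to hand updates and run the gesture check on the right hand.
           using System.Collections.Generic;
           using UnityEngine;
           using UnityEngine.XR.Hands;

           public class IndexGestureListener : MonoBehaviour
           {
               XRHandSubsystem handSubsystem;

               void Start()
               {
                   var subsystems = new List<XRHandSubsystem>();
                   SubsystemManager.GetSubsystems(subsystems);
                   if (subsystems.Count == 0)
                       return;

                   handSubsystem = subsystems[0];
                   handSubsystem.updatedHands += OnUpdatedHands;
               }

               void OnDestroy()
               {
                   if (handSubsystem != null)
                       handSubsystem.updatedHands -= OnUpdatedHands;
               }

               void OnUpdatedHands(XRHandSubsystem subsystem,
                                   XRHandSubsystem.UpdateSuccessFlags flags,
                                   XRHandSubsystem.UpdateType updateType)
               {
                   // IsIndexExtended is the static helper from the snippet above;
                   // include it in this class or a shared utility class.
                   if (subsystem.rightHand.isTracked && IsIndexExtended(subsystem.rightHand))
                       Debug.Log("Right index finger is extended");
               }
           }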
    • Another use for the raw hand joint data is mapping it to a custom hand mesh visual. This can help make the hands fit more into the art style of your game.
      • For example, Rec Room used the raw hand joint data to show a stylized hand model that fits their visual style. They also show other player hand models for more immersion.
    • The Unity Hands package has some sample code to get you started if you want to explore raw hand joint access further.
  • To get more information about Unity’s support for this platform and to sign up for early beta access, please visit unity.com/spatial.
