Currently, we are working on ways to use the Clinical Encounters Writer to create a VR experience. That adaptation, however, comes with some important considerations.

Approaching the Issue

When developing VR content, we have to put far more thought into which platforms we are targeting than into what content we will use. Traditional console makers figured out long ago that their controls need to be broadly similar if they want developers to build for multiple consoles at once. Oculus, HTC, Sony, and Google, by contrast, are all competing right now not just for VR market share, but for the very vision of what VR can and should be.

At the moment, the headset makers have not agreed on what sort of controls the user should have, or even whether the user should sit or stand. These may sound like small differences, but they have a huge impact on design and development. Still, despite all the variation in technology and price points, there are essentially three tiers of VR design:

  1. Full Immersive: Full Immersive VR uses external sensors that track the position of your head and any other control devices in the play area. This is the tier most people picture when they think of the Vive and Rift. Until recently, we assumed this was what we were always developing for.
    • Examples: HTC Vive, Oculus Rift, PlayStation VR
    • UI: Headsets & handheld controllers with positions tracked externally using 6 degrees of freedom
    • Pros: Allows much of the user’s natural movements to be translated into intuitive and immersive forms of UI
    • Cons: Relies on expensive hardware usually owned by a niche audience of technophiles, and generally requires a dedicated physical space reserved for VR play
  2. Head Immersive: This type of VR has no external sensors, so it requires the user to remain seated or keep their body and shoulders generally in one position the entire time; otherwise it breaks immersion. Handheld controllers work, but the game cannot track their exact position. Instead, many controllers use accelerometers to approximate arm movements.
    • Examples: Oculus Go, Samsung Gear, Google Cardboard, Google Daydream, a ton of cheap knock-offs
    • UI: Headsets and controllers with orientation tracked internally using 3 degrees of freedom
    • Pros: Hardware that is relatively easy to afford and set up and requires only enough space to sit down or stand in place
    • Cons: Interactivity is limited to a simple “gaze, hover, and click” style of interaction – anything that causes the user to lurch or shift their body can break immersion
  3. Non-Immersive: There are also options that are compatible with VR but not fully immersed in virtual reality. Both Facebook and YouTube now offer “360” features for photographs and videos. For an example, check out the video Google released a few months back celebrating the life of Georges Méliès. Viewed on a desktop, you can use your mouse to look left, right, up, or down in the space, as if you were watching a stage production. On a smartphone it’s even more impressive: you simply tilt your phone to look around. Capturing these videos from real-life events can be very expensive and challenging, but creating them from simulated or computer-generated graphics is well within our capabilities. To that end, we tracked down a Unity SDK that Google helped create for this very purpose.
    • Examples: Various videos and photographs utilizing 360 viewpoints
    • UI: A mouse or touchscreen, sometimes internal orientation tracking for phones or headsets, found on most modern Android and iOS mobile devices or any desktop computer capable of running a current web browser
    • Pros: Although the format is still relatively new, 360 support is built into most modern browsers – if you have a screen and an internet connection, you can probably view the media
    • Cons: There is almost no interactivity, and the medium favors brevity over length, so this experience is hard to monetize – people expect to simply watch a 5-minute video for free
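The practical difference between the first two tiers comes down to what the headset can report. As a rough illustration (the types and function here are our own, not any vendor's API), a 3-degrees-of-freedom device only gives the app an orientation, while a 6-degrees-of-freedom device adds an externally tracked position on top of it:

```python
from dataclasses import dataclass

@dataclass
class Pose3DoF:
    # Orientation only: yaw/pitch/roll in degrees from the headset's internal sensors.
    yaw: float = 0.0
    pitch: float = 0.0
    roll: float = 0.0

@dataclass
class Pose6DoF(Pose3DoF):
    # Externally tracked head position in meters, added on top of orientation.
    x: float = 0.0
    y: float = 0.0
    z: float = 0.0

def can_lean_around_object(pose: Pose3DoF) -> bool:
    """Only a 6DoF pose lets the app respond to the user physically moving."""
    return isinstance(pose, Pose6DoF)

print(can_lean_around_object(Pose3DoF(yaw=90.0)))        # orientation only
print(can_lean_around_object(Pose6DoF(yaw=90.0, x=0.3)))  # orientation + position
```

This is why a user leaning sideways works naturally on a Vive or Rift but breaks immersion on a Gear or Cardboard: the second tier simply has no position to update.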

Making Different Experiences from the Same Materials

We have mostly been producing experiences with Full Immersive designs. These games and simulations are interesting, but anyone without an expensive gaming computer and top-of-the-line VR peripherals can’t see what it is we’re offering. So as we develop each Full Immersive experience, we also want to consider how to turn it into Head Immersive and Non-Immersive 3D experiences.

For instance, in one of our other projects, we’re developing software that lets us examine the brain in Full Immersive VR. We’re also considering how to design a version of the software that relies only on head tracking for controls. The Head Immersive experience only lets users interact with one option at a time, but instead of redesigning all of the content and interface around that limitation, we can pare the program down to the parts that work with the simpler interface and offer that more basic version for $2 instead of $20.
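The "one option at a time" interaction that tier mostly relies on is the gaze-dwell pattern: the app fires a selection once the user's gaze has rested on the same target long enough. A minimal sketch of that logic, with names and thresholds of our own choosing rather than from any SDK:

```python
class GazeSelector:
    """Minimal gaze-dwell selection: hover a target long enough and it 'clicks'."""

    def __init__(self, dwell_seconds: float = 1.5):
        self.dwell_seconds = dwell_seconds
        self.current_target = None
        self.elapsed = 0.0

    def update(self, gazed_target, dt: float):
        """Call once per frame with whatever the gaze ray currently hits."""
        if gazed_target != self.current_target:
            self.current_target = gazed_target  # gaze moved: restart the timer
            self.elapsed = 0.0
            return None
        self.elapsed += dt
        if gazed_target is not None and self.elapsed >= self.dwell_seconds:
            self.elapsed = 0.0                  # fire once, then reset
            return gazed_target
        return None

selector = GazeSelector(dwell_seconds=1.0)
selected = None
for _ in range(70):                             # ~70 frames at ~60 fps
    result = selector.update("Start Lecture", 1 / 60)
    if result:
        selected = result
print(selected)
```

Because the whole interface reduces to "what is the user looking at, and for how long," any content that needs two simultaneous inputs has to be cut or redesigned for this tier.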

Likewise, there are portions of our software that include short lectures with 3D animations as examples. Since these lecture segments don’t demand interactivity, we can release some of them as Non-Immersive 3D videos outside of the software. This media has almost no barrier to entry, so it can help potential customers understand the quality and nature of the content we’re teaching. With this strategy, we hope to capture the attention and interest of educators and experts, not just VR enthusiasts.
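Under the hood, those 360 videos are usually stored as equirectangular frames, and the viewer just maps the current look direction (from the mouse or the phone's tilt) onto a pixel region of that frame. A hedged sketch of the standard mapping (frame size and function name are ours for illustration):

```python
def equirect_pixel(yaw_deg: float, pitch_deg: float, width: int, height: int):
    """Map a view direction to its pixel in an equirectangular 360 frame.

    yaw: -180..180 degrees (look left/right), pitch: -90..90 degrees (look up/down).
    """
    u = (yaw_deg + 180.0) / 360.0   # 0..1 across the frame, left to right
    v = (90.0 - pitch_deg) / 180.0  # 0..1 down the frame, top to bottom
    return int(u * (width - 1)), int(v * (height - 1))

# Looking straight ahead lands near the center of a 4096x2048 frame.
print(equirect_pixel(0.0, 0.0, 4096, 2048))   # -> (2047, 1023)
# Looking fully left and straight up lands at the top-left corner.
print(equirect_pixel(-180.0, 90.0, 4096, 2048))  # -> (0, 0)
```

Dragging the mouse or tilting the phone only changes yaw and pitch, which is exactly why this tier needs no special hardware at all.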

You can read more about the Clinical Encounters platform at its website and follow along with our development on the blogs.