
VR User Interface Research

"Exploration in the limitless canvas"

2015-2018

VR UI/UX is vastly different from the experiences we are used to on our screens. How can we translate many of the same interfaces we interact with on 2D displays and expand them into immersive spaces? This was my challenge, and the learnings from this research extended to many future projects.

Experience Design

User Interface Design & Simulations

User Interface Prototyping

The Challenge

The classic challenge of responsive design spans a spectrum of display and interface sizes. On one end are compact interfaces on smartphone or even smartwatch displays, while on the opposite end are large interfaces on the displays we use with desktop computers. Our challenge was to extend that spectrum even beyond large-format touch displays. What if interfaces were so large that the user was literally inside of them? These are the types of interfaces Virtual Reality technology can enable. However, VR user interfaces and experiences are vastly different from those we are used to on our screens. How can we translate many of the experiences we have on 2D displays and expand them into immersive spaces? What anchors can we use to limit friction in the user experience and create familiar yet seemingly limitless canvases? This was my challenge, and the learnings from this research extended to many ventures I have been a part of.

Summary of Results

  • Developed design systems and design principles in the early days of VR's emergence into the mainstream
  • Defined hypotheses and justifications for design experiments based on strategic insight into VR UX challenges and foreseeable constraints
  • Managed a small team of 3 UX designers, developing design prototypes through iterative loops of 3D prototyping, animated simulation and user testing
  • Researched and developed novel navigation, gesture modalities and reticles for interactive environments
  • Developed modalities for sandbox environments using constraint-driven VR input methods
  • Developed novel stationary locomotion principles utilized in subsequent projects and commercial products
  • Implemented learnings from large-format display interfaces to create spatial interface modalities
  • Raised over $60,000 in non-dilutive grant funding through several research grants in partnership with the Canadian government, enabling research, design and development initiatives
  • Research performed here became the basis for several subsequent ventures and for design patent portfolios commercializing successful experiments

Strategy & Insights

What’s possible with spatial interface design? What works and what doesn’t? These were the core questions that drove this research project. On top of the fundamental UX considerations required by the traditional design process, VR introduces additional factors: ergonomics, simulator sickness, spatial awareness and display limitations all had equal impact on design decisions.

Key Principles

To create truly cutting-edge design experiences, we wanted to focus on 3 key principles:

The Coupling of Environment and Interface

As part of this exercise, we focused on extending traditional interfaces to take maximum advantage of the third dimension, creating what we call ‘Interface Environments’ as opposed to the Interface/Environment dichotomy that has dominated VR UX strategy. These ‘Interface Environments’ allow for much more complex navigation systems that take advantage of physiological norms hardwired into human behavior.

The Decoupling of Bipedal Movement and Locomotion

Because interfaces are now spatial and take advantage of the third dimension, movement or translation through that dimension is required. Relying on typical 6DoF head tracking alone is inefficient and limits interface complexity. We needed alternative modes of locomotion to make navigation efficient and comfortable.

Utilize Spaces in Peripheral Zones

Spaces above, below and behind the user can be quite useful for orientation and added levels of complexity. Humans have physiological hardwiring that can be utilized to create familiar and comfortable interactions beyond the spaces directly in front of the user or in their line of sight.

Technology & Invention


Design & Experience

Efficiency via Stationary Locomotion

Coupling interface with environment creates an ‘interface environment’ that takes up space and requires navigation. Likewise, 3rd-person input for content creation requires changes in perspective, translation and navigation. Efficient translation and locomotion through these spaces is critical to a useful user experience, and it needs to happen in a physiologically correct manner that doesn’t cause motion sickness. That is the challenge.

To accommodate movement through large spaces, or to increase efficiency in translation through complex spatial interfaces, we needed to develop modalities for stationary locomotion.

Unsuccessful Iterations

Z-Depth translation through space via acceleration sliders. This attempt proved unsuccessful due to the physiological disconnect between slider movement and the user’s forward translation. This early prototype was prone to causing motion sickness.

Z-Depth translation with physiological synchronization via ‘pull’ acceleration. The next locomotion modality was based on forward movement using hand motions similar to pulling oneself forward with ski poles. Skiers have limited foot mobility when attached to skis, hence the utility of poles to pull forward around physical anchors. This removed the need for bipedal movement to translate across large spaces. The challenge with this modality was that the movement was often too complex and foreign for new users to understand, significantly increasing friction and onboarding time.

Z-Depth translation with sequential button triggers. This modality solved the UX problem of the previous ‘pull’ triggers; however, the disconnection between physiological movement and virtual movement caused motion sickness. The issue was that forward movement was triggered only AFTER the button was pushed, creating a noticeable physiological confusion in the brain. Although users understood the task much more easily, we needed to solve the physiological problem.

Successful Iterations

Z-Depth translation with physiological synchronization via ‘push’ acceleration. Taking components of the previous iteration, we simplified the trigger movement to mimic the push of a virtual button. However, instead of forward movement being triggered by the button push, forward movement was tied directly to hand acceleration AS the button was being pushed, rather than AFTER. The button was in essence fake, acting only as a visual indicator for the user; pressing it had no corresponding event in the experience. It was the forward hand acceleration that directly drove the virtual camera’s acceleration in Z-depth. Accompanied by footstep sound effects, this modality proved successful, easy to understand and physiologically appropriate for the user.
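
For illustration, here is a minimal per-frame sketch of this ‘push’ coupling, assuming a generic engine update loop; the names and tuning constants (GAIN, DAMPING) are hypothetical, not taken from the original prototypes:

```python
from dataclasses import dataclass

@dataclass
class Hand:
    accel_z: float       # forward hand acceleration this frame (m/s^2)
    on_button: bool      # hand currently overlapping the visual button volume

@dataclass
class Camera:
    position_z: float = 0.0
    velocity_z: float = 0.0

GAIN = 0.6      # hand-to-camera acceleration coupling (hypothetical tuning value)
DAMPING = 4.0   # per-second velocity decay so the user coasts to a stop

def update_locomotion(hand: Hand, cam: Camera, dt: float) -> None:
    """Drive forward motion from hand acceleration DURING the push.

    The button is purely visual: no event fires on press. While the hand
    overlaps the button volume, its forward acceleration feeds the camera
    directly, so virtual motion begins the instant the hand moves,
    avoiding the press-then-move lag that caused motion sickness earlier.
    """
    if hand.on_button:
        cam.velocity_z += GAIN * hand.accel_z * dt
    cam.velocity_z -= DAMPING * cam.velocity_z * dt
    cam.position_z += cam.velocity_z * dt
```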

Z-Depth translation simulated via video playback. Taking what we knew about translation through 3D space, we wanted to simulate movement through video playback. We started with 360 videos showing forward movement through space along a linear timeline. We then built a video playback module that scrubbed through video frames stored in memory, playing them back based on hand acceleration along the z-axis. We placed our walking-button UI both in front of and behind the user along a single axis in virtual space. When the user faced forward and tapped the walking button, the video played forward along the timeline based on hand acceleration, creating the sensation of moving forward. Likewise, when the user turned 180 degrees and tapped the walking button, the video scrubbed in reverse, again simulating forward motion. This was an interesting simulation that could be useful in many use cases.
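
A rough sketch of that scrubbing logic, under the same assumptions as above; the gain and refresh-rate values are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class ScrubState:
    frame: float = 0.0   # fractional index into the buffered 360-video frames

def scrub_video(state: ScrubState, hand_accel_z: float,
                facing_forward: bool, n_frames: int,
                gain: float = 30.0, dt: float = 1 / 72) -> int:
    """Advance the buffered 360 video from hand motion while the walking
    button is tapped.

    Facing forward plays the timeline forward; after a 180-degree turn the
    timeline scrubs in reverse, which still reads as forward motion to the
    user. `gain` (frames per unit of hand acceleration) is a hypothetical
    tuning value.
    """
    direction = 1.0 if facing_forward else -1.0
    state.frame += direction * gain * max(hand_accel_z, 0.0) * dt
    state.frame = min(max(state.frame, 0.0), float(n_frames - 1))
    return int(state.frame)   # frame index to display on this refresh
```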

Navigation & Spatial Interface Design

Virtual reality products are all about three dimensions, and the user interface is no longer restricted to a flat rectangular screen or surface. We considered how design works in the real world and applied it to the virtual world, taking into account elements such as sightlines. How do you organize the key components of an interface when the canvas size is unlimited? We can utilize key physiological rules corresponding to directional gazing as anchors in creating interfaces that are both natural and functional.

Forward Gazing and Room-Based Organization

When coupling interfaces and virtual environments, we can personify the information architecture as rooms. Sections can be laid out in front of the user as doors to move forward through. The deeper the forward (Z) movement, the further down the information architecture tree the user moves.

Backwards Gazing and Breadcrumbs

Likewise, turning back to see where you’ve come from is a useful physiological anchor that can be tied to the breadcrumb modality of going back. Creating a simple link between forward and backward movement in Z and movement along the information architecture tree can be quite useful.

Upwards Gazing and Abstraction

There are useful positions within the virtual environment that may be outside the immediate line of sight, including the spaces above and below the user. Research suggests that upward deviations of the eyes activate creative activity in the brain, encouraging us to reach beyond nearby time and space toward the infinite and eternal. This physiological hardwiring can be anchored to menu systems holding unrelated or abstracted information that does not directly correspond to what is currently in front of the user.

Downwards Gazing and Organization

Gazing downward is a clear psychological anchor related to organizing the assets we currently have. Animals often bury valuable items in the ground, humans organize files on a desk, and we usually keep valuables in our pockets. All of these behaviors can be used as anchors in the interface design. When it comes to storage, asset accumulation and archiving, it makes the most sense to utilize the Z space below the user.
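
Taken together, these four anchors amount to a simple orientation-to-intent mapping. The sketch below illustrates one way to express it; the threshold angles are hypothetical tuning values, not measured results from the research:

```python
from enum import Enum

class GazeZone(Enum):
    FORWARD = "forward"    # move deeper into the IA tree (rooms ahead)
    BACKWARD = "backward"  # breadcrumbs: step back up the tree
    UP = "up"              # abstracted or unrelated menu systems
    DOWN = "down"          # storage, asset accumulation and archiving

def classify_gaze(pitch_deg: float, yaw_deg: float) -> GazeZone:
    """Map head orientation onto the four gaze anchors described above.

    Pitch is positive upward; yaw is measured from the user's initial
    forward direction. The threshold angles are assumed values.
    """
    if pitch_deg > 35:
        return GazeZone.UP
    if pitch_deg < -35:
        return GazeZone.DOWN
    if abs(yaw_deg) > 120:
        return GazeZone.BACKWARD
    return GazeZone.FORWARD
```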

Gestures & Reticles

How do you represent your hands in VR? That is the critical question. Our hands are the most natural ‘pointing’ devices available; however, when hand tracking is limited and often error-prone, how do you represent them in a crude yet useful manner? We tried various iterations, from lidar-based hand tracking projected as simple silhouettes, to converting the hand into an actual pointer with corresponding reticles, to a combination of the two.

The challenge with realistic hand representation is the uncanny valley. Many technologies available today are not yet mature enough to capture the hands in full fidelity, which can create a disconnect between what is expected and what is actually projected in the virtual environment. To address this limitation, we proceeded with a solution that provided 80% of the result with only 20% of the computational effort. Tracking 4DoF, XYZ position plus roll, was sufficient to provide most of the capabilities needed to manipulate virtual objects. This made pressing, grabbing and rotating all possible with one hand, while allowing special gestures to be performed with both hands in combination. The visual representations seen here are all variations of hand abstraction - the more realistic the hand, the higher the expectation of interaction fidelity.
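
As a sketch of this 80/20 reduction, the pose abstraction might look like the following; the type names are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class TrackedHand:
    """Full 6DoF pose as reported by the tracking hardware."""
    x: float
    y: float
    z: float
    roll: float
    pitch: float
    yaw: float

@dataclass
class AbstractHand:
    """Reduced 4DoF representation: XYZ position plus roll.

    Enough for pressing, grabbing and rotating with one hand, while the
    crude visual keeps interaction-fidelity expectations low.
    """
    x: float
    y: float
    z: float
    roll: float

def abstract_pose(h: TrackedHand) -> AbstractHand:
    # Discard pitch and yaw: the silhouette/pointer never tilts, which
    # sidesteps uncanny-valley mismatches from noisy full-hand tracking.
    return AbstractHand(h.x, h.y, h.z, h.roll)
```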

Interface Components

Successful Interactions

There are several input modalities we explored in spatial interface design. The most critical interface components were microinteractions with buttons and anchors. Creating buttons that looked simple yet delighted the user with deep, life-like behavior when engaged was important in setting expectations. Likewise, allowing virtual UI components to overlay 3D objects was a great advantage in VR environments. Anchors provided additional functionality for more precise manipulation than is available in real-world interactions with physical objects.
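
One way to model such a depth-driven button microinteraction is sketched below; the travel distance and trigger thresholds are assumed values, not figures from the project:

```python
from dataclasses import dataclass

@dataclass
class VRButton:
    """Depth-driven button microinteraction, as a sketch.

    Press progress follows fingertip depth into the button face, so the
    button feels deep and life-like rather than binary.
    """
    travel: float = 0.02        # full press travel in meters (assumed)
    trigger_ratio: float = 0.8  # fire once the button is 80% depressed
    fired: bool = False

    def update(self, fingertip_depth: float) -> bool:
        progress = min(max(fingertip_depth / self.travel, 0.0), 1.0)
        if progress >= self.trigger_ratio and not self.fired:
            self.fired = True    # fire exactly once per press
            return True          # caller plays the click sound / haptic pulse
        if progress < 0.1:
            self.fired = False   # re-arm once the finger has nearly released
        return False
```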

Unsuccessful Iterations

We noticed that keyboard and canvas interactions were quite tedious and unsatisfying experiences in VR. Interfaces could not rely on virtual keyboards, as they oversaturated the user with too many input options, making voice dictation and voice control much more suitable. Likewise, drawing on flat canvases was unsatisfying and impractical compared to line drawing in three dimensions.

Don’t expect users to read text

Textual instructions don’t perform well in VR spaces for several reasons: a lot of text can cause eye strain, and it also breaks the sense of immersion. When it comes to VR design, it’s always better to use short sentences or audio instructions instead of long blocks of text.

Complex visual input modalities don’t work

Virtual user experiences bring us closer to converging the virtual and real worlds. Thus, it makes more sense to utilize voice as much as possible instead of virtual keyboards and other input modalities that create bottlenecks in the experience. We also realized that 1st-person input in modeling and building environments was still less efficient than the 3rd-person input modalities we are used to in 3D modeling and illustration software.

Work around the uncanny valley

Pushing technological capabilities is critical to getting VR to mass adoption. However, technical limitations still exist, and managing user expectations is critical to successful VR user experiences on today’s generation of hardware. Often we can get 80% of the benefits with 20% of the effort, and we need to follow this rule in all aspects of experience design, including the abstraction of input modalities.

Product Showcasing

As commerce began expanding into virtual reality, it was interesting to explore how products could be showcased in these spaces. Grabbing, rotating and manipulating objects as part of the showcase experience was important - but solving how supporting information would be displayed was also a key part of the experience. We experimented with mimicking retail showcasing in brick-and-mortar environments, using contrast in depth and supporting environmental elements to immerse the user in the product experience.

Expanding virtual environments to mimic a full brick-and-mortar shopping experience was also something we experimented with. Personifying shopping spaces as virtual showrooms was an interesting concept that still leaves many stones unturned, most notably how shopping could be elevated with multi-user or group shopping experiences.

Sandbox Environments

We experimented with various interfaces for users to build and explore spaces. Building spaces in VR in 1st person proved far less effective than building in 3rd person, making perspective toggling critical for sandbox environments in VR. Although 3rd-person interactions were far more productive for building out spaces, fine-tuning and adjustments were equally effective in 1st person.
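
A minimal sketch of the perspective-toggling idea, assuming a single world-scale parameter; the 1:20 miniature scale is hypothetical:

```python
from dataclasses import dataclass
from enum import Enum

class Perspective(Enum):
    FIRST_PERSON = 1   # life-size: fine-tuning and adjustments in place
    THIRD_PERSON = 2   # miniature: efficient large-scale building

@dataclass
class SandboxCamera:
    mode: Perspective = Perspective.FIRST_PERSON
    world_scale: float = 1.0   # scale of the world relative to the user

def toggle_perspective(cam: SandboxCamera) -> None:
    """Switch between 1st- and 3rd-person editing.

    In 3rd person the world shrinks to a tabletop model (the 1:20 scale
    here is an assumed value) so the whole space sits in front of the
    user, as in desktop 3D modeling tools; toggling back restores life
    size for in-place fine-tuning.
    """
    if cam.mode is Perspective.FIRST_PERSON:
        cam.mode, cam.world_scale = Perspective.THIRD_PERSON, 1 / 20
    else:
        cam.mode, cam.world_scale = Perspective.FIRST_PERSON, 1.0
```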

The most useful application of VR sandbox environments proved to be multi-user collaboration in abstract brainstorming activities. Coupled with efficient locomotion and perspective toggling, this was a very interesting use case, more effective than any other brainstorming activity currently available.


Launch & Showcase

PinchVR® Smartphone Case