When will users of immersive technology put down their smartphones and switch to glasses or AR/VR headsets? The answer depends on how we interact with the digital world. Input technology has evolved constantly, from keyboards and mice to touch screens. How will we interact with technology next? In this Tech Town article, let's explore today's most advanced input tools for interacting with virtual reality (VR) and augmented reality (AR).

How do we interact with AR/VR now?

VR and AR connect people and machines more directly than any technology before them. Of course, one could define VR as narrowly as "a VR headset". But if we look at the broader development landscape, AR/VR is really "a biotech input/output system". Let's expand on this concept a bit.


Take vision as an example: it is our main cognitive input and the easiest to visualize. A virtual reality headset uses a camera (image sensor) to take input from the way we move through the environment (mimicking what our eyes normally do) and transforms it into an output in the form of positional tracking, giving us a sense of physical presence and freedom of movement in a synthetic 3D world.
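To make the "input becomes output" idea concrete, here is a minimal sketch of what a positional-tracking system ultimately produces: a head pose (position plus orientation) that the renderer uses to transform world-space points into the headset's frame. The function name and the yaw-only rotation are simplifying assumptions for illustration, not any vendor's actual API.

```python
import math

def world_to_headset(point, head_pos, head_yaw_rad):
    """Transform a world-space 3D point into the headset's local frame,
    given the tracked head position and yaw (rotation about the vertical axis).
    A real tracker outputs full 6DoF poses; we use yaw only for brevity."""
    # Translate so the headset sits at the origin...
    dx = point[0] - head_pos[0]
    dy = point[1] - head_pos[1]
    dz = point[2] - head_pos[2]
    # ...then rotate by the inverse of the head's yaw.
    c, s = math.cos(-head_yaw_rad), math.sin(-head_yaw_rad)
    return (c * dx + s * dz, dy, -s * dx + c * dz)
```

Every frame, the renderer applies a transform like this to the whole scene, which is what creates the sensation that the world stays still while your head moves.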


Similarly, VR/AR tries to take ALL of the input from our physical senses and activities and deliver it back in the form of experiences and activities, opening up a world of digital dreams and possibilities. A superpower, in effect. (The movie "The Matrix" is a good example.)


Our hands are our primary mechanism for influencing the world around us. We are used to keyboards, touch screens, mice, and various gaming controllers for interacting with digital content. In VR we have motion controllers that track 3D hand position and gestures, with sets of buttons and triggers for input. As we all know, the virtual reality industry is still mainly focused on gaming, the area these controllers are ideally adapted to.


However, at Oculus Connect 6, Mark Zuckerberg revealed a lot about the way forward for Facebook (now Meta). On one hand, we know they are actively working on AR glasses; on the other, the upcoming Oculus Quest will support hand-gesture tracking via the cameras on the headset, with a focus on business and public use. And of course, the goal of AR glasses is to replace our smartphones as the device we use for EVERYTHING, all day long.


In the long run, gaming will likely become just one sub-segment of VR and AR, and motion controllers like the Oculus Touch will mainly be used in that segment. In all other cases, users will reach for other input methods.


But how? For the rest of this article, we'll look at the input technologies available today, and at how much control they could give us in the digital environments of the future.

New input types for VR/AR

Now let's look at the other input types being developed. Note that these are mostly not devices you can buy right now; most are still prototypes or in beta.


Eye tracking

Tobii from Sweden is one of the leaders in this field and supplies the eye-tracking technology in HTC's Vive Pro Eye. Apple, Microsoft, and Meta have all acquired eye-tracking companies in recent years. For both VR and AR, eye tracking will almost certainly be part of future products.
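A core use of eye tracking is turning a gaze direction into a point on a virtual UI panel, so you can select things just by looking. Below is a minimal ray-plane intersection sketch of that step; the function name and the panel-perpendicular-to-z simplification are assumptions for illustration, not Tobii's actual SDK.

```python
def gaze_to_panel_point(gaze_dir, panel_depth):
    """Project a gaze direction (x, y, z) from the eye onto a virtual panel
    sitting panel_depth metres in front of it, perpendicular to the z axis.
    Returns the 2D point on the panel, or None if looking away from it."""
    gx, gy, gz = gaze_dir
    if gz <= 0:
        return None  # gaze does not point toward the panel
    t = panel_depth / gz  # ray-plane intersection parameter
    return (gx * t, gy * t)
```

Combined with a dwell timer or a pinch gesture as the "click", this is enough for a basic gaze-driven menu.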

Voice Recognition (Voice)

All VR headsets and most AR glasses have microphones, and AI-based voice recognition is developing at a rapid pace. At the same time, the "conversational UI" interface model (Conversational User Interface) is being accelerated by the use of chatbots in customer service, smart speakers, and other applications. Surely we all want an AI assistant that can effectively control the technology around us, like Tony Stark's Friday.
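After speech recognition turns audio into text, a conversational UI still has to resolve that text into a command. Here is a toy sketch of that intent-resolution step; the command set and function name are invented for illustration, not any assistant's real vocabulary.

```python
def parse_command(utterance):
    """Map a recognized utterance to an (intent, argument) pair.
    A toy stand-in for the intent-resolution step of a conversational UI."""
    text = utterance.lower().strip()
    # Hypothetical voice-command prefixes; longest prefix wins.
    commands = {
        "open": "OPEN",
        "close": "CLOSE",
        "teleport to": "TELEPORT",
    }
    for prefix in sorted(commands, key=len, reverse=True):
        if text.startswith(prefix):
            arg = text[len(prefix):].strip()
            return (commands[prefix], arg or None)
    return ("UNKNOWN", text)
```

Real assistants replace the prefix table with a trained language model, but the pipeline shape (audio to text to intent to action) is the same.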

Optical body tracking

In VR, we want the whole body's movement and expression to be represented, not just the head and hands. If that were the case, we could go dancing, go sightseeing, or exercise in ways we couldn't in the real world. This can be achieved by placing sensors on the body that are precisely tracked from the outside, on the same principle as the motion-capture suits used for special effects in movies. This approach is actually in use today, but it is far too cumbersome for everyday use.


Microsoft has long developed Kinect, a 3D body-tracking camera used in Xbox games and beyond; the newer, even more capable Azure Kinect has since been released. Meta has also revealed research on body tracking that can be done with a regular smartphone camera.
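Camera-based body tracking produces noisy per-frame joint positions, so a common post-processing step is temporal smoothing before the skeleton drives an avatar. A minimal exponential-smoothing sketch, with an assumed list-of-(x, y, z)-joints representation:

```python
def smooth_keypoints(prev, new, alpha=0.3):
    """Exponentially smooth camera-estimated joint positions to reduce jitter.
    prev/new are lists of (x, y, z) joints; alpha is the blend weight for the
    new frame (higher = more responsive, lower = smoother)."""
    if prev is None:
        return list(new)  # first frame: nothing to blend with
    return [tuple(alpha * n + (1 - alpha) * p for p, n in zip(pj, nj))
            for pj, nj in zip(prev, new)]
```

Production trackers use fancier filters (e.g. the One Euro filter), but the trade-off is the same: responsiveness versus jitter.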


But how will this play out in our daily lives? Are we really comfortable being monitored by cameras at all times? Will we instead have personal camera drones that can transport our 3D avatars to other locations in real time? More likely, we will have some form of fixed indoor camera that we switch on when we want to step into VR with a higher level of physical presence.


Gloves

Gloves can give us a more robust sense of physically touching and holding virtual objects, but unfortunately the form factor hasn't caught up with today's technology. HaptX, for example, has developed a heavy-duty haptic glove, and Meta is working on a lighter version. Perhaps something light enough to wear all the time will become the new standard for VR.

Optical hand gesture tracking

When it comes to lowering the interaction threshold in VR and AR, optical hand tracking is a promising direction. Leap Motion has worked in this space for a long time, including with their "North Star" AR headset reference design. Microsoft is also leaning heavily on hand tracking with HoloLens 2, and Oculus Quest shipped a version of hand tracking in 2020.


According to early tests, the accuracy of Oculus's solution is not quite on par with Leap Motion's, but we can expect them to keep improving it. For simple games and everyday business computing, hand tracking means minimum friction.
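Once a tracker delivers fingertip positions, gestures are often just geometry on top. The standard "pinch to select" gesture, for instance, fires when thumb and index fingertips come close together. A minimal sketch, with an assumed threshold of about 2 cm:

```python
import math

def is_pinching(thumb_tip, index_tip, threshold=0.02):
    """Detect a 'pinch' gesture from two tracked fingertip positions.
    Positions are (x, y, z) in metres; pinch = tips closer than ~2 cm.
    Requires Python 3.8+ for math.dist."""
    return math.dist(thumb_tip, index_tip) < threshold
```

Hand-tracking runtimes typically also hysteresis the threshold (slightly larger to release than to trigger) so the gesture doesn't flicker at the boundary.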

Muscle and nerve sensing

Around 2014, the Canadian startup Thalmic Labs released the Myo, a bracelet that could sense muscle movement and interpret gestures as computer input. The company later renamed itself North and shifted its focus to smart glasses, launching its Focals AR product in 2018. They sold the original technology behind Myo to another company…

Brain-Computer Interfaces (BCI)

Controlling a computer with the brain: that is the real VR dream. Elon Musk's company Neuralink has demonstrated a vision of "AI integrated with humans" through an implantable device behind the ear that receives and transmits signals between the brain and a computer.


It's easy to imagine such a device providing the ultimate "remote control" in VR or AR (and in the long run it could even render headsets obsolete, because the digital world could be transmitted directly to the brain's visual centers). The big question about technology like Neuralink is: how many people are willing to undergo brain surgery to become "enhanced"?


Here's how CTRL Labs approaches the same goal. With their "CTRL Kit" prototype, they claim to be able to interpret "motor intention" from the brain by reading neural signals at the granularity of individual neurons. And the CTRL Kit is not a device that requires surgery: it is simply a bracelet with sensors pointing inward.


Remember the Myo mentioned above? Its muscle-sensing technology was acquired by CTRL Labs in early 2019 to reinforce the technology they were already working on. By reading individual nerve signals in the muscles, the user doesn't even need to move an arm or finger to produce the desired movement in VR/AR.
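At its simplest, decoding such signals starts with deciding whether a window of sensor samples represents muscle activation or rest. Here is a deliberately naive threshold sketch of that first step; the function name, threshold, and mean-amplitude rule are illustrative assumptions, far simpler than the per-neuron decoding CTRL Labs describes.

```python
def detect_activation(samples, threshold=0.5):
    """Classify a window of rectified EMG-like samples as active or rest
    by mean amplitude. The simplest possible stand-in for wristband
    gesture decoding; real systems use learned classifiers."""
    if not samples:
        return False
    mean_amplitude = sum(abs(s) for s in samples) / len(samples)
    return mean_amplitude > threshold
```

Real wristbands feed many channels of such signals into machine-learned models that output finger-level intent, but the pipeline still begins with windowing and amplitude features like this.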


CTRL Labs was in turn acquired by Meta for an unconfirmed price of approximately one billion dollars. Meta had previously described its interest in non-invasive neural interfaces (unlike Neuralink's implant), which is entirely in line with what CTRL Labs enables.


The future of neural interfaces, closely tied to VR/AR, looks set to be spectacular.

Haptic feedback

Even if we can one day control the virtual environment with our thoughts, something is still missing: haptic feedback. Meta is working on this too. Project Tasbi was introduced as a wrist-worn device whose motors can vibrate or squeeze the wrist to simulate tactile sensations at the fingertips. Being able to hold and feel digital objects may happen sooner than we think.
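The core of haptic rendering is mapping a virtual contact to a motor drive signal. A common basic rule maps how far the fingertip has pressed into an object to a normalized vibration or squeeze intensity. A minimal sketch, with the function name and 1 cm saturation depth chosen for illustration:

```python
def vibration_intensity(penetration_m, max_depth=0.01):
    """Map the depth a virtual fingertip has pressed into an object
    (metres) to a normalized motor intensity in [0, 1].
    No contact -> 0; saturates at max_depth (here 1 cm)."""
    if penetration_m <= 0:
        return 0.0
    return min(penetration_m / max_depth, 1.0)
```

More sophisticated renderers add stiffness profiles and transient "tap" pulses at the moment of first contact, which is what makes surfaces feel hard rather than spongy.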

Superpowers in an AR/VR environment

What is the ultimate goal of all these input technologies? We are striving for "perfect" interaction in AR and VR: nothing less than living in a magical digital world where we manipulate the digital any way we want, by will alone. In AR, we will be able to communicate with computers and with other people in a radically simpler way. When AR glasses let us see through walls (because everything has been scanned and its AR data stored in a worldwide cloud), we will have real superpowers. And in VR, we can tweak, bend, and transform the world to our liking, like Doctor Strange. That is the true ultimate power.


On the journey toward that possibility, there is still a great deal of digital work to do, and a long way to go.


Hopefully the information Tech Town has shared above will be useful for businesses. If your business is looking for a reputable AR/VR application development company with a team of highly qualified engineers at a reasonable cost, Tech Town is confident it can be the right choice.


Tech Town is a technology company from Vietnam, with representative offices in the United States, Japan, Canada, the Netherlands,… We provide AR & VR application development services for businesses: optimizing content delivery with immersive technologies, improving the performance of decentralized systems, enhancing customer experience, and delighting next-generation users. In more than 4 years of operation, Tech Town has become a reputable technology partner trusted by startups and enterprises from many countries around the world, including the US, Canada, the Netherlands, Japan, and the UK.


Contact us if you have any technological challenges.
