Oculus Rift-Based System Brings True Immersion to Telepresence Robots


Remote presence robots, as the name implies, act as your stand-in at a distant location, letting you move around and see and hear through a robotic surrogate. Space agencies, researchers, and the military have developed high-end telepresence systems that offer an immersive experience, but these units can cost millions of dollars. Consumer telepresence robots (like the Double or Beam), on the other hand, cost much less but can’t create a sense of immersion—you’re staring at a video feed on a computer screen, after all.

Now a team of roboticists at the University of Pennsylvania is using affordable sensors and actuators, along with virtual reality technology like the Oculus Rift, to build a platform that offers the capabilities of a high-end telepresence system at a reasonable cost. DORA (Dexterous Observational Roving Automaton) attempts to bring true immersion to teleoperated robots by precisely tracking the motion of your head in all six degrees of freedom and then duplicating those motions on a real robot moving around the real world. The team's goal is to make the experience so immersive that, while operating the robot from a remote location, you'll forget that you're not actually there.

When you put on a virtual reality system like the Oculus Rift, you’re looking at a virtual world that’s being rendered inside of a computer just for you. As the sensors on the system track the motions of your head in near-real time, software updates the images displayed in front of your eyes at a frame rate higher than you can detect. When this works, it works pretty well, and (in my experience) it’s not at all difficult to convince yourself that you’re temporarily inside a simulation or game.

However, if all the pieces of the system aren’t working together flawlessly, it’s very easy to tell that something is off. Best case, it breaks the immersion. Worst case, it breaks the immersion and you get sick. The problem here is that our eyes and brains and inner ears and all of the other sensors that we use to perceive the world have incredibly high standards for realism, and they’re very good at letting us know when something isn’t quite right.

This is the reason that immersive telepresence is so difficult: it has to be close to perfect. It's not particularly hard to build a telepresence robot with a pan/tilt head that can stream video and mimic your basic movements, but that level of sophistication isn't going to fool your brain into feeling present. For immersion, you need something a lot more complicated, and that's what DORA is trying to accomplish.

“DORA is based upon a fundamentally visceral human experience—that of experiencing a place primarily through the stimulation of sight and sound, and that of social interaction with other human beings, particularly with regards to the underlying meaning and subtlety associated with that interaction. At its core, the DORA platform explores the question of what it means to be present in a space, and how one’s presence affects the people and space around him or her.”

Er, yeah. What they said.

Let's take a look at how DORA works. The Oculus headset tracks both orientation (via an IMU) and position (using infrared beacon tracking). The pose data are sent wirelessly to Arduinos and Intel Edison microcontrollers on the robot, which mirrors your head motion in all six degrees of freedom: pitch, roll, and yaw, plus translation along x, y, and z.
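To make that data flow concrete, here's a minimal sketch of what streaming the headset pose to the robot might look like, assuming a UDP transport; the read_headset_pose() helper, the address, and the packet layout are all hypothetical stand-ins, since DORA's actual wire protocol isn't published.

```python
import socket
import struct
import time

# Hypothetical stand-in for the Rift's tracking stack, which fuses
# IMU orientation with IR positional tracking.
def read_headset_pose():
    """Return (roll, pitch, yaw, x, y, z) in radians and meters."""
    return (0.0, 0.0, 0.0, 0.0, 0.0, 0.0)

ROBOT_ADDR = ("192.168.1.42", 9000)  # placeholder robot address
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

while True:
    pose = read_headset_pose()
    # Pack six little-endian floats; the microcontrollers on the robot
    # would unpack the same layout and drive the actuators accordingly.
    sock.sendto(struct.pack("<6f", *pose), ROBOT_ADDR)
    time.sleep(1 / 60)  # ~60 Hz, in the neighborhood of headset tracking rates
```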

DORA's cameras each stream back 976 × 582 video at 30 frames per second, a bit below what the Oculus can display in both resolution and frame rate. This is mainly a budget constraint at this point (DORA is still a prototype, remember); if the researchers upgrade their cameras, the rest of the system should have no trouble keeping up.
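For a rough sense of the data involved, here's some back-of-the-envelope arithmetic, assuming uncompressed 24-bit RGB frames (which the real system almost certainly doesn't send raw):

```python
# Raw data rate for a stereo stream at DORA's stated camera specs.
width, height, fps, cameras = 976, 582, 30, 2
bits_per_pixel = 24  # assumes uncompressed 24-bit RGB

raw_bps = width * height * bits_per_pixel * fps * cameras
print(f"raw stereo stream: {raw_bps / 1e6:.0f} Mbit/s")  # ~818 Mbit/s
```

That's far too much for a radio link to carry uncompressed, which is part of why camera choice and video encoding matter as much as the displays themselves.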

With an immersive system like this, one of the biggest concerns is latency: when you move your head, you expect what you see to change immediately. If the delay between the two is too large, the result can be anywhere from uncomfortable to full-on puketastic.

With a VR system like the Oculus, sensors have to register that you've moved your head and send that information to your computer, the computer has to work out how your view has changed and render the next frame accordingly, and then the screen in the headset has to display that imagery. Oculus cites early VR research to suggest that 60 milliseconds is "an upper limit for acceptable VR… [but] most people agree that if latency is below 20 ms, then the lag is no longer perceptible."
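To make that budget concrete, here's a toy sketch that totals a motion-to-photon pipeline and checks it against the thresholds Oculus cites; the per-stage costs below are invented purely for illustration, not measurements from any real headset.

```python
# Toy motion-to-photon budget check. The 20 ms / 60 ms thresholds come
# from the Oculus guidance quoted above; stage costs are made up.
PERCEPTIBLE_MS = 20  # below this, most people no longer notice the lag
ACCEPTABLE_MS = 60   # rough upper limit for acceptable VR

stages_ms = {
    "read head-tracking sensors": 2,
    "compute the new view": 3,
    "render the next frame": 8,
    "scan out to the headset display": 6,
}

total = sum(stages_ms.values())
for stage, cost in stages_ms.items():
    print(f"{stage}: {cost} ms")
verdict = ("imperceptible" if total < PERCEPTIBLE_MS
           else "acceptable" if total <= ACCEPTABLE_MS
           else "too slow")
print(f"total: {total} ms -> {verdict}")
```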

DORA's main challenge is that it has to contend not only with a wireless connection but also with the physical limits of moving mechanical parts. Its creators say that in typical use they've measured a latency of about 70 milliseconds. That's still slightly high, the team admits, but it's not bad, and we'd expect that there's additional optimization to be done.
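We don't know exactly how the team instrumented their 70-millisecond figure, but one common way to estimate latency over a link like this is a timestamp echo. Here's a minimal sketch under that assumption, with a placeholder address and a robot-side process that simply echoes each packet back; note this captures the network-plus-processing round trip, not the full motion-to-photon delay.

```python
import socket
import struct
import time

ROBOT_ADDR = ("192.168.1.42", 9001)  # placeholder echo endpoint on the robot

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(1.0)

samples = []
for _ in range(100):
    t_send = time.monotonic()
    # Send the current time; the robot echoes the 8-byte payload back.
    sock.sendto(struct.pack("<d", t_send), ROBOT_ADDR)
    try:
        data, _ = sock.recvfrom(8)
    except socket.timeout:
        continue  # dropped packet; skip this sample
    t_echoed = struct.unpack("<d", data)[0]
    samples.append((time.monotonic() - t_echoed) * 1000)

if samples:
    print(f"mean round trip: {sum(samples) / len(samples):.1f} ms "
          f"over {len(samples)} samples")
```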

Photo: DORA Platform

At the moment, DORA operates over a radio link with a line-of-sight range of about 7 kilometers. That's what DORA was designed for, but if it ends up working in museums or as a remote platform for first responders, it'll obviously have to transition to Wi-Fi or a 4G connection. This will of course introduce additional latency, but the DORA team expects that as wireless infrastructure improves over the coming years, the robot will soon be reachable from nearly anywhere.

DORA’s creators, a team of UPenn seniors (John C. Nappo, Daleroy Sibanda, Emre Tanırgan, and Peter A. Zachares) led by robotics professor Vijay Kumar, say they’ve tested the system with 50 volunteers, and only three reported some sort of motion sickness, which seems about on par with a typical Oculus experience.

We haven’t tried out DORA for ourselves (yet), but Emre describes the experience of telepresence immersion like this:

You feel like you are transported somewhere else in the real world as opposed to a simulated environment. You actually get to see people interacting with you, as if you were actually there, and it’s difficult to recreate the same experience in a virtual environment right now. You could argue that real time computer graphics used for VR will soon catch up to the level of realism of our world, but another key difference is the fact that everything is live. There is an inherent unpredictability with a system like this where you could go somewhere in the real world and not know what could possibly happen when you’re there.

We've been told that there's potential for a crowdfunding campaign and formal product launch, although the team doesn't yet have any concrete plans. Initial markets would likely be relatively controlled, semi-structured environments like museums, followed by applications for specialists like emergency responders. Eventually, DORA would become available for consumers to play with, and that, of course, is what we're selfishly most interested in.

Telepresence companies like Suitable Technologies are already testing out remote presence experiences, like museum visits for people with disabilities. With DORA, you could have a similar experience while feeling as if you were really there, instead of just looking at a moving picture inside a box. It may not be able to replace the real world (not yet), but it's getting closer, and a system like DORA is how we'll get there.