Inside Oculus’ Quest to Design an Invisible VR Controller

This prototype embodies a trifecta of roads not traveled: it's worn rather than held; it employs a centered thumbstick and no buttons; and rather than a conventional trigger button, it opts for a rotary scroll wheel.
JONATHAN SPRAGUE


The week before Christmas is the perfect time for getting some last-minute things done at the office, maybe finishing up holiday shopping. If you’re feeling particularly brave, you might even fight your way through airport crowds to visit your family. On December 22, 2012, though, Nirav Patel was in China. A couple of months before, the young engineer had left Apple for a little company called Oculus, and now he was checking out production facilities that could help manufacture his new employer’s virtual-reality headset. Nirav being Nirav, he had a pocket notebook with him, and on this particular Christmas Eve Eve Eve, he sat down and started drawing.
Soon, he had sketched out two different views of the same object. From the top, it looked like a lima bean. In profile, it was the spitting image of a cyborg walrus with a tiny chef’s hat on. Scribbled around the drawing were annotations describing the various buttons and shapes festooning the object—“jog/scroll,” “vibe motor,” “piezo element,” “clicking analog”—and at the top of the page, in a space marked Project, Patel wrote the word “Controller.” As long as Oculus was making a VR headset, he reasoned, the company might as well think about the best way to play games in that headset.

Nearly four years later, Oculus has produced a pair of devices that share some key features with Patel’s sketch. But the Oculus Touch, which goes on sale today, is much more than a set of controllers; the devices are, in effect, your hands. And by being your hands, they provide the first glimpse of what virtual reality is fast becoming: a social universe.


Oculus engineer Nirav Patel's 2012 sketch of what a VR input device might look like.

A New Level of Immersion

Patel’s 2012 sketches proved to be a bit premature. For the next year, pressure to make the Rift headset a reality would turn the controller into something Patel and other employees worked on during off hours—“not like a side project,” the company’s VP of product Nate Mitchell said to me in spring of 2014, “but that’s all the time people could find for it.” The company had outsourced early exploration work to Seattle firm Carbon Design Group. It was only in early 2014, after Facebook bought Oculus, that work on the controller began in earnest.
Several new people, including all of Carbon, came on board, along with chief architect Atman Binstock. Internally, teams working on the controller called it Super Action—a reference to a huge, intricate ColecoVision controller from the early ’80s. (Its tagline: “The Most Sophisticated Video Game Controllers Yet!”) Deep-cut jokes notwithstanding, though, Oculus had long since graduated from thinking about VR as just a gaming platform. The technology delivered insanely immersive video games, sure—but, as the Facebook acquisition had affirmed, it also had the potential to bring people together in an unprecedented way.
To deliver on that promise, the Rift’s handheld complement needed to be much more than a capable controller. The most compelling VR experiences hinge on what people in the industry call presence. It’s what happens when all the technical elements of the hardware and software come together to convince your brain that what you’re virtually experiencing is actually real. On its own, Oculus’ Rift could enable presence, but the illusion felt incomplete; while the Rift could teleport you into another world, your hands didn’t make the trip. Instead, you were stuck using an Xbox controller or a small remote control to navigate that world.
Oculus wanted to develop VR input that could enhance users’ sense of immersion. Binstock calls this hand presence. “Could you reach out and grab something?” he asks by way of explaining. “Could you bring your intentionality in? Could we come up with something that gave you more intuitive natural interaction than a traditional controller or just a wand?”
By that time, Oculus knew that other companies were working on VR systems, and at least one of them would be using a wand-shaped controller that could be tracked in space. But Oculus had decided that wands were limited. “Part of the magic of hand presence is feeling like the virtual hand feels in the same place as your real hand, and in the same pose—this innate kinesthetic sense of where all your joints are in your body even with your eyes closed,” Binstock says. “The more that mismatches, the more your brain says, ‘that’s not my hand.’”
Super Action, then, had to be more than comfortable to hold—it had to disappear in your hand. It also had to be trackable in space, regardless of how you were holding it. And it had to let you do various things with your hands without letting go of the device, so that it could translate your finger movements into VR. There was no easy precedent for what that input device might look like, but like Patel with his early sketches, people were ready to figure it out. “Can you look out into the unknown from where we are in VR and get excited instead of scared by that?” Patel asks. “That’s what we hire for.”



Enter Toybox Zero

The Super Action teams spent the rest of 2014 exploring both form and function. Leading that charge were creative director Peter Bristol, who had come from Carbon, and software engineer Matt Alderman. While Bristol and his team explored various form factors, Alderman would build them prototype experiences in which to try their designs. One tested a grip trigger by having you grab a nail gun and use it to fasten blocks of wood together. Another just let you pick up an object and throw it. (There was also plenty of, as Alderman puts it, “sitting around with a fake controller and pretending to play games with it.”) Eventually, many of those experiences found their way into an internal demo that Oculus called Toybox Zero. If you had a question about how a control scheme worked in virtual space, you took it for a spin in Toybox Zero.
One by one, the questions found answers. How could the device remain trackable without your hand occluding it? (Answer: a small halo of infrared-translucent plastic, housing a constellation of 24 LEDs, that encircled the back of your hand.) Should the device be held, or worn? (Answer: Wearing something lets you open your hand fully without dropping it, but getting the second one on—while wearing a headset, no less—made that a nonstarter. “At some point,” Bristol says, “it started feeling like a wearable thing was not going to provide as much value as it was a hindrance.”) Did it need conventional gaming controls on its face, or just a single thumbstick? (Answer: Somewhere in between; while some gaming staples—like the D-pad that Mitchell pushed for—ended up on the cutting-room floor, the team knew that many of the developers making experiences for Touch would be coming from a gaming background, and it would be easier to preserve certain conventions.)
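The tracking scheme that ring enables is a classic constellation setup: a camera spots the infrared LEDs as bright dots, and because the LEDs' positions on the controller are known from its design, matching dots to LEDs lets software solve for the controller's position and orientation. Here's a minimal sketch of that final solving step using OpenCV's perspective-n-point solver; it illustrates the general technique, not Oculus' actual pipeline, and every LED position, pixel coordinate, and camera intrinsic below is invented.

```python
# Sketch of the last step of constellation tracking: recover a controller's
# pose from matched 3-D LED positions and their 2-D image detections.
# Illustrative only; all numbers below are invented.
import numpy as np
import cv2  # OpenCV: pip install opencv-python

# Hypothetical LED positions on the tracking ring, in the controller's own
# coordinate frame (meters). A real ring carries 24; six suffice for PnP here.
led_model_points = np.array([
    [ 0.040,  0.0000, 0.010],
    [ 0.020,  0.0346, 0.015],
    [-0.020,  0.0346, 0.010],
    [-0.040,  0.0000, 0.015],
    [-0.020, -0.0346, 0.010],
    [ 0.020, -0.0346, 0.015],
], dtype=np.float64)

# Hypothetical pixel centroids of those same LEDs in one camera frame,
# already matched to the model points (the matching itself is the hard part).
image_points = np.array([
    [693.3, 360.0],
    [666.7, 406.1],
    [613.3, 406.1],
    [586.7, 360.0],
    [613.3, 313.9],
    [666.7, 313.9],
], dtype=np.float64)

# Hypothetical pinhole-camera intrinsics (focal length, principal point).
camera_matrix = np.array([
    [800.0,   0.0, 640.0],
    [  0.0, 800.0, 360.0],
    [  0.0,   0.0,   1.0],
])
dist_coeffs = np.zeros(5)  # assume lens distortion already corrected

# Solve the perspective-n-point problem: which rotation and translation of
# the LED constellation best explains the dots the camera saw?
ok, rvec, tvec = cv2.solvePnP(led_model_points, image_points,
                              camera_matrix, dist_coeffs)
if ok:
    print("controller position, meters from camera:", tvec.ravel())
    print("controller orientation (axis-angle):", rvec.ravel())
```

In a shipping system, the hard part is deciding which dot is which LED (often by modulating the LEDs so each identifies itself) and fusing the camera solution with the controller's onboard motion sensors.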
Eventually, in early 2015, the Super Action project reached an inflection point with a 3-D printed mockup. The weight of the tracking ring was on one side of your hand, and the weight of the handle was on the other, which made the device feel perfectly balanced. The face would support a thumbstick and two buttons, with the thumbstick positioned closer to the crook of the thumb, to take advantage of the digit’s natural range of motion.
But while standard “gullwing” controllers, the kinds you’d use with an Xbox or a PlayStation, can be held in various ways based on user preference, Bristol’s team had realized that grip needed to be constrained. “To bring your hands into VR, you really want that relationship between hand and controller to be consistent,” he says, “so you want people to lock into one spot.” So the face of the Super Action mockup naturally fit between the middle knuckle of the thumb and the base knuckle of the index finger—an area that sits in almost exactly the same place on nearly everyone’s hand. (Seriously, look at the back of your hand.) The more people tried it out, the more excited they got. “By that point we’d been working for a while and we didn’t have the answer,” Bristol says. “Over a day of passing it around, we got hyper-focused, like ‘we gotta do it!'”
Meanwhile, Patel and other engineers at Oculus had to figure out how you might actually, y’know, make this thing. “We had the same challenge on the Rift: to take this crazy thing that we invented and make it manufacturable,” Patel says. The tracking alone was a nightmare: While the most fastidious designs can ensure a margin of error of around 0.2 mm, those LEDs needed to be placed with an order of magnitude more precision, closer to 0.02 mm. “The actual emission source is like a speck of dust,” Patel says, “and that’s the thing that we need to have exact placement on.” For Caitlin Kalinowski, who had moved from elsewhere at Facebook to become Oculus’ head of product design engineering, it was almost too much. “I was looking at their list of specs like, ‘you want to pack all this stuff in?’” she says. “I had to take a minute. But without those features, you don’t get hand presence.”
By June 2015, Oculus had turned the mockup into a usable prototype, known internally as Half Moon, and brought it to the E3 gaming show along with an updated version of Toybox Zero, now known simply as Toybox. For the first time in a Rift, people could look down and see their hands in virtual space; hell, they could use their hands. Pick things up, point, wave hello. The buttons and triggers were now capacitive, registering contact with your fingers, so when you gave a thumbs-up in real life, the controller would translate that into a VR gesture. Granted, just because you take your thumb off a button doesn’t mean you’re trying to give a thumbs-up. To get a little more technical, the controller interpolates various hand poses over time in order to give what Binstock calls “our best approximation” of where your hands are. “Part of that is, if we’ve detected your finger comes up, we say that virtually should be your finger pointing, because that’s what people are going to want to do with this,” he says. The triggers are also pressure-sensitive, allowing for a range of hand openness; if you want to wave, you release the triggers completely and wave.
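That mapping from raw sensor readings to virtual hand poses is easy to caricature in code. Below is a hypothetical sketch in the spirit of Binstock's description: capacitive pads report which digits are touching the controller, the pressure-sensitive triggers report how far they're pulled, and a heuristic picks a pose. The field names, thresholds, and pose labels are all invented, and the real system blends poses over time rather than classifying one sample at a time.

```python
# Hypothetical sketch of capacitive gesture inference. Capacitive pads say
# whether each digit is touching the controller; pressure-sensitive triggers
# say how far they're pulled. A heuristic maps one reading onto a coarse pose.
# All names and thresholds here are invented for illustration.
from dataclasses import dataclass

@dataclass
class ControllerSample:
    thumb_touching: bool   # capacitive pad under the thumbstick/buttons
    index_touching: bool   # capacitive surface on the index trigger
    index_trigger: float   # index trigger pull, 0.0 (open) .. 1.0 (pulled)
    grip_trigger: float    # middle-finger grip pull, 0.0 .. 1.0

def infer_pose(s: ControllerSample) -> str:
    """Map one sensor sample onto a coarse virtual hand pose."""
    hand_open = s.index_trigger < 0.1 and s.grip_trigger < 0.1
    if hand_open and not s.thumb_touching and not s.index_touching:
        return "open hand (wave)"  # triggers released, nothing touched
    if s.grip_trigger > 0.5 and not s.index_touching:
        # Gripping, but the index finger has lifted off its trigger:
        # read that as pointing, since that's what the user likely intends.
        return "pointing"
    if s.grip_trigger > 0.5 and s.index_touching and not s.thumb_touching:
        return "thumbs up"  # gripping, thumb lifted off the face buttons
    if s.grip_trigger > 0.5 and s.index_trigger > 0.5:
        return "grabbing"
    return "relaxed"

# Thumb lifted off the buttons while the rest of the hand grips:
print(infer_pose(ControllerSample(False, True, 0.8, 0.9)))  # -> thumbs up
# Index finger raised off its trigger while gripping:
print(infer_pose(ControllerSample(True, False, 0.0, 0.9)))  # -> pointing
```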
More importantly, Toybox was more than a place to learn how to use your hands in VR. It was a place to learn alongside other people. Inside Toybox, you’d get a tutorial on how to do everything—play ping-pong, set off rockets, throw blocks—not from a friendly AI character, but from a real person who was also inside your toybox (but actually down the hall in a headset of their own). It was the first time Oculus had shown a glimpse of the interpersonal power of VR. “We hadn’t done a real demo where you had what we called ‘social presence’ ever before,” says Mitchell. “It went from ‘hey, I’m playing a game,’ to ‘oh, my God.’ That was mindblowing.” Other people thought so too; indeed, Mark Zuckerberg started using Toybox to entertain visiting foreign dignitaries. The prime minister of Singapore, it turned out, particularly liked playing ping-pong.

Extending the Magic

Fast-forward a year and a half, and the company’s bet on social presence has only intensified. When the final Touch arrives on people’s doorsteps this week, it will include Toybox, as well as a number of creative, decidedly non-gaming experiences (Medium lets you create VR sculptures, and even 3-D print them; Quill is an animation tool from Oculus Story Studio, the company’s interactive filmmaking division). And many of the marquee games have a pronounced social component, whether it’s the Old West-style multiplayer shootouts of Dead & Buried or the collaborative construction of Fantastic Contraption.
Starting today, there are 50 Touch-enabled titles, with more to come. Now that the Rift has a way to bring your hands into VR, the company is looking to extend the magic of its technology to the millions of non-gamers who might have stayed on the sidelines until now. “I’ve brought people in who are not gamers to do demos,” says Kalinowski, “and it’s incredibly fast. You’re like, ‘here are your controllers, this is what this does, go!’”
Of course, in the eight months since the Rift first went on sale, the VR landscape has become a bit more crowded—and a lot more competitive. The HTC Vive and PlayStation VR, both of which launched with wand-style controllers, are high-end alternatives to the Rift, while Google’s new Daydream platform has made a splash in the world of mobile-driven VR. But while PSVR is poised to dominate holiday sales, Oculus has its sights set well past 2016. “Now we’ve got all the fundamentals in place,” Mitchell says. “2017 is going to be an incredible year.”
And while Touch is just the beginning of Oculus’ quest for the perfect human-VR interface—“we want to see a future with even more tracked input,” Mitchell says—it’s far from the only option. Just yesterday, for instance, the gestural-interface company Leap Motion announced a tiny new sensor that could give any headset the ability to track your hands in space, without controllers. (“I think it’s going to enable a lot of interesting social experiences, but it’s not the answer for interaction,” says Patel of hand tracking. “You really need to have that tactility.”) But for now, Oculus is doing what it’s done from the very beginning with the Rift: waiting to see what developers do with their new tools. And someone, somewhere, is sketching what the future might look like.