Facebook this evening announced it intends to buy Ctrl-labs, a New York-based startup developing a wristband that translates musculoneural signals into machine-interpretable commands. The acquisition hasn’t yet closed, and the terms weren’t revealed publicly. (CNBC pegs its size at approximately $1 billion, half the amount Facebook paid to acquire virtual reality company Oculus VR in 2014.) But the Menlo Park company said it plans to fold Ctrl-labs into its Reality Labs division, whose principal work concerns virtual and augmented reality technology.
Ctrl-labs CEO and cofounder Thomas Reardon will join Facebook, as will other employees who opt to do so. Prior to the acquisition, Ctrl-labs raised $67 million from investors including GV (Google’s venture capital arm), Amazon’s Alexa Fund, Lux Capital, Spark Capital, Matrix Partners, Breyer Capital, and Fuel Capital.
“We know there are more natural, intuitive ways to interact with devices and technology. And we want to build them,” Facebook AR/VR VP Andrew Bosworth wrote in a post announcing the deal. “It’s why we’ve agreed to acquire Ctrl-labs. They will be joining our Facebook Reality Labs team where we hope to build this kind of technology, at scale, and get it into consumer products faster.”
The deal comes months after Ctrl-labs announced it had purchased patents associated with Myo, a wearable created by North (formerly Thalmic Labs) that enables control of robotics and PCs via gestures and motion. At the time, Ctrl-labs chief strategy officer Josh Duyan said the patents would bolster Ctrl-labs' developer tools and lay the cornerstone of an industry standard for surface electromyography (EMG) control ahead of the expanded availability of its developer kit.
Ctrl-labs was founded by Reardon, Patrick Kaifosh, and Tim Machado, who received their PhDs in neuroscience from Columbia University. (Prior to enrolling at Columbia, Reardon spearheaded a project at Microsoft that became Internet Explorer.) The company's prototype, Ctrl-kit, comprised two parts: an enclosure roughly the size of a large watch packed with wireless radios, and a tethered component with electrodes that sits further up the arm. The accompanying SDK shipped with JavaScript and TypeScript toolchains and prebuilt demos, and applications talked to the hardware largely over WebSockets.
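Ctrl-labs never published that SDK broadly, but a WebSocket-based workflow of the kind described above generally looks like the hedged TypeScript sketch below. The endpoint URL, message format, and sample shape here are illustrative assumptions, not Ctrl-labs' actual interface.

```typescript
// Hypothetical client for a WebSocket-based wristband SDK.
// The local endpoint and JSON message schema are assumptions for illustration.
interface EmgSample {
  timestamp: number;   // milliseconds since the stream started
  channels: number[];  // one reading per electrode (e.g. 16 values)
}

const socket = new WebSocket("ws://localhost:9999/stream"); // assumed local bridge

socket.onopen = () => {
  // Ask the bridge to start streaming EMG frames (illustrative message format).
  socket.send(JSON.stringify({ type: "subscribe", stream: "emg" }));
};

socket.onmessage = (event: MessageEvent<string>) => {
  const sample: EmgSample = JSON.parse(event.data);
  console.log(`t=${sample.timestamp}ms, channel 0 = ${sample.channels[0]}`);
};
```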
The final version of Ctrl-kit was to be a single piece, though not an entirely self-contained one: like the developer kit, it would need to be wirelessly tethered to a PC for some processing, with the eventual goal of shrinking that overhead until the work could run on wearable system-on-chips.
Ctrl-kit leverages EMG to translate mental intent into action. Sixteen electrodes monitor the motor neuron signals amplified by the muscle fibers of motor units, and machine learning models trained with Google's TensorFlow distinguish between the individual pulses of each nerve.
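Ctrl-labs has not detailed its models, but the pipeline described above, windows of 16-channel EMG in and decoded intent out, maps onto a small classifier along the lines of this TensorFlow.js sketch. The window length, layer sizes, and gesture count are assumptions for illustration, not the company's actual architecture.

```typescript
import * as tf from "@tensorflow/tfjs";

// Minimal sketch: classify fixed-length windows of 16-channel EMG into gestures.
// All dimensions and the gesture count below are illustrative assumptions.
const CHANNELS = 16;  // one per electrode
const WINDOW = 50;    // samples per classification window (assumed)
const GESTURES = 8;   // number of output classes (assumed)

const model = tf.sequential({
  layers: [
    tf.layers.flatten({ inputShape: [WINDOW, CHANNELS] }),
    tf.layers.dense({ units: 64, activation: "relu" }),
    tf.layers.dense({ units: GESTURES, activation: "softmax" }),
  ],
});

model.compile({
  optimizer: "adam",
  loss: "categoricalCrossentropy",
  metrics: ["accuracy"],
});

// In practice, a model like this would be trained on labeled EMG windows and
// then run in real time on each window streamed from the wristband.
```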
The system works independently of muscle movement; generating a neural activity pattern that Ctrl-labs' tech can detect requires merely the firing of a neuron down an axon, or what neuroscientists call an action potential. That puts it a class above wearables that use electroencephalography (EEG), a technique that measures electrical activity in the brain through contacts pressed against the scalp. EMG devices draw on the cleaner, clearer signals of motor neurons, and as a result are limited only by the accuracy of the software's machine learning model and the snugness of the contacts against the skin.
It’s not difficult to imagine Ctrl-labs’ tech complementing that which Facebook is actively developing. Earlier this year, Facebook provided an update on its brain-computer interface project, preliminary plans for which it unveiled at its F8 developer conference in 2017. In a paper published in the journal Nature Communications, a team of scientists at the University of California, San Francisco backed by Facebook Reality Labs — Facebook’s Pittsburgh-based division devoted to augmented reality and virtual reality R&D — described a prototypical system capable of reading and decoding study subjects’ brain activity while they speak.
But video games long topped the list of apps Ctrl-labs expected its early adopters to build, particularly virtual reality games, which the company believed were a natural fit for the sort of immersive experiences EMG can deliver. (Imagine swiping through an inventory screen with a hand gesture, or piloting a fighter jet just by thinking about the direction you want to fly.) And not too long ago, Ctrl-labs demonstrated a virtual keyboard that maps finger movements to PC inputs, allowing a wearer to type messages by tapping on a tabletop with their fingertips.
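At its core, the keyboard demo comes down to mapping decoded per-finger events onto characters. A toy TypeScript sketch of that idea, with an entirely made-up event shape and letter assignment, might look like this:

```typescript
// Hypothetical mapping from decoded finger taps to typed characters, in the
// spirit of the virtual-keyboard demo; the event shape and letter assignments
// are invented for illustration.
type Finger = "thumb" | "index" | "middle" | "ring" | "pinky";

const keyForFinger: Record<Finger, string> = {
  thumb: " ",
  index: "e",
  middle: "t",
  ring: "a",
  pinky: "o",
};

// Append the character for each decoded tap to the message being composed.
function onTap(finger: Finger, message: string[]): void {
  message.push(keyForFinger[finger]);
}

const message: string[] = [];
onTap("index", message);
onTap("thumb", message);
console.log(message.join("")); // "e "
```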
“Technology like this has the potential to open up new creative possibilities and reimagine 19th century inventions in a 21st-century world,” wrote Bosworth. “This is how our interactions in VR and AR can one day look. It can change the way we connect.”
Ctrl-labs earlier this year joined the nonprofit consortium Khronos Group's OpenXR working group, which seeks to create a royalty-free API and device layer for virtual reality and augmented reality apps. A provisional version (0.9) of the standard was released in March, with companies including AMD, Arm, Google, Microsoft, Nvidia, Mozilla, Qualcomm, Samsung, Valve, LG, Epic Games, HP, HTC, Intel, MediaTek, Razer, and Unity Technologies contributing to its development and implementation.