And each of these companies failed to tell users they had a hot mic on their wrists or in their living rooms. In April, a Belgian news site, VRT, played clips back to stunned Google Assistant owners, who said they had no idea they were being recorded. Apple contractors told The Guardian they listened to audio collected when Siri was triggered accidentally, including during drug deals, private conversations with doctors, and one incident in which an Apple Watch activated Siri during sex. One Microsoft contractor said he overheard phone sex between couples and reviewed recordings of users asking the voice assistant Cortana to search for porn. Some Amazon contractors believe they listened to an accidental Alexa recording that included a sexual assault.
And Facebook makes five. Which is to say, this isn’t so much a series of “scandals” surrounding human review as the result of a user base becoming minimally aware of how voice-assistant technology actually works. Our listening devices did what they were designed to do. We just didn’t realize who was listening.
The AI sausage that voice technology relies on gets made in a feedback loop: The products perform well enough, voice data from customers are collected and used to improve the service, more people buy into the product as it improves, and then more data are collected, improving it further. This loop requires a large customer base to sustain itself, which raises the question: Would as many people have bought these products if they had known that Romanian contract workers would listen to them, even when they didn’t deliberately trigger their devices? A Facebook spokesperson confirmed that contractors transcribed audio only from users who opted in to having their voice chats transcribed, but it’s not clear whether users could have used the voice-transcription feature at all without opting in to potential human review.
Because the complex of AI tools and human review exists in this feedback loop, the stakes only get higher as companies improve voice assistants, asking us to embed them deeper into daily life. Amazon has patented technology that would allow its speakers to assess users’ emotional states and adjust their responses accordingly. Google filed a patent that would enable its speakers to respond to the sounds of users brushing their teeth and eating. Voice assistants are already being tested in police stations, classrooms, and hospitals.
The effect is that our tools will know more and more about us as we know less and less about them. In a recent article for The New Yorker on the risks of automation, the Harvard professor Jonathan Zittrain coined the phrase intellectual debt: the phenomenon by which we readily accept new technology into our lives, only bothering to learn how it works after the fact. Essentially, buy first, ask questions later. This is the second feedback loop grinding onward alongside the first, a sort of automation-procrastination complex. As voice assistants become an integral part of health care and law enforcement, we accrue more intellectual debt in more aspects of life. As technology gets smarter, we will know less about it.