Real-Time Surveillance Will Test the British Tolerance for Cameras


CARDIFF, Wales — A few hours before a recent Wales-Ireland rugby match in Cardiff, amid throngs of fans dressed in team colors of red and green, and sidewalk merchants selling scarves and flags, police officers popped out of a white van.

The officers stopped a man carrying a large Starbucks coffee, asked him a series of questions and then arrested him. A camera attached to the van had captured his image, and facial recognition technology used by the city identified him as someone wanted on suspicion of assault.

The presence of the cameras and the local police’s use of the software are at the center of a debate in Britain that is testing the country’s longstanding acceptance of surveillance.

Britain has traditionally sacrificed privacy more than other Western democracies, mostly in the name of security. The government’s use of thousands of closed-circuit cameras and its ability to monitor digital communications have been influenced by domestic bombings during years of conflict involving Northern Ireland and attacks since Sept. 11, 2001.

But now a new generation of cameras is beginning to be used. Like the one perched on the top of the Cardiff police van, these cameras feed into facial recognition software, enabling real-time identity checks — raising new concerns among public officials, civil society groups and citizens. Some members of Parliament have called for a moratorium on the use of facial recognition software. The mayor of London, Sadiq Khan, said there was “serious and widespread concern” about the technology. Britain’s top privacy regulator, Elizabeth Denham, is investigating its use by the police and private businesses.

And this month, in a case that has been closely watched because there is little legal precedent in the country on the use of facial recognition, a British High Court ruled against a man from Cardiff, the capital of Wales, who sued to end the use of facial recognition by the South Wales Police. The man, Ed Bridges, said the police had violated his privacy and human rights by scanning his face without consent on at least two occasions — once when he was shopping, and again when he attended a political rally. He has vowed to appeal the decision.

“Technology is driving forward, and legislation and regulation follows ever so slowly behind,” said Tony Porter, Britain’s surveillance camera commissioner, who oversees compliance with the country’s surveillance camera code of practice. “It would be wrong for me to suggest the balance is right.”

Britain’s experience mirrors debates about the technology in the United States and elsewhere in Europe. Critics say it amounts to an intrusion on privacy, akin to constant identification checks of an unsuspecting public, and that its accuracy is questionable, particularly at identifying people who are not white men.

In May, San Francisco became the first American city to ban the technology, and other cities have followed. Some members of Congress want to limit its use across the United States, with Representative Jim Jordan of Ohio, the top Republican on the House Oversight Committee, comparing the technology to George Orwell’s “1984” and calling it a threat to free speech and privacy. A school in Sweden was fined after using facial recognition to keep attendance. The European Commission is considering new restrictions.


Britain’s use of facial recognition is nowhere near as widespread as it is in China, where the government uses the technology in a variety of ways, including to track ethnic Muslims in the country’s western region. Opponents of the software say its use in a democratic country needs to be more carefully considered, not left to the police to determine.

But the British public has already grown accustomed to surveillance cameras. The roughly 420,000 closed-circuit television cameras in London, about 48 per 1,000 people, are more than in any other city except Beijing, according to a 2017 report by the Brookings Institution. A recent government poll showed a mixed reaction to facial recognition, with about half of those surveyed supporting its use if certain privacy safeguards were in place.

The Metropolitan Police Service in London tested facial recognition technology 10 times from 2016 until July of this year. Officers stationed in a control center near the cameras monitored computers showing a real-time feed of what was being recorded. The system sent an alert when it identified a person who matched someone on a watch list. If officers agreed it was a match, they would radio colleagues on the street to pick the person up.
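The article does not describe the internals of the Metropolitan Police system, but the alert-and-confirm workflow it reports is typical of real-time face matching. The sketch below, in Python, is an illustration only: the threshold, the function names and the use of cosine similarity over face embeddings are assumptions about how such pipelines are commonly built, not details from the deployments described here.

```python
import numpy as np

# Hypothetical cutoff; real systems tune this to trade false alerts
# against missed matches.
SIMILARITY_THRESHOLD = 0.6

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Score how alike two face embeddings are (1.0 = same direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def check_frame(face_embedding: np.ndarray,
                watch_list: dict[str, np.ndarray]) -> str | None:
    """Return the watch-list ID of the best match above the threshold,
    or None. In the deployments reported above, a human officer still
    confirms any alert before anyone is stopped."""
    best_id, best_score = None, SIMILARITY_THRESHOLD
    for person_id, reference in watch_list.items():
        score = cosine_similarity(face_embedding, reference)
        if score > best_score:
            best_id, best_score = person_id, score
    return best_id
```

Where the threshold sits matters: a lower value catches more wanted faces but produces more false alerts, of the kind the South Wales figures below illustrate.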

During one deployment near a subway station in London, officers detained a man who had intentionally obscured his face from the cameras to avoid detection. He was released after being ordered to pay a fine. In other instances, researchers found that the system flagged people wanted for past crimes that the legal system had already resolved.

Daragh Murray, a researcher at Essex University who spent time observing the use of facial recognition technology by the London police, said officials discussed integrating the technology in cameras around the city, including on buses.

“They were seeing it as the first step in a much bigger deployment,” said Dr. Murray, who published a 128-page report in July on use of the technology in London. He added, “The potential for really invasive technology is very high, but it can also be incredibly useful under certain circumstances.”

The technology has been most widely used by the South Wales Police, after the force received funding for the systems from the Home Office, the agency that oversees domestic security across Britain. The police use the cameras about twice a month at large events like the Wales-Ireland rugby match, which was held at a stadium that seats more than 70,000 fans. At the national air show in July, more than 21,000 faces were scanned, according to the police. The system identified seven people from a watch list — four incorrectly.


In Cardiff, the largest city in Wales, vans carrying facial recognition cameras have become a common sight over the past year. On game days, the vehicles have taken the place of vans the police used to detain fans causing trouble, said Stephen Williams, 57, who volunteers for the Socialist Party at a table nearby. “On most occasions, if it’s a busy event, you’ll see a van there,” he said.

The South Wales Police said the technology was necessary to make up for years of budget cuts by the central government. “We are having to do more with less,” said Alun Michael, the South Wales police and crime commissioner. He said the technology was “no different than a police officer standing on the corner looking out for individuals and if he recognizes somebody, saying, ‘I want to talk to you.’”

The police said that since 2017, 58 people had been arrested after being identified by the technology.

New questions are being raised about facial recognition’s use extending beyond the police to private companies. This month, after a report in The Financial Times, a large London property developer acknowledged that it had used the technology at King’s Cross, a commercial and transit hub.

Critics say there has been a lack of transparency about the technology’s use, particularly about the creation of watch lists, which are the backbone of the technology because they determine which faces a camera system is hunting for. In tests in Britain, the police often programmed the system to look for a few thousand wanted people, according to a research paper published in July. But the potential pool is far larger: another government report said that as of July 2016, the country’s Police National Database held more than 16 million images of people who had been taken into custody, images that could be searched with facial recognition software.
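The article does not say how these lists are stored or matched. Purely as an illustration, and reusing the hypothetical matcher sketched earlier, enrolling a watch list usually amounts to precomputing one embedding per reference image and keying it by a record ID. Everything in this sketch (the embedding size, the stand-in model) is an assumption, not a reported detail.

```python
import numpy as np

EMBEDDING_SIZE = 128  # hypothetical; set by the face-recognition model

def embed_face(image: np.ndarray) -> np.ndarray:
    """Stand-in for a real face-embedding model (typically a neural
    network); a deterministic random vector keeps the sketch
    self-contained and runnable."""
    rng = np.random.default_rng(abs(hash(image.tobytes())) % (2**32))
    vector = rng.standard_normal(EMBEDDING_SIZE)
    return vector / np.linalg.norm(vector)

def build_watch_list(reference_images: dict[str, np.ndarray]) -> dict[str, np.ndarray]:
    """Map each record ID to a precomputed embedding. The size of this
    mapping is what separates a few-thousand-person event list from a
    database of 16 million custody images."""
    return {record_id: embed_face(image)
            for record_id, image in reference_images.items()}
```

At a few thousand entries, the linear scan in the earlier sketch is workable; at millions, a system would need an approximate nearest-neighbor index, which is one reason the scope of a watch list is central to the policy debate.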

Silkie Carlo, the executive director of Big Brother Watch, a British privacy group calling for a ban on the technology’s use, said the murky way watch lists were created showed that police departments and private companies, not elected officials, were making public policy about the use of facial recognition.

“We’ve skipped some real fundamental steps in the debate,” Ms. Carlo said. “Policymakers have arrived so late in the discussion and don’t fully understand the implications and the big picture.”

Sandra Wachter, an associate professor at Oxford University who focuses on technology ethics, said that even if the technology could be proven to identify wanted people accurately, laws were needed to specify when the technology could be used, how watch lists were created and shared, and the length of time images could be stored.

“We still need rules around accountability,” she said, “which right now I don’t think we really do.”


