Amazon and Microsoft face scrutiny in report about killer-robot tech


Boston Dynamics’ four-legged SpotMini robot may look scary as it shares the stage with company founder and CEO Marc Raibert at Amazon’s re:MARS conference in Las Vegas in June. But a report published this month praises Boston Dynamics’ owner, SoftBank, for confirming that it won’t develop technologies that could be used for military purposes. (GeekWire Photo / Alan Boyle)

Dutch activists are voicing concerns about technologies that could open the way for lethal autonomous weapons – such as AI software, facial recognition and swarming aerial systems – and are wondering where several tech titans including Amazon and Microsoft stand.

So are some AI researchers in the United States.

A report issued by Pax, a Dutch group that’s part of an international initiative known as the Campaign to Stop Killer Robots, calls out Amazon, Microsoft and other companies for not responding to the group’s inquiries about their activities and policies in the context of lethal autonomous weapons.

“Why are companies like Microsoft and Amazon not denying that they’re currently developing these highly controversial weapons, which could decide to kill people without direct human involvement?” the report’s lead author, Frank Slijper, said this week in a news release. “Many experts warn that they would violate fundamental legal and ethical principles and would be a destabilizing threat to international peace and security.”

Pax’s report raises concerns about bids made by Amazon and Microsoft for a $10 billion Pentagon cloud-computing project known as the Joint Enterprise Defense Infrastructure, or JEDI, and about the facial recognition software developed by both those companies. The report says other technologies that are potentially relevant to robotic weapon systems include Amazon’s drones as well as Microsoft’s HoloLens augmented-reality system, which is currently being used for military training.

“Besides Amazon and Microsoft … AerialX (Canada), Anduril, Clarifai and Palantir (all U.S.) emerge in this report as working on technologies relevant to increasingly autonomous weapons and did not reply to numerous requests to clearly define their position,” Pax said.

In all, Pax gave 21 companies around the world a “high concern” rating.

On the flip side, Pax singled out four companies for their public stands against the use of their technologies to develop or produce lethal autonomous weapons:

  • Google published its AI Principles in 2018, which state that Google will not design or deploy AI in “weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.”
  • VisionLabs told Pax that it does “not develop or sell lethal autonomous weapons systems” and added that the company’s contracts “explicitly prohibit the use of VisionLabs technology for military applications.”
  • SoftBank, the owner of Boston Dynamics, said it will not develop lethal autonomous weapons and has “no intention to develop technologies that could be used for military purposes.”
  • Animal Dynamics provided a statement from CEO Alex Caccia saying that “under our company charter, and our relationship with Oxford University, we will not weaponize or provide ‘kinetic’ functionality to the products we make.”

We’ve reached out to Amazon and Microsoft for comment, and will update this story with anything we can pass along. In the meantime, it’s worth noting that Microsoft has set up an advisory board known as Aether to address ethical questions raised by AI applications.

In April, Microsoft President Brad Smith said the company refused to sell facial recognition software to a California law enforcement agency that planned to run a face scan “anytime they pulled anyone over.” He said Microsoft also ruled out use of the technology on cameras placed in public spaces in an unnamed foreign city.

Amazon, meanwhile, is responding to concerns about law enforcement use of its cloud-based facial recognition technology, known as Rekognition. In June, Amazon Web Services CEO Andy Jassy said he’s open to federal regulation of the technology.

Max Tegmark, an MIT physicist who is also the co-founder and president of the Future of Life Institute, favors an international treaty that stigmatizes lethal autonomous weapons, as part of an approach he calls “beneficial AI.” For years, diplomats have conducted meetings to discuss drawing up such a treaty – including a session held this week in Geneva – but the idea hasn’t gotten very far to date.

In an email to GeekWire, Tegmark said Pax’s report raises valid concerns:

“My $0.02 as an AI researcher: The most critical expertise needed to build these weapons isn’t in government labs, but in large tech companies. Tech companies need a clear policy explaining where they draw the line between acceptable and unacceptable use of AI, because their employees, investors and customers are demanding to know.”

In a follow-up email, Oren Etzioni, CEO of Seattle’s Allen Institute for Artificial Intelligence, seconded that view – and raised the ante with a provocative question:

“I agree that tech companies need a clear policy explaining where they draw the line between acceptable and unacceptable use of AI. We should also distinguish between basic research and classified project work for the DoD [Department of Defense]. Finally, I would ask: If our tech companies refuse to work with the DoD but tech companies in China, Russia and elsewhere make the opposite decision – where does that lead us over time?”




