Google’s DeepMind Starts Ethics Group to Examine AI’s Impact on Society


Google is finally taking steps to ensure that its rapid development in the field of AI will bring about positive change for the whole of humanity. London-based DeepMind, a subsidiary of Google parent firm Alphabet, has formed a new research unit called "Ethics & Society," tasked with steering the group's AI efforts.

“Our intention is always to promote research that ensures AI works for all,” DeepMind explains in a blog post. Promising to “help technologists put ethics into practice,” the DeepMind Ethics & Society group outlined the principles that will guide its future work: social benefit, rigorous and evidence-based research, transparency, and diversity.

The group comprises thinkers and experts from a variety of disciplines. They include Nick Bostrom (Oxford University philosopher), Diane Coyle (economist from the University of Manchester), Edward W. Felten (computer scientist from Princeton University), and Christiana Figueres (Mission 2020 convener), to name a few, Gizmodo reported. The group lists some of the key issues it will address, including managing AI risk, setting standards for AI morality and values, and lessening the economic disruption AI will likely bring as it replaces human workers.

It remains to be seen how much influence DeepMind Ethics & Society will have over Google’s AI ambitions. A clash between the two groups is likely, given that Google’s drive to churn out potentially profitable AI-powered products may run counter to Ethics & Society’s goals and principles.

The rapid development of artificial intelligence is a divisive issue even among industry titans. One of the most vocal opponents of unregulated AI research is Tesla CEO Elon Musk, who views artificial intelligence as a potential threat to mankind and has called for a proactive stance on its regulation.

“AI is the rare case where I think we need to be proactive in regulation instead of reactive,” Musk said earlier this year. “Because I think by the time we are reactive in AI regulation, it’ll be too late. AI is a fundamental risk to the existence of human civilization.”
