DeepMind wants to answer the big ethical questions posed by AI

Google's DeepMind artificial intelligence (AI) division has established a new research group to learn more about the ethical questions posed by the dawn of AI.

Chris Kerr, News Editor

October 4, 2017

The British artificial intelligence outfit was acquired by Google in 2014, and often uses video games as part of its projects.

For instance, back in 2016 the company partnered with Blizzard to create an API tailored for research environments based in StarCraft II, and prior to that the DeepMind team developed an artificial agent capable of learning how to play Atari 2600 games from scratch. 

Now, the DeepMind Ethics & Society unit hopes to unravel some of the biggest ethical quandaries posed by the creation of artificial intelligence to pave the way for "truly beneficial and responsible AI." 

"We believe AI can be of extraordinary benefit to the world, but only if held to the highest ethical standards. Technology is not value neutral, and technologists must take responsibility for the ethical and social impact of their work," reads a blog post on the DeepMind website. 

"The development of AI creates important and complex questions. Its impact on society -- and on all our lives -- is not something that should be left to chance. Beneficial outcomes and protections against harms must be actively fought for and built-in from the beginning. But in a field as complex as AI, this is easier said than done.

"As scientists developing AI technologies, we have a responsibility to conduct and support open research and investigation into the wider implications of our work. At DeepMind, we start from the premise that all AI applications should remain under meaningful human control, and be used for socially beneficial purposes."

DeepMind isn't the only institution looking into this area. Other research projects, such as Julia Angwin's study of racism in criminal justice algorithms, and Kate Crawford and Ryan Calo's examination of the broader consequences of AI for social systems, have also begun to peel back the curtain. 

For DeepMind, the hope is that its new unit will achieve two primary aims: to help technologists put ethics into practice when the time comes, and to ensure society is sufficiently prepared for the day AI becomes part of the wider world.

About the Author

Chris Kerr

News Editor, GameDeveloper.com

Game Developer news editor Chris Kerr is an award-winning journalist and reporter with over a decade of experience in the game industry. His byline has appeared in notable print and digital publications including Edge, Stuff, Wireframe, International Business Times, and PocketGamer.biz. Throughout his career, Chris has covered major industry events including GDC, PAX Australia, Gamescom, Paris Games Week, and Develop Brighton. He has featured on the judging panel at The Develop Star Awards on multiple occasions and appeared on BBC Radio 5 Live to discuss breaking news.
