Sure, we nearly constantly warn the Internet about the impending threat of our eventual robot overlords, but who wants to listen to a bunch of bloggers making nervous Terminator jokes? Now there’s an open letter signed by actual experts in artificial intelligence that urges everyone to make sure that they build their AI to factor “Is this good for humans?” into its decisions.
The letter’s signatories include Elon Musk and Stephen Hawking, both of whom have recently expressed concern over the future of AI and whether or not we’re going to Tony Stark ourselves right into an Ultron situation. Other big names include DeepMind co-founders Demis Hassabis, Shane Legg, and Mustafa Suleyman; Microsoft research director Eric Horvitz; Yann LeCun, head of Facebook’s Artificial Intelligence Laboratory; tons of university computer science professors; and more.
The letter comes from the Future of Life Institute and makes the important distinction between programming AI to make the most logically sound decision above all else and programming it to make the most societally beneficial decision. It includes an in-depth attachment detailing the research priorities AI firms should consider moving forward, so that computers make the latter call in a given situation instead of the former.
That way, we’re a lot less likely to wind up with artificial intelligence that decides the human race is an unnecessary liability while balancing its internal logic. Not only that, but when we have computers making decisions involving things like global financial institutions, it’s important that they consider the impact on society as a whole and not just what’s right in front of them, so to speak.
We’re still trying to get human beings to think the same way, so AI researchers have their work cut out for them.
(via cnet)
Published: Jan 12, 2015 07:00 pm