Maybe it's all those Terminator movies, sci-fi novels, and post-apocalyptic video games, but it just makes sense to be wary of super-intelligent AI.
Thankfully, two major players in artificial intelligence research are working together on ways to keep AIs from tackling problems improperly, or unpredictably.
OpenAI, a research group co-founded by techno-entrepreneur Elon Musk, and DeepMind, the team behind the reigning Go champion AlphaGo, have teamed up to find ways of ensuring an AI solves problems to standards humans actually want.
While it can sometimes be faster to let an AI solve problems on its own, the joint team found that humans need to step in, adding constraint after constraint, to train an AI to handle a task in the expected fashion.
Less cheating, fewer Skynets
In a paper published by DeepMind and OpenAI researchers, the two teams found that human feedback is critical to teaching an AI when a job has been done both optimally and correctly, that is, without cheating or cutting corners to get the quickest result.
For example, telling a robot to scramble an egg could result in it simply slamming an egg onto a skillet and calling it a job (technically) well done. Additional rewards have to be added to make sure the egg is cooked evenly, seasoned appropriately, free of shell shards, not burnt, and so forth.
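To make that concrete, here's a toy sketch in Python of what such a layered reward function might look like. Every signal here (evenly_cooked, shell_fragments, and so on) is a hypothetical stand-in invented for illustration, not anything from the actual paper:

```python
def scramble_reward(state):
    """Toy shaped reward for a hypothetical egg-scrambling robot.

    Each signal below is a made-up placeholder; a real system would
    need sensors or learned models to estimate these quantities.
    """
    reward = 0.0
    if state["egg_in_pan"]:
        reward += 1.0                         # naive goal: egg landed in the skillet
    reward += 2.0 * state["evenly_cooked"]    # 0.0-1.0: reward even cooking
    reward -= 5.0 * state["shell_fragments"]  # count of shell shards in the food
    reward -= 3.0 * state["burnt"]            # 0.0-1.0: penalize scorching
    reward += 0.5 * state["seasoned"]         # small bonus for seasoning
    return reward
```

The one-term version (just egg_in_pan) is exactly what lets the robot cheat by slamming the egg into the skillet; each extra term is a human stepping in to rule out another shortcut.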
As you can guess, setting proper reward functions for an AI can be an enormous amount of work. However, researchers at OpenAI and DeepMind consider the effort an ethical obligation going forward, especially as AIs become more powerful and take on greater responsibility.
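Hand-tuning every term quickly stops scaling, which is why the paper turns to human feedback instead: people compare pairs of short behavior clips, and a reward model is trained to agree with their judgments. Below is a minimal, illustrative sketch of that idea in PyTorch; the model size, inputs, and training loop are all assumptions, not the paper's actual code:

```python
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    """Tiny reward model mapping a state vector to a scalar reward.

    Purely illustrative; the real models and observations differ.
    """
    def __init__(self, state_dim=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, 1)
        )

    def forward(self, states):          # states: (T, state_dim) clip
        return self.net(states).sum()   # total predicted reward for the clip

def preference_loss(model, clip_a, clip_b, human_prefers_a):
    """Push the model to score the clip the human preferred higher
    than the one they rejected (a Bradley-Terry style objective)."""
    r_a, r_b = model(clip_a), model(clip_b)
    p_a = torch.sigmoid(r_a - r_b)      # model's probability that A is better
    target = torch.tensor(1.0 if human_prefers_a else 0.0)
    return nn.functional.binary_cross_entropy(p_a, target)

# One hypothetical training step on a single human comparison
model = RewardModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
clip_a, clip_b = torch.randn(20, 8), torch.randn(20, 8)  # fake 20-step clips
loss = preference_loss(model, clip_a, clip_b, human_prefers_a=True)
opt.zero_grad()
loss.backward()
opt.step()
```

Once trained, the reward model stands in for the hand-written function, so humans only have to answer "which of these two clips looks better?" rather than enumerate every failure mode up front.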
There's still a lot of work left to do, but if nothing else, you can take solace in the fact that some of AI's top researchers are working to make sure their creations don't go rogue, or at least don't ruin breakfast.
Via Wired
from TechRadar - All the latest technology news http://www.techradar.com/news/two-of-ais-biggest-names-team-up-to-teach-robots-to-not-cheat