However, scientists from Google Brain, OpenAI, Stanford University and the University of California, Berkeley predict that the proliferation of AI will come at a price.
In a paper titled "Concrete Problems in AI Safety," the researchers examined the potential for accidents arising from poorly designed AI systems.
"The authors believe that AI technologies are likely to be overwhelmingly beneficial for humanity, but also believe that it is worth giving serious thought to potential challenges and risks."
Concerns have arisen surrounding the privacy, security, fairness, and economic and military implications of AI-controlled bots and autonomous systems, "as well as concerns about the longer-term implications of powerful AI," and "the problem of accidents in machine learning systems," the authors said.
I'm really excited to have co-authored this paper "Concrete Problems in AI Safety" — https://t.co/v5puocaFTv
— Chris Olah (@ch402) June 22, 2016
"Small-scale accidents seem like a very concrete threat, and are critical to prevent."
Many tech websites have used the example of a robot, programmed to clean your house, smashing a vase as a "small-scale" accident.
"How do we ensure that the cleaning robot doesn't make exploratory moves with bad repercussions?" the researchers ask.
The authors believe it is important to keep an eye on the minutiae: the potential for more mundane AI mishaps, like a cleaning bot breaking things as it does the ironing because of some sort of malfunction.
By studying these smaller failures, the scientists hope to scale up their approach to AI "health and safety" and prevent accidents as AI becomes more powerful and prolific in people's lives, with the potential to make dreams a reality or turn them into a nightmare.