The Boston-based FLI chose 37 teams out of 300 applicants and announced the results of the grant program, intended to "keep AI ethical, robust and beneficial," this week. Research teams from Stanford, Berkeley, Oxford, Cambridge, Harvard and MIT are among the winners, which will conduct research in computer science, law, and economics.
Pointing to projects like Google's DeepMind, which aims to teach machines to read, the Tesla CEO, together with Microsoft founder Bill Gates and theoretical physicist Stephen Hawking, has repeatedly voiced concern that AI systems are developing faster than ever and could one day slip out of human control.
"We need to be super careful with AI. Potentially more dangerous than nukes," said Musk, also CEO of Space Exploration Technologies Corp. (SpaceX), stressing that the only benefit from AI would be in "drudgery… or tasks that are mentally boring, not interesting."
In contrast to Musk, Facebook founder and CEO Mark Zuckerberg said in a Q&A on his profile that he believes "more intelligent services will be much more useful" to consumers.
The grants, financed by Musk's fund and by the Open Philanthropy Project and ranging from $20,000 to $1.5 million, will be used to develop AI safety constraints and to address open questions such as how to handle the deployment of autonomous weapons systems.
Expected to begin in August, grant funding will last up to three years.