The letter, presented Monday at the International Joint Conference on Artificial Intelligence in Buenos Aires by the Future of Life Institute, warns that AI and robotics have advanced to the point where the deployment of autonomous weapons will be feasible within years, not decades, and that "a global arms race is virtually inevitable."
"This technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow," the letter states.
While AI promises to benefit humanity in many ways, the letter suggests it must be kept under strict controls and perhaps even banned in certain applications, warning that lethal autonomous weapons systems, or, more simply, killer robots, which select and engage targets without human intervention, are on par with weapons of mass destruction.
Important letter from Future of Life Institute. Perhaps the most important letter you will read. See endorsements: http://t.co/eA5aeOhxlB
— Brian Roemmele (@BrianRoemmele) July 27, 2015
Among the letter’s signatories are theoretical physicist Stephen Hawking, Tesla, SpaceX and PayPal founder Elon Musk, linguist and philosopher Noam Chomsky, Apple co-founder Steve Wozniak, Skype co-founder Jaan Tallinn and Human Rights Watch arms division head Stephen Goose.
The United Nations debated a global ban on killer robots earlier this year.
"The FLA project will program tiny rotorcraft to manoeuvre unaided at high speed in urban areas and inside buildings. CODE aims to develop teams of autonomous aerial vehicles carrying out 'all steps of a strike mission — find, fix, track, target, engage, assess’ in situations in which enemy signal-jamming makes communication with a human commander impossible," Russell wrote.
Earlier this month, Elon Musk, through the Future of Life Institute, awarded some $7 million in grants to 37 research projects worldwide studying the opportunities and dangers of AI.
"Building advanced AI is like launching a rocket. The first challenge is to maximize acceleration, but once it starts picking up speed, you also need to focus on steering," Skype co-founder Tallinn, also one of FLI’s founders, commented.
The debate shouldn't be about whether AI carries risks or not. It should be about what research needs to be done to inform that decision.
— Future of Life (@FLIxrisk) April 2, 2015
Grant funding, expected to begin in August, will run for up to three years. The awards, ranging from $20,000 to $1.5 million, will support research into AI safety constraints and into the many questions raised by the deployment of autonomous weapons systems.