Militaries the world over are excited by the prospect of autonomous drones, but many leading figures in science and technology find that prospect deeply troubling. Among them are famed theoretical physicist Stephen Hawking, Tesla Motors and SpaceX founder Elon Musk, Microsoft cofounder Bill Gates and Apple cofounder Steve Wozniak, all of whom have urged a preemptive prohibition on the use of autonomous weapons and artificial intelligence in warfare.
As yet, there is no universally agreed or legal definition of the term autonomous drone. Industry uses the "autonomy" label extensively, often for marketing purposes, because it gives an impression of very modern and advanced technology. In truth, no truly autonomous device or machine exists that is capable of directing itself entirely and switching itself on and off; some degree of human involvement is always required at some stage.
However, some countries do have individual definitions. For instance, the UK Ministry of Defence's 2011 paper, The UK Approach to Unmanned Aircraft Systems, describes autonomous vehicles as "capable of understanding higher level intent and direction." Countries that use partially autonomous technology typically add the prefix "remotely piloted" to such equipment, underlining that it operates under the direct control of human beings.
It's almost unquestionable, however, that "autonomous" will one day mean drone systems that can act independently, on their own initiative. Such technology is already largely developed, but as far as is publicly known at least, no system is fully operational as of July 2017. What has limited full development, and indeed deployment, are the major legal and ethical question marks hanging over whether lethal machines able to operate without direct human control should be used, or even exist at all.
One of the greatest challenges for the development and approval of autonomous drones is that it's extremely difficult to build satisfactory validation systems: ways of ensuring the technology is safe and acts as humans would in comparable scenarios. Truly sophisticated drones would have to be programmed for an almost innumerable number of potential courses of action, which in practice requires machine learning and artificial intelligence so that the system can learn and develop its own modus operandi.
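To make the validation problem concrete, here is a minimal sketch in Python of one common approach: running a candidate decision policy against large numbers of randomly generated scenarios and measuring how often it agrees with a human-reference rule. Every name and threshold in it (Scenario, drone_policy, human_reference, the 0.9 and 0.95 confidence cut-offs) is a hypothetical stand-in, not a description of any real system.

```python
"""Minimal sketch of scenario-based validation for an autonomous decision
policy. All fields, policies and thresholds are hypothetical illustrations."""

import random
from dataclasses import dataclass


@dataclass
class Scenario:
    target_confidence: float  # how sure the sensors are the object is hostile
    civilians_nearby: int     # estimated bystanders near the target


def drone_policy(s: Scenario) -> str:
    # Hypothetical machine policy under test.
    return "engage" if s.target_confidence > 0.9 and s.civilians_nearby == 0 else "hold"


def human_reference(s: Scenario) -> str:
    # Stand-in for what a trained human operator would decide.
    return "engage" if s.target_confidence > 0.95 and s.civilians_nearby == 0 else "hold"


def validate(trials: int = 100_000) -> float:
    """Estimate how often the policy matches the human reference
    across randomly generated scenarios."""
    agree = 0
    for _ in range(trials):
        s = Scenario(random.random(), random.randint(0, 3))
        if drone_policy(s) == human_reference(s):
            agree += 1
    return agree / trials


if __name__ == "__main__":
    print(f"Agreement with human reference: {validate():.3%}")
```

Even this toy harness hints at the difficulty: agreement can only be measured against scenarios someone thought to generate, while real battlefields produce situations no scenario generator anticipated.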
Autonomous drones used in war would be de facto subject to the general principles and rules of the Law of Armed Conflict, just like any other weapon, system or platform. As such, they could only be directed at lawful targets (military objectives and combatants), and attacks must not cause excessive collateral damage. In attack decisions, military commanders must take all "feasible precautions" to "verify" that the attack is not directed at a protected person or object, and that the attack is not expected to violate the principle of proportionality.
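As a thought experiment only, the sketch below shows what it might look like to encode those two constraints, distinction and proportionality, as a pre-engagement check. The EngagementRequest fields and the numeric comparison are hypothetical; in reality, "excessive" collateral damage is a legal judgment rather than a number, which is precisely the elasticity discussed next.

```python
"""Hypothetical sketch of Law of Armed Conflict constraints reduced to a
pre-engagement check. The fields and thresholds are illustrative only."""

from dataclasses import dataclass


@dataclass
class EngagementRequest:
    target_is_military_objective: bool   # lawful target (objective/combatant)?
    expected_civilian_harm: float        # estimated collateral damage
    anticipated_military_advantage: float


def lawful_to_engage(req: EngagementRequest) -> bool:
    # Distinction: only military objectives and combatants may be attacked.
    if not req.target_is_military_objective:
        return False
    # Proportionality: expected collateral damage must not be excessive
    # relative to the anticipated military advantage.
    return req.expected_civilian_harm <= req.anticipated_military_advantage


# Example: a lawful target, but the expected harm outweighs the advantage.
print(lawful_to_engage(EngagementRequest(True, 5.0, 2.0)))  # False
```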
However, many of the Law's restrictions and obligations are somewhat elastic, giving military commanders "wiggle room" to interpret and decide what certain principles mean in practice.
Autonomous drones will likely never be capable of reasoning in the human sense of the term, as they lack consciousness. How, then, would such a device ever be able to use its "discretion" to interpret the practical meaning of battle regulations?
Legal and ethical considerations may well remain separate questions: the two are rarely inextricably linked, and legislation invariably runs several steps behind emergent technologies.
That this risk exists is unlikely to deter Western military powers, in particular the US, from pursuing the goal of fully autonomous drones, particularly as it is arguable that sending robotic soldiers to war is morally preferable to sending human ones. Autonomous drones could complement human military activity by scouting dangerous areas and performing high-risk tasks such as bomb disposal, lessening the risk of human casualties in the process.
After all, highly sophisticated autonomous drones would be able to process incoming sensory information faster, and more rationally and effectively, than human soldiers, and thus make better battlefield decisions. An autonomous drone that lacks human emotions such as fear, rage and vindictiveness could even potentially reduce the risk of war crimes.
It would almost certainly be preferable to develop strong legal and ethical frameworks governing the use of autonomous weapons before such weapons become a permanent fixture in theaters of war, particularly as it's impossible to know with any certainty when that point will come.