A report published by the UK Royal United Services Institute (RUSI) has strongly recommended the use of artificial intelligence (AI) by British spies.
RUSI conducted an in-depth consultation with stakeholders from the British national security community, as well as legal and academic experts, private-sector firms, and civil society representatives. The resulting study concludes that AI offers many opportunities for the UK national security community to improve the efficiency and effectiveness of existing processes.
Moreover, the document suggests the UK’s adversaries abroad, which go unnamed, will “undoubtedly” seek to leverage AI technology in future to launch attacks against the country. For instance, overseas governments or hostile groups could use deepfake videos and images in targeted campaigns to influence public opinion during elections in Britain, or cyber-attackers could disrupt systems handling confidential and sensitive data.
RUSI therefore recommends that the British intelligence community develop innovative AI-based defence measures to counter such threats. However, AI technology will not be able to predict whether an adversary is about to conduct a serious operation, and is therefore unlikely to replace human judgement, according to experts. It is also likely to spark intense debate about privacy and to require fresh guidelines to be written.
"While AI offers numerous opportunities for UKIC to improve the efficiency and effectiveness of existing processes, these new capabilities raise additional privacy and human rights considerations which would need to be assessed within the existing legal and regulatory framework. Addressing these concerns is a high priority for the national security community,” the report cautions.
The report identifies particular risks in algorithmic profiling, which could be perceived as unfairly biased and requires safeguards in internal processes, and in the ‘black box’ nature of some AI methods, whose inputs and operations are not visible to users. The latter can undermine accountability in decision-making, so systems should be designed in a way that allows non-technically skilled users to interpret and critically assess key technical information.
Moreover, despite a proliferation of ethical principles for AI, it remains unclear how these should be applied in practice, suggesting a need for additional sector-specific guidance. The study doesn’t prescribe specific or detailed solutions, but says it is crucial for the intelligence community to engage with external stakeholders in developing its policy for the use of AI, and to draw on lessons from other sectors.
“An agile approach within the existing oversight regime to anticipating and understanding the opportunities and risks presented by new AI capabilities will be essential to ensure the UK intelligence community can adapt in response to the rapidly evolving technological environment and threat landscape,” the report concludes.