
UK Government 'Failing' in Openness Over AI Use Which May 'Undermine Human Rights', Parliament Says

A UK parliamentary committee, chaired by the former head of the British Security Service, published its review of the use of Artificial Intelligence on 10 February. The review followed three roundtable seminars, 19 written submissions, focus group and think tank research, and 50 meetings with individual stakeholders.
Sputnik

The British government is “failing” in its duty to be open about the implementation of Artificial Intelligence (AI) by public services, according to an in-depth assessment by the parliamentary Committee on Standards in Public Life.

The report, Artificial Intelligence and Public Standards, published on 10 February 2020, acknowledges that AI offers many potential benefits to the delivery of public services, from welfare to policing. However, it also warns that a “lack of transparency” in the use of AI, in combination with surveillance technology such as automatic facial recognition, “ha[s] the potential to undermine human rights”.

The lack of transparency noted in the report is “particularly pressing” in the fields of policing and human rights. Of the seven Nolan Principles guiding the delivery of good public services, the committee found that three – Openness, Accountability, and Objectivity – are challenged in particular by the use of AI.

There is a lack of transparency in terms of whether and how AI is being used by different public bodies, according to the report. The report says the government must establish one clear set of guidelines which is available to all and easy to read and understand. There isn't yet enough information to know whether Accountability is being properly observed by authorities when using AI. But the report does warn that “data bias” is a matter of “serious concern” which can affect Objectivity in the delivery of public services.

“The prevalence of data bias risks embedding and amplifying discrimination in everyday public sector practice”, the report concludes.

The problems of the lack of Openness and the risk posed by data bias are therefore “in need of urgent attention in the form of new regulation and guidance”.

The committee recommends that impact assessments should be mandatory for any public or private body using AI to deliver public services. The government should also use its power to ensure that private tech companies produce AI technology which satisfies all seven Nolan Principles and addresses anti-discrimination and human rights concerns.

While the report does not recommend the creation of an AI regulator, it does say that central government and the Centre for Data Ethics and Innovation must play a leading role in ensuring consistent and wholesale application of clear guidelines. “The public needs to understand the high level ethical principles that govern the use of AI in the public sector”, the report says, something the committee concluded the government is thus far failing to ensure.

The parliamentary committee was chaired by Lord Jonathan Evans of Weardale, the former Director General of the British Security Service (MI5).

“Public sector organisations are not sufficiently transparent about their use of AI and it is too difficult to find out where machine learning is currently being used in government”, Lord Evans contended following the publication of the report.

On 27 January 2020 the Metropolitan Police Service announced that it was officially rolling out the use of Live Facial Recognition technology. The decision was met with strong opposition from rights groups and questions regarding both its accuracy and invasiveness.
