All of Elon Musk’s worst fears are slowly becoming reality.
The Telegraph reports that robocop-type policing techniques are set to replace beat cops within a decade as the use of artificial intelligence becomes more widespread in investigating crimes.
Thames Valley Police said AI computers – which mimic humans by making decisions themselves – could be used to answer 999 calls, detect crimes and identify offenders.
But the TVP flagged one chilling detail that will hopefully be resolved within the next decade or so, before robocops become a routine presence on London’s streets: it warned of “bias” in the AI software and raised a concern that AI computers “might be unable to reason with a human”.
The story offers no further clues about the nature of these biases until much later, when the leader of a civil liberties group – predictably alarmed by plans to automate police forces – pointed to a study showing that the machines’ algorithms have exhibited racial biases, according to the Telegraph.
David Green, director of the Civitas thinktank, warned that the AI computers could unfairly target ethnic minority groups.
He said: “Robocop policing has now arrived in England. This Orwellian reliance on automated decisions has been found to undermine the most basic precepts of the justice system when it has been tried in America.
“An experiment in Fort Lauderdale, for example, found that the algorithm reflected human prejudices, including racial bias.”
However, the news comes as ministers prepare to publish the first ever review into how AI will change Britain over the coming decades. AI is already used by Scotland Yard to recognise faces at London’s Notting Hill Carnival. Durham Constabulary is also planning to use AI for deciding whether to keep suspects in custody.
In a submission to a Parliamentary inquiry into the Implications of Artificial Intelligence, Thames Valley said that “even at the lowest level AI could perform many of the process driven tasks that take place in the police”.
AI could be used to assist “investigations by ‘joining the dots’ in police databases, the risk assessment of offenders, forensic analysis of devices, transcribing and analysis of CCTV and surveillance, security checks and the automation of many administrative tasks”, it said.
To be sure, Thames Valley police noted that even once robocops are ready for widespread use, they will still require “a high level of human oversight and clear justification”.
The submission noted that “recent tests of AI in policing indicate there is a risk of bias perpetuation in AI outputs, therefore engagement with Privacy and Civil Rights groups will be necessary to persuade the public that everything possible is being done to mitigate this whilst doing our best to keep them safe.
“Of utmost importance is that any AI process that involves an ethical issue, must have a high level of human oversight and clear justification.
The automation of processes also introduces a risk of being unable to reason with a human when events occur outside expected parameters.”
Dubai earlier this year became the first major city to deploy robocops, saying that it expects them to comprise 25% of the United Arab Emirates’ police force by 2030.
Musk has repeatedly warned that humans are underestimating the dangers of AI, claiming that these technologies warrant strict government oversight so that robots don’t transform into a threat to humanity.
…just in case you forgot
If you’re not concerned about AI safety, you should be. Vastly more risk than North Korea. pic.twitter.com/2z0tiid0lc
— Elon Musk (@elonmusk) August 12, 2017
* * *