Would you trust PC Brother?
The use of unreliable facial recognition technology is growing without sufficient scrutiny or accountability
Facial recognition technology (FRT) is expanding rapidly in the UK, with very little public debate, scant parliamentary discussion and, critically, no clear laws to govern it. It's long past time for the public to be put in the picture about where the technology could lead without proper regulation.
Chris Philp, the policing minister, wants police forces to ramp up their use of FRT. Up to now, some forces have trialled live FRT only at specific events, such as music festivals, demonstrations and the Coronation. However, the police have been using FRT retrospectively for some time: since 2014 they have applied it to CCTV images of criminal acts, drawing on the 12 million custody images held in the Police National Database.
Live facial recognition (LFR) takes the technology a step further, using it in real time to identify people the police are looking for. Many will see this as efficient, whilst some will consider the loss of individual anonymity the worst consequence of facial tracking in public spaces. But the potential harm goes much deeper. To understand the implications properly, the technology must be seen in the context of the move towards predictive policing and the fallibility of machine programs.
Seeing them seeing you
Both the private and public sectors place faith in the future of behavioural analytics technology. Surveillance capitalism, underpinned by the profiling of individual behaviour, is central to how the internet has developed and matured. Technological applications are used retrospectively on data to detect patterns in human behaviour. This may have created effective targeted advertising, such as recommending a television series, but using it to make future predictions in a law enforcement context is far more controversial.
The algorithms used and developed by the police are looking at the likelihood that someone will commit an offence in the future, as opposed to examining evidence where a crime has already taken place. The police have good reason to want to use these predictive tools: they want to reduce crime rates, but also to provide an alternative to people entering the criminal justice system. Durham Constabulary used a predictive analysis tool (“HART”) for their out-of-court disposal system. The system allowed people who committed minor offences to avoid a criminal record. HART was used to assess who was likely to go on to commit a serious offence and therefore could not qualify.
This shift towards predictive policing should cause concern. It is important to understand that algorithms of this kind do not tell you whether a person will in fact do something. Algorithms cannot predict the future. They simply label human behaviour and tell you whether a person shares the same characteristics as another person and, on that basis, conclude that they will behave in a similar fashion. The algorithm will select particular factors as causative of an outcome but will only produce a probabilistic output. It cannot provide certainty.
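To make this concrete, here is a minimal sketch of what a risk-scoring algorithm of this kind computes. The characteristics, weights and numbers are invented for illustration and are not drawn from HART or any operational police system; the point is that the output is only a probability describing how closely a person resembles past cases, never a statement of what they will do.

```python
# A purely illustrative sketch of a risk-scoring algorithm. The characteristics,
# weights and bias below are invented; they are not taken from HART or any
# real police system.
import math

WEIGHTS = {
    "prior_arrests": 0.8,       # invented weight per recorded arrest
    "age_under_25": 0.5,        # invented weight for an age band
    "postcode_risk_band": 0.6,  # invented weight for an area classification
}
BIAS = -2.0

def risk_score(person: dict) -> float:
    """Return a probability between 0 and 1.

    The score only measures how closely this person's recorded characteristics
    resemble those of people who offended in the past; it says nothing certain
    about what this individual will actually do.
    """
    z = BIAS + sum(weight * person.get(name, 0) for name, weight in WEIGHTS.items())
    return 1 / (1 + math.exp(-z))  # logistic squashing into (0, 1)

# Two people with identical recorded characteristics always get the same score,
# whatever their real future behaviour turns out to be.
print(risk_score({"prior_arrests": 2, "age_under_25": 1, "postcode_risk_band": 1}))  # ~0.67
print(risk_score({"prior_arrests": 0, "age_under_25": 0, "postcode_risk_band": 0}))  # ~0.12
```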
Police forces are now building systems that draw not just on policing data but also on information from other public sector departments, such as social care, local authorities and education. They also have access to the UK passport database for FRT.
Police algorithmic systems are also believed to draw on information bought from data brokers, such as Experian’s Mosaic database, which has profiled around 50 million people in the UK. Police databases contain information harvested from social media, including contentious data such as whether you listen to a particular type of music (for example “drill”, which supposedly makes you more likely to be a gang member).
The potential pitfalls of predictive algorithms in criminal justice are substantial. The science underpinning an algorithm must be provably correct, yet it is not possible to codify every element that goes into decisions about humans in nuanced contexts. There is also the ethical problem of singling out people who have never committed an offence on the basis that other people with similar "characteristics" have done so. The characteristics selected, and the reasons for selecting them, must be objectively justifiable and proven to be causative. The data used by the algorithm must be up to date and accurate at the point of application. Yet there are currently no explicit standards for the scientific validation of algorithmic outputs, nor best practice for the use of algorithms in the criminal justice system.
Grim predictions
The trend towards predictive analytics in policing makes it almost inevitable that such tools will be combined with new technologies, and FRT and LFR are an obvious starting point. Predictive algorithms are likely to be used to compile the "watchlists" that are fed into FRT systems. Watchlists are fundamental to LFR use. The police assemble, at present manually and laboriously from multiple databases, a list of persons they are interested in finding. Note that in the LFR trials conducted by the police to date, it is not only people with outstanding arrest warrants who appear on the watchlists. They have also included "persons of interest": anyone the police want to find, for whatever reason.
The College of Policing's Live Facial Recognition Authorised Professional Practice, published in March 2022, sets out guidance for the use of live FRT. The guidance has not been objectively evaluated, and the police can target any person associated with suspects, witnesses or victims of crimes. This is a very wide remit, even if the common law grants the police authority to prevent crime and to "maintain the King's peace without fear or favour".
The watchlists have been steadily growing in size: from fewer than 50 people in earlier deployments, to 6,800 in the Met’s watchlist in July 2022. On average, very few arrests are made. There is dispute over how the efficacy of LFR should be measured, and this has resulted in wildly different statistics being cited by the police, privacy organisations and researchers.
When the FRT system scans faces in the crowd against the watchlist and flags a match, a human operator evaluates the match to decide whether it is likely to be correct. If the operator affirms it, a police officer will then stop the individual to confirm whether they really are the person on the watchlist (a "true" match).
Privacy organisations argue that the measure of accuracy should be the number of "true" matches as a proportion of all matches suggested by the FRT system. On this calculation, the rate of verified "true" matches in past trials varies from around 19 to 30 per cent; most matches suggested by the systems are false. The police use a different metric. For true matches they do not use the operational figures but a "seeded" rate, based on planted persons in the system. For false matches, they measure each false match against the total number of faces scanned by the system. Given the enormous number of faces scanned, this of course brings the false match rate down from around 70–80 per cent to 0.1 per cent.
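The gap between those figures is largely a matter of which denominator is chosen. A minimal sketch of the arithmetic, using round numbers consistent with the ranges above and an assumed, purely illustrative count of faces scanned, shows how the same deployment can be described as 75 per cent wrong or 99.9 per cent accurate:

```python
# Illustrative arithmetic only: the alert counts are round numbers consistent
# with the ranges cited above; the number of faces scanned is an assumption.
true_matches = 25       # alerts confirmed on the street as the right person
false_matches = 75      # alerts that turned out to be the wrong person
faces_scanned = 75_000  # assumed total faces processed by the cameras

# The measure favoured by privacy organisations: what share of alerts were right?
share_of_alerts_correct = true_matches / (true_matches + false_matches)
print(f"True matches per alert: {share_of_alerts_correct:.0%}")        # 25%

# The measure favoured by the police: false alerts per face scanned.
false_alerts_per_face = false_matches / faces_scanned
print(f"False matches per face scanned: {false_alerts_per_face:.1%}")  # 0.1%
```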
Adding to the difficulty of calculating accuracy, it is impossible to know whether matches discounted as false by the operator were in fact true, or whether the system failed to pick up people on the watchlist at all despite their faces being scanned.
The accuracy rate is only one element to consider. The watchlists themselves also bring difficulties. The creation of the watchlists is incredibly time-consuming, and they are only as good as the various databases that populate them. National and local police databases are updated at different times so watchlists can be out of date by the time they are used. This has led to people being stopped who are no longer wanted by the police.
Black and white footage
The use of the technology compounds the problem of the over-policing of ethnic minorities, in particular black people, who are already eight times more likely to be stopped by the police. The National Institute of Standards and Technology analysed 189 facial recognition algorithms from 99 developers and found that people with dark skin were misidentified up to 100 times more often than those with white skin. The technology is improving, but a bias remains that affects ethnic minority communities. One LFR match, which turned out to be incorrect, resulted in five plain-clothes officers stopping a 14-year-old schoolboy.
Factors such as the number of times someone has been arrested are likely to be fed into predictive analysis databases. But the fact that someone has been arrested does not necessarily indicate guilt. If bias is involved in the decision to arrest someone, the data becomes skewed; if a person is repeatedly picked out by FRT because of automated bias, the problem is compounded. Misleading data fed into pre-crime algorithms will produce watchlists divorced from reality.
The police claim that the use of new technologies at protests helps the public "safely undertake their assembly". Civil liberties campaigners feel otherwise and see LFR as having a chilling effect on the rights to free speech and public assembly. Dr Edward Bridges brought a legal challenge against South Wales Police for scanning his face with LFR at a demonstration at an arms fair in Cardiff in 2018. The case reached the Court of Appeal and, although it succeeded, the court's conclusions focused on the public sector equality duty rather than the right to public protest or the right to privacy.
There is no substantive law that governs FRT, LFR or the use of algorithms in the public sector. A patchwork of laws can be applied, but they do not provide any coherent guidance. The argument that data protection law can fill the void and provides enough regulation to prevent misuse (intentional or otherwise) is incorrect because of the cost and difficulty of bringing data protection claims. Data protection places the onus on the individual to know that they can bring a claim and then to do so, which is ill-suited to situations where the harms are societal. The way the procedural rules on collective litigation are structured in the UK makes it almost impossible to bring a class action based on data protection breaches.
Wasting police time
Whether the law on privacy can provide the necessary safeguards against abuse of FRT is uncertain. The law permits overt surveillance regardless of whether it is targeted at a particular individual, so long as there are various safeguards in place and the public is made aware of it, but there are more restrictions where targeted surveillance is covert.
The police argue that LFR is the same as CCTV. It is not. An individual not wanted by the police on a warrant would have no idea that they are on a watchlist. Whilst the camera is in public view, the FRT system behind it is not. Privacy claims are context-specific, and any challenge to LFR will therefore depend on where, when and why the system was deployed, and whether its impact on the person concerned is enough to engage their right to privacy.
Litigation is a blunt tool, and cases against the public sector are ultimately paid for by taxpayers. Even where a case succeeds, it does not guarantee change. Although the Court of Appeal found in favour of Dr Bridges, it has not stopped the deployment of LFR because a court of law cannot adjudicate on a possible future event. It declined to consider any “hypothetical scenarios”.
Police forces using predictive algorithms and LFR say the tools merely support or enhance decision-making, but statistical predictions can prejudice human decisions. There is a tendency to assume that technology is more accurate than a person, and where there has been significant financial investment in a system, there may be a sense of obligation to use it. Nothing illustrates this more clearly than the Post Office scandal and the insistence that the Horizon accounting system was sound.
The police may claim that the use of the technology creates efficiency, but this can only be judged if the actual hours spent creating and verifying the watchlists are revealed. Time spent chasing false positives eats into police time too. To put this in context, across 58 deployments by South Wales Police the LFR system generated 3,140 matches, of which 315 were true and 2,825 were false. There is a danger in relying on technology when traditional policing methods may be more effective.
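How much police time those false matches consume is anyone's guess without published figures, but a rough back-of-the-envelope sketch, using the South Wales numbers above and an assumed, purely illustrative verification time per alert, shows how quickly it mounts up:

```python
# Back-of-the-envelope sketch. The match counts are the South Wales figures
# cited above; the minutes spent per false alert is an assumed figure for
# illustration, not a published one.
true_matches = 315
false_matches = 2_825
minutes_per_false_alert = 10  # assumption: operator review plus any street stop

true_match_rate = true_matches / (true_matches + false_matches)
hours_on_false_alerts = false_matches * minutes_per_false_alert / 60

print(f"Share of alerts that were correct: {true_match_rate:.0%}")          # ~10%
print(f"Officer hours spent on false alerts: {hours_on_false_alerts:.0f}")  # ~471
```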
Public truths
Objection to the use of LFR is often framed as a privacy issue. However, this is not the most important issue at stake. It is the impact that improperly designed or flawed identification systems will have on the criminal justice system. It is only a matter of time before the consequences of the labyrinthine decision-making processes that are being created, and the prolific use of FRT, become apparent.
In its reform of data protection laws, the Government is planning to abolish the role of the Biometrics and Surveillance Camera Commissioner and, with it, the Surveillance Camera Code of Practice. This was the only independent code of practice overseeing FRT and other advancing technologies, and it was the guidance the police relied on in the Bridges case to justify their use of LFR. Parliamentary committees have urged legislation and independent scrutiny of FRT and LFR. The public, too, should be demanding transparency and clear laws to govern their use. Blindly trusting the police to use the technology wisely and in the best interests of all of us would be as unwise as blind faith in their machines.