Permanent Facial Recognition Cameras – Balancing Civil Liberties and Public Safety in the Use of Live Facial Recognition Technology
Last month, the Met Police announced that permanent live facial recognition (LFR) technology is due to be installed for the first time in Croydon, London.
LFR works by mapping a person's unique facial features, which can be matched against faces on police “watch-lists”. Until now, LFR has only been used by the police in an overt manner: specially equipped cameras mounted on a marked police van monitor public spaces and crowds, identifying people in real time by matching their faces against police databases. The announcement of permanent LFR installations appears to mark a turning point in the relationship between public space and state surveillance.
Prior to this announcement, a two-year trial was conducted in the local area, in which police vans fitted with LFR cameras patrolled the streets, matching members of the public against a database of suspects or criminals. Following the Met’s announcement, LFR cameras will be attached to “street furniture” such as lampposts or buildings in Croydon. If the pilot proves successful, it could be expanded citywide, representing a substantial increase in the police’s surveillance capabilities in public spaces.
The absence of a comprehensive legal framework governing the police’s use of live facial recognition technology, combined with a lack of formal consultation with residents in affected areas, has heightened existing concerns about serious invasions of privacy and breaches of human rights. These concerns are further compounded by the well-documented trend of disproportionate surveillance in communities already affected by systemic inequalities and over-policing.
HOW DOES LFR WORK?
Live Facial Recognition technology operates by scanning the faces of individuals in public spaces using surveillance cameras. The system captures live video footage, from which the software detects human faces, extracts distinguishing facial features, and converts them into biometric templates. These templates are then compared in real time against a police-generated watchlist.
Prior to deployment, the police compile this watchlist, which may include both police-originated images - such as custody photographs from national databases - and non-police-originated images, including those obtained from publicly available sources or shared by other public bodies. These watchlists can include individuals wanted by the police or the courts, as well as those assessed as posing a risk of harm to themselves or others. This approach appears to significantly broaden the scope of LFR surveillance beyond its stated crime prevention objectives, potentially engaging individuals who are not suspected of any criminal activity.
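To make the matching step concrete, the sketch below gives a minimal illustration in Python (using only NumPy) of how a biometric template, treated as a numeric feature vector, can be compared against a watchlist using cosine similarity and an alert threshold. The vectors, the watchlist entries, and the 0.60 threshold are placeholders chosen for illustration; they are not drawn from the Met’s actual system, which uses a proprietary algorithm.

```python
import numpy as np

# Hypothetical watchlist: each entry is a biometric "template",
# represented here as a feature vector (real systems use proprietary
# embeddings of much higher dimension, derived from face images).
rng = np.random.default_rng(0)
watchlist = {
    "subject_A": rng.normal(size=128),
    "subject_B": rng.normal(size=128),
}
# Normalise each enrolled template to unit length.
watchlist = {name: vec / np.linalg.norm(vec) for name, vec in watchlist.items()}

def match_against_watchlist(template, threshold=0.60):
    """Return the watchlist identity whose cosine similarity to the
    probe template exceeds the threshold, or None if nothing matches."""
    template = template / np.linalg.norm(template)
    best_id, best_score = None, threshold
    for identity, enrolled in watchlist.items():
        score = float(np.dot(template, enrolled))  # cosine similarity
        if score > best_score:
            best_id, best_score = identity, score
    return best_id

# A probe template extracted from a live camera frame would be compared
# like this; here we reuse an enrolled vector with a little noise added
# to simulate a "genuine" sighting of subject_A.
probe = watchlist["subject_A"] + 0.05 * rng.normal(size=128)
print(match_against_watchlist(probe))  # expected to print "subject_A"
```

The key point is that an “identification” is simply the highest similarity score above a configurable threshold, which is why the choice of threshold (discussed below) directly determines how many people are falsely flagged.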
The permanent installation of LFR marks a significant shift away from established UK policing norms designed to safeguard individual rights and civil liberties. Before the introduction of permanent LFR systems, individuals were not obliged to identify themselves to law enforcement unless there was reasonable suspicion of involvement in criminal activity. LFR seemingly undermines this protective principle by subjecting everyone who passes the cameras to mass identity checks, risking the perception that all individuals are potential suspects and thereby eroding the presumption of innocence.
EXISTING LEGAL FRAMEWORK
Other forms of biometric processing, such as fingerprinting and DNA testing, are subject to strict controls and oversight under specific legislation, but the legal framework governing the use of LFR remains largely inadequate. The police rely on their ‘common law powers’ as the basis for deploying LFR. Existing instruments such as the Data Protection Act 2018, the Human Rights Act 1998, and the Surveillance Camera Code of Practice provide some oversight, but none was specifically designed to address the complexities of LFR.
Police forces that deploy LFR often cite the GDPR and the Equality Act as legal justifications. These instruments impose broad obligations on public authorities but cannot be considered enabling legislation, as they contain no specific provisions or regulatory framework for police use of live facial recognition technology.
In 2020, the Equality and Human Rights Commission (EHRC) criticised the legal basis for police use of LFR, describing it as “insufficient” and urging the UK government to suspend its use. In a submission to the UN Human Rights Committee, the EHRC questioned whether LFR complies with Article 17 of the International Covenant on Civil and Political Rights (ICCPR), noting its reliance on common law powers and the absence of explicit statutory regulation. The Commission also raised concerns about privacy infringements and the disproportionate impact on marginalised groups. In April 2024, it reiterated its warnings, highlighting the risks posed by mass-scale surveillance and calling for robust legal safeguards.
In R (Bridges) v South Wales Police [2020], the Court of Appeal ruled that the police’s use of LFR had been unlawful due to shortcomings in the legal and policy framework underpinning its deployment. The case was brought by civil liberties campaigner Ed Bridges, who had been scanned during two deployments between 2017 and 2018, raising significant concerns around privacy, data protection, and equality law. The Court held that the interference with Mr Bridges’s rights was not “in accordance with the law”, citing the absence of clear guidance on where LFR could be used and who could be included on watchlists, which left officers with “overbroad discretion”. The case highlighted significant gaps in oversight and accountability, and the ruling underscored the need for clearer safeguards and greater transparency.
DISPROPORTIONATE IMPACT ON MINORITY COMMUNITIES
Even before the announcement of permanent LFR installations, the Met had been criticised for the disproportionate deployment of facial recognition cameras in London, contributing to the ongoing over-policing of marginalised communities. During the “trial” period, LFR was found to have been deployed mostly in boroughs with a higher-than-average Black population. The areas included Croydon’s Thornton Heath, where 40% of residents are Black, Haringey’s Northumberland Park (36%), and Lewisham’s Deptford High Street (34%) - all significantly higher than the proportion of Black residents across London as a whole (13.5%).
The Met has previously claimed that its LFR technology does not exhibit the same racial bias as found in other forms of facial recognition, but there are concerns regarding the “troubling assumptions” that are reinforced by the disproportionate deployment of LFR. For example, Green Party London Assembly member Zoe Garbett notes:
“The Met’s decision to roll out facial recognition in areas of London with higher Black populations reinforces the troubling assumption that certain communities, such as those in Croydon, Lewisham and Haringey, are more likely to be criminals… The Met claims live facial recognition has been a success in London, but how can treating millions of Londoners as suspects be considered a success? The arrest figures are low, and it’s really just subjecting us to surveillance without our knowledge, with Black Londoners being disproportionately targeted”.
The police’s claim of reduced bias in the technology (compared with other facial recognition systems) is largely based on a report from the National Physical Laboratory (NPL), commissioned by the Met Police to assess the accuracy of its live facial recognition systems. The Home Office and police forces have relied on the report to claim there is no significant performance disparity across demographics. However, the study tested only one algorithm, while different forces use various LFR systems with differing levels of accuracy. Notably, the report found that the software was less accurate for women and people of colour - an issue that persists at the lower accuracy thresholds previously used by police, with no safeguards in place to prevent their future use. At the commonly used 0.60 threshold, 13 Black and Asian individuals were misidentified, while no white individuals were falsely flagged. These concerns are exacerbated in real-world settings, where large-scale deployments can result in hundreds of false matches, raising serious questions about the reliability and fairness of LFR technology.
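The effect of the operating threshold can be illustrated with a short, purely hypothetical calculation. The sketch below (Python, with invented score distributions that do not reproduce the NPL study’s data) counts false alerts per 100,000 people who are not on the watchlist, at two different thresholds. The general point it demonstrates is that if one group’s non-match scores sit even slightly higher on average, lowering the threshold inflates false alerts unevenly across groups.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented similarity scores for people who are NOT on the watchlist.
# Group B's non-match scores are assumed to sit slightly higher than
# group A's - one possible mechanism behind demographic disparities.
non_match_scores = {
    "group_A": rng.normal(loc=0.40, scale=0.08, size=100_000),
    "group_B": rng.normal(loc=0.44, scale=0.08, size=100_000),
}

for threshold in (0.64, 0.60):
    print(f"threshold = {threshold}")
    for group, scores in non_match_scores.items():
        false_alerts = int((scores > threshold).sum())
        print(f"  {group}: {false_alerts} false alerts per 100,000 faces scanned")
```

With these invented numbers, dropping the threshold from 0.64 to 0.60 multiplies false alerts for both groups, but the group whose scores sit higher accumulates disproportionately more of them, which is why the choice of threshold is itself an equality issue rather than a purely technical setting.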
The risk of discriminatory outcomes was illustrated in February 2024, when Shaun Thompson, a Black community worker campaigning against knife crime, was wrongly flagged by the Metropolitan Police’s live facial recognition system near London Bridge. Despite presenting multiple forms of identification, he was detained for nearly 30 minutes, during which officers demanded his fingerprints and threatened arrest. Mr Thompson is now pursuing legal action against the Met, arguing that the incident amounted to an unlawful interference with his right to privacy under Article 8 of the European Convention on Human Rights.
The Met has attempted to reassure the public in response to such concerns, claiming that LFR "is a direct result of listening to community concerns about serious violence and other issues” and that they “continue to engage with our communities to build understanding about how this technology works, providing reassurances that there are rigorous checks and balances in place to protect people's rights and privacy.”
However, human rights groups such as Liberty and Big Brother Watch, as well as several local councillors, have expressed concerns about the lack of community engagement surrounding the announcement of permanent LFR installations in Croydon, London. Croydon is not an isolated case: in December 2024, community consultation documents released to Computer Weekly under Freedom of Information (FoI) laws revealed that, despite the Met’s assertion that its LFR deployments in Lewisham are “supported by the majority of residents”, there had been minimal, if any, engagement with the local community.
Furthermore, concerns have been raised regarding the potential use of millions of unlawfully retained custody images for facial recognition purposes by UK police forces. This issue was underscored by the outgoing Biometrics Commissioner for England and Wales, Tony Eastaugh, in a report that formed part of a broader warning about the unchecked expansion of surveillance technologies.
The ethical implications of such practices are significant and are further compounded by the findings of the Casey Review, which concluded that the Metropolitan Police is institutionally racist, sexist, and homophobic. These findings raise serious and urgent questions about the risks of embedding discriminatory practices into surveillance-based policing, particularly when the underlying data and technologies lack transparency, oversight, and meaningful accountability.
THE NEED FOR LEGAL SAFEGUARDS IN BIOMETRIC SURVEILLANCE
The Information Commissioner's Office has recently emphasised the need for restraint in the use of live facial recognition technology, calling for a statutory code of practice to ensure its use is necessary, proportionate, and fair. This caution is underscored by broader ethical concerns, particularly in light of the aforementioned Casey Review findings on institutional bias within the Metropolitan Police. Without a clear legal framework and robust oversight, LFR risks reinforcing discriminatory patterns of over-policing and eroding public trust.
Both Parliament and civil society have consistently called for comprehensive legislation to govern law enforcement’s use of biometric technologies. This includes an official inquiry by the House of Lords Justice and Home Affairs Committee into the police’s use of advanced algorithmic tools, as well as repeated warnings from Biometrics Commissioners.
Given this sustained pressure, it is striking that no dedicated legal framework has yet been introduced. In the absence of clear statutory safeguards, the expansion of LFR risks undermining public trust, eroding civil liberties, and enabling discriminatory surveillance practices. To navigate the delicate balance between public safety and civil liberties, any deployment of such surveillance tools must be transparent, accountable, and rooted in law.