Major concerns with facial recognition used by police

By Tim Sandle     Jul 6, 2019 in Technology
London - Several police forces around the world are using facial recognition technology. But how accurate is this imaging analysis? And does the accuracy depend on how it is assessed?
The facial recognition software used by the U.K.'s biggest police force - the Metropolitan Police - has come in for criticism, based on an independent report. The report finds that 81 percent of potential 'suspects' flagged by the police force's facial recognition technology were innocent. This contrasts with the Metropolitan Police's own assessment, which places the error rate at a much lower level - just one in one thousand.
The independent assessment comes from researchers based at the University of Essex. The academics were granted access to six live field tests conducted by the Metropolitan Police in London. According to The Guardian, the researchers found that the technology regularly misidentified people, who were subsequently wrongly stopped and questioned by police officers. The researchers also issue a general warning about “surveillance creep”, by which they mean the technology being applied to find people who are not wanted by the courts.
The researchers, Professor Fussey and Dr Murray, told Sky News (which worked with The Guardian in supporting the research) that the police's use of facial recognition during these trials lacked "an explicit legal basis" and failed to take into account how this technology infringed fundamental human rights. Professor Fussey states: "Our report conducted a detailed, academic, legal analysis of the documentation the Met Police used as a basis for the face recognition trials. There are some shortcomings and if [the Met] was taken to court there is a good chance that would be successfully challenged."
This is reflective of the fact that training machines to "see" - or to recognize and differentiate between faces - is very difficult. Much comes down to how well the algorithm has been trained, and no facial recognition system on the market is error-free.
In response to the report, Duncan Ball, the Metropolitan Police's deputy assistant commissioner, says: "We are extremely disappointed with the negative and unbalanced tone of this report... We have a legal basis for this pilot period and have taken legal advice throughout. We believe the public would absolutely expect us to try innovative methods of crime fighting in order to make London safer."
That the Metropolitan Police are keen on facial recognition technology - both as a means of detecting known troublemakers in a crowd and for retrospectively identifying those who have caused a disturbance - is captured in a statement made by Ken Marsh, who is chairman of the Metropolitan Police staff association. Marsh is championing China as a success story.
Marsh's comments come as the campaign organization Human Rights Watch (an international non-governmental organization, headquartered in New York City) describes the facial recognition system as "China's algorithms of repression". The organization also describes the technology used as flawed: "There is also a facial recognition component, as the screen shows the extent to which the person’s ID photo matches the photo of that person."