Most Britons oppose police use of facial recognition tech: poll

Hikvision security cameras monitor a pedestrian shopping street in Beijing, Tuesday, Oct. 8, 2019. AP Photo/Mark Schiefelbein

Almost two in three Britons disagree with police using artificial intelligence such as facial recognition technology to identify suspects, according to a survey released on Friday.

Public bodies and employers could face a popular backlash against “tech creep” in sectors from recruitment to policing, according to the poll, which was commissioned by Britain’s Royal Society for the encouragement of Arts, Manufactures and Commerce (RSA).

“An increasing amount of decision making – in our public services, the job market and healthcare – is taking place via ever more opaque processes,” said Asheem Singh, the RSA’s acting head of tech and society.

READ MORE: British case challenges facial recognition in the United Kingdom, home of the security camera

“We need an open conversation about AI and other forms of decision making, driven by the principles of transparency and accountability.”


There has been growing debate over the use of facial recognition technology by some police forces, with supporters saying it allows smarter policing, while critics say it is intrusive and often inaccurate.

The technology uses surveillance cameras equipped with facial-recognition software to scan passers-by in public spaces, applying artificial intelligence to compare the captured faces against watch lists of people sought by police.

If a suspect is identified, they can be stopped on the spot.
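The matching step described above can be sketched as a toy example: a numeric “face embedding” for a passer-by is compared against embeddings on a watch list, and a match is flagged only above a similarity threshold. This is a minimal illustration, not any police force’s actual system; the function names, vectors, and threshold are all hypothetical, and real deployments use learned embedding models over camera imagery.

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def match_face(probe, watchlist, threshold=0.8):
    # Return the index of the best watchlist match above the
    # threshold, or None if nobody on the list is similar enough.
    # (Hypothetical sketch: real systems tune this threshold, and
    # setting it too low is one source of the false matches critics cite.)
    best_idx, best_sim = None, threshold
    for i, entry in enumerate(watchlist):
        sim = cosine_similarity(probe, entry)
        if sim >= best_sim:
            best_idx, best_sim = i, sim
    return best_idx

# Toy demo: the probe face closely resembles watchlist entry 0.
watchlist = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
probe = [0.95, 0.05, 0.0]
print(match_face(probe, watchlist))  # prints 0
```

The threshold is the key policy lever: raising it reduces false matches of innocent passers-by at the cost of missing genuine suspects, which is the trade-off at the heart of the accuracy debate described above.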

Video: UK lawsuit questions use of facial recognition by police – Aug 6, 2019

Pollsters YouGov surveyed more than 2,000 British adults for the RSA, which gathered a “citizen jury” of about 25 people to debate computer decision-making.

The panellists looked at its use by police for automated facial recognition and to help decide when to prosecute people who have been arrested.


They concluded it was possible that machines would be more objective than people, but raised concerns that technology could reflect human biases and said there should be human oversight and accountability over decisions.

READ MORE: Microsoft warns that facial recognition tech brings ‘human rights risks’

The RSA called for greater public engagement by tech firms to educate the public about how technology is being developed and to help shape how artificial intelligence is used.

California in September banned police from using body cameras with facial recognition software, while some privacy campaigners have developed make-up or clothing designed to prevent the technology from identifying them.
