Explained: How France plans to use artificial intelligence to keep the Paris 2024 Olympics safe

By Juliette Jabkhiro and Julien Pretot

PARIS (Reuters) – France tested artificial intelligence-driven video surveillance technology, due to be deployed during the Olympics, at a Depeche Mode concert this week and deemed the exercise a success.

French legislation passed in 2023 allows the use of AI video surveillance during a trial period covering the Games to detect abnormal events or human behavior at large-scale events.

The technology could be key to preventing attacks like the 1996 Atlanta Olympics bombing or the 2016 Nice truck attack, officials say.

Rights activists warn that the technology poses a threat to civil liberties.

WHAT IS AI-POWERED SURVEILLANCE?

Algorithmic video surveillance uses computer software to analyze images captured by video surveillance cameras in real time.

Four companies — Videtics, Orange Business, ChapsVision and Wintics — have developed AI software that uses algorithms to analyze video streams coming from existing video surveillance systems to help identify potential threats in public spaces.

Algorithms are trained to detect predetermined “events” and abnormal behavior and send alerts accordingly. Human beings then decide whether the warning is real and act on it.
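The detect-alert-review loop described above can be sketched roughly as follows. This is an illustrative assumption of how such a human-in-the-loop pipeline is typically structured, not the vendors' actual software; the class names, event labels and confidence threshold are all invented for the example.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Detection:
    event_type: str    # e.g. "abandoned_object", "fire" (labels are illustrative)
    confidence: float  # algorithm's confidence, 0.0 to 1.0
    camera_id: str

@dataclass
class AlertQueue:
    """Alerts wait here for a human operator; nothing is acted on automatically."""
    pending: list = field(default_factory=list)

    def push(self, det: Detection, threshold: float = 0.8) -> bool:
        # Only detections above a configured confidence threshold raise alerts.
        if det.confidence >= threshold:
            self.pending.append(det)
            return True
        return False

    def review(self, operator_confirms: bool) -> Optional[Detection]:
        # A human decides whether each warning is real before anyone acts on it.
        if not self.pending:
            return None
        alert = self.pending.pop(0)
        return alert if operator_confirms else None

queue = AlertQueue()
queue.push(Detection("abandoned_object", 0.92, "cam-14"))  # raises an alert
queue.push(Detection("crowd_surge", 0.40, "cam-07"))       # below threshold, dropped
confirmed = queue.review(operator_confirms=True)           # operator validates it
```

The key design point, reflected in the law, is that the software only flags; the decision to act stays with a person.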

WHAT WILL THE ALGORITHMS LOOK FOR?

The law allows artificial intelligence monitoring software to flag eight different “events” during the Games, including: crowd surges; abnormally large crowds; abandoned objects; the presence or use of weapons; a person on the ground; the outbreak of a fire; and breaches of traffic direction rules.

Within these categories, specific thresholds (number of people, type of vehicle, weather, etc.) can be set manually to match each individual event, location or threat.
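Such per-site tuning can be pictured as a configuration table mapping each location and event type to its own manually set thresholds. A minimal sketch follows; every site name, event label and threshold value here is invented for illustration.

```python
# Hypothetical per-site configuration: each event type carries its own
# manually tuned thresholds, which may differ from one location to another.
site_config = {
    "stadium-north": {
        "abnormally_large_crowd": {"max_people_per_m2": 4},
        "abandoned_object": {"min_stationary_seconds": 60},
        "traffic_direction_breach": {"vehicle_types": ["car", "truck"]},
    },
    "metro-station-a": {
        "abnormally_large_crowd": {"max_people_per_m2": 6},
        "person_on_ground": {"min_stationary_seconds": 10},
    },
}

def threshold_for(site: str, event: str, key: str, default=None):
    """Look up a manually set threshold for a given site and event type."""
    return site_config.get(site, {}).get(event, {}).get(key, default)

# A busy metro platform tolerates denser crowds than a stadium concourse.
metro_limit = threshold_for("metro-station-a", "abnormally_large_crowd",
                            "max_people_per_m2")
```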

WHO WILL USE AI-POWERED SURVEILLANCE?

National and local police, firefighters and public transport security agents will have access to AI-powered surveillance.

The software, developed by Wintics and tested at a Depeche Mode concert, will be installed in the Paris region and on public transport.

Paris police chief Laurent Nunez described the trial as largely successful.

“Everything went relatively well, all lights are green (for future use),” he said.

WILL FACIAL RECOGNITION BE USED?

It should not be. The new law still bans facial recognition in most cases, which French authorities say is a red line that should not be crossed.

Nevertheless, human rights campaigners are concerned about the danger of mission creep.

“Software that enables AI-powered video surveillance can easily enable facial recognition. It’s simply a choice of configuration,” said Katia Roux of Amnesty International France.

The legal framework governing facial recognition remains too vague, and technical and legal protections are insufficient, according to Amnesty International.

Wintics co-founder Matthias Houllier said his software’s algorithms are not trained to recognize faces.

“There is no method of personal identification in our algorithms,” he said. “Technically it’s off.”

HOW WILL PRIVACY BE PROTECTED?

The French interior ministry has set up an evaluation commission to monitor civil liberties during the trial period.

Headed by a high-ranking official within France’s top administrative court, the board also includes the head of the country’s privacy watchdog, the CNIL, four lawmakers and a mayor.

(Reporting by Juliette Jabkhiro and Julien Pretot; Editing by Richard Lough and Toby Davis)
