The Israeli army has reportedly deployed a facial recognition network across the Gaza Strip, scanning ordinary Palestinians as they move through the devastated territory, trying to escape the ongoing bombing and searching for food for their families.
The program relies on two different facial recognition tools, according to the New York Times: one built by the Israeli contractor Corsight, and the other built into Google Photos, the popular consumer image-organization platform. An anonymous Israeli official told the Times that Google Photos works better than any alternative facial recognition technology, helping the Israeli military create a “hit list” of alleged Hamas fighters involved in the October 7 attack.
Mass surveillance of Palestinian faces resulting from Israeli attempts to identify Hamas members has ensnared thousands of Gazans since the October 7 attack. Many of those arrested or imprisoned, often with little or no evidence, later said they were brutally interrogated or tortured. In its story on facial recognition, the Times pointed to the Palestinian poet Mosab Abu Toha, whose arrest and beating by the Israeli military began with the use of facial recognition. Abu Toha, who was later released without being charged with any crime, told the newspaper that Israeli soldiers admitted his arrest via facial recognition had been a “mistake”.
Accuracy issues aside — facial recognition systems are notoriously less accurate on non-white faces — using Google Photos’ machine learning-powered analysis features to put civilians under military control, or worse, is against the company’s clearly stated policies. Under the heading “Dangerous and illegal activities”, Google warns that Google Photos cannot be used to “promote activities, goods, services or information that cause serious and immediate harm to people”.
“Facial recognition surveillance of this kind undermines rights enshrined in international human rights law.”
Asked how banning the use of Google Photos to harm people was compatible with the Israeli military’s use of Google Photos to create “hit lists,” company spokesman Joshua Cruz declined to answer, saying only that “Google Photos is a free product that is widely available to the public that helps you organize photos by grouping similar faces, so you can tag people to easily find old photos. It doesn’t reveal the identity of unknown people in photos.” (Cruz did not respond to repeated subsequent attempts to clarify Google’s position.)
It’s unclear how such bans — or the company’s longstanding public human rights commitments — apply to Israel’s military.
“It depends on how Google interprets ‘serious and imminent harm’ and ‘illegal activity,’ but facial recognition surveillance of this kind undermines rights enshrined in international human rights law — privacy, non-discrimination, expression, the right of assembly and more,” said Anna Bacciarelli, associate technical director at Human Rights Watch. “Given the context in which this technology is being used by Israeli forces, amid the widespread, ongoing and systematic denial of human rights to the people of Gaza, I hope that Google will take appropriate action.”
Doing good or doing Google?
In addition to banning the use of Google Photos to harm people in its terms of service, the company has for years claimed to embrace various global human rights standards.
“Since Google’s founding, we’ve believed in harnessing the power of technology to advance human rights,” Alexandria Walden, the company’s global head of human rights, wrote in a blog post in 2022. “That’s why our products, business operations and decision-making about new technologies are grounded in our Human Rights Program and deep commitment to increasing access to information and creating new opportunities for people around the world.”
This deep commitment includes, according to the company, adherence to the Universal Declaration of Human Rights — which prohibits torture — and the UN Guiding Principles on Business and Human Rights, which note that conflicts over territory produce some of the worst rights abuses.
The use of a free, publicly available Google product like Photos by the Israeli military raises questions about these corporate human rights obligations and the extent to which the company is willing to act on them. Google says it supports and embraces the UN’s Guiding Principles on Business and Human Rights, a framework that calls on corporations to “prevent or mitigate negative impacts on human rights that are directly related to their business, products or services in their business relationships, even if they have not contributed to those impacts.”
Walden also said Google supports Due Diligence for Conflict-Sensitive ICT Companies, a voluntary framework that helps technology companies avoid misuse of their products and services in war zones. Among the document’s many recommendations is that companies like Google consider “the use of products and services for state surveillance in violation of international human rights law that causes an immediate impact on privacy and physical security (i.e., to locate, arrest, and detain someone).” (Neither JustPeace Labs nor Business for Social Responsibility, which co-authored the due diligence framework, responded to a request for comment.)
“Both Google and Corsight have a responsibility to ensure that their products and services do not cause or contribute to human rights violations,” Bacciarelli said. “I would expect Google to take immediate action to end the use of Google Photos in this system, based on this news.”
Google employees participating in the No Tech for Apartheid campaign, a labor-led protest movement against Project Nimbus, called on their employer to prevent the Israeli military from using Photos’ facial recognition to prosecute the war in Gaza.
“That the Israeli military is even weaponizing consumer technology like Google Photos, using embedded facial recognition to identify Palestinians as part of its surveillance apparatus, indicates that the Israeli military will use whatever technology is available to it — unless Google takes steps to ensure its products do not contribute to ethnic cleansing, occupation and genocide,” the group said in a statement shared with The Intercept. “As Google workers, we demand that the company immediately abandon Project Nimbus and cease all activities that support the genocidal agenda of the Israeli government and military to decimate Gaza.”
Project Nimbus
This wouldn’t be the first time Google’s avowed human rights principles have come up against its business practices — and not only in Israel. Since 2021, Google has sold advanced cloud computing and machine learning tools to the Israeli military through its controversial “Project Nimbus” contract.
Unlike Google Photos, a free consumer product available to everyone, Project Nimbus is a software project tailored to the needs of the Israeli state. However, both Nimbus and Google Photos’ face-matching capabilities are products of the company’s vast machine learning resources.
Selling these sophisticated tools to a government so regularly accused of human rights abuses and war crimes goes against Google’s AI principles. The guidelines prohibit the use of artificial intelligence that could cause “harm”, including any applications “whose purpose is contrary to generally accepted principles of international law and human rights”.
Google has previously suggested that its “principles” are actually far narrower than they appear, applying only to “custom AI work” and not to the general use of its products by third parties. “This means that our technology can be used quite widely in the military,” a company spokesperson told Defense One in 2022.
It remains unclear how, or if, Google will ever translate its executive blogging pledges into real-world consequences. Ariel Koren, a former Google employee who said she was forced out of her job in 2022 after protesting Project Nimbus, placed Google’s silence on Photos within a broader pattern of shirking responsibility for how its technology is used.
“To say that aiding and abetting genocide is a violation of Google’s AI principles and terms of service is an understatement,” Koren, now an organizer for No Tech for Apartheid, told The Intercept. “Even in the absence of public comment, Google’s actions have made it clear that the company’s public AI ethics principles carry no influence or weight in Google Cloud’s business decisions, and that even complicity in genocide is no obstacle to the company’s relentless pursuit of profit at any cost.”