India’s adoption of facial recognition technology could have serious ramifications

On February 23, 2020, riots erupted in East Delhi after right-wing Bharatiya Janata Party (BJP) supporters clashed with those peacefully protesting against the Citizenship (Amendment) Act of 2019 (CAA). The mob violence over the next seventy-two hours was the most serious Delhi had seen since the 1984 carnage against Sikhs following Prime Minister Indira Gandhi’s assassination. The 2020 riots left fifty-three people dead, seventy-nine houses damaged, 327 shops gutted, four mosques burned, and thousands homeless. A German news agency found many people complaining that the police were “turning a blind eye to violence in Muslim neighborhoods,” and the Indian Supreme Court also criticized the police for their delay in responding to the violence. This alleged bias of the Delhi police against Muslims was widely covered by media in India and abroad.

In the aftermath of the riots, Union Home Minister Amit Shah reported that 1,100 persons responsible for the violence had been identified through twenty-five computers running “facial recognition software.” “Biometric” announced on the same day that India would set up the largest facial recognition system in the world for police use.

Amit Shah’s announcements almost certainly overestimate the efficacy of current facial recognition technology (FRT) and raise concerns about its implications for individual privacy and civil liberties. Security experts doubt whether FRT, which has proven successful at identifying individuals in controlled conditions like border crossings, is accurate when applied to crowds, especially in cases such as the Delhi incidents, where rioters were widely wearing masks.

Shah later told parliament that 700 criminal complaints had been registered in connection with the Delhi riots and that 2,647 people had been detained or arrested. On the same day, an online daily published a report highlighting three cases in which the Delhi police had “ignored” complaints and “scuttled” fair investigation. Apart from the opaque manner of the investigations, there is serious concern about the validity of using FRT, which has yet to prove itself.

Globally, early law-enforcement deployments of FRT cast doubt on the technology’s reliability in correctly identifying individuals in crowd situations. In Israel, where FRT has been rolled out for law-enforcement purposes, Omer Laviv of Mer Security & Communications, which markets the Israeli company AnyVision’s products to law enforcement agencies around the world, told National Public Radio (NPR) on August 22, 2019 that facial recognition technology is “a few decades away from being able to locate a suspect by scanning crowds in real-time.”

In the United States, the Federal Bureau of Investigation’s (FBI) much-acclaimed facial recognition system failed to identify the Tsarnaev brothers from street video after the 2013 Boston Marathon bombing, despite their images being in police records. The FBI and Boston police had to fall back on conventional methods, such as amateur video footage, public appeals, and surveillance videos, to catch the brothers. An unnamed FBI official revealed on April 23, 2013 that authorities were able to identify a black-capped individual on tape as Tamerlan Tsarnaev only after obtaining fingerprints following a shootout with the assailant.

FRT also appears to be prone to flaws and biases similar to those of police departments worldwide. On July 26, 2018, the American Civil Liberties Union (ACLU) alleged that Amazon’s “Rekognition” software, which was used by some US law enforcement agencies, incorrectly matched twenty-eight members of Congress with mugshots of arrested persons. The ACLU investigation followed a Congressional Black Caucus (CBC) letter to Amazon founder Jeff Bezos on May 24, 2018 protesting that the new software would lead to negative consequences for “African Americans, undocumented immigrants and protestors” because “communities of color are more heavily and aggressively policed than white communities.”
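False matches of this kind arise from how such systems work under the hood: a face is reduced to a numeric embedding and compared against a gallery of photos, with anything above a similarity threshold reported as a “match.” The sketch below is purely illustrative (the embeddings, gallery names, and functions are invented, not Amazon’s API) and shows how two different people with sufficiently similar embeddings clear the same threshold as a genuine match would:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_faces(probe, gallery, threshold=0.80):
    """Return every gallery entry whose similarity to the probe meets the
    threshold -- each hit is reported as a 'match', correct or not."""
    return [name for name, emb in gallery.items()
            if cosine_similarity(probe, emb) >= threshold]

# Hypothetical embeddings: person_b is a *different* person whose
# embedding happens to sit close to person_a's -- a false-positive setup.
rng = np.random.default_rng(0)
person_a = rng.normal(size=128)                          # the probe face
person_b = person_a + rng.normal(scale=0.2, size=128)    # similar-looking stranger
unrelated = rng.normal(size=128)                         # dissimilar stranger

gallery = {"arrest_photo_1": person_b, "arrest_photo_2": unrelated}

# person_a has no photo in the gallery, yet is "matched" to arrest_photo_1.
print(match_faces(person_a, gallery, threshold=0.80))
```

Raising the threshold reduces false positives at the cost of missing genuine matches; an agency that lowers it to catch more suspects inevitably sweeps in more innocent look-alikes.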

A widely cited 2018 paper co-authored by MIT Media Lab researcher Joy Buolamwini and Timnit Gebru of Microsoft Research was invoked by the ACLU to demonstrate the inaccuracies of machine-learning algorithms, which tend to reach wrong conclusions “based on classes like race and gender.” The authors applied the Fitzpatrick skin-type classification system to four subgroups: darker females, darker males, lighter females, and lighter males. They found that all “classifiers” performed best for lighter individuals and for males. Hence, they recommended that “further work should explore intersectional error analysis of facial detection, identification and verification.”
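The study’s central move was to break a single aggregate accuracy figure into per-subgroup error rates, which is what exposes disparities that an overall average conceals. A minimal sketch of that kind of intersectional error analysis follows; the counts below are purely illustrative and are not the study’s actual figures:

```python
from collections import defaultdict

def subgroup_error_rates(records):
    """Error rate per subgroup from (subgroup, was_correct) pairs.
    An aggregate accuracy can look acceptable while hiding large
    gaps that only appear once results are broken down this way."""
    totals, errors = defaultdict(int), defaultdict(int)
    for subgroup, was_correct in records:
        totals[subgroup] += 1
        if not was_correct:
            errors[subgroup] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Hypothetical classifier results over the four Fitzpatrick-based
# subgroups used in the study (counts are invented for illustration).
records = (
    [("lighter male", True)] * 99 + [("lighter male", False)] * 1 +
    [("lighter female", True)] * 93 + [("lighter female", False)] * 7 +
    [("darker male", True)] * 88 + [("darker male", False)] * 12 +
    [("darker female", True)] * 65 + [("darker female", False)] * 35
)

for group, rate in subgroup_error_rates(records).items():
    print(f"{group}: {rate:.1%}")
```

In this invented data the overall error rate is under 14 percent, yet the darker-female subgroup fares far worse than the lighter-male one, which is precisely the pattern of disparity the paper’s real measurements documented.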

In the Indian context, these concerns are amplified by the absence of strong individual privacy protections and of checks on government infringement of civil liberties. Although privacy has been recognized as a “Fundamental Right” by the Indian Supreme Court, law enforcement agencies at both the state and central levels have exhibited a growing tendency to flout court rulings in the absence of legal protections for personal privacy and data. Pending legislation to guard individual privacy provides the central government with broad access to individual data and does not establish institutional checks on government use of emerging technologies with implications for individual privacy. Already, government application of FRT has exceeded government assurances that it would be used only with respect to criminals in its crime-tracking system, missing-persons searches, or unidentified bodies.

Coupled with India’s slow judicial process and weak constraints on the arrest of individuals, the use of FRT raises serious concerns about both individual privacy and protection from excessive law enforcement. National Crime Records Bureau (NCRB) statistics reveal that by the end of 2016, more than two-thirds (68 percent) of the 433,033 people in Indian prisons were “under-trial”. This includes not only those who have been charged and are awaiting trial but also those arrested on suspicion whose investigations are not complete. Amnesty International says that India’s under-trial population is among the highest in the world and may be the result of unnecessary arrests and ineffective legal aid during remand hearings. Individuals falsely identified by facial recognition technologies face potential years of imprisonment before the legitimacy of their arrest is examined in court.

The application of facial recognition technologies in India would almost certainly aid the country’s stretched law enforcement units and may prove useful in future incidents of public rioting or unrest. Given the state of current technologies, however, observers and government officials in India need to critically examine the reliability of this new platform and its potential to wrongfully infringe on the rights of innocent individuals.

Vappala Balachandran is an author and columnist on security issues. He was a member of the two-man High Level Committee appointed by the government of Maharashtra to enquire into the police response to the Mumbai 26/11 Terror Attack.

Image: A worker installs a security camera in front of the historic Red Fort on the eve of India's Independence Day celebrations in Delhi August 14, 2014. REUTERS/Anindito Mukherjee