From governments to cops to schools, everyone seems to be obsessed with face recognition in India, even as a consensus emerges across the globe on the potential dangers of the controversial technology.
Are you a woman having a bad day? If you’re in Lucknow, smile, you’re on camera. According to media reports, the city’s police department is using surveillance cameras powered by artificial intelligence and facial recognition technologies to track and ‘help’ women in ‘distress’. According to Lucknow cops, the initiative comes as part of the Uttar Pradesh government’s Mission Shakti programme. Apparently, Lucknow is being developed as a safe city for women, and the surveillance initiative is one of many hi-tech measures envisaged to ensure the safety of women in the city.
How exactly does the system work? Tweets from Lucknow’s police department say that at least five areas have been identified and marked as potentially unsafe for women, where the police are installing the cameras. The cameras will track the crowd and analyse the facial expressions of women in real time. If any of them ‘look’ tense or anxious, it could be read as a sign of the person being stalked or chased. The camera will promptly send an alert to the nearest police station, which will act upon it.
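The police have not published any technical details, but the flow they describe, classify an expression, compare it to a threshold, alert the nearest station, can be sketched roughly as follows. Everything here is a hypothetical stand-in: the classifier, the labels, the threshold and the alert call are illustrative, not the real system.

```python
# Hypothetical sketch of the expression-based alert flow the reports describe.
# The classifier and the alert dispatch are stand-ins, not the actual system.

DISTRESS_LABELS = {"fear", "anxiety", "distress"}  # illustrative label set
ALERT_THRESHOLD = 0.8                              # illustrative confidence cut-off

def classify_expression(face_image):
    # Stand-in for a trained expression classifier; returns (label, confidence).
    return ("fear", 0.9)

def send_station_alert(camera_id, label, confidence):
    # Stand-in for dispatching an alert to the nearest police station.
    return {"camera": camera_id, "label": label, "confidence": confidence}

def process_frame(camera_id, detected_faces):
    # For each face in the frame, raise an alert if the classifier reports
    # a 'distress' expression with high enough confidence.
    alerts = []
    for face in detected_faces:
        label, confidence = classify_expression(face)
        if label in DISTRESS_LABELS and confidence >= ALERT_THRESHOLD:
            alerts.append(send_station_alert(camera_id, label, confidence))
    return alerts
```

Even in this toy form, the sketch makes the critics’ point visible: the whole pipeline hinges on the assumption that a classifier’s label for a facial expression reliably reflects what the person is actually experiencing.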
Sounds bizarre? Well, you’re not the only one cringing. A clutch of digital rights activists have already raised vocal opposition to the move to spy on women in the name of safety, which they allege is a plank used by authorities across the globe to micromanage and control their populations, especially vulnerable groups such as women, ethnic minorities and children. A report in Scroll quotes IIT Bombay academic and rights activist Anupam Guha as saying the idea is “cartoon level stupid” because facial expressions say nothing about a person’s internal mental state. Machine learning that requires facial data violates constitutional rights, Guha says.
Face, the new weapon in surveillance
Interestingly, the Lucknow move comes close on the heels of a similar initiative launched by the Central Board of Secondary Education (CBSE), India’s leading school board that runs a network of more than 22,000 schools. Reports say the CBSE is using facial recognition technology to give students access to its digital documents. Media reports revealed a few months ago that when a CBSE student tries to access the digital documents, an application scans their face and tries to match it against the board’s database.
The real-time image of the student will be captured and cross-checked against his or her photo in the school identity card, which is stored in the CBSE database. The app will grant access to the documents once a positive result pops up. You can download the app from CBSE’s DigiLocker facility and its academic repository Parinam Manjusha.
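The verification step the reports describe is a classic one-to-one “face matching” check: reduce both images to feature vectors and grant access if they are similar enough. The sketch below is purely illustrative; the embedding function is a stub, and the threshold is an assumption, since the CBSE has not disclosed how its system works.

```python
# Illustrative one-to-one face verification ("face matching"), as distinct
# from one-to-many identification. The embedding function is a stand-in;
# real systems use a trained face-embedding model.

import math

MATCH_THRESHOLD = 0.9  # illustrative; real thresholds are tuned per system

def embed(image):
    # Stand-in: a real system maps a face image to a numeric feature vector.
    return [1.0, 0.0, 0.5]

def cosine_similarity(a, b):
    # Similarity between two vectors, in [-1, 1]; 1.0 means identical direction.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def verify(live_capture, stored_id_photo):
    # Grant access only if the live face matches the photo on record.
    score = cosine_similarity(embed(live_capture), embed(stored_id_photo))
    return score >= MATCH_THRESHOLD
```

The distinction matters for the debate below: verification compares one face against one stored record the user has already claimed, whereas recognition searches a face against an entire database, which is where most of the surveillance concerns arise.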
As expected, the move faced widespread criticism from digital rights activists and privacy experts. A Right to Information request filed by the Delhi-based non-profit Internet Freedom Foundation didn’t elicit any useful response from the CBSE, which simply stated that it was “not using facial recognition” but “face matching” technology. But the body didn’t explain the difference and shied away from giving further details on the project.
The Foundation says the use of such technologies on students at a time when there are no meaningful regulations on their use (and abuse) is “extremely concerning” as it involves the fundamental right to privacy. The advocacy group urged the CBSE to “cease” the use of facial recognition, which it calls face matching.
There’s more from India
The Lucknow police and the CBSE are not alone in this game. India’s National Crime Records Bureau (NCRB) is currently toying with the idea of building a nation-wide collection of faces from its records, the Automated Facial Recognition System. The body says such a system would help immensely in fighting crime. Recently, an event organised by Carnegie India, titled ‘Facial Recognition in India’, revealed alarming facts about India’s face recognition plans. Experts who spoke at the event said that facial recognition had become a buzzword in policy circles over the past two years, but awareness around the use and abuse of such technologies continues to be limited.
In 2018, the Delhi police became one of the first law enforcement agencies in the country to start using the technology. The Internet Freedom Foundation recently released a list of some of the ongoing face recognition projects in India: the TSCOP + CCTNS in Telangana; the Punjab Artificial Intelligence System (PAIS); Uttar Pradesh’s Trinetra initiative; the Police Artificial Intelligence System of Uttarakhand; AFRS in Delhi; the Automated Multimodal Biometric Identification System (AMBIS) in Maharashtra; and FaceTagr in Tamil Nadu.
Telangana’s TSCOP app, launched in 2018, has a database of fingerprint and facial data of several criminals. Under the TSCOP + CCTNS (Criminal Tracking Network and Systems) programme, the cops can ask any suspect to provide their biometric data, including facial data, to verify their identity. Uttar Pradesh’s Trinetra is an application that hosts a database of five lakh criminals. The app is equipped with AI and face recognition features: the police can run it at a crime scene and spot criminals using AI and face recognition technologies. In Uttarakhand, the app will not only match the facial features of a suspect but will also recognise the “comprehensive bone structure” of a criminal. The cops say such a feature will help them identify criminals from old photos or videos.
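What apps like Trinetra are described as doing is one-to-many identification: search a face captured at a crime scene against a large database and return the closest record above some confidence threshold. A minimal sketch of that lookup, with an entirely hypothetical similarity function, database layout and threshold, might look like this:

```python
# Hypothetical sketch of one-to-many identification against a records
# database, as the Trinetra-style apps are described in reports. The
# similarity function, records and threshold are illustrative stand-ins.

MATCH_THRESHOLD = 0.85  # illustrative confidence cut-off

def similarity(face_a, face_b):
    # Stand-in score in [0, 1]; a real system compares face embeddings.
    return 1.0 if face_a == face_b else 0.0

def identify(probe_face, database):
    # Return the best database match above the threshold, or None.
    if not database:
        return None
    best = max(database, key=lambda record: similarity(probe_face, record["face"]))
    if similarity(probe_face, best["face"]) >= MATCH_THRESHOLD:
        return best
    return None
```

The threshold choice is the crux in practice: set it too low and innocent people surface as “matches”; set it too high and the system rarely returns anything, which is why accuracy and bias audits matter so much for these deployments.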
A controversial technology
What makes the Indian experience with facial recognition technologies worrying is the regulatory vacuum in the space. India is still debating a draft data protection law, and the country has no proper legal provisions for civilians to challenge the use of facial recognition and surveillance technologies.
Interestingly, India’s tryst with facial recognition technologies happens at a critical juncture in the history of identification technologies. Globally, face-based biometric systems and technologies are triggering heated debates around their misuse, especially in countries such as the US and China, which are the largest consumers of biometrics. The global market for facial recognition today stands at about $9 billion and is expected to cross $12 billion by 2025.
Considering the global criticism of facial recognition technologies, companies such as Microsoft, IBM and even Amazon have announced plans to step back from the lucrative market for facial recognition products. For instance, in June 2020, IBM’s CEO Arvind Krishna said in a letter to the US Congress that the technology giant would no longer provide general-purpose facial recognition or analysis software. Some time ago, Microsoft had urged governments to regulate facial recognition applications and systems. It has also said it won’t supply facial recognition to police until there is a national law in place that respects human rights.
Just don’t rush it, please!
Now, there is general agreement among policy experts and technologists across the globe that face recognition algorithms are nowhere near perfect and that the software fails to account for crucial factors such as diversity, plurality and equality in society. Several reports have confirmed clear biases in facial recognition technologies against women, people of colour, people from marginalised communities, and more. For instance, the Delhi police’s use of facial recognition following the riots in the capital last year drew criticism for its alleged bias against a particular community. Globally, China is accused of using face recognition tech to suppress dissent.
It is in this context that the state of New York recently banned the use of facial recognition tech in its schools. Recent legislation bars schools from buying and using biometric identifying technology until July 1, 2022, or until a study on whether the tech’s use is appropriate in schools is complete. The CBSE could take a leaf out of New York’s book and wait it out rather than rush ahead with such a controversial and potentially dangerous technology.