Panelists Discuss Benefits, Risks, and Legality of Facial Recognition Software

Facial recognition technology poses privacy and security threats to everyone, according to Kade Crockford, director of the Technology for Liberty Program at the American Civil Liberties Union. 

“The way that it can facilitate mass surveillance and centralized control is a sort of perfect tool of authoritarianism and poses many threats to privacy and civil rights and civil liberties,” Crockford said.

On Wednesday, Boston College Law School’s Rappaport Center for Law and Public Policy hosted a panel to unpack privacy, profiling, and safety concerns regarding facial recognition technology.

According to Crockford, the legality of using facial recognition, for both the government and privately owned companies, is ambiguous.

“You don’t even necessarily have to be suspected of a crime to be subjected to one of these [facial recognition] searches,” Crockford said. “That’s concerning to us because it’s such a low standard that really anyone who’s tangentially related to a criminal investigation could be subjected to one of these searches.”

Massachusetts laws regarding the use of facial recognition are limited to residents of the state, which also poses problems, according to William G. Brooks III, chief of the Norwood Police Department and one of the panelists.

“What if we have a getaway vehicle that’s got Rhode Island plates?” Brooks said. “I can’t run my photo against people in Rhode Island? I can only run them against people from Massachusetts? What sense does that make?”

Former Massachusetts Supreme Judicial Court Justice Elspeth B. Cypher, another panelist, said current facial recognition software presents racial and gendered biases, which can result in falsely identifying perpetrators of a crime.

“The algorithm [is] usually trained on white people, and usually they are going to make mistakes on women, Africans, and African Americans,” Cypher said. “These were not people who were included in the datasets when the algorithms were being created.”

Still, facial recognition can be and has been used positively, Brooks said.

He recounted an incident in Massachusetts in which an unconscious individual was sent to a hospital without any form of identification. Without identification or a locatable medical history, the doctors were unable to assess or treat the individual, Brooks said.

“Police came in, took a photograph, sent it to the state police, ran it against the gallery for facial recognition, identified the patient, and contacted his family,” Brooks said. “So there’s community caretaking in emergencies that is envisioned by the statute.”

Nevertheless, facial recognition must be used in conjunction with human judgment and decision making, the panelists agreed.

“You’ve got to have a human being in the loop,” Cypher said. “You cannot just let the machines work their magic. So when the match is made, some human being has got to double check and really look at it.”

Without this standard in place, careless mistakes are made, Crockford said. They discussed a shoplifting incident in which the store sent low-quality images from its surveillance cameras to the police, and an innocent individual was arrested.

“They arrested him on the basis of [facial recognition] alone,” Crockford said. “That is terrible police work … it is human error as much as it is error in the system that leads to those negative outcomes.”

According to Crockford, many private companies harmfully collect biometric data from users without their consent. They said private companies should obtain meaningful consent from users before collecting such data.

“That doesn’t mean you click ‘yes’ to agree to a long terms of service that you didn’t read,” Crockford said. “It’s actual, meaningful consent that’s separate from other terms, that says, ‘Yes Apple, you can collect my biometric information.’”

February 22, 2024