When Esteban Wood was summoned to a Zoom meeting with the University of Miami’s dean of students to discuss a recent “die-in” protest, he wondered how the dean knew that he and the other eight students copied on the message had participated.
The September 22 email was confusing and unnerving, he said. The protesters, many of them dressed in black, lying on the ground, and holding fake gravestones over their heads, had been criticizing the university’s response to the Covid-19 pandemic, which they felt had been inadequate. The students wondered whether they were going to be written up or punished for their part in the protest, which attracted about 25 students, faculty, and staff members, according to the student newspaper.
The dean, Ryan C. Holmes, reassured the students that they weren’t in trouble and that he just wanted them to be aware that they needed to register and reserve space for a protest. When they asked how they’d been identified as participants, the dean’s answer left them with the impression that the campus police had used facial-recognition software — a contention the police chief vigorously denies. Holmes did not respond to a request for comment.
But suspicions about how the campus police had identified the students intensified when anti-surveillance activists helped uncover statements by the campus police chief, David A. Rivero, that seemed to support the use of facial-recognition software. It became the latest flashpoint in a yearslong debate on campuses around the country over whether technology intended to keep people safe does more harm than good.
“There’s been an instinctive pushback from a lot of people when it comes to biometric information being collected,” said Amelia Vance, director of youth and education privacy at the Future of Privacy Forum, a think tank focused on data privacy. “The biggest question here is that we don’t know how this technology is going to be used next.”
Another problem, she said, is that much of the existing software is prone to misidentifying women and members of racial-minority groups, “which of course could have devastating consequences, if a student is pulled off campus in handcuffs if they were misidentified as a burglary suspect,” for example.
Rivero, who spent 26 years with the Miami Police Department before becoming the university’s police chief in 2006, denied in an interview with The Chronicle this week that the campus police had used facial-recognition software to identify the students who had taken part in the protest. He said they had been identified through “basic investigative techniques,” which he declined to describe.
A campaign called Ban Facial Recognition pointed out the following passage in Rivero’s résumé, which boasted of a security system at the University of Miami capable of using that kind of technology.
“One of the largest security projects added during Chief Rivero’s tenure was the creation of the new university-wide camera system (CCTV),” his résumé says. “The system now includes 1,338 cameras, recording 24 hours a day, and featuring video analytics, which is the use of sophisticated algorithms applied to a video stream to detect predefined situations and parameters such as motion detection, facial recognition, object detection, and much more.”
Rivero said that he wrote the résumé about 12 years ago and that he was describing the capabilities of the system. The facial-recognition feature wasn’t set up — a point his résumé didn’t make clear, he said. That would require expensive upgrades as well as a vast database of photos to compare video footage to, he said.
The university reiterated that point in a statement on Friday that said, “The university does not utilize facial-recognition technology in any manner.”
It will, however, work with the Florida Department of Law Enforcement in some criminal investigations, running a photo of a potential suspect through the state agency’s database of arrest photos, the statement said.
‘A Tool for Surveillance and Control’
Caitlin Seeley George, campaign director for Fight for the Future, an activist group that opposes surveillance technology, said the police had used facial-recognition technology to identify Black Lives Matter protesters in Miami and New York.
“These examples show that facial recognition isn’t safe, that it’s a tool for surveillance and control that law enforcement will use to crack down on dissent, and this is why our lawmakers need to ban it,” she wrote in an email.
Colleges that use facial-recognition software describe it as an important tool to keep people safe and catch criminals.
At Grand Valley State University, in Michigan, Kourosh Khatir is captain of a department of public safety that uses facial-recognition technology through a video-security system called Avigilon. The project began in 2018 and expanded in 2019, with cameras installed around student housing areas and parking lots.
The cameras are low enough to the ground that they can capture photos of people’s noses and cheekbones, not just grainy images shot from the corner of a building’s roof, which often can’t distinguish facial features. The system works by comparing an image taken from footage of a suspect to other individuals recorded by the cameras; Khatir said officers could not upload a separate photo into the system to track that person.
The department has used the technology to nab a person suspected of using a stolen identification card and another who was allegedly involved in a hit-and-run accident, Khatir said. The approach, he said, saves time because officers need not comb through hours of footage, hoping to glimpse a similar image of a suspect or his or her vehicle. Footage is typically deleted after 30 days, according to university policy. That reassures people that their images are not going to be stored “as part of a massive database,” Khatir said.
A spokeswoman for Florida International University said the institution believes the technology “can be an important law-enforcement tool,” but she declined to say how it was being used. Other campuses, including Southern Methodist University, tried the software but decided against using it.
Wood, the Miami student, said he was still disturbed that the university had not explained how it had tracked him down. “Even if facial recognition wasn’t used — and that’s still up for debate, given the police chief’s contradictory statements,” he said, “the fact that we were surveilled, tracked down, and called in sends the message that ‘we’re watching you.’”
Lindsay Ellis contributed reporting.