It can run at up to seven miles per hour, and it can swim. It can climb steps and scale hills with a 40-degree gradient. It can be outfitted with sensors, night vision, arms, and deployable drones.
It is a robotic dog — a “quadruped” platform developed by Ghost Robotics and enhanced by AT&T that, to date, has been used to patrol military zones. Now, the telecommunications giant is pitching a new use for this AI-friendly technology: campus safety.
“We started thinking outside the box,” said Arthur Hernandez, a principal technology program manager at AT&T who’s been in the U.S. Navy Reserves for more than 20 years. The team thought, “‘Why can’t we use this in other types of scenarios?’”
There may be a limit, though, to the tech applications that higher ed can stomach — at least for now. Many faculty members and students who spoke with The Chronicle said the proposed functions — which include 24/7 perimeter patrol, spotting “unidentified” personnel, and dispersing unruly protests — felt Orwellian, not to mention financially impractical.
“We are not a military base. We’re a college,” said Alexandra Stinson, a Ph.D. student in political science at Michigan State University. “This is where our lives happen.”
Even academics who said they support continued innovation in robotics and AI weren’t keen on security robodogs, pointing to concerns about privacy, discrimination, and accountability.
“These are tricky spaces, and these are very fragile spaces that we’re walking into,” said Renée Cummings, an AI and data ethicist and criminologist who’s also a professor of the practice in data science at the University of Virginia. “Technology is always more ahead of the law. So we’ve got to think about that.”
AT&T confirmed that it hasn’t yet persuaded any colleges to start using the robotic dog for campus safety. So far, Hernandez said, he’s given demos at some 15 colleges “up and down” the East Coast — most of them private institutions. AT&T also recently brought its robotic dog to Educause, an annual conference for IT and tech professionals in higher ed.
Why court colleges to begin with? Companies like AT&T are vying for new customers in a competitive tech market. Colleges pride themselves on being hubs of innovation, and their leaders face pressure to explore the latest technological advances, most recently in artificial intelligence. They are also grappling with how to respond to, and prevent, campus violence, such as the mass shootings this year at Michigan State and Morgan State Universities.
The solution that AT&T is offering is a four-legged robot — 100 pounds and made of alloy steel — that can be controlled remotely, or can function fully or semi-autonomously if trained on third-party machine-learning algorithms. (AI capabilities do not come with the robot dog itself.) Placed on a campus, it could patrol the perimeter, feeding video 24/7 to an operator. It could walk down bustling hallways and be programmed to put the campus on lockdown, or contact the police, if it came across “unauthorized” personnel. It could be equipped with long-range acoustic devices rivaling the sound of a jackhammer to communicate with, or disperse, crowds if a protest turned violent.
(One nonacademic customer has reportedly used the latter function to ward off wild boars.)
Not New, but Different
It wouldn’t be robotic dogs’ first foray onto campuses; a number of them from companies like Boston Dynamics and Unitree Robotics are already aiding in research and teaching. There’s one at the University of Florida that uses sensors to construct 3D models of the room it’s in, capturing people, furniture, and even colors — functions that could aid first responders like firefighters. Another robodog, at the State University of New York’s University at Buffalo, is testing perception algorithms — which help robots carry out commands like, “Go into the house and get me a beverage” — and robot-to-robot collaboration. Auburn University is using its robotic dog, nicknamed Mac, to research AI applications for construction sites; for example, having a robot autonomously collect data to track and manage inventories of materials.
Faculty researchers taking those mechanized critters on campus promenades say reactions are mixed. Some people get excited, and whip out their phones to snap selfies. Others are more timid, and hesitant to interact.
In an interview with The Chronicle, AT&T officials expressed optimism that colleges would warm to the idea of using robodogs for campus safety. Students and faculty members “are very accepting of technology in the classroom” nowadays, said Andrea Huguely, a spokeswoman.
Whether that enthusiasm will carry over, though, is iffy. On Reddit, for example, dozens of users responding to three posts from The Chronicle overwhelmingly denounced the idea. Some pictured the robodog ending up as a side table in a fraternity’s basement, beheaded, or at the bottom of a river. One considered whether Silly String or paintballs could serve as viable “defense measures” against it.
Jokes aside, practical questions abounded. How would a robotic dog identify “unauthorized” personnel on sprawling — and public — campuses? How would it tell a hug from an attack? How many would you actually need?
“To me, this sounds like a solution in search of a problem,” said Benjamin Horowitz, a sophomore math and computer-science major at Tulane University.
At a time when cash-strapped colleges are slashing budgets and programs, it’s also not a particularly cheap solution. One college reported paying about $25,000 for its robotic dog, not including add-on tech or maintenance. Another, $70,000. A third, $102,000. (The two vendors cited were Unitree and Boston Dynamics.)
Hernandez at AT&T declined to provide a baseline cost or range for the company’s robodog. He did say that AT&T’s “costs more because it does more,” noting its battery life and ability to swim and operate in “extreme” outdoor conditions. “The quality and capabilities vary greatly between the product offerings” on the market, he wrote.
Ethical Questions
Beyond practicality, the most persistent questions were about data privacy, discrimination, and the accountability gap for AI systems, as innovations continue to outpace federal regulations.
The robotic dog is equipped with five cameras — two facing forward, one on each side, and one facing backward — that, Hernandez said, enable a 360-degree view while on patrol. That means the dog can ingest a lot of information, including video of nonuniversity folks, such as parents, prospective students, and visiting sports fans.
“They’re not just collecting data when things go wrong,” said Will Fleisher, an assistant professor of philosophy at Georgetown University, and a research assistant professor in the university’s Center for Digital Ethics. “We have to think about all of the potential people who could be caught in this dragnet.”
The Chronicle asked AT&T to clarify the kinds of data the robot dog would collect when patrolling a campus or dispersing a protest; who would see and have access to that data (including at the company itself); and how that data would be protected. AT&T did not answer those questions directly, referring The Chronicle to its privacy policy. The Chronicle also asked whether the robodog’s reliance on a wireless signal poses a cybersecurity risk. Hernandez replied via email that the company’s cellular FirstNet network — if a customer chooses to sign up for the service — uses “a secure wireless gateway.” If a customer instead operates the dog over private or public Wi-Fi, he said, that connection “is as safe, or vulnerable, as any other wireless device connected to the network.”
To be sure, many universities are already outfitted with robust camera systems, metal detectors, and other tech-based security. But campus sources The Chronicle spoke with said a robot dog would add an extra layer of “surveillance,” and would feel more intrusive. Stinson, the Ph.D. student at Michigan State, fears the robodog could even be triggering to students who’ve experienced campus violence, such as a school shooting.
“I’m worried that this dog would be a physical reminder that we have endured this trauma … that we’ve endured something that ‘requires’ this,” Stinson said. A gunman opened fire on Michigan State’s campus in February, killing three students and critically injuring five others.
There could also be more at stake, sources said, because unlike a camera, the robot dog could be trained to make autonomous decisions, should a university customer decide to add AI capabilities. That could include a decision to deploy force, such as a long-range acoustic device (LRAD). An “acoustic device,” as Fleisher noted, is just “a nice way of saying ‘weapon system.’” (Generally speaking, LRADs can be loud enough to induce vomiting and cause ear damage.)
Horowitz at Tulane, where recent protests over the Israel-Hamas war have roiled campus, believes a robotic dog would only escalate already emotionally charged gatherings.
A robot “is depersonalized,” he said. People are less likely to respond positively “to something that’s not human.”
Others fear the opposite — a chilling effect on students and faculty just as many college leaders have publicly recommitted to safeguarding free speech and expression on campuses.
This chilling effect could be a particular problem for communities of color, Cummings added. Facial-recognition systems are known to be less accurate at identifying people with darker skin tones — often because their machine-learning algorithms are trained on a disproportionate number of images of white people. (One 2018 study reported error rates of up to 34.7 percent, with darker-skinned females the most at risk of misidentification.)
That raises additional ethical questions: What happens if the robotic dog, if programmed to act autonomously, is wrong? If it, say, misidentifies a person?
AI systems “can’t be punished. They’re not people,” Fleisher said. And identifying who is responsible can be nearly impossible for numerous reasons: For one, AI systems are often designed, developed, and deployed in a complex, multistage process that involves a large number of people. Existing AI systems are also limited in their ability to explain to humans how they’ve reached a decision. (Countries in the European Union are working to develop accountability frameworks, but similar efforts in the U.S. are in nascent stages.)
That accountability gap is why even AI and robotics evangelists like Karthik Dantu, an associate professor in computer science and engineering at the University at Buffalo, oppose giving AI systems — including any used to operate robotic dogs — that much power. “I would be comfortable” with a robotic dog inspecting something and reporting anomalies to a human, he said, but “full autonomy, where it’s making some decisions, is where I’d be a little more nervous.”
Hernandez at AT&T wrote in an email that if the company’s robotic dog made a mistake while operating autonomously — misidentifying a person, for example — accountability “would lie with the provider, trainer, and operator” of the AI system that was purchased separately. While the dog can be programmed to function autonomously in “nearly any use case,” he wrote, the customer “should exercise judgment and responsibility.”
Current Strategies
Not every campus-safety use of robodogs worries students and academics. The Chronicle spoke with a few who said they could envision a robotic dog escorting a student home at night, or responding to a bomb threat — an idea Hernandez at AT&T mentioned as well. Yaz Ozbek, a physics Ph.D. student at Michigan State, said she wouldn’t mind having a robotic dog that checks for dangerous chemicals or gases in the basement of the Biomedical and Physical Sciences Building, where she works up to 40 hours a week on research.
“Sometimes I’ll smell something down here, and I’m like, ‘Am I being poisoned right now?’” she said. “So it actually would make me feel a little safer.”
The Chronicle reached out to six institutions that in the past year have either suffered a campus shooting or experienced violent protests, asking if they were interested in obtaining a robotic dog for safety purposes.
Most didn’t respond, or declined to comment. At Morgan State University, where five people — most of them students — sustained injuries in a campus shooting in October, the spokesman Larry Jones wrote in a statement that robot-dog technology is “not part of the university’s current security strategy.” He added that the university police department is focused on tech that “meets Morgan’s immediate safety and security needs,” including metal detectors.
Erin Spandorf, a spokeswoman for the University of North Carolina at Chapel Hill, similarly indicated that robotic dogs “are not a technology UNC police is considering,” adding that the department is instead looking to acquire or expand technologies such as lighting, security cameras, and license-plate readers.
For Stinson, at Michigan State, investing in a robotic dog would feel like an ineffective and “hollow” effort to bolster campus safety. She’d rather the university allocate available funds to hire more mental-health counselors, and offer emergency-response trainings for the staff.
Cummings agrees. But if civil-use robots are destined for more campuses, she’s more likely to get behind concierge bots: ones that might sort library books, lead a campus tour, or bring students their Grubhub orders.
“We saw that on The Jetsons, right?” she said, referring to the futuristic sitcom from the 1960s. “Those are the great things, the things that make our lives easier. The funny things. Those are the things we want.”