At the California Institute of the Arts, it all started with a videoconference between the registrar’s office and a nonprofit.
One of the nonprofit’s representatives had enabled an AI note-taking tool from Read AI. At the end of the meeting, it emailed a summary to all attendees, said Allan Chen, the institute’s chief technology officer. They could have a copy of the notes, if they wanted — they just needed to create their own account.
Next thing Chen knew, Read AI’s bot had popped up in about a dozen of his meetings over a one-week span. It was in one-on-one check-ins. Project meetings. “Everything.”
The spread “was very aggressive,” recalled Chen, who also serves as vice president for institute technology. And it “took us by surprise.”
The scenario underscores a growing challenge for colleges: Tech adoption and experimentation among students, faculty, and staff — especially where AI is concerned — are outpacing institutions’ governance of these technologies and may even violate their data-privacy and security policies.
That has been the case with note-taking tools from companies including Read AI, Otter.ai, and Fireflies.ai. They can integrate with platforms like Zoom, Google Meet, and Microsoft Teams to provide live transcriptions, meeting summaries, audio and video recordings, and other services.
Higher-ed interest in these products isn’t surprising. For those bogged down with virtual meetings, a tool that can ingest long, winding conversations and spit out key takeaways and action items is alluring. These services can also aid people with disabilities, including those who are deaf.
But the tools can quickly propagate unchecked across a university. They can auto-join any virtual meetings on a user’s calendar — even if that person is not in attendance. And that’s a concern, administrators say, if it means third-party products that an institution hasn’t reviewed may be capturing and analyzing personal information, proprietary material, or confidential communications.
“What keeps me up at night is the ability for individual users to do things that are very powerful, but they don’t realize what they’re doing,” Chen said. “You may not realize you’re opening a can of worms.”
The Chronicle documented both individual and universitywide instances of this trend. At Tidewater Community College, in Virginia, Heather Brown, an instructional designer, unwittingly gave Otter.ai’s tool access to her calendar, and it joined a Faculty Senate meeting she didn’t end up attending. “One of our [associate vice presidents] reached out to inform me,” she wrote in a message. “I was mortified!”
At an Illinois institution, a Read AI bot showed up to a meeting in place of a consulting-firm executive who’d been invited. At Hudson County Community College, in New Jersey, a tool from Fireflies.ai transcribed a Board of Trustees meeting, to the surprise of some attendees. At the University of Memphis, the chief information officer recently sent a universitywide email urging the campus community to deny Read AI access to meetings and ignore prompts to create accounts.
Spokespeople for those companies emphasized that their tools can join meetings only with a user’s permission, and that users are in full control of what the bots can access and do through their account settings. For example, a user can disable the auto-join function.
A spokesperson for Read AI added that a user needs to “review and confirm” their settings when they create an account. Many sources The Chronicle heard from didn’t recall granting these tools any permissions.
Considering risk
The commercial terms of service and privacy policies for the two most cited vendors, Otter.ai and Read AI, note that when an individual creates an account, they agree to license the “content” they share or upload while using the service. This could include voice and audio recordings, text, and photographs. The companies can then “use” and “reproduce” that content.
That may extend to training AI models. Otter.ai’s privacy policy states that it trains its AI technology on “de-identified” audio recordings and transcripts that “may contain personal information.” (Companies that de-identify data remove information that identifies or relates to an individual; generally speaking, that process would not apply to confidential or proprietary materials that don’t include personal information, such as curricular resources.) A spokesperson added in an email that audio recordings and transcripts “are not manually reviewed by a human” without explicit permission.
Read AI users have to opt in to contribute data “to improve the product,” a spokesperson wrote in an email when asked about how the company trains AI models.
The policies also include language that absolves the vendors from any responsibility for losing or disclosing user content.
These are common provisions for tech companies to add, said Sid Bose, partner and chair of the data-security and privacy team at the law firm Ice Miller. Still, they “warrant further investigation” by subject-matter experts at an institution who can assess the risk and determine whether the benefits outweigh it, he said.
Privacy and security aren’t the only considerations, though, said Deirdre Mulligan, a professor of law at the University of California at Berkeley’s School of Information. When every interaction is recorded, she said, there’s a risk of stifling academic discourse — a valued tenet of higher education.
“What does research, teaching, and learning depend upon? It depends upon people feeling free to express themselves, free to explore new ideas, free to take chances, free to make mistakes.”
Regaining control
So what can institutions do to respond?
An institution may be able to work with a videoconferencing provider, like Zoom, to block certain AI assistants. The California Institute of the Arts, for example, did this to block Read AI from its enterprise account.
Sources said the solution isn’t to write off AI assistants altogether, though — especially given colleagues’ interest in the products and the support they can provide to people with disabilities. So they’re exploring their options.
One is to identify alternative tools. The University of San Francisco, for example, has enabled the AI assistant that’s available through its site license with Zoom (a service that faculty, staff, and students can use free of charge). However, administrators have put locks on what the tool can do, said Ken Yoshioka, a senior instructional technologist. The assistant can’t use uploaded files as a data source, for example, or access emails and documents tied to a user’s Microsoft 365 or Google accounts.
“It’s all about transparency” and giving control to people, not the tools, Yoshioka said.
Another option is to develop policies or guidance that set clear guardrails. At Berkeley, the “Appropriate Use of Generative AI Tools” guide prohibits using AI tools to submit queries or generate results involving information that is not public — unless the university has a negotiated agreement with the vendor containing “appropriate contract protections,” Mulligan wrote in a follow-up email. The guidance notes which AI tools the university has agreements with.
Alongside such directives, Brown, at Tidewater Community College, believes it’s essential to educate everyone on campus about how these tools work and the privacy considerations involved.
“A well-informed community can better align with institutional policies and safeguard sensitive information,” she wrote.