Last fall, instructors at the University of Massachusetts at Amherst suddenly started receiving scores for every student’s writing assignment, estimating how likely it was that each had been completed using generative artificial intelligence. The scores were generated by an AI tool built into the institution’s learning-management system. The scenario, administrators say, caused “massive confusion.” Faculty members might see a high score for an assignment, but how high did a score have to be to justify some kind of action? What if the software’s analysis gave an assignment a 51 percent likelihood of AI use? How does a professor interpret that? And the leapfrogging pace of innovation in AI technology made the university’s own computer scientists skeptical that AI-detection tools were reliable predictors of anything at all.
The tool fueled a discussion already underway at UMass Amherst and many other institutions: the need to create a universitywide generative AI policy. As the technology spreads throughout all aspects of academe — and evolves at a pace measured in months, not years — experts and a burgeoning number of administrators believe that colleges need to establish guidelines about its use or face potential disaster.
What kind of disaster? So far, higher education has been devoid of major public AI scandals. But ungoverned use of the technology across a campus could lead to exposure of sensitive data and a proliferation of inconsistent practices that could harm students, other stakeholders, and the institution itself. Confusing or patchy AI policies might be worse than none at all.
The need for comprehensive AI policies is already apparent to colleges’ technology leaders. A survey conducted in the fall of 2023 by Educause, a membership organization for technology professionals in higher education, found that almost a quarter of respondents’ colleges had policies regulating AI use. Nearly half of respondents, however, disagreed or strongly disagreed that their institutions had sufficient policies in place.
The biggest use of generative AI at most colleges is in the classroom, and at many colleges, administrators let instructors determine how, or whether, the technology can be used in their courses, offering some guidelines to help.
The University of Texas at Arlington’s AI policy allows faculty members to choose whether and how to allow AI use, but it provides four specific language options for instructors to include in their syllabi, spelling out how AI can be used, from not at all to unrestricted use.
“I’ve got 1,000-plus faculty members — I don’t want 1,000-plus different ways that our instructors are using AI in the classroom,” said Jennifer Cowley, UT Arlington’s president. “Let’s create some buckets that our faculty can align with so it makes it easier for our students to understand what they should and shouldn’t be doing.”
AI policies can also be a huge help as colleges’ employees sift through myriad generative AI products being firehosed at them by ed-tech companies or bundled into products they may already use. At UT Arlington, a special AI council reviews potential products for use.
When AI was brand-new, “we had everybody wanting to buy chatbots, but it doesn’t make sense for us to have 100 different ways that we’re doing things,” Cowley said. “We need to think collectively about what are the right solutions.”
The council, which includes representatives from across the university, also provides the benefit of sharing information between units — a solution that works for admissions may have some application in athletics, Cowley pointed out.
Perhaps the biggest factor driving collegewide AI policies is data security. For example, a professor or staff member playing around with ChatGPT in her office may not realize that, like many generative AI platforms, it uses the information a user gives it as further training data that could find its way into answers it gives another user later. Feeding it student data could be a violation of federal privacy laws. Using generative AI software to crunch admissions information or financial data could leak information a college wouldn’t want its rivals to have.
Data security was the main reason leaders at Babson College, in Wellesley, Mass., decided to devise an overarching AI policy last year, a process that took several months. Babson has a strong business focus, so many faculty and staff members expressed interest in exploring the new technology and its role in the evolving business landscape, said Patty Patria, the college’s chief information officer. But leaders grew concerned about unfettered AI use after Samsung and other companies banned the use of generative AI platforms in 2023 when employees inadvertently shared sensitive information.
“That was the reason we moved forward with a formal policy,” Patria said.
Babson’s three-page policy focuses exclusively on data security.
‘You Can’t Blame the AI’
Some college AI policies are aimed at curtailing specific risky practices, but other institutions are designing their policies to encompass a broader array of issues that may arise with generative AI. Discussions about an institutionwide policy at the University of Massachusetts at Amherst began in the spring of 2023 when professors began asking what they should put on their syllabi regarding AI. Leaders soon realized that more guidance was needed.
In the early fall of 2023, administrators and the faculty senate at UMass Amherst formed a joint task force made up of representatives from across campus, including faculty members, administrators, and students. The group’s brief, said Tilman Wolf, senior vice provost for academic affairs and a professor of electrical and computer engineering, was “to think about impacts on the university in all aspects.” The 24 participants formed four subgroups to focus on education, research, operations, and privacy, bias, and ethics.
Privacy, bias, and ethics cross all the other areas, Wolf added, “but we wanted to put a particular emphasis on that in case the other subgroups overlooked something.” At the end of the academic year, the task force came up with a 40-page document that recommends the following:
- The university must train faculty, staff, and students on generative AI and its uses and limits.
- It must balance the use of AI against the inherent risks and keep in mind that it may not be appropriate for some uses.
- Anyone who creates material using generative AI must be responsible and accountable for it.
- Any use of AI must be disclosed.
- Any use of AI must follow principles of consent.
- The university must work with outside vendors to ensure they maintain principles of disclosure and consent.
- Any use of AI must comply with established legal requirements, such as federal data-privacy laws.
If approved by the faculty senate, the policy recommendations would apply to the entire UMass Amherst community, but the decisions it’s most likely to influence may happen more in offices than in classrooms. (It leaves AI use in classes up to the discretion of the instructor as long as it’s used in a way that upholds the other guidelines.)
Data security is key. Like Babson, UMass Amherst uses several Microsoft software products across the institution, which means that many faculty and staff members have access to Copilot, Microsoft software that allows them to create their own generative AI spaces and keeps any shared data private. (Other software providers offer similar services.) Professors, staff members, and students are free to experiment with generative AI platforms in the classroom for idea generation, building chatbots, or other work that doesn’t involve sensitive information.
Reducing risk and harm is a key component of the policy recommendations. The task force, Wolf said, began to weigh the use of AI in terms of low-impact, low-risk decisions versus high-impact, high-risk decisions. It agreed, for example, that using a chatbot to help students navigate university bureaucracy would be a benefit with little risk or adverse impact, even if it didn’t work every time. “But the place where everybody said, ‘Oh, this is really worrisome,’ was if a student says, ‘I’m in distress and I need help,’” he said, referring to students’ mental health. “And at that point, I think the risk of not referring the student to the right support is high. If you have a machine there, it’s not clear that the chatbot will do the right thing at that moment.”
There may be no substitute for the human touch in many aspects of how a college works — admissions decisions, say, or writing a condolence note — and there’s no substitute for a human backstop. One of the key principles to emerge from the discussions around UMass Amherst’s AI policy recommendations was that humans should always have the final say in any high-impact decision and must remain accountable. “The phrase that we mentioned often was, ‘You can’t blame the AI,’” Wolf said. “If you invoke the AI, you’re responsible for the output. And I think that that was an important piece, because that makes people think about those risks in a very concrete way. You’re basically putting your name at the bottom of a memo or decision, and you want to trust that it reflects what you really want to do.”
Considering the Outcomes
Anecdotally, many institutions are working on or coming to grips with the need for generative AI policies, according to Eyal Darmon, a managing director at Accenture, a company that works with colleges, including consulting on AI. There appears to be no clear profile of the kind of institution likely to adopt an AI policy, he said.
“We’ve seen large research institutions that have it, and some not,” Darmon said. “Yet we’ve seen smaller institutions that have it and some that have not.”
A good AI policy, Darmon said, not only secures institutional data, evaluates technologies and processes for risk, and reduces harm and bias; it also reviews the outcomes, from both a quantitative and a qualitative standpoint. Is a particular AI tool saving the college time and money? Is it also providing better information or a better experience for students or employees? Educause has issued an action plan to help colleges develop their own AI policies.
Integrating AI into college operations will continue to be a process of education and adjustment for everyone on campus, but getting ahead of the process with a comprehensive AI policy is probably better than being crushed by it, experts say.
“I don’t claim that everybody on campus knows about the policy or has read it in detail,” Wolf, of UMass Amherst, said. But “if anybody has a question about how best to use AI, we have an answer.”