Unrestrained Tech

Is It Time to Regulate AI Use on Campus?

By Lee Gardner November 11, 2024

Last fall, instructors at the University of Massachusetts at Amherst suddenly started receiving scores for every student’s writing assignment, estimating how likely it was that each had been completed using generative artificial intelligence. The percentile scores were generated by an AI tool built into the institution’s learning-management system. The scenario, administrators say, caused “massive confusion.” Faculty members might see a high percentile score for an assignment, but how high did a score have to be to justify some kind of action? What if the software’s analysis gave an assignment a 51 percent likelihood of AI use? How does a professor interpret that? And the leapfrogging rate of innovation in AI technology made the university’s own computer scientists skeptical that AI-detection tools were reliable predictors of anything at all.


The tool fueled a discussion already underway at UMass Amherst and many other institutions: the need to create a universitywide generative AI policy. As the technology spreads throughout all aspects of academe — and evolves at a pace measured in months, not years — experts and a burgeoning number of administrators believe that colleges need to establish guidelines about its use or face potential disaster.

What kind of disaster? So far, higher education has been devoid of major public AI scandals. But ungoverned use of the technology across a campus could lead to the exposure of sensitive data and a proliferation of inconsistent practices that could harm students and other stakeholders, as well as the institution itself. Confusing or patchy AI policies might be worse than none at all.

The need for comprehensive AI policies is already apparent to colleges’ technology leaders. A survey conducted in the fall of 2023 by Educause, a membership organization for technology professionals in higher education, found that almost a quarter of respondents’ colleges had policies in place to regulate AI use. Nearly half of respondents, however, disagreed or strongly disagreed that their institutions had sufficient existing policies in place.

The biggest use of generative AI at most colleges is in the classroom, and at many colleges, administrators let instructors determine how, or whether, the technology can be used in their courses, while providing some guidelines.


The University of Texas at Arlington’s AI policy allows faculty members to choose whether and how to allow AI use, but it includes four specific language options for instructors to include in their syllabi on how AI can be used, from not at all to unrestricted use.

“I’ve got 1000-plus faculty members — I don’t want 1000-plus different ways that our instructors are using AI in the classroom,” said Jennifer Cowley, UT Arlington’s president. “Let’s create some buckets that our faculty can align with so it makes it easier for our students to understand what they should and shouldn’t be doing.”

AI policies can also be a huge help as colleges’ employees sift through myriad generative AI products being firehosed at them by ed-tech companies or bundled into products they may already use. At UT Arlington, a special AI council reviews potential products for use.

When AI was brand-new, “we had everybody wanting to buy chatbots, but it doesn’t make sense for us to have 100 different ways that we’re doing things,” Cowley said. “We need to think collectively about what are the right solutions.”

The council, which includes representatives from across the university, also provides the benefit of sharing information between units — a solution that works for admissions may have some application in athletics, Cowley pointed out.

Perhaps the biggest factor driving collegewide AI policies is data security. For example, a professor or staff member playing around with ChatGPT in her office may not realize that, like many generative AI platforms, it uses the information a user gives it as further training data that could find its way into answers it gives another user later. Feeding it student data could be a violation of federal privacy laws. Using generative AI software to crunch admissions information or financial data could leak information a college wouldn’t want its rivals to have.


Data security was the main reason leaders at Babson College, in Wellesley, Mass., decided to devise an overarching AI policy last year, a process that took several months. Babson has a strong business focus, so many faculty and staff members expressed interest in exploring the new technology and its role in the evolving business landscape, said Patty Patria, the college’s chief information officer. But leaders grew concerned about unfettered AI use after Samsung and other companies banned the use of generative AI platforms in 2023 when employees inadvertently shared sensitive information.

“That was the reason we moved forward with a formal policy,” Patria said.

Babson’s three-page policy focuses exclusively on data security.

‘You Can’t Blame the AI’

Some college AI policies are aimed at curtailing specific risky practices, but other institutions are designing their policies to encompass a broader array of issues that may arise with generative AI. Discussions about an institutionwide policy at the University of Massachusetts at Amherst began in the spring of 2023 when professors began asking what they should put on their syllabi regarding AI. Leaders soon realized that more guidance was needed.


In the early fall of 2023, administrators and the faculty senate at UMass Amherst formed a joint task force made up of representatives from across campus, including faculty members, administrators, and students. The group’s brief, said Tilman Wolf, senior vice provost for academic affairs and a professor of electrical and computer engineering, was “to think about impacts on the university in all aspects.” The 24 participants formed four subgroups to focus on education; research; operations; and privacy, bias, and ethics.

Privacy, bias, and ethics cross all the other areas, Wolf added, “but we wanted to put a particular emphasis on that in case the other subgroups overlooked something.” At the end of the academic year, the task force came up with a 40-page document that recommends the following:

  • The university must train faculty, staff, and students on generative AI and its uses and limits.
  • It must balance the use of AI against the inherent risks and keep in mind that it may not be appropriate for some uses.
  • Anyone who creates material using generative AI must be responsible and accountable for it.
  • Any use of AI must be disclosed.
  • Any use of AI must follow principles of consent.
  • The university must work with outside vendors to ensure they maintain principles of disclosure and consent.
  • Any use of AI must comply with established legal requirements, such as federal data-privacy protocols.

If approved by the faculty senate, the policy recommendations would apply to the entire UMass Amherst community, but the decisions they’re most likely to influence may happen in offices rather than in classrooms. (The recommendations leave AI use in classes up to the discretion of the instructor, as long as the technology is used in a way that upholds the other guidelines.)


Data security is key. Like Babson, UMass Amherst uses several Microsoft software products across the institution, which means that many faculty and staff members have access to Copilot, a Microsoft tool that allows them to create their own generative AI spaces and keeps any shared data private. (Other software providers offer similar services.) Professors, staff members, and students are free to experiment with generative AI platforms in the classroom for idea generation, building chatbots, or other work that doesn’t involve sensitive information.

Reducing risk and harm is a key component of the policy recommendations. The task force, Wolf said, began to weigh the use of AI in terms of low-impact, low-risk decisions versus high-impact, high-risk decisions. It was agreed, for example, that using a chatbot to help students navigate university bureaucracy would be a benefit with little risk or adverse impact even if it didn’t work every time. “But the place where everybody said, ‘Oh, this is really worrisome,’ was if a student says, ‘I’m in distress and I need help,’” he said, referring to students’ mental health. “And at that point, I think the risk of not referring the student to the right support is high. If you have a machine there, it’s not clear that the chat button will do the right thing at that moment.”

There may be no substitute for the human touch in many aspects of how a college works — admissions decisions, say, or writing a condolence note — and there’s no substitute for a human backstop. One of the key principles to emerge from the discussions around UMass Amherst’s AI policy recommendations was that humans should always have the final say in any high-impact decision and must remain accountable. “The phrase that we mentioned often was, ‘You can’t blame the AI,’” Wolf said. “If you invoke the AI, you’re responsible for the output. And I think that that was an important piece, because that makes people think about those risks in a very concrete way. You’re basically putting your name at the bottom of a memo or decision, and you want to trust that it reflects what you really want to do.”

Considering the Outcomes

Anecdotally, many institutions are working on or coming to grips with the need for generative AI policies, according to Eyal Darmon, a managing director at Accenture, a company that works with colleges, including consulting on AI. There appears to be no clear profile of the kind of institution likely to adopt an AI policy, he said.


“We’ve seen large research institutions that have it, and some not,” Darmon said. “Yet we’ve seen smaller institutions that have it and some that have not.”

A good AI policy, Darmon said, not only secures institutional data, evaluates technologies and processes for risk, and reduces harm and bias, but also reviews the outcomes. It’s important to consider outcomes from both a quantitative and a qualitative standpoint. Is a particular AI tool saving the college time and money? Is it also providing better information or a better experience for students or employees? Educause has issued an action plan to help colleges develop their own AI policies.

Integrating AI into college operations will continue to be a process of education and adjustment for everyone on campus, but getting ahead of the process with a comprehensive AI policy is probably better than being crushed by it, experts say.

“I don’t claim that everybody on campus knows about the policy or has read it in detail,” Wolf, of UMass Amherst, said. But “if anybody has a question about how best to use AI, we have an answer.”

Correction (Nov. 13, 2024, 3:09 p.m.): A previous version of this article stated that the task force at the University of Massachusetts at Amherst created the campus's AI policy and that the task force was created by the institution's administrators. The article has been updated to state that the task force was created by both administrators and the faculty senate, and that the task force created policy recommendations regarding AI use on the campus.

Also, this article previously stated that the task force had 15 participants. It had 24.
About the Author
Lee Gardner
Lee Gardner writes about the management of colleges and universities. Follow him on Twitter @_lee_g, or email him at lee.gardner@chronicle.com.