
ProfHacker

Teaching, tech, and productivity.


Against the 3A’s of EdTech: AI, Analytics, and Adaptive Technologies in Education

By Maha Bali
November 29, 2017

Nick Sousanis cartoon showing rules of culture

I’ve had the privilege of being a member of the expert panel working on the NMC Horizon Report (the one that’s still in progress, a process I plan to write or co-write about soon, inshallah). A key part of the report is that panelists are asked to suggest which technologies we think will be highly influential in education in the near, medium, and far terms. You can see the list of topics and expert panelists’ comments here. Technical problems have kept me from posting many comments myself, but I needed space to clarify my votes on some things, and that’s what this post is for.

I want to clarify why I voted for certain things as likely to influence higher education in the future even though I don’t actually endorse them. I can see evidence around me that AI (artificial intelligence), analytics, and adaptive learning are being pushed in the education field, but I have a strong aversion to all three, mainly on ethical grounds. I agree with Audrey Watters’, Chris Gilliard’s, Autumm Caines’, and Benjamin Doxtdator’s critiques on these topics (also: it’s scary how my Google Docs app immediately recommended their websites when I started inserting the links here). It’s difficult to pick just one post by any of them on these issues, but if you’re not following these people, you should be.


But I’d like to explain this a little more from my personal perspective, focusing on pedagogical considerations (so not on surveillance capitalism or the hegemony of Silicon Valley, which the writers above speak about much more eloquently). I have struggled in the past to explain my views to some of the computer scientists I know (my former professors and others in edtech) who don’t normally think critically about education; perhaps those who do can give me feedback on how to make this clearer.

Misunderstanding of Teacher Roles

In some of the discourses surrounding these technologies, there is a tendency to reduce what a teacher’s role is. I have watched people make arguments for AI by listing 10 functions of a teacher and ticking them off as doable by a computer. This is highly problematic, because such lists limit a teacher’s role to the cognitive and routine tasks a computer could indeed perform, while completely ignoring the educational research on the importance of an adult or more knowledgeable peer supporting a young person’s process of knowledge construction. They also ignore the socio-emotional role of a teacher: motivating the learner, promoting a love of the subject, recognizing a student’s interest in something, and providing moral support in matters related to learning or to life. And they are completely unaware of the potential role a caring teacher can play in fostering a person’s critical consciousness, beyond transmitting knowledge.


Not every teacher does this with every student, of course, and those who teach large numbers of students cannot. But instead of investing in cold technologies to teach refugees remotely, for example, can’t we invest in retraining adult refugees to teach and mentor younger refugees, making those communities more sustainable? Don’t refugees need emotional support and a human touch more than knowledge delivered via devices they may or may not have? Don’t adult refugees want to work and support their communities? Why invest in technology when the money could be put to better use investing in people?

Two of my pet peeves among uses of AI are plagiarism-detection systems and automated grading of writing. Both reduce the problems they claim to solve to their technical bare bones. Teaching writing is NOT about getting the grammar of each sentence right within a paragraph of a five-paragraph essay. Teaching writing is about helping learners express themselves clearly and effectively to other human beings. What value is there in a machine giving students feedback? It’s bad enough if our students’ writing gets read by no one but a teacher; at the very least, let another human being see it. I understand that teachers responsible for large classes cannot humanly read that much writing. So give them teaching assistants, or do more peer assessment. But for goodness’ sake, why would anyone want to write for a machine?
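
To make the “technical bare bones” point concrete, here is a deliberately crude sketch (entirely hypothetical; the features and weights are my invention, not any vendor’s algorithm) of what an automated essay scorer can actually measure. Nothing in it can tell whether the writing communicates anything to another human being.

```python
# A deliberately crude sketch of automated essay scoring (hypothetical,
# not any real product's algorithm): it can only reward surface features.
import re

def score_essay(text: str) -> float:
    """Score an essay from 0-100 using surface features only."""
    words = text.split()
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    paragraphs = [p for p in text.split("\n\n") if p.strip()]

    length_score = min(len(words) / 500, 1.0)               # rewards padding
    structure_score = 1.0 if len(paragraphs) == 5 else 0.5  # rewards the five-paragraph formula
    avg_sentence_len = len(words) / max(len(sentences), 1)
    fluency_score = 1.0 if 10 <= avg_sentence_len <= 25 else 0.5

    # No feature here can tell whether the essay says anything worth reading.
    return 100 * (0.4 * length_score + 0.3 * structure_score + 0.3 * fluency_score)

print(score_essay("Word " * 500))  # meaningless text, respectable score: 70.0
```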


Plagiarism-detection tools are no better. They prime students to figure out technical workarounds to plagiarism rather than to develop an attitude of valuing their own ideas and words while respecting and acknowledging how others’ ideas and words influence their own. A good teacher can promote these attitudes despite plagiarism-detection software, but the software itself does not do this.
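
To see how shallow the matching is, here’s a minimal sketch of n-gram overlap detection (my own toy illustration, not any product’s actual algorithm). A verbatim copy is flagged; a trivial rewording of the same borrowed idea sails through, which is exactly the workaround mentality the tools reward.

```python
# Toy sketch of n-gram overlap "plagiarism" detection (illustrative only,
# not a real product's algorithm). It matches strings, not ideas.

def shingles(text: str, n: int = 5) -> set:
    """All n-word sequences in the text, lowercased."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def similarity(a: str, b: str) -> float:
    """Jaccard overlap of word 5-grams between two texts."""
    sa, sb = shingles(a), shingles(b)
    return len(sa & sb) / max(len(sa | sb), 1)

source = "the quick brown fox jumps over the lazy dog near the river bank"
copied = source
reworded = "a fast brown fox leaps over a sleepy dog close to the riverbank"

print(similarity(source, copied))    # 1.0, caught
print(similarity(source, reworded))  # 0.0, yet the *idea* is still copied
```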

Collecting Data: Bias, Surveillance & Control

All three of these technologies (particularly if the AI is of the machine-learning variety) rely heavily on collecting data from the learner, usually ubiquitously: without learners recognizing that their behaviors are being mined as data all the time, without giving explicit permission, and without knowing what will be done with the data and by whom. This data is collected based on some tech person’s decision about which data is relevant for categorizing learners in certain ways, and the resulting indicators often don’t help you reach any particular conclusion but may make policymakers feel good. For example, learning analytics may record how many times someone has watched a video and for how long. It doesn’t usually tell you whether they were concentrating while watching it, multitasking (though I’m sure someone will come up with ways to check if they’re doing something else on the computer, or suggest putting a camera to track whether they’ve moved away from it; shudder), or having technical problems that made them restart the video many times. The choices of which data to collect are usually made by techies and policymakers, not teachers or students. Do teachers really need to know this stuff, and should they be asked to look at this data to make decisions about their learners? Or should they instead, say, actually look into their students’ eyes while teaching them, and maybe, you know, ask them what they’re thinking, or feeling, or how they felt about those readings?
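
For illustration, here’s a minimal sketch of the kind of record such a system might keep (the field names are invented, not any platform’s real schema). Everything I just said actually matters has no field at all.

```python
# Hypothetical sketch of a learning-analytics event record (field names
# invented for illustration, not any platform's real schema).
from dataclasses import dataclass

@dataclass
class VideoEvent:
    student_id: str
    video_id: str
    view_count: int         # restarted 6 times: confusion? buffering? a crash?
    seconds_watched: float  # the tab was open; was the student?
    # There is no field for attention, multitasking, technical trouble,
    # or what the student actually thought about the material.

event = VideoEvent("s123", "lecture-04", view_count=6, seconds_watched=940.0)

# The dashboard's "engagement" number treats all of those stories the same.
engagement = event.view_count * event.seconds_watched
print(engagement)
```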


These kinds of tools make it seem as if surveilling learners in order to control them is acceptable and even desirable. It should not be, even if education already does this in some ways; actual learning is not helped by this kind of approach. Using technology to reproduce neoliberal approaches to education won’t make learning better. It will make it worse in all the worst ways, particularly by reproducing inequalities.

I don’t know if I need to rehash issues of bias within algorithms. Yes, even when a machine is learning, rather than following a prescribed set of steps, it learns in biased ways. We (well, the developers) train the algorithm on data, and that data has built-in biases, which the resulting algorithm (having learned from it) reproduces back to us when confronted with new data; then we pretend its output is more “objective” than we are. We all know about algorithmic bias (e.g., Safiya Noble’s work on Google). And if you haven’t read Cathy O’Neil’s Weapons of Math Destruction, you really should (here’s a list of podcast interviews and her blog). Using such approaches in education is likely to reinforce bias against minority and underprivileged students. Tools that let a teacher predict a student’s grade or performance early in the semester are likely to disproportionately flag minority and underprivileged students, or anyone whose learning habits deviate from the “norm.” And wouldn’t flagging these students to the teacher bias the teacher further against them, or lower the teacher’s expectations of them? I realize these biases exist with or without technology, and that the technology is intended to “help” the students; but that does not discount the dangers of using these tools uncritically and the ways in which they are likely to go horribly wrong.
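
Here is a deliberately simple sketch of that mechanism, with synthetic data and invented numbers: a “predictor” that merely summarizes historically biased outcomes, then hands the bias back as an objective-looking at-risk label for new students.

```python
# Synthetic sketch of bias laundering (all numbers invented): the model only
# summarizes history, but its output gets treated as an objective prediction.
from collections import defaultdict

# Historical records already shaped by unequal resources and grading bias.
history = [
    ("group_a", 1), ("group_a", 1), ("group_a", 1), ("group_a", 0),
    ("group_b", 1), ("group_b", 0), ("group_b", 0), ("group_b", 0),
]

# "Training": learn each group's past pass rate.
totals, passes = defaultdict(int), defaultdict(int)
for group, passed in history:
    totals[group] += 1
    passes[group] += passed
pass_rate = {g: passes[g] / totals[g] for g in totals}

def predict_at_risk(group: str) -> bool:
    """Flag a *new* student as 'at risk' from group history alone."""
    return pass_rate[group] < 0.5

# Two new students, identical in every way we did not record:
print(predict_at_risk("group_a"))  # False
print(predict_at_risk("group_b"))  # True: yesterday's inequality, relabeled
```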

Stripping Learners of Agency


From conversations with people working on adaptive learning, I gather that some of these systems use a pre-built algorithm the developers came up with, while others use machine learning. In either case, someone makes decisions about which data points to collect and which adaptations to “give” learners at certain stages. This completely strips learners of their agency over their own learning! Shouldn’t education be about fostering learners’ metacognitive capacity to know when they need to go faster or slower or to seek additional information? Feeding them what the machine has decided they need at a certain point in time means letting the machine program the person, not the other way around, as Seymour Papert would have said. For a discussion of decolonizing learning analytics, check out this blog post by Paul Prinsloo, and then search Google Scholar for his publications on ethics in learning analytics.
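
To see where the agency goes, consider this minimal sketch of a rule-based adaptation engine (thresholds and module names invented for illustration). Every branch was fixed in advance by a developer; the learner’s own judgment appears nowhere.

```python
# Hypothetical sketch of rule-based adaptive learning (thresholds and
# module names invented). The developer decided every branch in advance.

def next_module(quiz_score: float) -> str:
    """The machine decides what the learner 'needs' next."""
    if quiz_score < 0.5:
        return "remedial-drill-3"  # slow down, whether or not they want to
    if quiz_score > 0.9:
        return "enrichment-7"      # speed up, whether or not they're ready
    return "standard-unit-4"

# Nowhere does the learner get asked: do you want to revisit this?
# Do you feel ready to move on? What do you want to learn next?
print(next_module(0.42))  # 'remedial-drill-3'
```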

Some Critical Questions

This post is getting long. I’ll just end by saying that those proposing these technology solutions need to address the following questions:


  1. Which educational problem are you trying to solve? (sometimes this problem is non-existent, and these tools aren’t justifiable)

  2. What human solutions to this problem exist? Why aren’t you investing in those?

  3. What harm could come (to students) from using this tool? What harm could come to teachers? To society?

  4. In what ways might this tool disproportionately harm less privileged learners and societies? In what ways might it reproduce inequality?

  5. How much have actual teachers and learners on the ground been involved in or consulted on the design of these tools?

I was talking to an undergraduate computer-science student yesterday about my ethical objections to surveillance in personalized learning, and he told me, “Ethics have evolved.” I told him, “Actually, ethics still exist, but many computer scientists don’t listen to what others are telling them.” I rarely see AI, adaptive-learning, or analytics tools questioned in this way by those who create and implement them. They need to start listening to all of us who are asking these questions, and to be held accountable.

What are your thoughts on the 3A’s? Tell us in the comments!


Feature image: Copyright Nick Sousanis, used & edited with permission: Games Culture 1, from p. 12 of “Possibilities,” created for Game Show Detroit in 2006.
