June 20, 2023
Artificial intelligence, and more specifically its fast-growing subset known as generative AI, has seemingly invaded every nook of society on the way to a projected market value of $900 billion by 2026. With its long history of embracing new forms of technology, education is one of many sectors caught up (for better and worse) in the AI revolution.
But for all its touted benefits — providing instant feedback, grading tests and quizzes, automating repetitive tasks, to name just a few — AI and its generative cousin present a host of not-easily-solved challenges for the educators who use it and the professionals who implement it.
As Andrea Jones-Rooy, a professor of data science at New York University, recently told Bloomberg, an AI-induced “huge reckoning” is in the offing for education — one that will greatly impact how educators design courses and teach material — and by extension, she implied, how students learn. “We can’t keep telling ourselves that we’re here to teach [students] how to do code or have these tangible skills because you can teach yourself all of those things,” she said.
While that prediction was limited to colleges and universities, it’s broadly applicable to educational institutions of all kinds.
Ethics: Where AI is concerned, this relates to things like plagiarism and copyright infringement — both of which are rampant in the age of AI. Who actually created the work? If it’s not original, where did it come from? (More and more, it’s probably not some paid brainiac to whom you outsource your term paper on Napoleon Bonaparte’s favorite snacks.) Sussing those things out is harder than it sounds, particularly with new tools popping up that purport to make AI-created content “undetectable.” Fortunately, there are other digital tools that can help determine what’s original and what’s not.
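To give a sense of how such detection tools work under the hood, many lean on statistical signals. One weak signal is “burstiness”: human prose tends to vary sentence length more than model output does. The sketch below is a toy illustration of that single idea, with an arbitrary threshold — it is not how any particular commercial detector works.

```python
import statistics

def burstiness_score(text: str) -> float:
    """Variance in sentence length (in words).

    Human prose tends to mix short and long sentences; uniform
    lengths are one weak hint of machine generation. This is a toy
    heuristic -- real detectors combine many signals in trained
    classifiers.
    """
    # Crude sentence split; fine for an illustration.
    normalized = text.replace("!", ".").replace("?", ".")
    sentences = [s.strip() for s in normalized.split(".") if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.pvariance(lengths)

def looks_machine_generated(text: str, threshold: float = 4.0) -> bool:
    # The threshold is illustrative, not calibrated on real data.
    return burstiness_score(text) < threshold
```

Perfectly uniform sentences (say, three sentences of four words each) score a variance of zero and get flagged, while varied prose sails through — which also shows why single-signal heuristics are easy to fool.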
Content Curation: Just because ChatGPT or Google Bard presents something as true doesn’t mean it actually is true. “Generative AI models are considered ‘black box’ models,” VentureBeat explains. “It is impossible to understand how they come up with their outputs, as no underlying reasoning is provided. Even professional researchers often struggle to comprehend the inner workings of such models. It is notoriously difficult, for example, to determine what makes an AI correctly identify an image of a matchstick.” Only humans can determine what’s garbage and what’s not by fact-checking the output of these automated systems that are only as “smart” and up-to-date as the material they ingest. So monitoring AI-generated text, images, videos, etc. for accuracy, relevance and appropriateness is something every educator must learn to do effectively in order to curate only the best material.
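The curation duty described above — checking AI output for accuracy, relevance and appropriateness — can be made concrete as a human-in-the-loop review queue. The structure below is a hypothetical sketch, not a real tool; only the three criteria names come from the paragraph.

```python
from dataclasses import dataclass, field

# The three review criteria come straight from the text above; the
# queue itself is an illustrative human-in-the-loop workflow.
CRITERIA = ("accuracy", "relevance", "appropriateness")

@dataclass
class ReviewItem:
    content: str
    checks: dict = field(default_factory=dict)  # criterion -> verdict

    def sign_off(self, criterion: str, passed: bool) -> None:
        """Record a human reviewer's verdict on one criterion."""
        if criterion not in CRITERIA:
            raise ValueError(f"unknown criterion: {criterion}")
        self.checks[criterion] = passed

    @property
    def approved(self) -> bool:
        # Usable only once a human has explicitly passed every check;
        # an unreviewed criterion counts as a failure.
        return all(self.checks.get(c) is True for c in CRITERIA)
```

The key design choice is the default: nothing AI-generated reaches students until every box has been affirmatively ticked by a person, which mirrors the “only humans can determine what’s garbage” point above.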
Machine learning: As in, learning from machines. No matter how sophisticated AI becomes, it can’t replace human teachers. Not yet, anyway. Why? Because it’s inhuman. It lacks awareness of tone, knows next to nothing about nuance, and is bad at forming the interpersonal relationships that are so crucial to a student’s success in the classroom. Ideally, AI should complement rather than replace. This Forbes essay was written in 2019 but remains relevant: “Without a transparent, scientific approach, education is at risk of being stuck with the same problems ten years from now: using a largely marketing approach to ‘sell’ education on AI, without solid methods grounded in research. And, in the process, it will needlessly unnerve educators, students, and parents with visions of ‘robo-teachers’ replacing humans in the classroom.”
Insufficient AI support systems: Before educational institutions can deploy AI, they need certain things in place: namely, the right hardware, software and network infrastructure. Without them, the game is over before it begins. Qualified pros (like the ones at Mindsight) can determine what’s needed on a case-by-case basis and help maintain newly implemented systems in the long run.
Safeguarding data: In order for generative AI to function properly, it needs lots of data — including personal and private data from students. That’s exceedingly risky. Which is why it’s up to IT professionals to protect that data and keep it from falling into the wrong hands by beefing up security measures, using the most sophisticated encryption techniques and closely adhering to data privacy regulations — which are most likely to be institution- or district-specific. (The federal government, for instance, offers suggestions but imposes no rules.) According to this K12dive.com article, “Ed tech experts have long stressed that the increased use of technology tools and apps in the classroom puts student data at risk. Meanwhile, recent research has discovered a majority of ed tech companies use ‘extensive’ tracking technologies and share students’ personal information with third parties.”
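One concrete safeguard along these lines is pseudonymizing student identifiers before records are shared with any third party. Below is a minimal sketch using only Python’s standard library; the field names and key handling are illustrative, and a real deployment would pair this with proper key management and the privacy regulations that apply to the institution.

```python
import hashlib
import hmac

def pseudonymize(student_id: str, secret_key: bytes) -> str:
    """Replace a student ID with a keyed hash before sharing data.

    Using HMAC rather than a bare hash means a third party cannot
    brute-force IDs back out without the institution's secret key,
    which must itself be stored securely and never shared.
    """
    return hmac.new(secret_key, student_id.encode(), hashlib.sha256).hexdigest()

def strip_record(record: dict, secret_key: bytes) -> dict:
    # Illustrative field names: keep only what the recipient needs,
    # with the identifier replaced by its pseudonym.
    return {
        "student": pseudonymize(record["student_id"], secret_key),
        "grade": record["grade"],
    }
```

Because the same ID always maps to the same pseudonym under one key, the institution can still join records internally, while the shared copy carries no name, email or raw ID.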
Rampant bias: Garbage in, garbage out. Which means that any biases introduced during AI training will eventually spew forth. It’s the job of IT experts to make sure that doesn’t happen, or happens as infrequently as possible, by identifying problematic training data and the resultant algorithms before they gum up the works. Continuous monitoring, evaluating and (when necessary) rejiggering are absolute musts. According to one of the world’s top AI bias researchers and educators, self-described “poet of code” Joy Buolamwini, “We can deceive ourselves into thinking they’re not doing harm, or we can fool ourselves into thinking, because it’s based on numbers, that it is somehow neutral.”
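Identifying the kind of problem Buolamwini describes often starts with a plain audit of outcomes by group. The sketch below computes per-group rates and a disparate-impact ratio; the 0.8 cutoff is a conventional rule of thumb borrowed from employment law, used here purely for illustration, not a definition of fairness.

```python
from collections import defaultdict

def approval_rates(records):
    """Per-group positive-outcome rate for a system's decisions.

    `records` is an iterable of (group, outcome) pairs, where outcome
    is 1 for a positive decision and 0 otherwise.
    """
    totals, positives = defaultdict(int), defaultdict(int)
    for group, outcome in records:
        totals[group] += 1
        positives[group] += int(outcome)
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact(rates):
    # Ratio of the lowest group rate to the highest. Values below
    # 0.8 are a common warning level -- a rough screening cutoff,
    # not proof of bias (or its absence).
    return min(rates.values()) / max(rates.values())
```

A ratio well under 0.8 doesn’t settle anything by itself, but it tells the monitoring team exactly where to start digging into the training data — the kind of continuous evaluation the paragraph above calls for.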
“AI is creeping into our lives. And even though the promise is that it’s going to be more efficient, it’s going to be better — if what’s happening is we’re automating inequality through weapons of math destruction and we have algorithms of oppression, this promise is not actually true and certainly not true for everybody.”