A batch of student essays. About a third are clearly AI-generated. Two professors—same discipline, same university, same evidence—react in diametrically opposite ways. Rodrigo sighs with relief. Jane spirals into panic.
For Rodrigo, it is almost liberating. If his students can now write coherent college-level essays with the help of machines, then he is free to teach them something more ambitious. Argument structure, epistemology, stylistic nuance—areas where automation falters. He is not naïve; he knows AI is here to stay. But rather than fight it, he welcomes the detour. Less time marking the same intro-to-critical-writing dreck, more time pushing the intellectual envelope. Lucky him.
Jane, however, reads the situation as academic apocalypse. Her course was the product of years of iteration, finely tuned to teach writing through careful scoping, scaffolding, and feedback. Skip the process, she believes, and you skip the learning. The AI is not a tool in her eyes; it is a cheat code, one that threatens to render her teaching obsolete. She starts researching detection tools, imagining a future of surveillance, suspicion, and pedagogical collapse.
These are not just personality quirks or different thresholds for academic dishonesty. What really separates them is how they understand curriculum. For Rodrigo, curriculum is plastic—something owned, revised, improved. He feels empowered to tinker. If a foundational skill can be outsourced, then the baseline has shifted, and he can raise the stakes. A change in student capability is an opportunity, not a crisis.
Jane sees curriculum differently. For her, it is an infrastructure. Complex, interdependent, and not easily re-routed. Learning outcomes, general education requirements, accreditation standards—these are not suggestions, they are fixtures. If a key skill like essay-writing becomes an unreliable indicator of mastery, the whole sequence threatens to unravel. You cannot simply skip a floor in the building and hope the roof holds.
There is a quiet tragedy here. Not because Jane is wrong—her concerns are deeply valid—but because she feels disempowered by a system she herself has worked so hard to uphold. The larger structures of academia—its bureaucracies, its resistance to rapid change—amplify the sense of threat. It is not just that students are using ChatGPT; it is that there is no institutional plan, no curricular pivot, no workflow update to guide faculty through this transition. So each professor is left to improvise, bringing their own philosophies and tolerances to bear.
And that is where the real tension lies. Technology does not just disrupt skills—it exposes fault lines in our educational ideologies. Are we guardians of a process or facilitators of progress? Should we protect the sequence, or adjust the map when shortcuts appear?
Rodrigo shrugs and walks forward. Jane looks for the brakes. But maybe it is not about who is right. Maybe the more urgent task is to build a system where professors do not have to choose between clinging to the past and embracing a future they did not ask for. Because either way, the syllabus is no longer entirely ours to write.
UPD: Thanks to my colleague Hogan Hays for his thoughtful critique of this blog.
AI in Society
This blog is connected to my role as head of the National Institute on AI in Society at California State University Sacramento. However, the opinions and positions expressed here are mine and do not represent the university's opinions or positions.