AI in Society
This blog is connected to my role as head of the National Institute on AI in Society at California State University Sacramento. However, the opinions and positions expressed here are mine and do not represent those of the university.
Monday, March 24, 2025
Two Reactions to AI
A batch of student essays. About a third are clearly AI-generated. Two professors—same discipline, same university, same evidence—react in dramatically different ways.
For Rodrigo, it is almost liberating. If his students can now write coherent college-level essays with the help of machines, then he is free to teach them something more ambitious. Argument structure, epistemology, stylistic nuance—areas where automation falters. He is not naïve; he knows AI is here to stay. But rather than fight it, he welcomes the detour. Less time marking the same intro-to-critical-writing dreck, more time pushing the intellectual envelope. Lucky him.
Jane, however, reads the situation as academic apocalypse. Her course was the product of years of iteration, finely tuned to teach writing through careful scoping, scaffolding, and feedback. Skip the process, she believes, and you skip the learning. The AI is not a tool in her eyes; it is a cheat code, one that threatens to render her teaching obsolete. She starts researching detection tools, imagining a future of surveillance, suspicion, and pedagogical collapse.
These are not just personality quirks or different thresholds for academic dishonesty. What really separates them is how they understand curriculum. For Rodrigo, curriculum is plastic—something owned, revised, improved. He feels empowered to tinker. If a foundational skill can be outsourced, then the baseline has shifted, and he can raise the stakes. A change in student capability is an opportunity, not a crisis.
Jane sees curriculum differently. For her, it is an infrastructure. Complex, interdependent, and not easily re-routed. Learning outcomes, general education requirements, accreditation standards—these are not suggestions, they are fixtures. If a key skill like essay-writing becomes an unreliable indicator of mastery, the whole sequence threatens to unravel. You cannot simply skip a floor in the building and hope the roof holds.
There is a quiet tragedy here. Not because Jane is wrong—her concerns are deeply valid—but because she feels disempowered by a system she herself has worked so hard to uphold. The larger structures of academia—its bureaucracies, its resistance to rapid change—amplify the sense of threat. It is not just that students are using ChatGPT; it is that there is no institutional plan, no curricular pivot, no workflow update to guide faculty through this transition. So each professor is left to improvise, bringing their own philosophies and tolerances to bear.
And that is where the real tension lies. Technology does not just disrupt skills—it exposes fault lines in our educational ideologies. Are we guardians of a process or facilitators of progress? Should we protect the sequence, or adjust the map when shortcuts appear?
Rodrigo shrugs and walks forward. Jane looks for the brakes. But maybe it is not about who is right. Maybe the more urgent task is to build a system where professors do not have to choose between clinging to the past and embracing a future they did not ask for. Because either way, the syllabus is no longer entirely ours to write.
Update: Thanks to my colleague Hogan Hays for his thoughtful critique of this post.
Wednesday, March 19, 2025
RAG and the Tyranny of Text
Writing and reading are, at their core, terribly inefficient. To communicate knowledge, we take complex non-linear understanding and flatten it into a linear string of symbols—words, sentences, paragraphs—then expect someone else to decode those symbols one by one to reconstruct the original meaning. For every piece of information useful to us in a particular moment, we probably read thousands of unnecessary words. Laws, academic research, instruction manuals—entire professions exist solely to interpret and summarize large texts and to find the bits useful for a particular case.
We are so accustomed to this system that we barely question it. We assume that knowledge must be buried in thick books, endless PDFs, or jargon-laden policies, and that extracting value from them is simply the price we pay. The reality is that text, as a technology, is painfully exclusionary. It filters out those who do not have the time, education, or patience to wade through its inefficiencies. The result? A world where information is not truly accessible—it is just available, locked behind barriers of expertise and labor. The problem only grows as information multiplies. We can search now, but to search you need to know what exactly the thing you are looking for is called.
Enter Retrieval-Augmented Generation (RAG). This technology upends the whole premise of reading as a necessary struggle. Instead of requiring humans to sift through dense documents, a RAG-powered AI can scan, understand, and extract the exact information you need: it retrieves the relevant passages and lets a language model explain just those. It will understand you even if you are not sure what to look for. No more endless searching, skimming, or cross-referencing. You ask, and it finds and explains at whatever level of difficulty you are comfortable with, in any language.
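To make that concrete, here is a minimal sketch of the retrieval half of the pipeline, in Python. It is an illustration under stated assumptions, not a real implementation: production systems rank passages with vector embeddings and a vector database, while this toy version scores passages by simple word overlap, and the ask_llm call at the end is a hypothetical stand-in for whatever generator model answers the assembled prompt.

    import re

    def chunk(text, size=40):
        """Split a document into overlapping windows of roughly `size` words."""
        words = text.split()
        step = max(size // 2, 1)
        return [" ".join(words[i:i + size]) for i in range(0, len(words), step)]

    def relevance(query, passage):
        """Toy relevance score: the number of words the query and passage share."""
        q = set(re.findall(r"\w+", query.lower()))
        p = set(re.findall(r"\w+", passage.lower()))
        return len(q & p)

    def retrieve(query, documents, k=3):
        """Return the k passages most relevant to the query."""
        passages = [c for doc in documents for c in chunk(doc)]
        return sorted(passages, key=lambda c: relevance(query, c), reverse=True)[:k]

    def build_prompt(query, documents):
        """Assemble the augmented prompt that a generator model would answer."""
        context = "\n---\n".join(retrieve(query, documents))
        return ("Answer the question using only the context below.\n\n"
                f"Context:\n{context}\n\nQuestion: {query}")

    # Hypothetical usage; ask_llm stands in for any generator model's API:
    # answer = ask_llm(build_prompt("What is the late-work policy?", [syllabus_text]))

The essential move is in build_prompt: the reader, and the model, never see the whole document. Only the handful of passages the retriever judged relevant reach the generator, which then explains them at whatever level the question demands.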
The applications are obvious. College course materials, legal codes, corporate policies—things we must understand but rarely want to read—can now be accessed through AI assistants that do the heavy lifting. Medical test results, car repair manuals, tax codes—fields where knowledge has traditionally been mediated by experts—become directly intelligible to the people who need them. RAG doesn’t just speed up information retrieval; it removes the gatekeepers.
Despite the significance of this shift, most major AI companies have not fully embraced it. OpenAI is the only major player that has prioritized user-friendly RAG-based tools, allowing everyday users to create and share custom bots. The others (Anthropic, Google Gemini, Meta, Grok, DeepSeek) all offer API-based solutions that cater to developers, not the general public. Gemini allows its paid users to create custom bots but, inexplicably, does not allow them to be shared. It is a strange oversight. The AI race is usually about copying and outpacing competitors, yet here OpenAI is sprinting ahead while the others hesitate.
The gap has created an opportunity. Startups are rushing in to offer the ease of use that the AI giants have neglected, sensing that the true power of AI lies not just in intelligence but in a revolutionary leap in accessibility. AI is, by nature, a democratic technology—relatively cheap, scalable, and available to almost anyone. And yet its most transformative application—RAG—is still frustratingly out of reach for many.
What we are witnessing is the beginning of a fundamental shift. For centuries, knowledge has been tied to advanced literacy (the ability to create and understand long texts), to institutions, and to intermediaries who dictate who gets to understand what. RAG challenges that structure. It does not just improve search; it changes who gets to find answers in the first place. If AI is truly to fulfill its promise, it won't be by making people read faster—it will be by making linear reading largely obsolete. We will always read novels and poems word by word, because humans created art out of the terrible technology of writing. But those are only a small portion of the written information we consume.
Wednesday, March 12, 2025
The Modern Confessional: AI Disclosure as Ritual
The peculiar practice of requiring authors to disclose their use of AI reveals the creation of a new moral domain around AI in knowledge production. The disclosure requirement functions precisely as Michel Foucault described confessional practices in "The History of Sexuality." Foucault argued that confession itself produces the very notion of sin it purports to address. The act of requiring disclosure creates the impression of transgression where none inherently exists.
Medieval confession did not merely document pre-existing sins; it manufactured them through the very apparatus of confession. Similarly, disclosure requirements around AI use manufacture a transgressive quality around technologies that have no inherent moral valence.
The mechanics operate almost identically. Both create categories of behavior requiring special scrutiny. Both position authority figures as arbiters of acceptability. The confessing subject experiences this manufactured transgression viscerally: the academic disclosing AI use feels compelled toward contrition without clear understanding of what offense they have committed.
Authors find themselves in impossible positions, uncertain how much assistance constitutes meaningful use. Does grammar checking through Microsoft Editor count? What about Grammarly's suggestions? The lack of clear standards transforms disclosure into guesswork.
Rather than focusing on tools, we might evaluate outputs based on established academic standards regardless of production methods. This approach acknowledges that quality, originality, and intellectual integrity manifest in final products, not production processes. Technical assistance has always existed across academic work, from statistical software to citation managers to editorial help from colleagues.
Current disclosure requirements function primarily as modern confession, manufacturing transgression through the very apparatus designed to reveal it. By recognizing this dynamic, we might reimagine our approach to technological assistance in ways that foster genuine integrity rather than performative disclosure.