AI in Society
Opinions and positions expressed in this blog are mine, and do not represent my employer's opinions or positions.
Monday, March 24, 2025
Two Reactions to AI
For Rodrigo, it is almost liberating. If his students can now write coherent college-level essays with the help of machines, then he is free to teach them something more ambitious. Argument structure, epistemology, stylistic nuance—areas where automation falters. He is not naïve; he knows AI is here to stay. But rather than fight it, he welcomes the detour. Less time marking the same intro-to-critical-writing dreck, more time pushing the intellectual envelope. Lucky him.
Jane, however, reads the situation as academic apocalypse. Her course was the product of years of iteration, finely tuned to teach writing through careful scoping, scaffolding, and feedback. Skip the process, she believes, and you skip the learning. The AI is not a tool in her eyes; it is a cheat code, one that threatens to render her teaching obsolete. She starts researching detection tools, imagining a future of surveillance, suspicion, and pedagogical collapse.
These are not just personality quirks or different thresholds for academic dishonesty. What really separates them is how they understand curriculum. For Rodrigo, curriculum is plastic—something owned, revised, improved. He feels empowered to tinker. If a foundational skill can be outsourced, then the baseline has shifted, and he can raise the stakes. A change in student capability is an opportunity, not a crisis.
Jane sees curriculum differently. For her, it is infrastructure: complex, interdependent, and not easily re-routed. Learning outcomes, general education requirements, accreditation standards—these are not suggestions, they are fixtures. If a key skill like essay-writing becomes an unreliable indicator of mastery, the whole sequence threatens to unravel. You cannot simply skip a floor in the building and hope the roof holds.
There is a quiet tragedy here. Not because Jane is wrong—her concerns are deeply valid—but because she feels disempowered by a system she herself has worked so hard to uphold. The larger structures of academia—its bureaucracies, its resistance to rapid change—amplify the sense of threat. It is not just that students are using ChatGPT; it is that there is no institutional plan, no curricular pivot, no workflow update to guide faculty through this transition. So each professor is left to improvise, bringing their own philosophies and tolerances to bear.
And that is where the real tension lies. Technology does not just disrupt skills—it exposes fault lines in our educational ideologies. Are we guardians of a process or facilitators of progress? Should we protect the sequence, or adjust the map when shortcuts appear?
Rodrigo shrugs and walks forward. Jane looks for the brakes. But maybe it is not about who is right. Maybe the more urgent task is to build a system where professors do not have to choose between clinging to the past and embracing a future they did not ask for. Because either way, the syllabus is no longer entirely ours to write.
UPD: Thanks to my colleague Hogan Hays for his thoughtful critique of this blog.
Wednesday, March 19, 2025
RAG and the Tyranny of Text
Writing and reading are, at their core, terribly inefficient. To communicate knowledge, we take complex non-linear understanding and flatten it into a linear string of symbols—words, sentences, paragraphs—then expect someone else to decode those symbols one by one to reconstruct the original meaning. For every piece of information useful to us in a particular moment, we probably read thousands of unnecessary words. Laws, academic research, instruction manuals—entire professions exist solely to interpret and summarize large bodies of text and find the bits relevant to a particular case.
We are so accustomed to this system that we barely question it. We assume that knowledge must be buried in thick books, endless PDFs, or jargon-laden policies, and that extracting value from them is simply the price we pay. The reality is that text, as a technology, is painfully exclusionary. It filters out those who do not have the time, education, or patience to wade through its inefficiencies. The result? A world where information is not truly accessible—it is just available, locked behind barriers of expertise and labor. The problem only grows as information multiplies. We can search now, but to search you need to know what the thing you are looking for is called.
Enter Retrieval-Augmented Generation (RAG). This technology upends the whole premise of reading as a necessary struggle. Instead of requiring humans to sift through dense documents, a RAG-powered AI can scan, understand, and extract the exact information you need. It will understand you even if you are not sure what to look for. No more endless searching, skimming, or cross-referencing. You ask; it finds and explains, at whatever level of difficulty you are comfortable with, in any language.
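For readers curious about the mechanics, here is a minimal sketch of the retrieve-then-generate loop behind RAG. Everything in it is an illustrative stand-in of my own: the bag-of-words "embedding," the toy document list, and the call_llm stub, which in a real system would be a learned embedding model, a vector database, and an actual model API.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; real systems use learned vectors."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank passages by similarity to the query; keep the top k."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for any chat-model API."""
    return f"[model response to:\n{prompt}]"

def answer(query: str, docs: list[str]) -> str:
    """The RAG loop: retrieve relevant passages, then generate from them."""
    context = "\n".join(retrieve(query, docs))
    prompt = (f"Using only the context below, answer the question.\n"
              f"Context:\n{context}\n\nQuestion: {query}")
    return call_llm(prompt)

policies = [
    "Refunds for dropped courses are issued within 30 days.",
    "The library is open until midnight during finals week.",
    "Parking permits are renewed each August.",
]
print(answer("When do I get my refund?", policies))
```

The point of the sketch is the division of labor: the retrieval step finds the few relevant passages so the reader never has to, and the generation step restates them at whatever level the reader needs.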
The applications are obvious. College course materials, legal codes, corporate policies—things we must understand but rarely want to read—can now be accessed through AI assistants that do the heavy lifting. Medical test results, car repair manuals, tax codes—fields where knowledge has traditionally been mediated by experts—become directly intelligible to the people who need them. RAG doesn’t just speed up information retrieval; it removes the gatekeepers.
Despite the significance of this shift, most major AI companies have not fully embraced it. OpenAI is the only major player that has prioritized user-friendly RAG-based tools, allowing everyday users to create and share custom bots. The others—Anthropic, Google Gemini, Meta, Grok, DeepSeek—all offer API-based solutions that cater to developers, not the general public. Gemini allows its paid users to create custom bots but, inexplicably, does not allow them to be shared. It is a strange oversight. The AI race is usually about copying and outpacing competitors, yet here, OpenAI is sprinting ahead while others hesitate.
The gap has created an opportunity. Startups are rushing in to offer the ease of use that the AI giants have neglected, sensing that the true power of AI lies not just in intelligence but in a revolutionary leap in accessibility. AI is, by nature, a democratic technology—relatively cheap, scalable, and available to almost anyone. And yet, its most transformative application—RAG—is still frustratingly out of reach for many.
What we are witnessing is the beginning of a fundamental shift. For centuries, knowledge has been tied to advanced literacy (the ability to create and understand long texts), to institutions, to intermediaries who dictate who gets to understand what. RAG challenges that structure. It does not just improve search; it changes who gets to find answers in the first place. If AI is truly to fulfill its promise, it won't be by making people read faster—it will be by making linear reading largely obsolete. We will always read novels and poems word by word, because humans created art out of the terrible technology of writing. But those are only a small portion of the written information we consume.
Wednesday, March 12, 2025
The Modern Confessional: AI Disclosure as Ritual
The peculiar practice of requiring authors to disclose AI use reveals the creation of a new moral domain around AI in knowledge production. The requirement to disclose functions precisely as Michel Foucault described confessional practices in "The History of Sexuality." Foucault argued that confession itself produces the very notion of sin it purports to address. The act of requiring disclosure creates the impression of transgression where none inherently exists.
Medieval confession did not merely document pre-existing sins - it manufactured them through the very apparatus of confession. Similarly, disclosure requirements around AI use manufacture a transgressive quality around technologies that have no inherent moral valence.
The mechanics operate almost identically. Both create categories of behavior requiring special scrutiny. Both position authority figures as arbiters of acceptability. The confessing subject experiences this manufactured transgression viscerally - the academic disclosing AI use feels compelled toward contrition without clear understanding of what offense they have committed.
Authors find themselves in impossible positions, uncertain how much assistance constitutes meaningful use. Does grammar checking through Microsoft Editor count? What about Grammarly's suggestions? The lack of clear standards transforms disclosure into guesswork.
Rather than focusing on tools, we might evaluate outputs based on established academic standards regardless of production methods. This approach acknowledges that quality, originality and intellectual integrity manifest in final products, not production processes. Technical assistance has always existed across academic work - from statistical software to citation managers to editorial help from colleagues.
Current disclosure requirements function primarily as modern confession, manufacturing transgression through the very apparatus designed to reveal it. By recognizing this dynamic, we might reimagine our approach to technological assistance in ways that foster genuine integrity rather than performative disclosure.
Friday, February 28, 2025
The Invasive Innovation, a Case of AI in Education
Disruptive innovations, as defined by Clayton Christensen, typically emerge at the margins before gradually overtaking established players. In contrast, artificial intelligence in education is not a slow-moving force of competitive displacement. It is an invasive innovation—spreading far beyond the predictions of Everett Rogers' diffusion model, propelled not so much by institutions as by students themselves. This is not a technology waiting to be adopted. It is a reality forcing adaptation. Treating it as an optional innovation misses the point entirely.
Educational leaders, policymakers, and institutions may wish to evaluate AI’s impact, consider best practices, and implement careful frameworks. But they are not in control of the timeline. The adoption is not up for debate. It is already happening, driven by students who see AI not as a tool of the future but as an everyday utility, as natural as search engines or calculators. The question is not when to integrate AI into education, but how to manage an environment where AI is already integrated—often in ways institutions neither designed nor anticipated.
This is why traditional approaches to educational innovations are inadequate. Normally, innovations are assessed, piloted, and then scaled. But AI in education is unfolding more like a crisis than an initiative. It requires a response more akin to a public health emergency than a strategic rollout. When COVID-19 disrupted education, schools did not spend years analyzing whether remote learning was effective. They pivoted overnight, creating imperfect but necessary solutions. The AI invasion demands a similar urgency.
Institutional response to invasive AI must be comprehensive rather than defensive. How do we accommodate this invasive innovation? Putting AI tools directly in the hands of faculty represents a critical first step, ensuring they engage with the same technologies their students already access freely. This must be paired with robust professional development—not merely technical training but genuine pedagogical exploration that helps educators reimagine their disciplinary approaches in an AI-augmented world.
Beyond tools and training, institutions must invest in creating dedicated time and space for disciplinary teams to collaboratively revise their assignments. The goal is not to create "cheat-proof" assessments but to develop authentic tasks that meaningfully incorporate AI as a partner in learning. This demands protected time away from everyday teaching pressures, where faculty can experiment, fail, refine, and document new approaches that acknowledge AI as a permanent fixture in their educational landscape rather than an external threat to be neutralized.
Wednesday, February 26, 2025
Sidorkin Hypothesis: Beyond Procedural Fluency in AI-Augmented Instruction
The hypothesis is this: Quality AI-augmented instruction reduces emphasis on procedural skills while enhancing higher-order thinking and conceptual learning. This shift may offer an alternative pathway for cognitive offloading, which could supplement or even replace traditional procedural skills acquisition.
Cognitive load theory, developed by John Sweller in the 1980s, provides a useful framework. The theory posits that our working memory has limited capacity when dealing with new information. Sweller himself, along with many other cognitive scientists, views fluency in procedural skills as a major mechanism for cognitive offloading. When basic procedures become automatic through practice, they consume fewer working memory resources, thereby freeing up mental capacity for higher-order learning. This is why traditional education emphasizes mastering procedural skills—calculating derivatives, balancing chemical equations, applying grammatical rules—before tackling complex conceptual work.
In my view, AI tools function as an alternative cognitive offloading mechanism that can complement or even supersede traditional procedural fluency. These tools handle procedural tasks, creating cognitive space for students to engage with concepts at a deeper level without first developing procedural automaticity. Consider a student learning calculus: traditionally, substantial mental resources are dedicated to memorizing differentiation rules and practicing their application. With AI assistance handling these procedural aspects, the student can direct cognitive resources toward understanding conceptual foundations—what derivatives actually represent, how they relate to real-world phenomena, and when different approaches should be applied.
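To make the offloading concrete, here is a small sketch of my own (using the sympy symbolic-math library, not anything from the original post): the software applies the differentiation rules, and the student's remaining work is entirely conceptual—interpreting what the result says about the motion.

```python
# The procedural step (applying differentiation rules) is offloaded
# to software; the student's job is to interpret the result.
import sympy as sp

t = sp.symbols('t')
position = t**3 - 6*t**2 + 9*t     # an object's position over time
velocity = sp.diff(position, t)    # the machine applies the rules
stops = sp.solve(velocity, t)      # when does the object pause?

print(velocity)  # 3*t**2 - 12*t + 9
print(stops)     # [1, 3] -- the conceptual question: what happens at these moments?
```

The procedural answer arrives instantly; the questions worth class time—why the object pauses at those moments, and what a zero derivative means—remain for the student.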
This approach necessitates reimagining assessment. Traditional writing instruction heavily rewards procedural accuracy in grammar, citation formats, and structural conventions, but in an AI-augmented educational landscape, assessment must evolve to measure the ability to prompt AI tools effectively, the quality of critical analysis, application of rhetorical strategies in novel contexts, and evaluation of AI-generated text. The focus shifts from "Can you properly format this essay?" to "Can you determine which rhetorical approach best suits your purpose, interpret complex texts critically, and evaluate whether the AI-generated analysis captures the deeper meaning of the work being examined?"
This transformation faces substantial challenges. Educators rightfully question whether students can truly grasp concepts without some level of procedural fluency. It is likely that some basic procedural knowledge remains necessary for conceptual understanding. The optimal approach involves thoughtful integration—using AI to reduce procedural load while ensuring students develop core competencies. To put it roughly: procedural knowledge, yes; procedural fluency, not really.
The equalizing potential of AI-augmented instruction cannot be overlooked. Students who traditionally struggle with procedural aspects—whether due to learning differences, educational disruptions, or lack of prior opportunities—may find themselves able to participate more fully in higher-order learning. This democratization of advanced conceptual engagement could elevate educational outcomes across diverse student populations. We stand at the threshold of potentially transforming education from a system that sorts students based on procedural proficiency to one that cultivates universally richer, more advanced learning outcomes: deeper critical analysis, more sophisticated problem formulation, nuanced ethical reasoning, and creative synthesis across domains—intellectual capabilities that more authentically reflect the complex demands of contemporary society.
Saturday, February 22, 2025
On Techno-Utopianism: Elon Musk and the Soul of Education
The recent video of Elon Musk promising AI teachers reveals a common misunderstanding among technology leaders. They see education primarily as information transfer and skills training, where an infinitely patient AI system delivers perfectly tailored content to each student. This viewpoint ignores the fundamental nature of education as a relational institution.
Since Gutenberg's invention of the printing press, motivated individuals could teach themselves almost anything. Libraries contain more knowledge than any single teacher. Yet most people do not turn into autodidacts. Why is that? The question is not how to make knowledge more accessible, but why people choose to engage with it.
Teachers generate reasons to learn through two main approaches. In more constructivist settings, they inspire curiosity and create engaging problems to solve. In more traditional schools, they maintain authority and discipline. In most schools, there is a mixture of both. Both methods work because they establish a social framework for learning. A good teacher knows when to push and when to comfort, when to explain and when to let students struggle.
The comparison of AI to Einstein as a teacher misses the point. Teaching requires different qualities than scientific genius - the capacity to enter a relationship, to create meaningful connections, and to help students discover their own reasons for learning. An AI system, no matter how knowledgeable, cannot do any of that.
Students often study not because they find the subject inherently fascinating, but because they respect their teacher, want to belong to a learning community, or seek to fulfill social expectations. Even negative motivations like fear of disappointing others have a distinctly human character.
The techno-utopian vision reduces learning to information exchanges and skill assessments. This mechanistic view fails to account for the social and emotional dimensions of human development. While AI can enhance teaching by handling routine tasks, it cannot replace the essential human relationships that drive educational engagement. The future of education lies not in perfecting content delivery algorithms, but in strengthening the relational foundations of learning.
Such overblown promises about AI in education do more harm than good. They create unnecessary anxiety among teachers and administrators, leading to resistance against even modest technological improvements. Instead of addressing real challenges in education - student engagement, equitable access, and meaningful assessment - institutions get distracted by unrealistic visions of AI-driven transformation. We need a more balanced approach that recognizes both the potential and limitations of AI in supporting, not replacing, the fundamentally human enterprise of education.
Thursday, February 20, 2025
The AI Recruiter Will See You Now
The tidy world of job applications, carefully curated CVs and anxious cover letters may soon become a relic. Every professional now leaves digital traces across the internet - their work, opinions, and achievements create detailed patterns of their capabilities. Artificial Intelligence agents will soon navigate these digital landscapes, transforming how organizations find talent.
Unlike current recruitment tools that passively wait for queries, these AI agents will actively explore the internet, following leads and making connections. They will analyze not just LinkedIn profiles, but candidates' entire digital footprint. The approach promises to solve a persistent problem in recruitment: finding qualified people who are not actively job-hunting.
The matching process will extend beyond technical qualifications. Digital footprints reveal working styles and professional values. A cybersecurity position might require someone who demonstrates consistent risk awareness; an innovation officer role might suit someone comfortable with uncertainty. AI agents could assess such traits by analyzing candidates' professional communications and public activities.
Yet this technological advance brings fresh concerns. Privacy considerations demand attention - while AI agents would analyze public information, organizations must establish clear ethical guidelines about data usage. More fundamentally, AI agents must remain sophisticated talent scouts rather than final decision makers. They can gather evidence and make recommendations, but human recruiters must evaluate suggestions within their understanding of organizational needs.
The transformation suggests a future where talent discovery becomes more equitable. AI agents could help overcome human biases by focusing on demonstrated capabilities rather than credentials or connections. The winners will be organizations that master this partnership between artificial intelligence and human judgment. The losers may be traditional recruitment agencies - unless they swiftly adapt to the new reality.