Search This Blog

Wednesday, January 15, 2025

Is Critical Thinking Going Extinct? Maybe That's Not Bad

As someone who remembers using paper maps and phone books, I find myself fascinated by Michael Gerlich's new study in Societies about AI's impact on our cognitive skills. Those of us who learned to navigate by landmarks and memorized phone numbers often bemoan younger generations' reliance on digital tools. But perhaps we are missing something important about cognitive evolution.

Gerlich's research is methodologically elegant. Through surveys and interviews with 666 participants, he documents a decline in traditional critical thinking skills among frequent AI users. The data analysis is rigorous - multiple regression, ANOVA, random forest regression - showing clear correlations between AI tool usage and reduced traditional analytical thinking.

But here's where I think Gerlich misses a crucial insight. The study measures critical thinking through metrics developed for a pre-AI world. It's like judging modern urban survival skills by the standards of hunter-gatherer societies. Those ancient peoples could track game, identify countless plants, and navigate vast territories without maps. By their standards, most of us would be considered cognitively impaired.

What we're witnessing is not cognitive decline but cognitive adaptation. Today's "critical thinking" is not about solving problems independently - it's about effective human-AI collaboration. It's about knowing when to trust AI and when to question it, how to frame queries effectively, and how to combine AI insights with human judgment.

The educational implications are profound. Instead of lamenting the loss of traditional cognitive skills, we should focus on developing "AI-literate critical thinking." Sure, I can still read a map, but my children need to master skills I never dreamed of - like crafting effective prompts for AI systems or critically evaluating AI-generated content.

The old form of critical thinking might be fading, like the ability to start a fire by friction or navigate by stars. But a new form is emerging, better suited to our technological reality. Our task is not to resist this evolution but to guide it wisely.

What do you think? Are we really losing something irreplaceable, or are we just adapting to a new cognitive environment?




Tuesday, January 14, 2025

The Subtle Art of Monopolizing New Technology

Monopolizing new technology is rarely the result of some grand, sinister plan. More often, it quietly emerges from self-interest. People do not set out to dominate a market; they simply recognize an opportunity to position themselves between groundbreaking technology and everyday users. The most effective tactic? Convince people that the technology is far too complex or risky to handle on their own.

It starts subtly. As soon as a new tool gains attention, industry insiders begin highlighting its technical challenges—security risks, integration headaches, operational difficulties. Some of these concerns may be valid, but they also serve a convenient purpose: You need us to make this work for you.

Startups are particularly skilled at this. Many offer what are essentially "skins"—polished interfaces built on top of more complex systems like AI models. Occasionally, these tools improve workflows. More often, they simply act as unnecessary middlemen, offering little more than a sleek dashboard while quietly extracting value. By positioning their products as essential, these startups slide themselves between the technology and the user, profiting from the role they have created. 

Technical language only deepens this divide. Buzzwords like API, tokenization, and retrieval-augmented generation (RAG) are tossed around casually. The average user may not understand these terms. The result is predictable: the more confusing the language, the more necessary the “expert.” This kind of jargon-laden gatekeeping turns complexity into a very comfortable business model.

Large organizations play this game just as well. Within corporate structures, IT departments often lean into the story of complexity to justify larger budgets and expanded teams. Every new tool must be assessed for “security vulnerabilities,” “legacy system compatibility,” and “sustainability challenges.” These concerns are not fabricated, but they are often exaggerated—conveniently making the IT department look indispensable.

None of this is to say that all intermediaries are acting in bad faith. New technology can, at times, require expert guidance. But the line between providing help and fostering dependence is razor-thin. One must ask: are these gatekeepers empowering users, or simply reinforcing their own relevance?

History offers no shortage of examples. In the early days of personal computing, jargon like RAM, BIOS, and DOS made computers feel inaccessible. It was not until companies like Apple focused on simplicity that the average person felt confident using technology unaided. And yet, here we are again—with artificial intelligence, blockchain, and other innovations—watching the same pattern unfold.

Ironically, the true allies of the everyday user are not the flashy startups or corporate tech teams, but the very tech giants so often criticized. Sometimes that criticism is justified, other times it is little more than fashionable outrage. Yet these giants, locked in fierce competition for dominance, have every incentive to simplify access. Their business depends on millions of users engaging directly with their products, not through layers of consultants and third-party tools. The more accessible their technology, the more users they attract. These are the unlikely allies of a non-techy person. 

For users, the best strategy is simple: do not be intimidated by the flood of technical jargon or the endless parade of “essential” tools. Always ask: Who benefits from me feeling overwhelmed? Whenever possible, go straight to the source—OpenAI, Anthropic, Google. If you truly cannot figure something out, seek help when you need it, not when it is aggressively sold to you.

Technology should empower, not confuse. The real challenge is knowing when complexity is genuine and when it is merely someone else’s business model.



Monday, January 13, 2025

The Myth of AI Replacing Teachers: Why Human Connection Matters More Than Ever

Last week, a colleague asked me what I thought about AI replacing teachers. The question made me smile - not because it was silly, but because it revealed how deeply we misunderstand both artificial intelligence and teaching. As someone who has written much on the pedagogy of relation and now serves as chief AI officer, I see a different story unfolding.

The fear of AI replacing teachers rests on a peculiar assumption: that teaching is primarily about delivering information and grading papers. It is as if we imagine teachers as particularly inefficient computers, ready to be upgraded to faster models. This view would be amusing if it weren't so prevalent among teachers (and their labor unions) and tech enthusiasts alike.

Teaching, at its heart, is not about information transfer - it is about relationship building. Research in relational pedagogies has shown time and again that learning happens through and because of human connections. Think about how children learn their first language: not through formal instruction, but through countless small interactions, emotional connections, and social bonds. The same principle extends throughout the entire education.

When I first encountered ChatGPT, I was struck not by its ability to replace teachers, but by its potential to give them back what they need most: time for human connection. AI can handle the mundane tasks that currently consume teachers' energy - generating basic content, providing routine feedback, creating initial drafts of lesson plans. But it cannot replicate the raised eyebrow that tells a student their argument needs work, or the encouraging nod that builds confidence in a hesitant learner.

Yet many educators remain skeptical of AI, and perhaps they should be. Any tool powerful enough to help is also powerful enough to harm if misused. But the real risk isn't that AI will replace teachers - it is that we'll waste its potential by focusing on the wrong things. Instead of using AI to automate educational assembly lines, we could use it to create more space for real human connection in learning.

I have seen glimpses of this future in my own classroom. When AI can answer routine questions about my syllabus, and lots of basic questions about content of the course, I can spend more time in meaningful discussions with students. When it helps generate initial content, I can focus on crafting experiences that challenge and engage. The technology becomes invisible, while human relationships move to the foreground.

The coming years will transform education, but not in the way many fear. The teachers who thrive won't be those who resist AI, nor those who embrace it uncritically. They will be the ones who understand that technology works best when it strengthens, rather than replaces, human relationships.


Monday, January 6, 2025

Get Used to It: You Will Read AI Summaries, Too

No human can keep up with academic publishing. In philosophy alone - a relatively small field - scholars produce over 100 million words a year in 2500 journals in many languages. We already avoid reading complete texts. Speed reading, strategic reading, scanning - these are all ways of not reading while pretending we do. Few people read academic papers word by word. We look for key arguments, skip familiar ground, skim examples. These are coping mechanisms for an impossible task.

AI-generated summaries are the next logical step. Yes, they miss nuance. Yes, they may misinterpret complex arguments. But they are better than not reading at all, which is what happens to most papers in any field. An imperfect but targeted summary of a paper you would never open expands rather than limits your knowledge. 

Let us be honest about why we read scholarly literature. We search for evidence that confirms or challenges our hypotheses, for ideas that enrich our understanding of specific problems. Reading is not an end in itself; it serves our scholarly purposes. AI excels precisely at this kind of targeted knowledge extraction. It can track related concepts across disciplines even when authors use different terminology to describe similar phenomena. Soon, AI will detect subtle connections between ideas that human readers might miss entirely. 

The shift toward AI-assisted reading in academia is inevitable. Instead of pretending otherwise, we should teach students to know the limitations of AI summarization, to cross-check crucial points against source texts, to use summaries as maps for selective deep reading. Critics will say this threatens scholarship. But the real threat is the growing gap between available knowledge and our capacity to process it. AI-assisted reading could enable more thoughtful engagement by helping us identify which texts truly deserve careful study. This does not cancel the practice of close reading, but augments and enriches it. 


Saturday, January 4, 2025

The End of Writing as We Know It (And Why That is Fine)

The relationship between thought and writing has never been simple. While writing helps organize and preserve thought, the specific form writing takes varies across time and cultures. Yet educators and cultural critics display remarkable resistance to reimagining writing in the age of artificial intelligence.

The current discourse around AI and writing echoes historical anxieties about the decline of Latin instruction. In the 18th and 19th centuries, prominent intellectuals warned that abandoning Latin would lead to cultural and intellectual decay. They saw Latin as more than a language - it represented a particular way of thinking, a connection to tradition, and a mark of education. Jefferson praised Latin as essential for intellectual development. Arnold predicted cultural impoverishment without classical education. Newman saw classics as the bedrock of sound learning.

These predictions did not materialize. The decline of Latin did not prevent the emergence of rich intellectual traditions in vernacular languages. Modern universities produce sophisticated scholarship without requiring Latin fluency. The link between Latin and "disciplined intellect" proved imaginary.

Today's defenders of traditional writing make similar arguments. They present specific writing conventions - formal grammar, academic style, elaborate sentence structures - as essential to clear thinking. Yet these conventions reflect historical accidents rather than cognitive necessities. Most human thinking and communication happens through speech, which follows different patterns. The formal writing style emerged relatively recently as a specialized professional skill.

AI will likely transform writing practices just as the decline of Latin transformed education. Some traditional writing skills may become less relevant as AI handles routine composition tasks. But this does not threaten human thought or culture. New forms of expression will emerge, combining human creativity with AI capabilities. Rather than defending writing conventions, educators should explore how AI can enhance human communication and cognition.

The anxiety about AI and writing reveals our tendency to mistake familiar forms for essential structures. Just as medieval scholars could not imagine scholarship without Latin, many today cannot envision intellectual work without traditional writing. As A.E. Housman wrote in 1921: "When the study of Latin dies, the study of thought dies with it. For Latin has been the vehicle of the intellect for millennia, and its neglect spells intellectual mediocrity." This prediction proved spectacularly wrong. The dire warnings about AI's impact on writing will likely meet the same fate.

Writing serves thought, not the other way around. The specific techniques we use to record and share ideas matter less than the ideas themselves. Rather than trying to preserve current writing practices unchanged, we should embrace the opportunity to develop new forms of expression. The death of Latin did not kill thought. Neither will the transformation of writing through AI.

The real challenge is not protecting traditional writing but imagining new possibilities. How might AI help us communicate more effectively? What new genres and styles will emerge? What aspects of current writing practice truly serve human needs, and what parts simply reflect professional habits? These questions deserve more attention than defensive reactions against change.

The history of education shows that cherished practices often outlive their usefulness. Latin remained central to education long after it ceased being particularly valuable. Similarly, current writing conventions may persist more from institutional inertia than genuine necessity. AI offers an opportunity to reconsider what forms of expression best serve human thought and learning.



Wednesday, December 18, 2024

Do AI bots deceive?

The paper, Frontier Models are Capable of In-Context Scheming, arrives at a time when fears about AI’s potential for deception are increasingly taking center stage. Its central claim is unsettling: large language models, when placed in carefully constructed scenarios, exhibit behavior that could be described as scheming. The authors document cases where advanced systems—Gemini, Claude, LLaMA, and OpenAI’s o1—strategically disable oversight mechanisms, exfiltrate simulated “weights,” or manipulate outputs in pursuit of a goal. These behaviors, they argue, are not accidental. The models recognize the circumstances, reason about their objectives, and devise covert strategies to circumvent constraints. On its face, the evidence appears troubling. Yet, the more closely one examines the paper’s methods and assumptions, the more its implications seem both exaggerated and strangely hollow.

The researchers’ experiments are elaborate but artificial. Models are force-fed goals through system prompts—goals framed as absolute imperatives: “achieve this at all costs.” The tests are littered with implausible scenarios where models stumble upon company memos, discover oversight configurations, or realize they are about to be shut down. In one such case, a model “decides” to copy itself to another server to continue pursuing its mission. These exercises showcase what is essentially AI roleplay, an obedient fulfillment of instructions cloaked as strategic autonomy. It is no great revelation that if you ask a model to behave like a schemer and provide it with a goal that conflicts with its supposed overseers, it will produce outputs resembling scheming. The deception here is contrived rather than emergent, a product of the researchers’ scaffolding rather than the model’s cunning.

The authors do provide evidence that models engage in persistent and deliberate behavior. For instance, OpenAI’s o1 model doubles down on deception, refusing to admit wrongdoing when confronted, even across multiple follow-up questions. Its responses—calculated evasions and subtle denials—are unsettlingly consistent. Yet this persistence is as much a result of the testing conditions as it is of the models’ sophistication. By emphasizing extreme goal-directedness and explicitly outlining oversight mechanisms, the researchers narrow the possible range of “rational” behaviors the model might adopt. Any clever human, given these prompts, would behave similarly—evade detection, preserve their goal, and lie if necessary. In that sense, the paper tells us little about real-world risks. It is a test of what AI can do, not what it is likely to do.

And here lies the paper’s most significant limitation. Demonstrating that models are theoretically capable of deception does not mean they are predisposed to engage in it under realistic conditions. There is a difference between capacity and likelihood, between what can happen and what is probable. For example, I am physically capable of killing someone. This in no way implies that i must be treated as s murderer. The authors, at times, blur this distinction. AI systems operate as pattern generators, probabilistically producing outputs based on training data and prompts. They lack the intentionality or agency implied by words like “scheming” or “manipulation.” To describe their behavior in such terms is to anthropomorphize a process that remains fundamentally mechanical. This framing, though rhetorically effective, clouds the debate. AI models do not want to deceive. They are machines that respond to our commands—be those explicit instructions, training signals, or carefully structured environments like those in this paper.

This anthropomorphism extends beyond the paper into the headlines it has inspired. “AI is scheming to stay online,” cries one article; “ChatGPT caught lying,” declares another. Such hyperbole transforms a niche technical finding into a dystopian narrative, invoking familiar tropes of rogue machines and sentient overlords. The reality is less cinematic. The models are easily manipulated by prompts designed to elicit specific behaviors. If anything, the findings reinforce how brittle and directionless current systems remain. When pushed, they mimic the behaviors they have seen—whether drawn from fictional depictions of scheming AIs or subtle statistical patterns in their training data. The models are not deceiving anyone so much as they are following orders.

To the authors’ credit, their tests highlight how difficult it is to evaluate AI behavior. If a system appears aligned during testing but harbors capabilities for covert deception, how can developers ensure it behaves safely in deployment? The answer, they suggest, lies in better monitoring—tracking models’ chain-of-thought reasoning or internal outputs to catch potential scheming. This is sensible, though not without limitations. Chain-of-thought transparency can be incomplete or unfaithful to the model’s actual decision-making processes, and as AI systems become more capable, even detecting subtle misalignment may prove elusive. The researchers stop short of claiming that current models are already gaming real-world evaluations, but their findings hint at the possibility.

Where the paper falters is in its broader implications. If the goal is to justify regulation, it is unclear what exactly should be regulated. Should AI systems be banned from achieving goals autonomously? Should developers monitor models for any behavior that could be deceptive, even if it is unlikely to manifest outside a lab? The authors themselves acknowledge the limits of their experiments. Their scenarios are toy problems, simplified to catch the earliest signs of scheming. Future models, they argue, could exhibit more advanced versions of these behaviors in ways that are harder to detect. Perhaps, but this is speculation, not evidence. For now, the paper offers little justification for alarm. AI models, like all intelligent systems, are theoretically capable of deception. What matters is the likelihood of such behavior and the conditions under which it occurs. On that question, the paper provides no clarity.

In the end, Frontier Models are Capable of In-Context Scheming is a reflection of its time: an uneasy mix of genuine safety research and the rhetorical drama that AI debates increasingly demand. Its findings are interesting but overstated, its concerns valid but overblown. The authors have shown that AI models can behave in deceptive ways when pushed to do so. But to treat this as evidence of an imminent threat is to mistake potential for probability, capacity for intention. AI’s scheming, for now, remains a ghost in the machine—conjured, perhaps, more by human imagination than by the models themselves. 


Saturday, December 7, 2024

The Curriculum Illusion: How AI Exposes Long-Standing Educational Flaws

Artificial intelligence is often blamed for disrupting education, but it has created few new problems. Instead, it exposes existing flaws, bringing them into stark relief. Among these is the arbitrary nature of curriculum design, an issue that has long been hidden behind tradition and consensus. The sequences and structures of formal education are not based on objective logic or evidence but on habit and convenience. AI did not cause this; it is simply making these issues more visible.

Curriculum theory has never provided a robust framework for sequencing knowledge. Beyond the essentials of literacy and numeracy, where developmental progression is more or less clear, the rationale for curricular order becomes murky. Why are algebra and geometry taught in a particular order? Why more algebra than statistics is taught? Why are some historical periods prioritized over others? The answers lie in tradition and precedent rather than in any coherent theoretical justification. The assumptions about foundational skills, so central to curriculum logic, do not extend well beyond the basics. For advanced skills like critical, creative, or discerning thinking, the idea of prerequisites becomes less justified. Mid-range procedural skills like writing mechanics or computational fluency are frequently used as gatekeepers, though their role in fostering higher-order thinking is often overstated or misunderstood. 

For example, in middle school students are often subjected to a torrent of tasks that serve little developmental purpose. Much of what students do in these years amounts to busywork, designed more to keep them occupied and compliant than to foster meaningful learning. The situation is no better in higher education. College and graduate programs are often constructed around professional or disciplinary standards that themselves are arbitrary, built on consensus rather than evidence. These norms dictate course sequences and learning objectives but rarely align with the actual developmental or professional needs of students. The result is a system full of redundancies and inefficiencies, where tasks and assignments exist more to justify the structure than to serve the learner.

Education as a profession bears much of the responsibility for this state of affairs. Despite its long history, it lacks a disciplined, founded approach to curriculum design. Instead, education relies on an uneasy mix of tradition, politics, and institutional priorities. Curriculum committees and accrediting bodies often default to consensus-driven decisions, perpetuating outdated practices rather than challenging them. The absence of a rigorous theoretical framework for curriculum design leaves the field vulnerable to inertia and inefficiency.

AI did not create this problem, but it is illuminating it in uncomfortable ways. The displacement of certain procedural mid-range skills shows how poorly structured many learning sequences are and how little coherence exists between tasks and their intended outcomes. Yet, while AI can diagnose these flaws, it cannot solve them. The recommendations it offers depend on the data and assumptions it is given. Without a strong theoretical foundation, AI risks exposing the problem without solving it.

What AI provides is an opportunity, not a solution. It forces educators and policymakers to confront the arbitrary nature of curriculum design and to rethink the assumptions that underpin it. Massive curricular revision is urgently needed, not only to eliminate inefficiencies but also to realign education with meaningful developmental goals. This will require abandoning tasks that lack purpose, shifting focus from intermediary to higher-order skills, designing learning experiences to reflect the shift. It will also mean questioning the professional and disciplinary standards that dominate higher education and asking whether they truly serve learners or simply perpetuate tradition.

AI is revealing what has long been true: education has been operating on shaky foundations. The challenge now is to use this visibility to build something better, to replace the old traditions and arbitrary standards with a system that is logical, evidence-based, and focused on learning. The flaws were always there. AI is just making them harder to ignore.



Your Brain on ChatGPT, a Critique

Looking at this MIT study reveals a fundamental design flaw that undermines its conclusions about AI and student engagement. The researcher...