Monday, February 10, 2025

Form-substance discrimination, a new learning outcome

We have long assumed that clear writing signals clear thinking. Schools teach writing as if it were math - a rigorous exercise that develops logical thinking. Editors catch not just errors but muddled ideas. Or so the theory goes. Artificial intelligence shatters this comfortable assumption by churning out impeccably structured prose devoid of original thought.

Form-substance discrimination represents a higher-order cognitive skill, similar to what art historians develop when they learn to separate technical mastery from creative vision. Just as an art student must unlearn their initial attraction to photorealistic paintings, readers now must develop resistance to seductively clear prose. This requires a kind of cognitive inhibition - the ability to suppress immediate aesthetic pleasure for the sake of deeper analysis.

The skill builds on existing metacognitive abilities but requires their novel application. Readers already know how to identify main ideas, analyze arguments, and evaluate evidence. What is new is the need to perform these operations while actively discounting the surface appeal of the text. This resembles what wine tasters do when they evaluate wines blind, stripped of prestigious labels and beautiful bottles.

The development follows a predictable pattern. At first, readers struggle to overcome their ingrained respect for well-crafted prose. The initial challenge lies not in identifying weak ideas but in giving oneself permission to criticize a text that follows all the rules of good writing. This mirrors the development of critical thinking in general, where students must learn to question authority figures who appear competent and confident.

The second stage involves developing specific techniques for idea extraction. Readers learn to create idea maps independent of the text's structure, to count unique concepts rather than words, and to identify circular arguments hidden behind elegant transitions. They begin to see how AI-generated text often creates an illusion of logical flow while merely restating the same point in different words.
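One crude way to operationalize "counting unique concepts rather than words" is a lexical novelty ratio: how many distinct content words a text introduces relative to its total content words. This is only a rough proxy for idea density, not a validated measure; the tokenizer and the tiny stopword list below are illustrative choices, not a standard.

```python
import re

# Minimal illustrative stopword list; a real analysis would use a fuller one.
STOPWORDS = {
    "the", "a", "an", "and", "or", "but", "of", "to", "in", "on", "for",
    "is", "are", "was", "were", "be", "it", "this", "that", "with", "as",
}

def concept_density(text: str) -> float:
    """Ratio of distinct content words to total content words.

    Text that restates one point in different sentence frames scores low;
    text that keeps introducing new terms scores high.
    """
    words = [w.lower() for w in re.findall(r"[A-Za-z']+", text)]
    content = [w for w in words if w not in STOPWORDS]
    if not content:
        return 0.0
    return len(set(content)) / len(content)

# A paragraph that circles one claim scores lower than one of similar
# length that moves through distinct ideas.
repetitive = "Clarity matters. Clarity really matters. Clarity matters a lot."
varied = "Clarity helps readers. Evidence persuades skeptics. Structure guides attention."
```

A reader would not compute this by hand, of course; the point is that "elegant but empty" has a measurable signature that practice teaches the eye to detect.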

The final stage brings automaticity. Experienced readers develop an immediate sense of a text's intellectual weight, just as experienced teachers can quickly gauge a student's understanding despite fluent recitation. This involves pattern recognition built through exposure to many examples of both substantive and hollow texts.

The educational implications are significant. Writing instruction must now explicitly separate craft from content. Students need exposure to both well-written texts with weak ideas and poorly written texts with strong ideas. They must practice identifying when sophisticated language masks conceptual poverty and when rough expression contains genuine insight.

This shift parallels broader changes in how we process information. In a world of information abundance, the key skill is no longer finding or producing well-formed content but discerning its value. Form-substance discrimination represents a specific case of this general challenge - learning to navigate a world where traditional quality signals no longer reliably indicate underlying worth.

The skill matters beyond academia. Business leaders reading AI-generated reports, citizens evaluating political arguments, professionals studying their field's literature - all need to separate rhetorical sophistication from intellectual contribution. As AI writing tools improve, this ability will become as fundamental as basic literacy.

We face a paradox: the better AI becomes at writing, the more important it becomes for humans to see through good writing. The very perfection of AI-generated prose creates a new kind of opacity that readers must learn to penetrate. Form-substance discrimination thus emerges not just as an academic skill but as a key component of modern critical thinking.



Sunday, February 9, 2025

AI and Labor: A Smarter Path Forward

Trade unions face a defining moment. Artificial intelligence presents genuine concerns about job displacement, yet the response need not mirror historical patterns of resistance to technological change. The Luddite movement of the 1810s serves as a cautionary tale - their destruction of mechanized looms neither preserved jobs nor improved workers' conditions. All technology affects labor; that is what technology is: assistance with work.

The automation paradox offers a more nuanced perspective. While machines replace specific tasks, they generate new forms of work. The introduction of automated teller machines in banking led to more bank branches and tellers performing complex customer service roles. This pattern repeats across industries - automation reduces costs, expands services, and creates different job categories.

Labor leaders would serve their members better by negotiating robust transition arrangements. Key demands should include employer-funded retraining programs, preferential access to new positions, and compensation packages that recognize acquired skills. The focus must shift from preventing change to shaping its implementation.

The pace of AI integration varies significantly by sector. Manufacturing and data processing may see rapid adoption, but industries built on human relationships - education, healthcare, social work - will incorporate AI gradually as assistive technology. Complex organizations require extensive testing and workflow redesign before meaningful automation becomes feasible.

Economic history demonstrates that reduced production costs expand economic activity. When basic tasks become automated, human attention shifts to more sophisticated problems. The telephone eliminated telegraph operators but created vast new communication industries. Similarly, AI will likely automate routine cognitive work while opening possibilities in areas we have not yet imagined.

Unions retain significant leverage during this transition. Organizations need experienced workers to implement new technologies effectively. This position allows labor to negotiate favorable terms - extended notice periods, substantial retraining budgets, wage protection during transition, and clear paths to higher-skilled roles.

The key lies in recognizing AI as a tool for augmentation rather than pure replacement. A machine learning system may process medical images faster than radiologists, but interpreting results in complex cases still requires human judgment. Similar patterns will emerge across professions - AI handling routine tasks while humans focus on nuanced decision-making and interpersonal elements.

Rather than resist change, unions should position themselves as partners in managing transition. This approach preserves their relevance and better serves member interests. The alternative - attempting to prevent AI adoption - risks marginalization as companies seek ways around opposition or relocate to more amenable jurisdictions.

The challenge for labor leadership is to shift from defensive postures to proactive engagement. This means developing expertise in emerging technologies, identifying opportunities for worker advancement, and ensuring transition arrangements protect vulnerable members while facilitating adaptation to changing workplace demands.



Tuesday, February 4, 2025

Augmented Problem Finding: The Next Frontier in AI Literacy

In my recent blog post on task decomposition as a key AI skill, I highlighted how breaking down complex problems enables effective human-AI collaboration. Yet before we can decompose a task, we must identify which problems are worth pursuing - a skill that takes on new dimensions in the age of AI.

This ability to recognize solvable problems expands dramatically with AI tools at our disposal. Tasks once considered too time-consuming or complex suddenly become manageable. The cognitive offloading that AI enables does not just help us solve existing problems - it fundamentally reshapes our understanding of what constitutes a tractable challenge.

Consider how VisiCalc transformed financial planning in the early 1980s. Initially seen as a mere automation tool for accountants, it revolutionized business planning by enabling instant scenario analysis. Tasks that would have consumed days of manual recalculation became instantaneous, allowing professionals to explore multiple strategic options and ask "what if" questions they would not have contemplated before. Similarly, AI prompts us to reconsider which intellectual tasks we should undertake. Writing a comprehensive literature review might have once consumed months; with AI assistance, scholars can now contemplate more ambitious syntheses of knowledge.

This expanded problem space creates its own paradox. As more tasks become technically feasible, the challenge shifts to identifying which ones merit attention. The skill resembles what cognitive psychologists call "problem finding," but with an important twist. Traditional problem finding focuses on identifying gaps or needs. Augmented problem finding requires understanding both human and AI capabilities to recognize opportunities in this enlarged cognitive landscape.

The distinction becomes clear in professional settings. Experienced AI users develop an intuitive sense of which tasks to delegate and which to tackle themselves. They recognize when a seemingly straightforward request actually requires careful human oversight, or when an apparently complex task might yield to well-structured AI assistance. This judgment develops through experience but could be taught more systematically.

The implications extend beyond individual productivity. Organizations must now cultivate this capacity across their workforce. The competitive advantage increasingly lies not in having access to AI tools - these are becoming ubiquitous - but in identifying novel applications for them. This explains why some organizations extract more value from AI than others, despite using similar technologies.

Teaching augmented problem finding requires a different approach from traditional problem-solving instruction. Students need exposure to varied scenarios where AI capabilities interact with human judgment. They must learn to recognize patterns in successful AI applications while developing realistic expectations about AI limitations. Most importantly, they need practice in identifying opportunities that emerge from combining human and machine capabilities in novel ways.

The skill also has ethical dimensions. Not every task that can be automated should be. Augmented problem finding includes judging when human involvement adds necessary value, even at the cost of efficiency. It requires balancing the technical feasibility of AI solutions against broader organizational and societal impacts.

As AI capabilities evolve, this skill will become increasingly crucial. The future belongs not to those who can best use AI tools, but to those who can best identify opportunities for their application. This suggests a shift in how we think about AI literacy - from focusing on technical proficiency to developing sophisticated judgment about when and how to engage AI capabilities.

The automation paradox that Lisanne Bainbridge identified in her 1983 analysis of industrial systems points to an interesting future. As we become more adept at augmented problem finding, we discover new challenges that merit attention. This creates a virtuous cycle of innovation, where each advance in AI capability opens new frontiers for human creativity and judgment.

Perhaps most intriguingly, this skill might represent a distinctly human advantage in the age of AI. While machines excel at solving well-defined problems, the ability to identify worthy challenges remains a uniquely human capability. By developing our capacity for augmented problem finding, we ensure a meaningful role for human judgment in an increasingly automated world.



Saturday, February 1, 2025

Task Decomposition, a core AI skill

The effective use of artificial intelligence depends on our ability to structure problems in ways that align with both human and machine capabilities. While AI demonstrates remarkable computational abilities, its effectiveness relies on carefully structured input and systematic oversight. This suggests that our focus should shift toward understanding how to break down complex tasks into components that leverage the respective strengths of humans and machines.

Task decomposition - the practice of breaking larger problems into manageable parts - predates AI but takes on new significance in this context. Research in expertise studies shows that experienced problem-solvers often approach complex challenges by identifying distinct components and their relationships. This natural human tendency provides a framework for thinking about AI collaboration: we need to recognize which aspects of a task benefit from computational processing and which require human judgment.

The interaction between human users and AI systems appears to follow certain patterns. Those who use AI effectively tend to approach it as a collaborative tool rather than a complete solution. They typically work through multiple iterations: breaking down the problem, testing AI responses, evaluating results, and adjusting their approach. This mirrors established practices in other domains where experts regularly refine their solutions through systematic trial and error.

Consider the task of writing a research paper. Rather than requesting a complete document from AI, a more effective approach involves breaking down the process: developing an outline, gathering relevant sources, analyzing specific arguments, and integrating various perspectives. Similarly, in data analysis, success often comes from methodically defining questions, selecting appropriate datasets, using AI for initial pattern recognition, and applying human expertise to interpret the findings.
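The research-paper example above can be made concrete as a data structure: each subtask records whether it is delegated to AI, kept with a human, or handled jointly. The assignments shown are judgment calls sketched for illustration, not a prescription, and the `Subtask` type is invented for this example.

```python
from dataclasses import dataclass
from typing import Literal

Owner = Literal["ai", "human", "joint"]

@dataclass
class Subtask:
    name: str
    owner: Owner
    note: str

# One possible decomposition of "write a research paper".
# The owner assignments are defensible defaults, not fixed rules.
paper = [
    Subtask("draft outline", "ai", "fast to generate, cheap to discard"),
    Subtask("gather sources", "joint", "AI suggests, human verifies each one"),
    Subtask("analyze key arguments", "human", "requires domain judgment"),
    Subtask("integrate perspectives", "joint", "AI drafts transitions, human checks logic"),
    Subtask("final review", "human", "accountability stays with the author"),
]

# Every subtask not fully delegated is a point of human oversight.
human_checkpoints = [t.name for t in paper if t.owner != "ai"]
```

Writing the decomposition down this explicitly is itself the exercise: it forces a decision, per component, about where computational processing ends and human judgment begins.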

This collaborative approach serves two purposes. First, it helps manage complexity by distributing cognitive effort across human and machine resources. Second, it maintains human oversight of the process while benefiting from AI's computational capabilities. The goal is not to automate thinking but to enhance it through structured collaboration.

Current educational practices have not yet fully adapted to this reality. While many institutions offer technical training in AI or discuss its ethical implications, fewer focus on teaching systematic approaches to human-AI collaboration. Students need explicit instruction in how to break down complex tasks and document their decision-making processes when working with AI tools.

To address this gap, educational programs could incorporate several key elements:

  1. Practice in systematic task analysis and decomposition
  2. Training in structured approaches to AI interaction
  3. Documentation of decision-making processes in AI-assisted work
  4. Critical evaluation of AI outputs and limitations
  5. Integration of human expertise with AI capabilities

The emergence of AI tools prompts us to examine our own cognitive processes more explicitly. As we learn to structure problems for AI collaboration, we also develop a clearer understanding of our own problem-solving approaches. This suggests that learning to work effectively with AI involves not just technical skills but also enhanced metacognition - thinking about our own thinking.

The future of human-AI collaboration likely depends less on technological advancement and more on our ability to develop systematic approaches to task decomposition. By focusing on this fundamental skill, we can work toward more effective integration of human and machine capabilities while maintaining the critical role of human judgment and oversight.

These observations and suggestions should be treated as starting points for further investigation rather than definitive conclusions. As we gather more evidence about effective human-AI collaboration, our understanding of task decomposition and its role in this process will likely evolve. The key is to maintain a balanced approach that recognizes both the potential and limitations of AI while developing structured methods for its effective use. 




Thursday, January 23, 2025

Not Pleased? Don’t Release It: The Only AI Ethics Rule That Matters

Imagine this: you have tasked an AI with drafting an email, and it produces a passive-aggressive disaster that starts, “Per our last conversation, which was, frankly, baffling…” You delete it, chuckle at its misjudgment, and write your own. But what if you had not? What if you had just hit “send,” thinking, “Close enough”?

This scenario distills the ethical dilemma of AI into its purest form: the moment of release. Not the mechanics of training data or the mysteries of machine learning, but the single, decisive act of sharing output with the world. In that instant, accountability crystallizes. It does not matter whether you crafted most of the content yourself or leaned on the AI - the responsibility is entirely yours.

We are used to outsourcing tasks, but AI lures us into outsourcing judgment itself. Its most cunning trick is not its ability to mimic human language or spin impressive results from vague inputs. It is convincing us that its outputs are inherently worthy of trust, tempting us to lower our guard. We are used to thinking that if a text is well phrased and proofread, it must deserve our trust. That assumption no longer holds.

This illusion of reliability is dangerous. AI does not think, intend, or care. It is a reflection of its programming, its training data, and your prompt. If it churns out something brilliant, that is no more its triumph than a mirror deserves credit for the sunrise. And if it produces something harmful or inaccurate, the blame does not rest on the tool but on the person who decided its work was good enough to share.

History has seen this before. The printing press did not absolve publishers of libel; the photocopier did not excuse those who distributed forged documents. Technology has always been an extension of human will, not a replacement for it. Yet with AI, there is an emerging tendency to treat it as if it had intentions - blaming its "hallucinations" or "bias" instead of acknowledging the real source of responsibility: the human operator.

The allure of AI lies in its efficiency, its ability to transform inputs into polished-seeming outputs at lightning speed. But this speed can lull us into complacency, making it easier to prioritize convenience over caution. Editing, which used to be the painstaking craft of refining and perfecting, risks being reduced to a hasty skim, a rubber stamp of approval. This surrender of critical oversight is not just laziness—it is a new kind of moral failing.

Ethics in the AI age does not require intricate frameworks or endless debate. It boils down to one unflinching rule: if you release it, you are responsible for it. There is no caveat, no “but the AI misunderstood me.” The moment you publish, share, or forward something generated by AI, you claim its contents as your own.

This principle is a call for realism in the face of AI’s potential. AI can help us create, analyze, and innovate faster than ever, but it cannot—and should not—replace human accountability. The leap from creation to publication is where the line must be drawn. That is where we prove we are still the grown-ups in the room.

Before you hit "send" or "post" or "publish," a few simple questions can save a lot of regret:

  • Have you read it thoroughly? Not just the shiny parts, but the details that could cause harm.
  • Would you stake your reputation on this?
  • Is it biased, or factually wrong?

The alternative is a world where people shrug off misinformation, bias, and harm as the inevitable byproducts of progress. A world where the excuse “The AI did it” becomes a get-out-of-jail-free card for every mistake.

So, when the next output feels close enough, resist the urge to let it slide. That "send" button is not just a convenience—it is a statement of ownership. Guard it fiercely. Responsibility begins and ends with you, not the machine.

Because once you let something loose in the world, you cannot take it back.





Wednesday, January 15, 2025

Is Critical Thinking Going Extinct? Maybe That's Not Bad

As someone who remembers using paper maps and phone books, I find myself fascinated by Michael Gerlich's new study in Societies about AI's impact on our cognitive skills. Those of us who learned to navigate by landmarks and memorized phone numbers often bemoan younger generations' reliance on digital tools. But perhaps we are missing something important about cognitive evolution.

Gerlich's research is methodologically elegant. Through surveys and interviews with 666 participants, he documents a decline in traditional critical thinking skills among frequent AI users. The data analysis is rigorous - multiple regression, ANOVA, random forest regression - showing clear correlations between AI tool usage and reduced traditional analytical thinking.

But here's where I think Gerlich misses a crucial insight. The study measures critical thinking through metrics developed for a pre-AI world. It's like judging modern urban survival skills by the standards of hunter-gatherer societies. Those ancient peoples could track game, identify countless plants, and navigate vast territories without maps. By their standards, most of us would be considered cognitively impaired.

What we're witnessing is not cognitive decline but cognitive adaptation. Today's "critical thinking" is not about solving problems independently - it's about effective human-AI collaboration. It's about knowing when to trust AI and when to question it, how to frame queries effectively, and how to combine AI insights with human judgment.

The educational implications are profound. Instead of lamenting the loss of traditional cognitive skills, we should focus on developing "AI-literate critical thinking." Sure, I can still read a map, but my children need to master skills I never dreamed of - like crafting effective prompts for AI systems or critically evaluating AI-generated content.

The old form of critical thinking might be fading, like the ability to start a fire by friction or navigate by stars. But a new form is emerging, better suited to our technological reality. Our task is not to resist this evolution but to guide it wisely.

What do you think? Are we really losing something irreplaceable, or are we just adapting to a new cognitive environment?




Tuesday, January 14, 2025

The Subtle Art of Monopolizing New Technology

Monopolizing new technology is rarely the result of some grand, sinister plan. More often, it quietly emerges from self-interest. People do not set out to dominate a market; they simply recognize an opportunity to position themselves between groundbreaking technology and everyday users. The most effective tactic? Convince people that the technology is far too complex or risky to handle on their own.

It starts subtly. As soon as a new tool gains attention, industry insiders begin highlighting its technical challenges—security risks, integration headaches, operational difficulties. Some of these concerns may be valid, but they also serve a convenient purpose: You need us to make this work for you.

Startups are particularly skilled at this. Many offer what are essentially "skins"—polished interfaces built on top of more complex systems like AI models. Occasionally, these tools improve workflows. More often, they simply act as unnecessary middlemen, offering little more than a sleek dashboard while quietly extracting value. By positioning their products as essential, these startups slide themselves between the technology and the user, profiting from the role they have created. 

Technical language only deepens this divide. Buzzwords like API, tokenization, and retrieval-augmented generation (RAG) are tossed around casually. The average user may not understand these terms. The result is predictable: the more confusing the language, the more necessary the “expert.” This kind of jargon-laden gatekeeping turns complexity into a very comfortable business model.

Large organizations play this game just as well. Within corporate structures, IT departments often lean into the story of complexity to justify larger budgets and expanded teams. Every new tool must be assessed for “security vulnerabilities,” “legacy system compatibility,” and “sustainability challenges.” These concerns are not fabricated, but they are often exaggerated—conveniently making the IT department look indispensable.

None of this is to say that all intermediaries are acting in bad faith. New technology can, at times, require expert guidance. But the line between providing help and fostering dependence is razor-thin. One must ask: are these gatekeepers empowering users, or simply reinforcing their own relevance?

History offers no shortage of examples. In the early days of personal computing, jargon like RAM, BIOS, and DOS made computers feel inaccessible. It was not until companies like Apple focused on simplicity that the average person felt confident using technology unaided. And yet, here we are again—with artificial intelligence, blockchain, and other innovations—watching the same pattern unfold.

Ironically, the true allies of the everyday user are not the flashy startups or corporate tech teams, but the very tech giants so often criticized. Sometimes that criticism is justified, other times it is little more than fashionable outrage. Yet these giants, locked in fierce competition for dominance, have every incentive to simplify access. Their business depends on millions of users engaging directly with their products, not through layers of consultants and third-party tools. The more accessible their technology, the more users they attract. These are the unlikely allies of a non-techy person. 

For users, the best strategy is simple: do not be intimidated by the flood of technical jargon or the endless parade of “essential” tools. Always ask: Who benefits from me feeling overwhelmed? Whenever possible, go straight to the source—OpenAI, Anthropic, Google. If you truly cannot figure something out, seek help when you need it, not when it is aggressively sold to you.

Technology should empower, not confuse. The real challenge is knowing when complexity is genuine and when it is merely someone else’s business model.



Monday, January 13, 2025

The Myth of AI Replacing Teachers: Why Human Connection Matters More Than Ever

Last week, a colleague asked me what I thought about AI replacing teachers. The question made me smile - not because it was silly, but because it revealed how deeply we misunderstand both artificial intelligence and teaching. As someone who has written much on the pedagogy of relation and now serves as chief AI officer, I see a different story unfolding.

The fear of AI replacing teachers rests on a peculiar assumption: that teaching is primarily about delivering information and grading papers. It is as if we imagine teachers as particularly inefficient computers, ready to be upgraded to faster models. This view would be amusing if it weren't so prevalent among teachers (and their labor unions) and tech enthusiasts alike.

Teaching, at its heart, is not about information transfer - it is about relationship building. Research in relational pedagogies has shown time and again that learning happens through and because of human connections. Think about how children learn their first language: not through formal instruction, but through countless small interactions, emotional connections, and social bonds. The same principle extends throughout education.

When I first encountered ChatGPT, I was struck not by its ability to replace teachers, but by its potential to give them back what they need most: time for human connection. AI can handle the mundane tasks that currently consume teachers' energy - generating basic content, providing routine feedback, creating initial drafts of lesson plans. But it cannot replicate the raised eyebrow that tells a student their argument needs work, or the encouraging nod that builds confidence in a hesitant learner.

Yet many educators remain skeptical of AI, and perhaps they should be. Any tool powerful enough to help is also powerful enough to harm if misused. But the real risk isn't that AI will replace teachers - it is that we'll waste its potential by focusing on the wrong things. Instead of using AI to automate educational assembly lines, we could use it to create more space for real human connection in learning.

I have seen glimpses of this future in my own classroom. When AI can answer routine questions about my syllabus and handle basic questions about course content, I can spend more time in meaningful discussions with students. When it helps generate initial content, I can focus on crafting experiences that challenge and engage. The technology becomes invisible, while human relationships move to the foreground.

The coming years will transform education, but not in the way many fear. The teachers who thrive won't be those who resist AI, nor those who embrace it uncritically. They will be the ones who understand that technology works best when it strengthens, rather than replaces, human relationships.


Monday, January 6, 2025

Get Used to It: You Will Read AI Summaries, Too

No human can keep up with academic publishing. In philosophy alone - a relatively small field - scholars produce over 100 million words a year across 2,500 journals in many languages. We already avoid reading complete texts. Speed reading, strategic reading, scanning - these are all ways of not reading while pretending we do. Few people read academic papers word by word. We look for key arguments, skip familiar ground, skim examples. These are coping mechanisms for an impossible task.
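The scale of that impossibility is easy to put in numbers. Assuming a brisk 250 words per minute and a 40-hour reading week (both round assumptions, not measurements), one year of the field's output would take over three years of full-time reading:

```python
WORDS_PER_YEAR = 100_000_000   # the figure cited above for philosophy
WPM = 250                      # assumed brisk academic reading speed
MINUTES_PER_WEEK = 40 * 60     # assumed full-time, reading-only schedule

minutes_needed = WORDS_PER_YEAR / WPM             # 400,000 minutes
weeks_needed = minutes_needed / MINUTES_PER_WEEK  # about 167 weeks
years_needed = weeks_needed / 52                  # about 3.2 years

# Reading one year's output takes more than three years of doing
# nothing else: the backlog can only grow.
```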

AI-generated summaries are the next logical step. Yes, they miss nuance. Yes, they may misinterpret complex arguments. But they are better than not reading at all, which is what happens to most papers in any field. An imperfect but targeted summary of a paper you would never open expands rather than limits your knowledge. 

Let us be honest about why we read scholarly literature. We search for evidence that confirms or challenges our hypotheses, for ideas that enrich our understanding of specific problems. Reading is not an end in itself; it serves our scholarly purposes. AI excels precisely at this kind of targeted knowledge extraction. It can track related concepts across disciplines even when authors use different terminology to describe similar phenomena. Soon, AI will detect subtle connections between ideas that human readers might miss entirely. 

The shift toward AI-assisted reading in academia is inevitable. Instead of pretending otherwise, we should teach students to know the limitations of AI summarization, to cross-check crucial points against source texts, to use summaries as maps for selective deep reading. Critics will say this threatens scholarship. But the real threat is the growing gap between available knowledge and our capacity to process it. AI-assisted reading could enable more thoughtful engagement by helping us identify which texts truly deserve careful study. This does not cancel the practice of close reading, but augments and enriches it. 


Saturday, January 4, 2025

The End of Writing as We Know It (And Why That is Fine)

The relationship between thought and writing has never been simple. While writing helps organize and preserve thought, the specific form writing takes varies across time and cultures. Yet educators and cultural critics display remarkable resistance to reimagining writing in the age of artificial intelligence.

The current discourse around AI and writing echoes historical anxieties about the decline of Latin instruction. In the 18th and 19th centuries, prominent intellectuals warned that abandoning Latin would lead to cultural and intellectual decay. They saw Latin as more than a language - it represented a particular way of thinking, a connection to tradition, and a mark of education. Thomas Jefferson praised Latin as essential for intellectual development. Matthew Arnold predicted cultural impoverishment without classical education. John Henry Newman saw the classics as the bedrock of sound learning.

These predictions did not materialize. The decline of Latin did not prevent the emergence of rich intellectual traditions in vernacular languages. Modern universities produce sophisticated scholarship without requiring Latin fluency. The link between Latin and "disciplined intellect" proved imaginary.

Today's defenders of traditional writing make similar arguments. They present specific writing conventions - formal grammar, academic style, elaborate sentence structures - as essential to clear thinking. Yet these conventions reflect historical accidents rather than cognitive necessities. Most human thinking and communication happens through speech, which follows different patterns. The formal writing style emerged relatively recently as a specialized professional skill.

AI will likely transform writing practices just as the decline of Latin transformed education. Some traditional writing skills may become less relevant as AI handles routine composition tasks. But this does not threaten human thought or culture. New forms of expression will emerge, combining human creativity with AI capabilities. Rather than defending writing conventions, educators should explore how AI can enhance human communication and cognition.

The anxiety about AI and writing reveals our tendency to mistake familiar forms for essential structures. Just as medieval scholars could not imagine scholarship without Latin, many today cannot envision intellectual work without traditional writing. As A.E. Housman wrote in 1921: "When the study of Latin dies, the study of thought dies with it. For Latin has been the vehicle of the intellect for millennia, and its neglect spells intellectual mediocrity." This prediction proved spectacularly wrong. The dire warnings about AI's impact on writing will likely meet the same fate.

Writing serves thought, not the other way around. The specific techniques we use to record and share ideas matter less than the ideas themselves. Rather than trying to preserve current writing practices unchanged, we should embrace the opportunity to develop new forms of expression. The death of Latin did not kill thought. Neither will the transformation of writing through AI.

The real challenge is not protecting traditional writing but imagining new possibilities. How might AI help us communicate more effectively? What new genres and styles will emerge? What aspects of current writing practice truly serve human needs, and what parts simply reflect professional habits? These questions deserve more attention than defensive reactions against change.

The history of education shows that cherished practices often outlive their usefulness. Latin remained central to education long after it ceased being particularly valuable. Similarly, current writing conventions may persist more from institutional inertia than genuine necessity. AI offers an opportunity to reconsider what forms of expression best serve human thought and learning.


