Tuesday, January 14, 2025

The Subtle Art of Monopolizing New Technology

Monopolizing new technology is rarely the result of some grand, sinister plan. More often, it quietly emerges from self-interest. People do not set out to dominate a market; they simply recognize an opportunity to position themselves between groundbreaking technology and everyday users. The most effective tactic? Convince people that the technology is far too complex or risky to handle on their own.

It starts subtly. As soon as a new tool gains attention, industry insiders begin highlighting its technical challenges—security risks, integration headaches, operational difficulties. Some of these concerns may be valid, but they also serve a convenient purpose: You need us to make this work for you.

Startups are particularly skilled at this. Many offer what are essentially "skins"—polished interfaces built on top of more complex systems like AI models. Occasionally, these tools improve workflows. More often, they simply act as unnecessary middlemen, offering little more than a sleek dashboard while quietly extracting value. By positioning their products as essential, these startups slide themselves between the technology and the user, profiting from the role they have created. 

Technical language only deepens this divide. Buzzwords like API, tokenization, and retrieval-augmented generation (RAG) are tossed around casually. The average user may not understand these terms. The result is predictable: the more confusing the language, the more necessary the “expert.” This kind of jargon-laden gatekeeping turns complexity into a very comfortable business model.

Large organizations play this game just as well. Within corporate structures, IT departments often lean into the story of complexity to justify larger budgets and expanded teams. Every new tool must be assessed for “security vulnerabilities,” “legacy system compatibility,” and “sustainability challenges.” These concerns are not fabricated, but they are often exaggerated—conveniently making the IT department look indispensable.

None of this is to say that all intermediaries are acting in bad faith. New technology can, at times, require expert guidance. But the line between providing help and fostering dependence is razor-thin. One must ask: are these gatekeepers empowering users, or simply reinforcing their own relevance?

History offers no shortage of examples. In the early days of personal computing, jargon like RAM, BIOS, and DOS made computers feel inaccessible. It was not until companies like Apple focused on simplicity that the average person felt confident using technology unaided. And yet, here we are again—with artificial intelligence, blockchain, and other innovations—watching the same pattern unfold.

Ironically, the true allies of the everyday user are not the flashy startups or corporate tech teams, but the very tech giants so often criticized. Sometimes that criticism is justified; other times it is little more than fashionable outrage. Yet these giants, locked in fierce competition for dominance, have every incentive to simplify access. Their business depends on millions of users engaging directly with their products, not through layers of consultants and third-party tools. The more accessible their technology, the more users they attract. For the non-technical user, they are unlikely allies.

For users, the best strategy is simple: do not be intimidated by the flood of technical jargon or the endless parade of “essential” tools. Always ask: Who benefits from me feeling overwhelmed? Whenever possible, go straight to the source—OpenAI, Anthropic, Google. If you truly cannot figure something out, seek help when you need it, not when it is aggressively sold to you.

Technology should empower, not confuse. The real challenge is knowing when complexity is genuine and when it is merely someone else’s business model.



Monday, January 13, 2025

The Myth of AI Replacing Teachers: Why Human Connection Matters More Than Ever

Last week, a colleague asked me what I thought about AI replacing teachers. The question made me smile - not because it was silly, but because it revealed how deeply we misunderstand both artificial intelligence and teaching. As someone who has written extensively on the pedagogy of relation and now serves as chief AI officer, I see a different story unfolding.

The fear of AI replacing teachers rests on a peculiar assumption: that teaching is primarily about delivering information and grading papers. It is as if we imagine teachers as particularly inefficient computers, ready to be upgraded to faster models. This view would be amusing if it weren't so prevalent among teachers (and their labor unions) and tech enthusiasts alike.

Teaching, at its heart, is not about information transfer - it is about relationship building. Research in relational pedagogies has shown time and again that learning happens through and because of human connections. Think about how children learn their first language: not through formal instruction, but through countless small interactions, emotional connections, and social bonds. The same principle extends throughout education.

When I first encountered ChatGPT, I was struck not by its ability to replace teachers, but by its potential to give them back what they need most: time for human connection. AI can handle the mundane tasks that currently consume teachers' energy - generating basic content, providing routine feedback, creating initial drafts of lesson plans. But it cannot replicate the raised eyebrow that tells a student their argument needs work, or the encouraging nod that builds confidence in a hesitant learner.

Yet many educators remain skeptical of AI, and perhaps they should be. Any tool powerful enough to help is also powerful enough to harm if misused. But the real risk isn't that AI will replace teachers - it is that we'll waste its potential by focusing on the wrong things. Instead of using AI to automate educational assembly lines, we could use it to create more space for real human connection in learning.

I have seen glimpses of this future in my own classroom. When AI can answer routine questions about my syllabus and many basic questions about course content, I can spend more time in meaningful discussions with students. When it helps generate initial content, I can focus on crafting experiences that challenge and engage. The technology becomes invisible, while human relationships move to the foreground.

The coming years will transform education, but not in the way many fear. The teachers who thrive won't be those who resist AI, nor those who embrace it uncritically. They will be the ones who understand that technology works best when it strengthens, rather than replaces, human relationships.


Monday, January 6, 2025

Get Used to It: You Will Read AI Summaries, Too

No human can keep up with academic publishing. In philosophy alone - a relatively small field - scholars produce over 100 million words a year, spread across some 2,500 journals in many languages. We already avoid reading complete texts. Speed reading, strategic reading, scanning - these are all ways of not reading while pretending we do. Few people read academic papers word by word. We look for key arguments, skip familiar ground, skim examples. These are coping mechanisms for an impossible task.

AI-generated summaries are the next logical step. Yes, they miss nuance. Yes, they may misinterpret complex arguments. But they are better than not reading at all, which is what happens to most papers in any field. An imperfect but targeted summary of a paper you would never open expands rather than limits your knowledge. 

Let us be honest about why we read scholarly literature. We search for evidence that confirms or challenges our hypotheses, for ideas that enrich our understanding of specific problems. Reading is not an end in itself; it serves our scholarly purposes. AI excels precisely at this kind of targeted knowledge extraction. It can track related concepts across disciplines even when authors use different terminology to describe similar phenomena. Soon, AI will detect subtle connections between ideas that human readers might miss entirely. 

The shift toward AI-assisted reading in academia is inevitable. Instead of pretending otherwise, we should teach students to know the limitations of AI summarization, to cross-check crucial points against source texts, to use summaries as maps for selective deep reading. Critics will say this threatens scholarship. But the real threat is the growing gap between available knowledge and our capacity to process it. AI-assisted reading could enable more thoughtful engagement by helping us identify which texts truly deserve careful study. This does not cancel the practice of close reading, but augments and enriches it. 


Saturday, January 4, 2025

The End of Writing as We Know It (And Why That Is Fine)

The relationship between thought and writing has never been simple. While writing helps organize and preserve thought, the specific form writing takes varies across time and cultures. Yet educators and cultural critics display remarkable resistance to reimagining writing in the age of artificial intelligence.

The current discourse around AI and writing echoes historical anxieties about the decline of Latin instruction. In the 18th and 19th centuries, prominent intellectuals warned that abandoning Latin would lead to cultural and intellectual decay. They saw Latin as more than a language - it represented a particular way of thinking, a connection to tradition, and a mark of education. Thomas Jefferson praised Latin as essential for intellectual development. Matthew Arnold predicted cultural impoverishment without classical education. John Henry Newman saw classics as the bedrock of sound learning.

These predictions did not materialize. The decline of Latin did not prevent the emergence of rich intellectual traditions in vernacular languages. Modern universities produce sophisticated scholarship without requiring Latin fluency. The link between Latin and "disciplined intellect" proved imaginary.

Today's defenders of traditional writing make similar arguments. They present specific writing conventions - formal grammar, academic style, elaborate sentence structures - as essential to clear thinking. Yet these conventions reflect historical accidents rather than cognitive necessities. Most human thinking and communication happens through speech, which follows different patterns. The formal writing style emerged relatively recently as a specialized professional skill.

AI will likely transform writing practices just as the decline of Latin transformed education. Some traditional writing skills may become less relevant as AI handles routine composition tasks. But this does not threaten human thought or culture. New forms of expression will emerge, combining human creativity with AI capabilities. Rather than defending writing conventions, educators should explore how AI can enhance human communication and cognition.

The anxiety about AI and writing reveals our tendency to mistake familiar forms for essential structures. Just as medieval scholars could not imagine scholarship without Latin, many today cannot envision intellectual work without traditional writing. As A.E. Housman wrote in 1921: "When the study of Latin dies, the study of thought dies with it. For Latin has been the vehicle of the intellect for millennia, and its neglect spells intellectual mediocrity." This prediction proved spectacularly wrong. The dire warnings about AI's impact on writing will likely meet the same fate.

Writing serves thought, not the other way around. The specific techniques we use to record and share ideas matter less than the ideas themselves. Rather than trying to preserve current writing practices unchanged, we should embrace the opportunity to develop new forms of expression. The death of Latin did not kill thought. Neither will the transformation of writing through AI.

The real challenge is not protecting traditional writing but imagining new possibilities. How might AI help us communicate more effectively? What new genres and styles will emerge? What aspects of current writing practice truly serve human needs, and what parts simply reflect professional habits? These questions deserve more attention than defensive reactions against change.

The history of education shows that cherished practices often outlive their usefulness. Latin remained central to education long after it ceased being particularly valuable. Similarly, current writing conventions may persist more from institutional inertia than genuine necessity. AI offers an opportunity to reconsider what forms of expression best serve human thought and learning.

