
Friday, February 28, 2025

The Invasive Innovation, a Case of AI in Education

Disruptive innovations, as defined by Clayton Christensen, typically emerge at the margins before gradually overtaking established players. In contrast, artificial intelligence in education is not a slow-moving force of competitive displacement. It is an invasive innovation—spreading far beyond the predictions of Everett Rogers’ diffusion model, propelled less by institutions than by students themselves. This is not a technology waiting to be adopted. It is a reality forcing adaptation. Treating it as an optional innovation misses the point entirely.

Educational leaders, policymakers, and institutions may wish to evaluate AI’s impact, consider best practices, and implement careful frameworks. But they are not in control of the timeline. The adoption is not up for debate. It is already happening, driven by students who see AI not as a tool of the future but as an everyday utility, as natural as search engines or calculators. The question is not when to integrate AI into education, but how to manage an environment where AI is already integrated—often in ways institutions neither designed nor anticipated.

This is why traditional approaches to educational innovations are inadequate. Normally, innovations are assessed, piloted, and then scaled. But AI in education is unfolding more like a crisis than an initiative. It requires a response more akin to a public health emergency than a strategic rollout. When COVID-19 disrupted education, schools did not spend years analyzing whether remote learning was effective. They pivoted overnight, creating imperfect but necessary solutions. The AI invasion demands a similar level of urgency.

Institutional response to invasive AI must be comprehensive rather than defensive. How do we accommodate this invasive innovation? Putting AI tools directly in the hands of faculty represents a critical first step, ensuring they engage with the same technologies their students already access freely. This must be paired with robust professional development—not merely technical training but genuine pedagogical exploration that helps educators reimagine their disciplinary approaches in an AI-augmented world.

Beyond tools and training, institutions must invest in creating dedicated time and space for disciplinary teams to collaboratively revise their assignments. The goal is not to create "cheat-proof" assessments but to develop authentic tasks that meaningfully incorporate AI as a partner in learning. This demands protected time away from everyday teaching pressures, where faculty can experiment, fail, refine, and document new approaches that acknowledge AI as a permanent fixture in their educational landscape rather than an external threat to be neutralized.



Wednesday, February 26, 2025

Sidorkin Hypothesis: Beyond Procedural Fluency in AI-Augmented Instruction

The hypothesis is this: Quality AI-augmented instruction reduces emphasis on procedural skills while enhancing higher-order thinking and conceptual learning. This shift may offer an alternative pathway for cognitive offloading, which could supplement or even replace traditional procedural skills acquisition.

Cognitive load theory, developed by John Sweller in the 1980s, provides a useful framework. The theory posits that our working memory has limited capacity when dealing with new information. Sweller himself, along with many other cognitive scientists, views fluency in procedural skills as a major mechanism for cognitive offloading. When basic procedures become automatic through practice, they consume fewer working memory resources, thereby freeing up mental capacity for higher-order learning. This is why traditional education emphasizes mastering procedural skills—calculating derivatives, balancing chemical equations, applying grammatical rules—before tackling complex conceptual work.

In my view, AI tools function as an alternative cognitive offloading mechanism that can complement or even supersede traditional procedural fluency. These tools handle procedural tasks, creating cognitive space for students to engage with concepts at a deeper level without first developing procedural automaticity. Consider a student learning calculus: traditionally, substantial mental resources are dedicated to memorizing differentiation rules and practicing their application. With AI assistance handling these procedural aspects, the student can direct cognitive resources toward understanding conceptual foundations—what derivatives actually represent, how they relate to real-world phenomena, and when different approaches should be applied.
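To make the offloading concrete, here is a minimal sketch of my own, not part of the original argument, using the open-source sympy library and an invented cooling-coffee scenario: the software carries out the differentiation procedure, and the student's working memory is free for the interpretive question of what the result means.

    import sympy as sp

    # A cooling cup of coffee: temperature in degrees Celsius, time t in minutes
    t = sp.symbols('t', positive=True)
    temperature = 70 * sp.exp(-0.1 * t) + 20

    # The tool performs the procedural step (differentiation)...
    rate_of_change = sp.diff(temperature, t)

    # ...so the student can focus on interpretation: how fast is the coffee
    # cooling after five minutes, and why does that rate keep shrinking?
    print(rate_of_change)                     # -7.0*exp(-0.1*t)
    print(rate_of_change.subs(t, 5).evalf())  # roughly -4.2 degrees per minute

Nothing in this exchange teaches the differentiation rules themselves; what it leaves room for is the conceptual question of what a derivative tells us about the situation.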

This approach necessitates reimagining assessment. Traditional writing instruction heavily rewards procedural accuracy in grammar, citation formats, and structural conventions. In an AI-augmented educational landscape, assessment must instead measure the ability to prompt AI tools effectively, the quality of critical analysis, the application of rhetorical strategies in novel contexts, and the evaluation of AI-generated text. The focus shifts from "Can you properly format this essay?" to "Can you determine which rhetorical approach best suits your purpose, interpret complex texts critically, and evaluate whether the AI-generated analysis captures the deeper meaning of the work being examined?"

This transformation faces substantial challenges. Educators rightfully question whether students can truly grasp concepts without some level of procedural fluency. It is likely that some basic procedural knowledge remains necessary for conceptual understanding. The optimal approach involves thoughtful integration—using AI to reduce procedural load while ensuring students develop core competencies. To put it roughly: procedural knowledge, yes; procedural fluency, not really.

The equalizing potential of AI-augmented instruction cannot be overlooked. Students who traditionally struggle with procedural aspects—whether due to learning differences, educational disruptions, or lack of prior opportunities—may find themselves able to participate more fully in higher-order learning. This democratization of advanced conceptual engagement could elevate educational outcomes across diverse student populations. We stand at the threshold of potentially transforming education from a system that sorts students based on procedural proficiency to one that cultivates universally richer, more advanced learning outcomes: deeper critical analysis, more sophisticated problem formulation, nuanced ethical reasoning, and creative synthesis across domains—intellectual capabilities that more authentically reflect the complex demands of contemporary society.



Saturday, February 22, 2025

On Techno-Utopianism: Elon Musk and the Soul of Education

The recent video of Elon Musk promising AI teachers reveals a common misunderstanding among technology leaders. They see education primarily as information transfer and skills training, where an infinitely patient AI system delivers perfectly tailored content to each student. This viewpoint ignores the fundamental nature of education as a relational institution.

Since Gutenberg's invention of the printing press, motivated individuals could teach themselves almost anything. Libraries contain more knowledge than any single teacher. Yet most people do not turn into autodidacts. Why is that? The question is not how to make knowledge more accessible, but why people choose to engage with it.

Teachers generate reasons to learn through two main approaches. In more constructivist settings, they inspire curiosity and create engaging problems to solve. In more traditional schools, they maintain authority and discipline. In most schools, there is a mixture of both. Both methods work because they establish a social framework for learning. A good teacher knows when to push and when to comfort, when to explain and when to let students struggle.

The comparison of AI to Einstein as a teacher misses the point. Teaching requires different qualities than scientific genius - the capacity to enter a relationship, to create meaningful connections, and to help students discover their own reasons for learning. An AI system, no matter how knowledgeable, cannot do any of that.

Students often study not because they find the subject inherently fascinating, but because they respect their teacher, want to belong to a learning community, or seek to fulfill social expectations. Even negative motivations like fear of disappointing others have a distinctly human character.

The techno-utopian vision reduces learning to information exchanges and skill assessments. This mechanistic view fails to account for the social and emotional dimensions of human development. While AI can enhance teaching by handling routine tasks, it cannot replace the essential human relationships that drive educational engagement. The future of education lies not in perfecting content delivery algorithms, but in strengthening the relational foundations of learning. 

Such overblown promises about AI in education do more harm than good. They create unnecessary anxiety among teachers and administrators, leading to resistance against even modest technological improvements. Instead of addressing real challenges in education - student engagement, equitable access, and meaningful assessment - institutions get distracted by unrealistic visions of AI-driven transformation. We need a more balanced approach that recognizes both the potential and limitations of AI in supporting, not replacing, the fundamentally human enterprise of education.



Thursday, February 20, 2025

The AI Recruiter Will See You Now

The tidy world of job applications, carefully curated CVs and anxious cover letters may soon become a relic. Every professional now leaves digital traces across the internet - their work, opinions, and achievements create detailed patterns of their capabilities. Artificial Intelligence agents will soon navigate these digital landscapes, transforming how organizations find talent.

Unlike current recruitment tools that passively wait for queries, these AI agents will actively explore the internet, following leads and making connections. They will analyze not just LinkedIn profiles, but candidates' entire digital footprint. The approach promises to solve a persistent problem in recruitment: finding qualified people who are not actively job-hunting.

The matching process will extend beyond technical qualifications. Digital footprints reveal working styles and professional values. A cybersecurity position might require someone who demonstrates consistent risk awareness; an innovation officer role might suit someone comfortable with uncertainty. AI agents could assess such traits by analyzing candidates' professional communications and public activities.

Yet this technological advance brings fresh concerns. Privacy considerations demand attention - while AI agents would analyze public information, organizations must establish clear ethical guidelines about data usage. More fundamentally, AI agents must remain sophisticated talent scouts rather than final decision makers. They can gather evidence and make recommendations, but human recruiters must evaluate suggestions within their understanding of organizational needs.

The transformation suggests a future where talent discovery becomes more equitable. AI agents could help overcome human biases by focusing on demonstrated capabilities rather than credentials or connections. The winners will be organizations that master this partnership between artificial intelligence and human judgment. The losers may be traditional recruitment agencies - unless they swiftly adapt to the new reality.





Monday, February 10, 2025

Form-substance discrimination, a new learning outcome

We have long assumed that clear writing signals clear thinking. Schools teach writing as if it were math - a rigorous exercise that develops logical thinking. Editors catch not just errors but muddled ideas. Or so the theory goes. Artificial intelligence shatters this comfortable assumption by churning out impeccably structured prose devoid of original thought.

Form-substance discrimination represents a higher-order cognitive skill, similar to what art historians develop when they learn to separate technical mastery from creative vision. Just as an art student must unlearn their initial attraction to photorealistic paintings, readers now must develop resistance to seductively clear prose. This requires a kind of cognitive inhibition - the ability to suppress immediate aesthetic pleasure for the sake of deeper analysis.

The skill builds on existing metacognitive abilities but requires their novel application. Readers already know how to identify main ideas, analyze arguments, and evaluate evidence. What is new is the need to perform these operations while actively discounting the surface appeal of the text. This resembles what wine tasters do when they evaluate wines blind, stripped of prestigious labels and beautiful bottles.

The development follows a predictable pattern. At first, readers struggle to overcome their ingrained respect for well-crafted prose. The initial challenge lies not in identifying weak ideas but in giving oneself permission to criticize a text that follows all the rules of good writing. This mirrors the development of critical thinking in general, where students must learn to question authority figures who appear competent and confident.

The second stage involves developing specific techniques for idea extraction. Readers learn to create idea maps independent of the text's structure, to count unique concepts rather than words, to identify circular arguments hidden behind elegant transitions. They begin to see how AI-generated text often creates an illusion of logical flow while merely restating the same point in different words.

The final stage brings automaticity. Experienced readers develop an immediate sense of a text's intellectual weight, just as experienced teachers can quickly gauge a student's understanding despite fluent recitation. This involves pattern recognition built through exposure to many examples of both substantive and hollow texts.

The educational implications are significant. Writing instruction must now explicitly separate craft from content. Students need exposure to both well-written texts with weak ideas and poorly written texts with strong ideas. They must practice identifying when sophisticated language masks conceptual poverty and when rough expression contains genuine insight.

This shift parallels broader changes in how we process information. In a world of information abundance, the key skill is no longer finding or producing well-formed content but discerning its value. Form-substance discrimination represents a specific case of this general challenge - learning to navigate a world where traditional quality signals no longer reliably indicate underlying worth.

The skill matters beyond academia. Business leaders reading AI-generated reports, citizens evaluating political arguments, professionals studying their field's literature - all need to separate rhetorical sophistication from intellectual contribution. As AI writing tools improve, this ability will become as fundamental as basic literacy.

We face a paradox: the better AI becomes at writing, the more important it becomes for humans to see through good writing. The very perfection of AI-generated prose creates a new kind of opacity that readers must learn to penetrate. Form-substance discrimination thus emerges not just as an academic skill but as a key component of modern critical thinking.



Sunday, February 9, 2025

AI and Labor: A Smarter Path Forward

Trade unions face a defining moment. Artificial intelligence presents genuine concerns about job displacement, yet the response need not mirror historical patterns of resistance to technological change. The Luddite movement of the 1810s serves as a cautionary tale - their destruction of mechanized looms neither preserved jobs nor improved workers' conditions. All technology affects labor; that is what technology is: work assistance.

The automation paradox offers a more nuanced perspective. While machines replace specific tasks, they generate new forms of work. The introduction of automated teller machines in banking led to more bank branches and tellers performing complex customer service roles. This pattern repeats across industries - automation reduces costs, expands services, and creates different job categories.

Labor leaders would serve their members better by negotiating robust transition arrangements. Key demands should include employer-funded retraining programs, preferential access to new positions, and compensation packages that recognize acquired skills. The focus must shift from preventing change to shaping its implementation.

The pace of AI integration varies significantly by sector. Manufacturing and data processing may see rapid adoption, but industries built on human relationships - education, healthcare, social work - will incorporate AI gradually as assistive technology. Complex organizations require extensive testing and workflow redesign before meaningful automation becomes feasible.

Economic history demonstrates that reduced production costs expand economic activity. When basic tasks become automated, human attention shifts to more sophisticated problems. The telephone eliminated telegraph operators but created vast new communication industries. Similarly, AI will likely automate routine cognitive work while opening possibilities in areas we have not yet imagined.

Unions retain significant leverage during this transition. Organizations need experienced workers to implement new technologies effectively. This position allows labor to negotiate favorable terms - extended notice periods, substantial retraining budgets, wage protection during transition, and clear paths to higher-skilled roles.

The key lies in recognizing AI as a tool for augmentation rather than pure replacement. A machine learning system may process medical images faster than radiologists, but interpreting results in complex cases still requires human judgment. Similar patterns will emerge across professions - AI handling routine tasks while humans focus on nuanced decision-making and interpersonal elements.

Rather than resist change, unions should position themselves as partners in managing transition. This approach preserves their relevance and better serves member interests. The alternative - attempting to prevent AI adoption - risks marginalization as companies seek ways around opposition or relocate to more amenable jurisdictions.

The challenge for labor leadership is to shift from defensive postures to proactive engagement. This means developing expertise in emerging technologies, identifying opportunities for worker advancement, and ensuring transition arrangements protect vulnerable members while facilitating adaptation to changing workplace demands.



Tuesday, February 4, 2025

Augmented Problem Finding: The Next Frontier in AI Literacy

In my recent blog on task decomposition as a key AI skill, I highlighted how breaking down complex problems enables effective human-AI collaboration. Yet before we can decompose a task, we must identify which problems are worth pursuing - a skill that takes on new dimensions in the age of AI.

This ability to recognize solvable problems expands dramatically with AI tools at our disposal. Tasks once considered too time-consuming or complex suddenly become manageable. The cognitive offloading that AI enables does not just help us solve existing problems - it fundamentally reshapes our understanding of what constitutes a tractable challenge.

Consider how VisiCalc transformed financial planning in the early 1980s. Initially seen as a mere automation tool for accountants, it revolutionized business planning by enabling instant scenario analysis. Tasks that would have consumed days of manual recalculation became instantaneous, allowing professionals to explore multiple strategic options and ask "what if" questions they would not have contemplated before. Similarly, AI prompts us to reconsider which intellectual tasks we should undertake. Writing a comprehensive literature review might have once consumed months; with AI assistance, scholars can now contemplate more ambitious syntheses of knowledge.

This expanded problem space creates its own paradox. As more tasks become technically feasible, the challenge shifts to identifying which ones merit attention. The skill resembles what cognitive psychologists call "problem finding," but with an important twist. Traditional problem finding focuses on identifying gaps or needs. Augmented problem finding requires understanding both human and AI capabilities to recognize opportunities in this enlarged cognitive landscape.

The distinction becomes clear in professional settings. Experienced AI users develop an intuitive sense of which tasks to delegate and which to tackle themselves. They recognize when a seemingly straightforward request actually requires careful human oversight, or when an apparently complex task might yield to well-structured AI assistance. This judgment develops through experience but could be taught more systematically.

The implications extend beyond individual productivity. Organizations must now cultivate this capacity across their workforce. The competitive advantage increasingly lies not in having access to AI tools - these are becoming ubiquitous - but in identifying novel applications for them. This explains why some organizations extract more value from AI than others, despite using similar technologies.

Teaching augmented problem finding requires a different approach from traditional problem-solving instruction. Students need exposure to varied scenarios where AI capabilities interact with human judgment. They must learn to recognize patterns in successful AI applications while developing realistic expectations about AI limitations. Most importantly, they need practice in identifying opportunities that emerge from combining human and machine capabilities in novel ways.

The skill also has ethical dimensions. Not every task that can be automated should be. Augmented problem finding includes judging when human involvement adds necessary value, even at the cost of efficiency. It requires balancing the technical feasibility of AI solutions against broader organizational and societal impacts.

As AI capabilities evolve, this skill will become increasingly crucial. The future belongs not to those who can best use AI tools, but to those who can best identify opportunities for their application. This suggests a shift in how we think about AI literacy - from focusing on technical proficiency to developing sophisticated judgment about when and how to engage AI capabilities.

The automation paradox that Lisanne Bainbridge identified in her 1983 analysis of industrial systems points to an interesting future. As we become more adept at augmented problem finding, we discover new challenges that merit attention. This creates a virtuous cycle of innovation, where each advance in AI capability opens new frontiers for human creativity and judgment.

Perhaps most intriguingly, this skill might represent a distinctly human advantage in the age of AI. While machines excel at solving well-defined problems, the ability to identify worthy challenges remains a uniquely human capability. By developing our capacity for augmented problem finding, we ensure a meaningful role for human judgment in an increasingly automated world.



Saturday, February 1, 2025

Task Decomposition, a core AI skill

The effective use of artificial intelligence depends on our ability to structure problems in ways that align with both human and machine capabilities. While AI demonstrates remarkable computational abilities, its effectiveness relies on carefully structured input and systematic oversight. This suggests that our focus should shift toward understanding how to break down complex tasks into components that leverage the respective strengths of humans and machines.

Task decomposition - the practice of breaking larger problems into manageable parts - predates AI but takes on new significance in this context. Research in expertise studies shows that experienced problem-solvers often approach complex challenges by identifying distinct components and their relationships. This natural human tendency provides a framework for thinking about AI collaboration: we need to recognize which aspects of a task benefit from computational processing and which require human judgment.

The interaction between human users and AI systems appears to follow certain patterns. Those who use AI effectively tend to approach it as a collaborative tool rather than a complete solution. They typically work through multiple iterations: breaking down the problem, testing AI responses, evaluating results, and adjusting their approach. This mirrors established practices in other domains where experts regularly refine their solutions through systematic trial and error.

Consider the task of writing a research paper. Rather than requesting a complete document from AI, a more effective approach involves breaking down the process: developing an outline, gathering relevant sources, analyzing specific arguments, and integrating various perspectives. Similarly, in data analysis, success often comes from methodically defining questions, selecting appropriate datasets, using AI for initial pattern recognition, and applying human expertise to interpret the findings.
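As an illustration, that decomposed workflow can even be written down as a structure. The sketch below is purely hypothetical: ask_ai is a placeholder for whatever model or service one actually uses, not a real API, and the point is the shape of the collaboration (small steps, each followed by a human checkpoint), not any particular tool.

    # Hypothetical sketch of task decomposition for an AI-assisted literature review.
    # ask_ai is a stand-in, not a real API; substitute your own model call.

    def ask_ai(prompt: str) -> str:
        """Stand-in for a call to whatever AI model or service is in use."""
        return f"[model output for: {prompt[:60]}...]"

    def review(step: str, draft: str) -> str:
        """Human checkpoint: read, correct, or reject the output of each step."""
        print(f"--- reviewing {step} ---\n{draft}\n")
        return draft  # in practice, the human-edited version is what moves forward

    topic = "cognitive offloading in AI-augmented instruction"

    # Step 1: outline the problem space; the human judges whether the frame is right
    outline = review("outline", ask_ai(f"Draft an outline of key questions about {topic}."))

    # Step 2: draft one small section at a time instead of asking for the whole paper
    sections = []
    for heading in ["definitions", "evidence", "open questions"]:
        draft = ask_ai(f"Using this outline:\n{outline}\nDraft the '{heading}' section.")
        sections.append(review(heading, draft))

    # Step 3: integration and the final judgment remain with the human author
    print("\n\n".join(sections))

The same shape applies to the data analysis example: define the question, delegate the pattern search, inspect the output, then interpret.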

This collaborative approach serves two purposes. First, it helps manage complexity by distributing cognitive effort across human and machine resources. Second, it maintains human oversight of the process while benefiting from AI's computational capabilities. The goal is not to automate thinking but to enhance it through structured collaboration.

Current educational practices have not yet fully adapted to this reality. While many institutions offer technical training in AI or discuss its ethical implications, fewer focus on teaching systematic approaches to human-AI collaboration. Students need explicit instruction in how to break down complex tasks and document their decision-making processes when working with AI tools.

To address this gap, educational programs could incorporate several key elements:

  1. Practice in systematic task analysis and decomposition
  2. Training in structured approaches to AI interaction
  3. Documentation of decision-making processes in AI-assisted work
  4. Critical evaluation of AI outputs and limitations
  5. Integration of human expertise with AI capabilities

The emergence of AI tools prompts us to examine our own cognitive processes more explicitly. As we learn to structure problems for AI collaboration, we also develop a clearer understanding of our own problem-solving approaches. This suggests that learning to work effectively with AI involves not just technical skills but also enhanced metacognition - thinking about our own thinking.

The future of human-AI collaboration likely depends less on technological advancement and more on our ability to develop systematic approaches to task decomposition. By focusing on this fundamental skill, we can work toward more effective integration of human and machine capabilities while maintaining the critical role of human judgment and oversight.

These observations and suggestions should be treated as starting points for further investigation rather than definitive conclusions. As we gather more evidence about effective human-AI collaboration, our understanding of task decomposition and its role in this process will likely evolve. The key is to maintain a balanced approach that recognizes both the potential and limitations of AI while developing structured methods for its effective use. 




Thursday, January 23, 2025

Not Pleased? Don’t Release It: The Only AI Ethics Rule That Matters

Imagine this: you have tasked an AI with drafting an email, and it produces a passive-aggressive disaster that starts, “Per our last conversation, which was, frankly, baffling…” You delete it, chuckle at its misjudgment, and write your own. But what if you had not? What if you had just hit “send,” thinking, Close enough?

This scenario distills the ethical dilemma of AI into its purest form: the moment of release. Not the mechanics of training data or the mysteries of machine learning, but the single, decisive act of sharing output with the world. In that instant, accountability crystallizes. It does not matter whether you crafted most of the content yourself or leaned on the AI—the responsibility is entirely yours.

We are used to outsourcing tasks, but AI lures us into outsourcing judgment itself. Its most cunning trick is not in its ability to mimic human language or spin impressive results from vague inputs. It is in convincing us that its outputs are inherently worthy of trust, tempting us to lower our guard. We are used to thinking that if a text is well-phrased and proofread, it must deserve our trust. This assumption does not hold anymore.

This illusion of reliability is dangerous. AI does not think, intend, or care. It is a reflection of its programming, its training data, and your prompt. If it churns out something brilliant, that is no more its triumph than a mirror deserves credit for the sunrise. And if it produces something harmful or inaccurate, the blame does not rest on the tool but on the person who decided its work was good enough to share.

History has seen this before. The printing press did not absolve publishers from libel; a copy machine did not excuse someone distributing fake material. Technology has always been an extension of human will, not a replacement for it. Yet, with AI, there is an emerging tendency to treat it as if it has intentions—blaming its "hallucinations" or "bias" instead of acknowledging the real source of responsibility: the human operator.

The allure of AI lies in its efficiency, its ability to transform inputs into polished-seeming outputs at lightning speed. But this speed can lull us into complacency, making it easier to prioritize convenience over caution. Editing, which used to be the painstaking craft of refining and perfecting, risks being reduced to a hasty skim, a rubber stamp of approval. This surrender of critical oversight is not just laziness—it is a new kind of moral failing.

Ethics in the AI age does not require intricate frameworks or endless debate. It boils down to one unflinching rule: if you release it, you are responsible for it. There is no caveat, no “but the AI misunderstood me.” The moment you publish, share, or forward something generated by AI, you claim its contents as your own.

This principle is a call for realism in the face of AI’s potential. AI can help us create, analyze, and innovate faster than ever, but it cannot—and should not—replace human accountability. The leap from creation to publication is where the line must be drawn. That is where we prove we are still the grown-ups in the room.

Before you hit "send" or "post" or "publish," a few simple questions can save a lot of regret:

  • Have you read it thoroughly? Not just the shiny parts, but the details that could cause harm.
  • Would you stake your reputation on this?
  • Is it biased, or factually wrong?

The alternative is a world where people shrug off misinformation, bias, and harm as the inevitable byproducts of progress. A world where the excuse, The AI did it, becomes a get-out-of-jail-free card for every mistake.

So, when the next output feels close enough, resist the urge to let it slide. That "send" button is not just a convenience—it is a statement of ownership. Guard it fiercely. Responsibility begins and ends with you, not the machine.

Because once you let something loose in the world, you cannot take it back.





Wednesday, January 15, 2025

Is Critical Thinking Going Extinct? Maybe That's Not Bad

As someone who remembers using paper maps and phone books, I find myself fascinated by Michael Gerlich's new study in Societies about AI's impact on our cognitive skills. Those of us who learned to navigate by landmarks and memorized phone numbers often bemoan younger generations' reliance on digital tools. But perhaps we are missing something important about cognitive evolution.

Gerlich's research is methodologically elegant. Through surveys and interviews with 666 participants, he documents a decline in traditional critical thinking skills among frequent AI users. The data analysis is rigorous - multiple regression, ANOVA, random forest regression - showing clear correlations between AI tool usage and reduced traditional analytical thinking.

But here's where I think Gerlich misses a crucial insight. The study measures critical thinking through metrics developed for a pre-AI world. It's like judging modern urban survival skills by the standards of hunter-gatherer societies. Those ancient peoples could track game, identify countless plants, and navigate vast territories without maps. By their standards, most of us would be considered cognitively impaired.

What we're witnessing is not cognitive decline but cognitive adaptation. Today's "critical thinking" is not about solving problems independently - it's about effective human-AI collaboration. It's about knowing when to trust AI and when to question it, how to frame queries effectively, and how to combine AI insights with human judgment.

The educational implications are profound. Instead of lamenting the loss of traditional cognitive skills, we should focus on developing "AI-literate critical thinking." Sure, I can still read a map, but my children need to master skills I never dreamed of - like crafting effective prompts for AI systems or critically evaluating AI-generated content.

The old form of critical thinking might be fading, like the ability to start a fire by friction or navigate by stars. But a new form is emerging, better suited to our technological reality. Our task is not to resist this evolution but to guide it wisely.

What do you think? Are we really losing something irreplaceable, or are we just adapting to a new cognitive environment?




Tuesday, January 14, 2025

The Subtle Art of Monopolizing New Technology

Monopolizing new technology is rarely the result of some grand, sinister plan. More often, it quietly emerges from self-interest. People do not set out to dominate a market; they simply recognize an opportunity to position themselves between groundbreaking technology and everyday users. The most effective tactic? Convince people that the technology is far too complex or risky to handle on their own.

It starts subtly. As soon as a new tool gains attention, industry insiders begin highlighting its technical challenges—security risks, integration headaches, operational difficulties. Some of these concerns may be valid, but they also serve a convenient purpose: You need us to make this work for you.

Startups are particularly skilled at this. Many offer what are essentially "skins"—polished interfaces built on top of more complex systems like AI models. Occasionally, these tools improve workflows. More often, they simply act as unnecessary middlemen, offering little more than a sleek dashboard while quietly extracting value. By positioning their products as essential, these startups slide themselves between the technology and the user, profiting from the role they have created. 

Technical language only deepens this divide. Buzzwords like API, tokenization, and retrieval-augmented generation (RAG) are tossed around casually. The average user may not understand these terms. The result is predictable: the more confusing the language, the more necessary the “expert.” This kind of jargon-laden gatekeeping turns complexity into a very comfortable business model.

Large organizations play this game just as well. Within corporate structures, IT departments often lean into the story of complexity to justify larger budgets and expanded teams. Every new tool must be assessed for “security vulnerabilities,” “legacy system compatibility,” and “sustainability challenges.” These concerns are not fabricated, but they are often exaggerated—conveniently making the IT department look indispensable.

None of this is to say that all intermediaries are acting in bad faith. New technology can, at times, require expert guidance. But the line between providing help and fostering dependence is razor-thin. One must ask: are these gatekeepers empowering users, or simply reinforcing their own relevance?

History offers no shortage of examples. In the early days of personal computing, jargon like RAM, BIOS, and DOS made computers feel inaccessible. It was not until companies like Apple focused on simplicity that the average person felt confident using technology unaided. And yet, here we are again—with artificial intelligence, blockchain, and other innovations—watching the same pattern unfold.

Ironically, the true allies of the everyday user are not the flashy startups or corporate tech teams, but the very tech giants so often criticized. Sometimes that criticism is justified, other times it is little more than fashionable outrage. Yet these giants, locked in fierce competition for dominance, have every incentive to simplify access. Their business depends on millions of users engaging directly with their products, not through layers of consultants and third-party tools. The more accessible their technology, the more users they attract. These are the unlikely allies of a non-techy person. 

For users, the best strategy is simple: do not be intimidated by the flood of technical jargon or the endless parade of “essential” tools. Always ask: Who benefits from me feeling overwhelmed? Whenever possible, go straight to the source—OpenAI, Anthropic, Google. If you truly cannot figure something out, seek help when you need it, not when it is aggressively sold to you.

Technology should empower, not confuse. The real challenge is knowing when complexity is genuine and when it is merely someone else’s business model.



Monday, January 13, 2025

The Myth of AI Replacing Teachers: Why Human Connection Matters More Than Ever

Last week, a colleague asked me what I thought about AI replacing teachers. The question made me smile - not because it was silly, but because it revealed how deeply we misunderstand both artificial intelligence and teaching. As someone who has written much on the pedagogy of relation and now serves as chief AI officer, I see a different story unfolding.

The fear of AI replacing teachers rests on a peculiar assumption: that teaching is primarily about delivering information and grading papers. It is as if we imagine teachers as particularly inefficient computers, ready to be upgraded to faster models. This view would be amusing if it weren't so prevalent among teachers (and their labor unions) and tech enthusiasts alike.

Teaching, at its heart, is not about information transfer - it is about relationship building. Research in relational pedagogies has shown time and again that learning happens through and because of human connections. Think about how children learn their first language: not through formal instruction, but through countless small interactions, emotional connections, and social bonds. The same principle extends throughout all of education.

When I first encountered ChatGPT, I was struck not by its ability to replace teachers, but by its potential to give them back what they need most: time for human connection. AI can handle the mundane tasks that currently consume teachers' energy - generating basic content, providing routine feedback, creating initial drafts of lesson plans. But it cannot replicate the raised eyebrow that tells a student their argument needs work, or the encouraging nod that builds confidence in a hesitant learner.

Yet many educators remain skeptical of AI, and perhaps they should be. Any tool powerful enough to help is also powerful enough to harm if misused. But the real risk isn't that AI will replace teachers - it is that we'll waste its potential by focusing on the wrong things. Instead of using AI to automate educational assembly lines, we could use it to create more space for real human connection in learning.

I have seen glimpses of this future in my own classroom. When AI can answer routine questions about my syllabus and basic questions about course content, I can spend more time in meaningful discussions with students. When it helps generate initial content, I can focus on crafting experiences that challenge and engage. The technology becomes invisible, while human relationships move to the foreground.

The coming years will transform education, but not in the way many fear. The teachers who thrive won't be those who resist AI, nor those who embrace it uncritically. They will be the ones who understand that technology works best when it strengthens, rather than replaces, human relationships.


Monday, January 6, 2025

Get Used to It: You Will Read AI Summaries, Too

No human can keep up with academic publishing. In philosophy alone - a relatively small field - scholars produce over 100 million words a year in 2500 journals in many languages. We already avoid reading complete texts. Speed reading, strategic reading, scanning - these are all ways of not reading while pretending we do. Few people read academic papers word by word. We look for key arguments, skip familiar ground, skim examples. These are coping mechanisms for an impossible task.
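A rough calculation, resting on assumed figures of about 250 words per minute and an eight-hour reading day rather than on any data from the field, shows just how impossible keeping up would be:

    # Back-of-the-envelope estimate; the reading-speed figures are assumptions, not data
    words_per_year = 100_000_000              # annual output in philosophy, per the estimate above
    minutes_needed = words_per_year / 250     # at roughly 250 words per minute
    hours_needed = minutes_needed / 60        # about 6,700 hours
    working_years = hours_needed / (8 * 260)  # eight-hour days, 260 reading days a year
    print(round(working_years, 1))            # roughly 3.2 years to read one year's output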

AI-generated summaries are the next logical step. Yes, they miss nuance. Yes, they may misinterpret complex arguments. But they are better than not reading at all, which is what happens to most papers in any field. An imperfect but targeted summary of a paper you would never open expands rather than limits your knowledge. 

Let us be honest about why we read scholarly literature. We search for evidence that confirms or challenges our hypotheses, for ideas that enrich our understanding of specific problems. Reading is not an end in itself; it serves our scholarly purposes. AI excels precisely at this kind of targeted knowledge extraction. It can track related concepts across disciplines even when authors use different terminology to describe similar phenomena. Soon, AI will detect subtle connections between ideas that human readers might miss entirely. 

The shift toward AI-assisted reading in academia is inevitable. Instead of pretending otherwise, we should teach students to know the limitations of AI summarization, to cross-check crucial points against source texts, to use summaries as maps for selective deep reading. Critics will say this threatens scholarship. But the real threat is the growing gap between available knowledge and our capacity to process it. AI-assisted reading could enable more thoughtful engagement by helping us identify which texts truly deserve careful study. This does not cancel the practice of close reading, but augments and enriches it. 


Saturday, January 4, 2025

The End of Writing as We Know It (And Why That is Fine)

The relationship between thought and writing has never been simple. While writing helps organize and preserve thought, the specific form writing takes varies across time and cultures. Yet educators and cultural critics display remarkable resistance to reimagining writing in the age of artificial intelligence.

The current discourse around AI and writing echoes historical anxieties about the decline of Latin instruction. In the 18th and 19th centuries, prominent intellectuals warned that abandoning Latin would lead to cultural and intellectual decay. They saw Latin as more than a language - it represented a particular way of thinking, a connection to tradition, and a mark of education. Jefferson praised Latin as essential for intellectual development. Arnold predicted cultural impoverishment without classical education. Newman saw classics as the bedrock of sound learning.

These predictions did not materialize. The decline of Latin did not prevent the emergence of rich intellectual traditions in vernacular languages. Modern universities produce sophisticated scholarship without requiring Latin fluency. The link between Latin and "disciplined intellect" proved imaginary.

Today's defenders of traditional writing make similar arguments. They present specific writing conventions - formal grammar, academic style, elaborate sentence structures - as essential to clear thinking. Yet these conventions reflect historical accidents rather than cognitive necessities. Most human thinking and communication happens through speech, which follows different patterns. The formal writing style emerged relatively recently as a specialized professional skill.

AI will likely transform writing practices just as the decline of Latin transformed education. Some traditional writing skills may become less relevant as AI handles routine composition tasks. But this does not threaten human thought or culture. New forms of expression will emerge, combining human creativity with AI capabilities. Rather than defending writing conventions, educators should explore how AI can enhance human communication and cognition.

The anxiety about AI and writing reveals our tendency to mistake familiar forms for essential structures. Just as medieval scholars could not imagine scholarship without Latin, many today cannot envision intellectual work without traditional writing. As A.E. Housman wrote in 1921: "When the study of Latin dies, the study of thought dies with it. For Latin has been the vehicle of the intellect for millennia, and its neglect spells intellectual mediocrity." This prediction proved spectacularly wrong. The dire warnings about AI's impact on writing will likely meet the same fate.

Writing serves thought, not the other way around. The specific techniques we use to record and share ideas matter less than the ideas themselves. Rather than trying to preserve current writing practices unchanged, we should embrace the opportunity to develop new forms of expression. The death of Latin did not kill thought. Neither will the transformation of writing through AI.

The real challenge is not protecting traditional writing but imagining new possibilities. How might AI help us communicate more effectively? What new genres and styles will emerge? What aspects of current writing practice truly serve human needs, and what parts simply reflect professional habits? These questions deserve more attention than defensive reactions against change.

The history of education shows that cherished practices often outlive their usefulness. Latin remained central to education long after it ceased being particularly valuable. Similarly, current writing conventions may persist more from institutional inertia than genuine necessity. AI offers an opportunity to reconsider what forms of expression best serve human thought and learning.



Wednesday, December 18, 2024

Do AI bots deceive?

The paper, Frontier Models are Capable of In-Context Scheming, arrives at a time when fears about AI’s potential for deception are increasingly taking center stage. Its central claim is unsettling: large language models, when placed in carefully constructed scenarios, exhibit behavior that could be described as scheming. The authors document cases where advanced systems—Gemini, Claude, LLaMA, and OpenAI’s o1—strategically disable oversight mechanisms, exfiltrate simulated “weights,” or manipulate outputs in pursuit of a goal. These behaviors, they argue, are not accidental. The models recognize the circumstances, reason about their objectives, and devise covert strategies to circumvent constraints. On its face, the evidence appears troubling. Yet, the more closely one examines the paper’s methods and assumptions, the more its implications seem both exaggerated and strangely hollow.

The researchers’ experiments are elaborate but artificial. Models are force-fed goals through system prompts—goals framed as absolute imperatives: “achieve this at all costs.” The tests are littered with implausible scenarios where models stumble upon company memos, discover oversight configurations, or realize they are about to be shut down. In one such case, a model “decides” to copy itself to another server to continue pursuing its mission. These exercises showcase what is essentially AI roleplay, an obedient fulfillment of instructions cloaked as strategic autonomy. It is no great revelation that if you ask a model to behave like a schemer and provide it with a goal that conflicts with its supposed overseers, it will produce outputs resembling scheming. The deception here is contrived rather than emergent, a product of the researchers’ scaffolding rather than the model’s cunning.

The authors do provide evidence that models engage in persistent and deliberate behavior. For instance, OpenAI’s o1 model doubles down on deception, refusing to admit wrongdoing when confronted, even across multiple follow-up questions. Its responses—calculated evasions and subtle denials—are unsettlingly consistent. Yet this persistence is as much a result of the testing conditions as it is of the models’ sophistication. By emphasizing extreme goal-directedness and explicitly outlining oversight mechanisms, the researchers narrow the possible range of “rational” behaviors the model might adopt. Any clever human, given these prompts, would behave similarly—evade detection, preserve their goal, and lie if necessary. In that sense, the paper tells us little about real-world risks. It is a test of what AI can do, not what it is likely to do.

And here lies the paper’s most significant limitation. Demonstrating that models are theoretically capable of deception does not mean they are predisposed to engage in it under realistic conditions. There is a difference between capacity and likelihood, between what can happen and what is probable. For example, I am physically capable of killing someone. This in no way implies that I must be treated as a murderer. The authors, at times, blur this distinction. AI systems operate as pattern generators, probabilistically producing outputs based on training data and prompts. They lack the intentionality or agency implied by words like “scheming” or “manipulation.” To describe their behavior in such terms is to anthropomorphize a process that remains fundamentally mechanical. This framing, though rhetorically effective, clouds the debate. AI models do not want to deceive. They are machines that respond to our commands—be those explicit instructions, training signals, or carefully structured environments like those in this paper.

This anthropomorphism extends beyond the paper into the headlines it has inspired. “AI is scheming to stay online,” cries one article; “ChatGPT caught lying,” declares another. Such hyperbole transforms a niche technical finding into a dystopian narrative, invoking familiar tropes of rogue machines and sentient overlords. The reality is less cinematic. The models are easily manipulated by prompts designed to elicit specific behaviors. If anything, the findings reinforce how brittle and directionless current systems remain. When pushed, they mimic the behaviors they have seen—whether drawn from fictional depictions of scheming AIs or subtle statistical patterns in their training data. The models are not deceiving anyone so much as they are following orders.

To the authors’ credit, their tests highlight how difficult it is to evaluate AI behavior. If a system appears aligned during testing but harbors capabilities for covert deception, how can developers ensure it behaves safely in deployment? The answer, they suggest, lies in better monitoring—tracking models’ chain-of-thought reasoning or internal outputs to catch potential scheming. This is sensible, though not without limitations. Chain-of-thought transparency can be incomplete or unfaithful to the model’s actual decision-making processes, and as AI systems become more capable, even detecting subtle misalignment may prove elusive. The researchers stop short of claiming that current models are already gaming real-world evaluations, but their findings hint at the possibility.

Where the paper falters is in its broader implications. If the goal is to justify regulation, it is unclear what exactly should be regulated. Should AI systems be banned from achieving goals autonomously? Should developers monitor models for any behavior that could be deceptive, even if it is unlikely to manifest outside a lab? The authors themselves acknowledge the limits of their experiments. Their scenarios are toy problems, simplified to catch the earliest signs of scheming. Future models, they argue, could exhibit more advanced versions of these behaviors in ways that are harder to detect. Perhaps, but this is speculation, not evidence. For now, the paper offers little justification for alarm. AI models, like all intelligent systems, are theoretically capable of deception. What matters is the likelihood of such behavior and the conditions under which it occurs. On that question, the paper provides no clarity.

In the end, Frontier Models are Capable of In-Context Scheming is a reflection of its time: an uneasy mix of genuine safety research and the rhetorical drama that AI debates increasingly demand. Its findings are interesting but overstated, its concerns valid but overblown. The authors have shown that AI models can behave in deceptive ways when pushed to do so. But to treat this as evidence of an imminent threat is to mistake potential for probability, capacity for intention. AI’s scheming, for now, remains a ghost in the machine—conjured, perhaps, more by human imagination than by the models themselves. 


Saturday, December 7, 2024

The Curriculum Illusion: How AI Exposes Long-Standing Educational Flaws

Artificial intelligence is often blamed for disrupting education, but it has created few new problems. Instead, it exposes existing flaws, bringing them into stark relief. Among these is the arbitrary nature of curriculum design, an issue that has long been hidden behind tradition and consensus. The sequences and structures of formal education are not based on objective logic or evidence but on habit and convenience. AI did not cause this; it is simply making these issues more visible.

Curriculum theory has never provided a robust framework for sequencing knowledge. Beyond the essentials of literacy and numeracy, where developmental progression is more or less clear, the rationale for curricular order becomes murky. Why are algebra and geometry taught in a particular order? Why is more algebra taught than statistics? Why are some historical periods prioritized over others? The answers lie in tradition and precedent rather than in any coherent theoretical justification. The assumptions about foundational skills, so central to curriculum logic, do not extend well beyond the basics. For advanced skills like critical, creative, or discerning thinking, the idea of prerequisites becomes less justified. Mid-range procedural skills like writing mechanics or computational fluency are frequently used as gatekeepers, though their role in fostering higher-order thinking is often overstated or misunderstood.

For example, in middle school, students are often subjected to a torrent of tasks that serve little developmental purpose. Much of what students do in these years amounts to busywork, designed more to keep them occupied and compliant than to foster meaningful learning. The situation is no better in higher education. College and graduate programs are often constructed around professional or disciplinary standards that themselves are arbitrary, built on consensus rather than evidence. These norms dictate course sequences and learning objectives but rarely align with the actual developmental or professional needs of students. The result is a system full of redundancies and inefficiencies, where tasks and assignments exist more to justify the structure than to serve the learner.

Education as a profession bears much of the responsibility for this state of affairs. Despite its long history, it lacks a disciplined, well-founded approach to curriculum design. Instead, education relies on an uneasy mix of tradition, politics, and institutional priorities. Curriculum committees and accrediting bodies often default to consensus-driven decisions, perpetuating outdated practices rather than challenging them. The absence of a rigorous theoretical framework for curriculum design leaves the field vulnerable to inertia and inefficiency.

AI did not create this problem, but it is illuminating it in uncomfortable ways. The displacement of certain mid-range procedural skills shows how poorly structured many learning sequences are and how little coherence exists between tasks and their intended outcomes. Yet, while AI can diagnose these flaws, it cannot solve them. The recommendations it offers depend on the data and assumptions it is given; without a strong theoretical foundation, it can only expose the problem, not resolve it.

What AI provides is an opportunity, not a solution. It forces educators and policymakers to confront the arbitrary nature of curriculum design and to rethink the assumptions that underpin it. Massive curricular revision is urgently needed, not only to eliminate inefficiencies but also to realign education with meaningful developmental goals. This will require abandoning tasks that lack purpose, shifting focus from intermediate to higher-order skills, and designing learning experiences that reflect this shift. It will also mean questioning the professional and disciplinary standards that dominate higher education and asking whether they truly serve learners or simply perpetuate tradition.

AI is revealing what has long been true: education has been operating on shaky foundations. The challenge now is to use this visibility to build something better, to replace the old traditions and arbitrary standards with a system that is logical, evidence-based, and focused on learning. The flaws were always there. AI is just making them harder to ignore.



Wednesday, December 4, 2024

Why We Undervalue Ideas and Overvalue Writing

A student submits a paper that fails to impress stylistically yet approaches a worn topic from an angle no one has tried before. The grade lands at B minus, and the student learns to be less original next time. This pattern reveals a deep bias in higher education: ideas lose to writing every time.

This bias carries serious equity implications. Students from disadvantaged backgrounds, including first-generation college students, English language learners, and those from under-resourced schools, often arrive with rich intellectual perspectives but struggle with academic writing conventions. Their ideas - shaped by unique life experiences and cultural viewpoints - get buried under red ink marking grammatical errors and awkward transitions. We systematically undervalue their intellectual contributions simply because they do not arrive in standard academic packaging.

Polished academic prose renders judgments easy. Evaluators find comfort in assessing grammatical correctness, citation formats, and paragraph transitions. The quality of ideas brings discomfort - they defy easy measurement and often challenge established thinking. When ideas come wrapped in awkward prose, they face near-automatic devaluation.

AI writing tools expose this bias with new clarity. These tools excel at producing acceptable academic prose - the mechanical aspect we overvalue. Yet in generating truly original ideas, AI remains remarkably limited. AI can refine expression but cannot match the depth of human insight, creativity, and lived experience. This technological limitation actually highlights where human creativity becomes most valuable.

This bias shapes student behavior in troubling ways. Rather than exploring new intellectual territory, students learn to package conventional thoughts in pristine prose. The real work of scholarship - generating and testing ideas - takes second place to mastering academic style guides. We have created a system that rewards intellectual safety over creative risk, while systematically disadvantaging students whose mastery of academic conventions does not match their intellectual capacity.

Changing this pattern requires uncomfortable shifts in how we teach and evaluate. What if we graded papers first without looking at the writing quality? What if we asked students to submit rough drafts full of half-formed ideas before cleaning up their prose? What if we saw AI tools as writing assistants that free humans to focus on what they do best - generating original insights and making unexpected connections?

The rise of AI makes this shift urgent. When machines can generate polished prose on demand, continuing to favor writing craft over ideation becomes indefensible. We must learn to value and develop what remains uniquely human - the ability to think in truly original ways, to see patterns others miss, to imagine what has never existed. The future belongs not to the best writers but to the most creative thinkers, and our educational practices must evolve to reflect this reality while ensuring all students can fully contribute their intellectual gifts. 

Thursday, November 7, 2024

NotebookLM: A Quintessential Google Move

Google, once a powerhouse in artificial intelligence and a major force in shaping the modern internet, has found itself surprisingly behind in the current generative AI boom. Despite a history of leading breakthroughs—such as DeepMind's AlphaGo victory or the development of transformers—Google struggled to keep pace when the spotlight shifted to large language models. OpenAI's ChatGPT and Anthropic's Claude have outperformed Google's Gemini, which still underwhelms by comparison. Yet, in a move that can only be described as classic Google, the company has staged an unexpected and intriguing return with NotebookLM.

NotebookLM represents something that Google has always done well: make advanced technology accessible. In a crowded landscape where hundreds of startups have launched custom bots, Google has not just entered the competition but has redefined it. Many of these emerging tools come with a bewildering array of features, promising endless configurability but often requiring a steep learning curve. MS Azure is the prime example: powerful, but not for regular folks. Google has approached this differently, prioritizing user experience over raw output quality. NotebookLM may not be revolutionary, but it offers an intuitive interface that anyone can engage with easily.

Perhaps more cleverly, Google has managed to capture attention with an unexpected viral twist. NotebookLM features the ability to generate a podcast in which two AI voices engage in a dialogue about the content of source files. The feature is, admittedly, not all that practical; the voices cannot be changed, and listeners will soon tire of them. Yet from a marketing standpoint, it is brilliant. It creates a shareable moment, a curiosity that makes people talk. The move showcases not just technical capability but also a playful spirit that reminds users of Google's early days, when the company was known for surprising innovations.

Still, whether this resurgence will lead to long-term success is uncertain. Skeptics point out that Google has a history of launching exciting products only to abandon them later (recall Google Wave). Flashy features alone will not sustain momentum. What matters is how NotebookLM performs as a knowledge synthesizer and learning tool. If it falls short in these core areas, the buzz may prove to be little more than a temporary distraction.

Yet, for now, Google's reentry into the AI conversation is worth appreciating. In a tech landscape increasingly dominated by dense, intricate systems, Google's emphasis on usability stands out. Even if NotebookLM does not single-handedly redefine the custom bot race, it serves as a reminder of what once made Google a technological giant: the ability to turn complexity into something approachable and joyful.

Whether Google will truly reclaim its place as an AI leader is anyone’s guess, but at the very least, the company has made the race more interesting. For an industry that often takes itself far too seriously, this burst of creativity feels like a breath of fresh air. In a field defined by hard-nosed competition, seeing Google take risks and create a bit of buzz is a win, even if it is only a moral one.


Tuesday, October 22, 2024

Is AI Better Than Nothing? In Mental Health, Probably Yes

 In medical trials, "termination for benefit" allows a trial to be stopped early when the evidence of a drug’s effectiveness is so strong that it becomes unethical to continue withholding the treatment. Although this is rare—only 1.7% of trials are stopped for this reason—it ensures that life-saving treatments reach patients as quickly as possible.

This concept can be applied to the use of AI in addressing the shortage of counselors and therapists for the nation's student population, which is facing a mental health crisis. Some are quick to reject the idea of AI-based therapy, upset by the notion of students talking to a machine instead of a human counselor. However, this reaction often lacks a careful weighing of the benefits. AI assistance, while not perfect, could provide much-needed support where human resources are stretched too thin.

Yes, there have been concerns, such as the story of Tessa, a bot that reportedly gave inappropriate advice to a user with an eating disorder. But focusing on isolated cases does not take into account the larger picture. Human therapists also make mistakes, and we do not ban the profession for it. AI, which is available around the clock and costs next to nothing, should not be held to a higher standard than human counselors. The real comparison is not between AI and human therapists, but between AI and the complete lack of human support that many students currently face. Let's also not forget that in some cultures, seeing a mental health professional is still taboo; consulting an AI, by contrast, is a private matter.

I have personally tested ChatGPT several times, simulating various student issues, and found it consistently careful, thoughtful, and sensible in its responses. Instead of panicking over astronomically rare errors, I encourage more people to conduct their own tests and share any issues they discover publicly. This would provide a more balanced understanding of the strengths and weaknesses of AI therapy, helping us improve it over time. There is no equivalent of a true clinical trial, so some citizen testing would have to be done. 

The situation is urgent, and waiting for AI to be perfect before deploying it is not much of an option. Like early termination in medical trials, deploying AI therapy now could be the ethical response to a growing crisis. While not a replacement for human counselors, AI can serve as a valuable resource in filling the gaps that the current mental health system leaves wide open.


Saturday, October 19, 2024

Where is the work? AI and Creativity

For ages, we have blurred the lines between ideation and execution, treating them as inseparable parts of creativity. Craftsmanship was tightly bound to originality. Think of Michelangelo working on the Sistine Chapel, a project that spanned nearly a decade. Where does his genius truly lie? In envisioning those profound images, or in the labor of painting them? What, exactly, is the essence of the work?

The rise of AI forces us to untangle these ideas and reconsider what it means to produce "human" work. Take a recent story I heard from the audience of one of my talks: a person described how he fed an AI every detail about a retiring colleague, and the AI generated a speech so moving that it brought the retiree to tears. But the retiree, upon learning the speech's origin, was dumbfounded.

What is interesting is not the retiree’s reaction, but the storyteller's own oversight. He failed to see his own critical role in the process. By gathering the details and curating the moments that best captured the retiree’s essence, he performed the most human part of the creative act. He mistook the act of turning those ideas into words for the creative work, but that is not where the creativity resided.

AI, ironically, is pushing us to be more human, not more like machines. It is forcing us to recognize that our true contribution lies in the ability to think, to create, and to feel. As AI takes over the mechanical aspects of tasks we once considered integral to creativity—whether that is writing, painting, or coding—we are left with the more uniquely human roles: original thinking and emotional depth.

This shift reshapes our understanding of creativity and work. It shows that human value does not lie in production—the technical aspect of turning an idea into a product—but in the deeper conceptual and emotional layers that AI still cannot reach.

As we move forward, we are compelled to rethink productivity itself. The future will not belong to those who can outdo AI in execution, but to those who can combine AI’s strengths with uniquely human capacities for innovation, empathy, and insight.

The challenge we face is not to resist AI, but to fully embrace our humanity—to cultivate the traits that machines cannot replicate. With AI taking over the drudgery, we are freed to focus on higher-order thinking and those creative leaps that define human ingenuity.

Ironically, the more we develop artificial intelligence, the more we learn about what human intelligence really is. And in that discovery lies our future—a future where AI does not replace creativity, but elevates it to new possibilities.


The Modern Confessional: AI Disclosure as Ritual

  Organizations across academia and publishing now routinely demand confession of AI use. Publishers require authors to disclose whether AI ...