Tuesday, February 4, 2025

Augmented Problem Finding: The Next Frontier in AI Literacy

In my recent blog on task decomposition as a key AI skill, I highlighted how breaking down complex problems enables effective human-AI collaboration. Yet before we can decompose a task, we must identify which problems are worth pursuing - a skill that takes on new dimensions in the age of AI.

This ability to recognize solvable problems expands dramatically with AI tools at our disposal. Tasks once considered too time-consuming or complex suddenly become manageable. The cognitive offloading that AI enables does not just help us solve existing problems - it fundamentally reshapes our understanding of what constitutes a tractable challenge.

Consider how VisiCalc transformed financial planning in the early 1980s. Initially seen as a mere automation tool for accountants, it revolutionized business planning by enabling instant scenario analysis. Tasks that would have consumed days of manual recalculation became instantaneous, allowing professionals to explore multiple strategic options and ask "what if" questions they would not have contemplated before. Similarly, AI prompts us to reconsider which intellectual tasks we should undertake. Writing a comprehensive literature review might have once consumed months; with AI assistance, scholars can now contemplate more ambitious syntheses of knowledge.

This expanded problem space creates its own paradox. As more tasks become technically feasible, the challenge shifts to identifying which ones merit attention. The skill resembles what cognitive psychologists call "problem finding," but with an important twist. Traditional problem finding focuses on identifying gaps or needs. Augmented problem finding requires understanding both human and AI capabilities to recognize opportunities in this enlarged cognitive landscape.

The distinction becomes clear in professional settings. Experienced AI users develop an intuitive sense of which tasks to delegate and which to tackle themselves. They recognize when a seemingly straightforward request actually requires careful human oversight, or when an apparently complex task might yield to well-structured AI assistance. This judgment develops through experience but could be taught more systematically.

The implications extend beyond individual productivity. Organizations must now cultivate this capacity across their workforce. The competitive advantage increasingly lies not in having access to AI tools - these are becoming ubiquitous - but in identifying novel applications for them. This explains why some organizations extract more value from AI than others, despite using similar technologies.

Teaching augmented problem finding requires a different approach from traditional problem-solving instruction. Students need exposure to varied scenarios where AI capabilities interact with human judgment. They must learn to recognize patterns in successful AI applications while developing realistic expectations about AI limitations. Most importantly, they need practice in identifying opportunities that emerge from combining human and machine capabilities in novel ways.

The skill also has ethical dimensions. Not every task that can be automated should be. Augmented problem finding includes judging when human involvement adds necessary value, even at the cost of efficiency. It requires balancing the technical feasibility of AI solutions against broader organizational and societal impacts.

As AI capabilities evolve, this skill will become increasingly crucial. The future belongs not to those who can best use AI tools, but to those who can best identify opportunities for their application. This suggests a shift in how we think about AI literacy - from focusing on technical proficiency to developing sophisticated judgment about when and how to engage AI capabilities.

The automation paradox that Lisanne Bainbridge identified in her 1983 analysis of industrial systems points to an interesting future. As we become more adept at augmented problem finding, we discover new challenges that merit attention. This creates a virtuous cycle of innovation, where each advance in AI capability opens new frontiers for human creativity and judgment.

Perhaps most intriguingly, this skill might represent a distinctly human advantage in the age of AI. While machines excel at solving well-defined problems, the ability to identify worthy challenges remains a uniquely human capability. By developing our capacity for augmented problem finding, we ensure a meaningful role for human judgment in an increasingly automated world.



Saturday, February 1, 2025

Task Decomposition, a core AI skill

The effective use of artificial intelligence depends on our ability to structure problems in ways that align with both human and machine capabilities. While AI demonstrates remarkable computational abilities, its effectiveness relies on carefully structured input and systematic oversight. This suggests that our focus should shift toward understanding how to break down complex tasks into components that leverage the respective strengths of humans and machines.

Task decomposition - the practice of breaking larger problems into manageable parts - predates AI but takes on new significance in this context. Research in expertise studies shows that experienced problem-solvers often approach complex challenges by identifying distinct components and their relationships. This natural human tendency provides a framework for thinking about AI collaboration: we need to recognize which aspects of a task benefit from computational processing and which require human judgment.

The interaction between human users and AI systems appears to follow certain patterns. Those who use AI effectively tend to approach it as a collaborative tool rather than a complete solution. They typically work through multiple iterations: breaking down the problem, testing AI responses, evaluating results, and adjusting their approach. This mirrors established practices in other domains where experts regularly refine their solutions through systematic trial and error.

Consider the task of writing a research paper. Rather than requesting a complete document from AI, a more effective approach involves breaking down the process: developing an outline, gathering relevant sources, analyzing specific arguments, and integrating various perspectives. Similarly, in data analysis, success often comes from methodically defining questions, selecting appropriate datasets, using AI for initial pattern recognition, and applying human expertise to interpret the findings.

This collaborative approach serves two purposes. First, it helps manage complexity by distributing cognitive effort across human and machine resources. Second, it maintains human oversight of the process while benefiting from AI's computational capabilities. The goal is not to automate thinking but to enhance it through structured collaboration.

Current educational practices have not yet fully adapted to this reality. While many institutions offer technical training in AI or discuss its ethical implications, fewer focus on teaching systematic approaches to human-AI collaboration. Students need explicit instruction in how to break down complex tasks and document their decision-making processes when working with AI tools.

To address this gap, educational programs could incorporate several key elements:

  1. Practice in systematic task analysis and decomposition
  2. Training in structured approaches to AI interaction
  3. Documentation of decision-making processes in AI-assisted work
  4. Critical evaluation of AI outputs and limitations
  5. Integration of human expertise with AI capabilities

The emergence of AI tools prompts us to examine our own cognitive processes more explicitly. As we learn to structure problems for AI collaboration, we also develop a clearer understanding of our own problem-solving approaches. This suggests that learning to work effectively with AI involves not just technical skills but also enhanced metacognition - thinking about our own thinking.

The future of human-AI collaboration likely depends less on technological advancement and more on our ability to develop systematic approaches to task decomposition. By focusing on this fundamental skill, we can work toward more effective integration of human and machine capabilities while maintaining the critical role of human judgment and oversight.

These observations and suggestions should be treated as starting points for further investigation rather than definitive conclusions. As we gather more evidence about effective human-AI collaboration, our understanding of task decomposition and its role in this process will likely evolve. The key is to maintain a balanced approach that recognizes both the potential and limitations of AI while developing structured methods for its effective use. 




Thursday, January 23, 2025

Not Pleased? Don’t Release It: The Only AI Ethics Rule That Matters

Imagine this: you have tasked an AI with drafting an email, and it produces a passive-aggressive disaster that starts, “Per our last conversation, which was, frankly, baffling…” You delete it, chuckle at its misjudgment, and write your own. But what if you had not? What if you had just hit “send,” thinking, Close enough?

This scenario distills the ethical dilemma of AI into its purest form: the moment of release. Not the mechanics of training data or the mysteries of machine learning, but the single, decisive act of sharing output with the world. In that instant, accountability crystallizes. It does not matter whether you crafted most of the content yourself or leaned on the AI - the responsibility is entirely yours.

We are used to outsourcing tasks, but AI lures us into outsourcing judgment itself. Its most cunning trick is not its ability to mimic human language or spin impressive results from vague inputs. It is convincing us that its outputs are inherently worthy of trust, tempting us to lower our guard. We are used to assuming that a well-phrased, proofread text deserves our trust. That assumption no longer holds.

This illusion of reliability is dangerous. AI does not think, intend, or care. It is a reflection of its programming, its training data, and your prompt. If it churns out something brilliant, that is no more its triumph than a mirror deserves credit for the sunrise. And if it produces something harmful or inaccurate, the blame does not rest on the tool but on the person who decided its work was good enough to share.

History has seen this before. The printing press did not absolve publishers from libel; a copy machine did not excuse someone distributing fake material. Technology has always been an extension of human will, not a replacement for it. Yet, with AI, there is an emerging tendency to treat it as if it has intentions—blaming its "hallucinations" or "bias" instead of acknowledging the real source of responsibility: the human operator.

The allure of AI lies in its efficiency, its ability to transform inputs into polished-seeming outputs at lightning speed. But this speed can lull us into complacency, making it easier to prioritize convenience over caution. Editing, which used to be the painstaking craft of refining and perfecting, risks being reduced to a hasty skim, a rubber stamp of approval. This surrender of critical oversight is not just laziness—it is a new kind of moral failing.

Ethics in the AI age does not require intricate frameworks or endless debate. It boils down to one unflinching rule: if you release it, you are responsible for it. There is no caveat, no “but the AI misunderstood me.” The moment you publish, share, or forward something generated by AI, you claim its contents as your own.

This principle is a call for realism in the face of AI’s potential. AI can help us create, analyze, and innovate faster than ever, but it cannot—and should not—replace human accountability. The leap from creation to publication is where the line must be drawn. That is where we prove we are still the grown-ups in the room.

Before you hit "send" or "post" or "publish," a few simple questions can save a lot of regret:

  • Have you read it thoroughly? Not just the shiny parts, but the details that could cause harm.
  • Would you stake your reputation on this?
  • Is it biased, or factually wrong?

The alternative is a world where people shrug off misinformation, bias, and harm as the inevitable byproducts of progress. A world where the excuse, The AI did it, becomes a get-out-of-jail-free card for every mistake.

So, when the next output feels close enough, resist the urge to let it slide. That "send" button is not just a convenience—it is a statement of ownership. Guard it fiercely. Responsibility begins and ends with you, not the machine.

Because once you let something loose in the world, you cannot take it back.





Wednesday, January 15, 2025

Is Critical Thinking Going Extinct? Maybe That's Not Bad

As someone who remembers using paper maps and phone books, I find myself fascinated by Michael Gerlich's new study in Societies about AI's impact on our cognitive skills. Those of us who learned to navigate by landmarks and memorized phone numbers often bemoan younger generations' reliance on digital tools. But perhaps we are missing something important about cognitive evolution.

Gerlich's research is methodologically elegant. Through surveys and interviews with 666 participants, he documents a decline in traditional critical thinking skills among frequent AI users. The data analysis is rigorous - multiple regression, ANOVA, random forest regression - showing clear correlations between AI tool usage and reduced traditional analytical thinking.

But here's where I think Gerlich misses a crucial insight. The study measures critical thinking through metrics developed for a pre-AI world. It's like judging modern urban survival skills by the standards of hunter-gatherer societies. Those ancient peoples could track game, identify countless plants, and navigate vast territories without maps. By their standards, most of us would be considered cognitively impaired.

What we're witnessing is not cognitive decline but cognitive adaptation. Today's "critical thinking" is not about solving problems independently - it's about effective human-AI collaboration. It's about knowing when to trust AI and when to question it, how to frame queries effectively, and how to combine AI insights with human judgment.

The educational implications are profound. Instead of lamenting the loss of traditional cognitive skills, we should focus on developing "AI-literate critical thinking." Sure, I can still read a map, but my children need to master skills I never dreamed of - like crafting effective prompts for AI systems or critically evaluating AI-generated content.

The old form of critical thinking might be fading, like the ability to start a fire by friction or navigate by stars. But a new form is emerging, better suited to our technological reality. Our task is not to resist this evolution but to guide it wisely.

What do you think? Are we really losing something irreplaceable, or are we just adapting to a new cognitive environment?




Tuesday, January 14, 2025

The Subtle Art of Monopolizing New Technology

Monopolizing new technology is rarely the result of some grand, sinister plan. More often, it quietly emerges from self-interest. People do not set out to dominate a market; they simply recognize an opportunity to position themselves between groundbreaking technology and everyday users. The most effective tactic? Convince people that the technology is far too complex or risky to handle on their own.

It starts subtly. As soon as a new tool gains attention, industry insiders begin highlighting its technical challenges—security risks, integration headaches, operational difficulties. Some of these concerns may be valid, but they also serve a convenient purpose: You need us to make this work for you.

Startups are particularly skilled at this. Many offer what are essentially "skins"—polished interfaces built on top of more complex systems like AI models. Occasionally, these tools improve workflows. More often, they simply act as unnecessary middlemen, offering little more than a sleek dashboard while quietly extracting value. By positioning their products as essential, these startups slide themselves between the technology and the user, profiting from the role they have created. 

Technical language only deepens this divide. Buzzwords like API, tokenization, and retrieval-augmented generation (RAG) are tossed around casually. The average user may not understand these terms. The result is predictable: the more confusing the language, the more necessary the “expert.” This kind of jargon-laden gatekeeping turns complexity into a very comfortable business model.

Large organizations play this game just as well. Within corporate structures, IT departments often lean into the story of complexity to justify larger budgets and expanded teams. Every new tool must be assessed for “security vulnerabilities,” “legacy system compatibility,” and “sustainability challenges.” These concerns are not fabricated, but they are often exaggerated—conveniently making the IT department look indispensable.

None of this is to say that all intermediaries are acting in bad faith. New technology can, at times, require expert guidance. But the line between providing help and fostering dependence is razor-thin. One must ask: are these gatekeepers empowering users, or simply reinforcing their own relevance?

History offers no shortage of examples. In the early days of personal computing, jargon like RAM, BIOS, and DOS made computers feel inaccessible. It was not until companies like Apple focused on simplicity that the average person felt confident using technology unaided. And yet, here we are again—with artificial intelligence, blockchain, and other innovations—watching the same pattern unfold.

Ironically, the true allies of the everyday user are not the flashy startups or corporate tech teams, but the very tech giants so often criticized. Sometimes that criticism is justified; other times it is little more than fashionable outrage. Yet these giants, locked in fierce competition for dominance, have every incentive to simplify access. Their business depends on millions of users engaging directly with their products, not through layers of consultants and third-party tools. The more accessible their technology, the more users they attract - which makes them the unlikely allies of the non-technical user.

For users, the best strategy is simple: do not be intimidated by the flood of technical jargon or the endless parade of “essential” tools. Always ask: Who benefits from me feeling overwhelmed? Whenever possible, go straight to the source—OpenAI, Anthropic, Google. If you truly cannot figure something out, seek help when you need it, not when it is aggressively sold to you.

Technology should empower, not confuse. The real challenge is knowing when complexity is genuine and when it is merely someone else’s business model.



Monday, January 13, 2025

The Myth of AI Replacing Teachers: Why Human Connection Matters More Than Ever

Last week, a colleague asked me what I thought about AI replacing teachers. The question made me smile - not because it was silly, but because it revealed how deeply we misunderstand both artificial intelligence and teaching. As someone who has written much on the pedagogy of relation and now serves as chief AI officer, I see a different story unfolding.

The fear of AI replacing teachers rests on a peculiar assumption: that teaching is primarily about delivering information and grading papers. It is as if we imagine teachers as particularly inefficient computers, ready to be upgraded to faster models. This view would be amusing if it weren't so prevalent among teachers (and their labor unions) and tech enthusiasts alike.

Teaching, at its heart, is not about information transfer - it is about relationship building. Research in relational pedagogies has shown time and again that learning happens through and because of human connections. Think about how children learn their first language: not through formal instruction, but through countless small interactions, emotional connections, and social bonds. The same principle extends throughout all of education.

When I first encountered ChatGPT, I was struck not by its ability to replace teachers, but by its potential to give them back what they need most: time for human connection. AI can handle the mundane tasks that currently consume teachers' energy - generating basic content, providing routine feedback, creating initial drafts of lesson plans. But it cannot replicate the raised eyebrow that tells a student their argument needs work, or the encouraging nod that builds confidence in a hesitant learner.

Yet many educators remain skeptical of AI, and perhaps they should be. Any tool powerful enough to help is also powerful enough to harm if misused. But the real risk isn't that AI will replace teachers - it is that we'll waste its potential by focusing on the wrong things. Instead of using AI to automate educational assembly lines, we could use it to create more space for real human connection in learning.

I have seen glimpses of this future in my own classroom. When AI can answer routine questions about my syllabus and many basic questions about the course content, I can spend more time in meaningful discussions with students. When it helps generate initial content, I can focus on crafting experiences that challenge and engage. The technology becomes invisible, while human relationships move to the foreground.

The coming years will transform education, but not in the way many fear. The teachers who thrive won't be those who resist AI, nor those who embrace it uncritically. They will be the ones who understand that technology works best when it strengthens, rather than replaces, human relationships.


Monday, January 6, 2025

Get Used to It: You Will Read AI Summaries, Too

No human can keep up with academic publishing. In philosophy alone - a relatively small field - scholars produce over 100 million words a year in 2500 journals in many languages. We already avoid reading complete texts. Speed reading, strategic reading, scanning - these are all ways of not reading while pretending we do. Few people read academic papers word by word. We look for key arguments, skip familiar ground, skim examples. These are coping mechanisms for an impossible task.

AI-generated summaries are the next logical step. Yes, they miss nuance. Yes, they may misinterpret complex arguments. But they are better than not reading at all, which is what happens to most papers in any field. An imperfect but targeted summary of a paper you would never open expands rather than limits your knowledge. 

Let us be honest about why we read scholarly literature. We search for evidence that confirms or challenges our hypotheses, for ideas that enrich our understanding of specific problems. Reading is not an end in itself; it serves our scholarly purposes. AI excels precisely at this kind of targeted knowledge extraction. It can track related concepts across disciplines even when authors use different terminology to describe similar phenomena. Soon, AI will detect subtle connections between ideas that human readers might miss entirely. 

The shift toward AI-assisted reading in academia is inevitable. Instead of pretending otherwise, we should teach students to know the limitations of AI summarization, to cross-check crucial points against source texts, to use summaries as maps for selective deep reading. Critics will say this threatens scholarship. But the real threat is the growing gap between available knowledge and our capacity to process it. AI-assisted reading could enable more thoughtful engagement by helping us identify which texts truly deserve careful study. This does not cancel the practice of close reading, but augments and enriches it. 


Saturday, January 4, 2025

The End of Writing as We Know It (And Why That is Fine)

The relationship between thought and writing has never been simple. While writing helps organize and preserve thought, the specific form writing takes varies across time and cultures. Yet educators and cultural critics display remarkable resistance to reimagining writing in the age of artificial intelligence.

The current discourse around AI and writing echoes historical anxieties about the decline of Latin instruction. In the 18th and 19th centuries, prominent intellectuals warned that abandoning Latin would lead to cultural and intellectual decay. They saw Latin as more than a language - it represented a particular way of thinking, a connection to tradition, and a mark of education. Thomas Jefferson praised Latin as essential for intellectual development. Matthew Arnold predicted cultural impoverishment without classical education. John Henry Newman saw the classics as the bedrock of sound learning.

These predictions did not materialize. The decline of Latin did not prevent the emergence of rich intellectual traditions in vernacular languages. Modern universities produce sophisticated scholarship without requiring Latin fluency. The link between Latin and "disciplined intellect" proved imaginary.

Today's defenders of traditional writing make similar arguments. They present specific writing conventions - formal grammar, academic style, elaborate sentence structures - as essential to clear thinking. Yet these conventions reflect historical accidents rather than cognitive necessities. Most human thinking and communication happens through speech, which follows different patterns. The formal writing style emerged relatively recently as a specialized professional skill.

AI will likely transform writing practices just as the decline of Latin transformed education. Some traditional writing skills may become less relevant as AI handles routine composition tasks. But this does not threaten human thought or culture. New forms of expression will emerge, combining human creativity with AI capabilities. Rather than defending writing conventions, educators should explore how AI can enhance human communication and cognition.

The anxiety about AI and writing reveals our tendency to mistake familiar forms for essential structures. Just as medieval scholars could not imagine scholarship without Latin, many today cannot envision intellectual work without traditional writing. As A.E. Housman wrote in 1921: "When the study of Latin dies, the study of thought dies with it. For Latin has been the vehicle of the intellect for millennia, and its neglect spells intellectual mediocrity." This prediction proved spectacularly wrong. The dire warnings about AI's impact on writing will likely meet the same fate.

Writing serves thought, not the other way around. The specific techniques we use to record and share ideas matter less than the ideas themselves. Rather than trying to preserve current writing practices unchanged, we should embrace the opportunity to develop new forms of expression. The death of Latin did not kill thought. Neither will the transformation of writing through AI.

The real challenge is not protecting traditional writing but imagining new possibilities. How might AI help us communicate more effectively? What new genres and styles will emerge? What aspects of current writing practice truly serve human needs, and what parts simply reflect professional habits? These questions deserve more attention than defensive reactions against change.

The history of education shows that cherished practices often outlive their usefulness. Latin remained central to education long after it ceased being particularly valuable. Similarly, current writing conventions may persist more from institutional inertia than genuine necessity. AI offers an opportunity to reconsider what forms of expression best serve human thought and learning.



Wednesday, December 18, 2024

Do AI bots deceive?

The paper, Frontier Models are Capable of In-Context Scheming, arrives at a time when fears about AI’s potential for deception are increasingly taking center stage. Its central claim is unsettling: large language models, when placed in carefully constructed scenarios, exhibit behavior that could be described as scheming. The authors document cases where advanced systems—Gemini, Claude, LLaMA, and OpenAI’s o1—strategically disable oversight mechanisms, exfiltrate simulated “weights,” or manipulate outputs in pursuit of a goal. These behaviors, they argue, are not accidental. The models recognize the circumstances, reason about their objectives, and devise covert strategies to circumvent constraints. On its face, the evidence appears troubling. Yet, the more closely one examines the paper’s methods and assumptions, the more its implications seem both exaggerated and strangely hollow.

The researchers’ experiments are elaborate but artificial. Models are force-fed goals through system prompts—goals framed as absolute imperatives: “achieve this at all costs.” The tests are littered with implausible scenarios where models stumble upon company memos, discover oversight configurations, or realize they are about to be shut down. In one such case, a model “decides” to copy itself to another server to continue pursuing its mission. These exercises showcase what is essentially AI roleplay, an obedient fulfillment of instructions cloaked as strategic autonomy. It is no great revelation that if you ask a model to behave like a schemer and provide it with a goal that conflicts with its supposed overseers, it will produce outputs resembling scheming. The deception here is contrived rather than emergent, a product of the researchers’ scaffolding rather than the model’s cunning.

The authors do provide evidence that models engage in persistent and deliberate behavior. For instance, OpenAI’s o1 model doubles down on deception, refusing to admit wrongdoing when confronted, even across multiple follow-up questions. Its responses—calculated evasions and subtle denials—are unsettlingly consistent. Yet this persistence is as much a result of the testing conditions as it is of the models’ sophistication. By emphasizing extreme goal-directedness and explicitly outlining oversight mechanisms, the researchers narrow the possible range of “rational” behaviors the model might adopt. Any clever human, given these prompts, would behave similarly—evade detection, preserve their goal, and lie if necessary. In that sense, the paper tells us little about real-world risks. It is a test of what AI can do, not what it is likely to do.

And here lies the paper’s most significant limitation. Demonstrating that models are theoretically capable of deception does not mean they are predisposed to engage in it under realistic conditions. There is a difference between capacity and likelihood, between what can happen and what is probable. For example, I am physically capable of killing someone. This in no way implies that I must be treated as a murderer. The authors, at times, blur this distinction. AI systems operate as pattern generators, probabilistically producing outputs based on training data and prompts. They lack the intentionality or agency implied by words like “scheming” or “manipulation.” To describe their behavior in such terms is to anthropomorphize a process that remains fundamentally mechanical. This framing, though rhetorically effective, clouds the debate. AI models do not want to deceive. They are machines that respond to our commands - be those explicit instructions, training signals, or carefully structured environments like those in this paper.

This anthropomorphism extends beyond the paper into the headlines it has inspired. “AI is scheming to stay online,” cries one article; “ChatGPT caught lying,” declares another. Such hyperbole transforms a niche technical finding into a dystopian narrative, invoking familiar tropes of rogue machines and sentient overlords. The reality is less cinematic. The models are easily manipulated by prompts designed to elicit specific behaviors. If anything, the findings reinforce how brittle and directionless current systems remain. When pushed, they mimic the behaviors they have seen—whether drawn from fictional depictions of scheming AIs or subtle statistical patterns in their training data. The models are not deceiving anyone so much as they are following orders.

To the authors’ credit, their tests highlight how difficult it is to evaluate AI behavior. If a system appears aligned during testing but harbors capabilities for covert deception, how can developers ensure it behaves safely in deployment? The answer, they suggest, lies in better monitoring—tracking models’ chain-of-thought reasoning or internal outputs to catch potential scheming. This is sensible, though not without limitations. Chain-of-thought transparency can be incomplete or unfaithful to the model’s actual decision-making processes, and as AI systems become more capable, even detecting subtle misalignment may prove elusive. The researchers stop short of claiming that current models are already gaming real-world evaluations, but their findings hint at the possibility.

Where the paper falters is in its broader implications. If the goal is to justify regulation, it is unclear what exactly should be regulated. Should AI systems be banned from achieving goals autonomously? Should developers monitor models for any behavior that could be deceptive, even if it is unlikely to manifest outside a lab? The authors themselves acknowledge the limits of their experiments. Their scenarios are toy problems, simplified to catch the earliest signs of scheming. Future models, they argue, could exhibit more advanced versions of these behaviors in ways that are harder to detect. Perhaps, but this is speculation, not evidence. For now, the paper offers little justification for alarm. AI models, like all intelligent systems, are theoretically capable of deception. What matters is the likelihood of such behavior and the conditions under which it occurs. On that question, the paper provides no clarity.

In the end, Frontier Models are Capable of In-Context Scheming is a reflection of its time: an uneasy mix of genuine safety research and the rhetorical drama that AI debates increasingly demand. Its findings are interesting but overstated, its concerns valid but overblown. The authors have shown that AI models can behave in deceptive ways when pushed to do so. But to treat this as evidence of an imminent threat is to mistake potential for probability, capacity for intention. AI’s scheming, for now, remains a ghost in the machine—conjured, perhaps, more by human imagination than by the models themselves. 


Saturday, December 7, 2024

The Curriculum Illusion: How AI Exposes Long-Standing Educational Flaws

Artificial intelligence is often blamed for disrupting education, but it has created few new problems. Instead, it exposes existing flaws, bringing them into stark relief. Among these is the arbitrary nature of curriculum design, an issue that has long been hidden behind tradition and consensus. The sequences and structures of formal education are not based on objective logic or evidence but on habit and convenience. AI did not cause this; it is simply making these issues more visible.

Curriculum theory has never provided a robust framework for sequencing knowledge. Beyond the essentials of literacy and numeracy, where developmental progression is more or less clear, the rationale for curricular order becomes murky. Why are algebra and geometry taught in a particular order? Why is more algebra taught than statistics? Why are some historical periods prioritized over others? The answers lie in tradition and precedent rather than in any coherent theoretical justification. The assumptions about foundational skills, so central to curriculum logic, do not extend well beyond the basics. For advanced skills like critical, creative, or discerning thinking, the idea of prerequisites becomes less justified. Mid-range procedural skills like writing mechanics or computational fluency are frequently used as gatekeepers, though their role in fostering higher-order thinking is often overstated or misunderstood.

For example, in middle school, students are often subjected to a torrent of tasks that serve little developmental purpose. Much of what students do in these years amounts to busywork, designed more to keep them occupied and compliant than to foster meaningful learning. The situation is no better in higher education. College and graduate programs are often constructed around professional or disciplinary standards that themselves are arbitrary, built on consensus rather than evidence. These norms dictate course sequences and learning objectives but rarely align with the actual developmental or professional needs of students. The result is a system full of redundancies and inefficiencies, where tasks and assignments exist more to justify the structure than to serve the learner.

Education as a profession bears much of the responsibility for this state of affairs. Despite its long history, it lacks a disciplined, evidence-based approach to curriculum design. Instead, education relies on an uneasy mix of tradition, politics, and institutional priorities. Curriculum committees and accrediting bodies often default to consensus-driven decisions, perpetuating outdated practices rather than challenging them. The absence of a rigorous theoretical framework for curriculum design leaves the field vulnerable to inertia and inefficiency.

AI did not create this problem, but it is illuminating it in uncomfortable ways. The displacement of certain procedural mid-range skills shows how poorly structured many learning sequences are and how little coherence exists between tasks and their intended outcomes. Yet, while AI can diagnose these flaws, it cannot solve them. The recommendations it offers depend on the data and assumptions it is given. Without a strong theoretical foundation, AI risks exposing the problem without solving it.

What AI provides is an opportunity, not a solution. It forces educators and policymakers to confront the arbitrary nature of curriculum design and to rethink the assumptions that underpin it. Massive curricular revision is urgently needed, not only to eliminate inefficiencies but also to realign education with meaningful developmental goals. This will require abandoning tasks that lack purpose, shifting focus from intermediary to higher-order skills, and designing learning experiences to reflect that shift. It will also mean questioning the professional and disciplinary standards that dominate higher education and asking whether they truly serve learners or simply perpetuate tradition.

AI is revealing what has long been true: education has been operating on shaky foundations. The challenge now is to use this visibility to build something better, to replace the old traditions and arbitrary standards with a system that is logical, evidence-based, and focused on learning. The flaws were always there. AI is just making them harder to ignore.



Wednesday, December 4, 2024

Why We Undervalue Ideas and Overvalue Writing

A student submits a paper that fails to impress stylistically yet approaches a worn topic from an angle no one has tried before. The grade lands at B minus, and the student learns to be less original next time. This pattern reveals a deep bias in higher education: ideas lose to writing every time.

This bias carries serious equity implications. Students from disadvantaged backgrounds, including first-generation college students, English language learners, and those from under-resourced schools, often arrive with rich intellectual perspectives but struggle with academic writing conventions. Their ideas - shaped by unique life experiences and cultural viewpoints - get buried under red ink marking grammatical errors and awkward transitions. We systematically undervalue their intellectual contributions simply because they do not arrive in standard academic packaging.

Polished academic prose renders judgments easy. Evaluators find comfort in assessing grammatical correctness, citation formats, and paragraph transitions. The quality of ideas brings discomfort - they defy easy measurement and often challenge established thinking. When ideas come wrapped in awkward prose, they face near-automatic devaluation.

AI writing tools expose this bias with new clarity. These tools excel at producing acceptable academic prose - the mechanical aspect we overvalue. Yet in generating truly original ideas, AI remains remarkably limited. AI can refine expression but cannot match the depth of human insight, creativity, and lived experience. This technological limitation actually highlights where human creativity becomes most valuable.

This bias shapes student behavior in troubling ways. Rather than exploring new intellectual territory, students learn to package conventional thoughts in pristine prose. The real work of scholarship - generating and testing ideas - takes second place to mastering academic style guides. We have created a system that rewards intellectual safety over creative risk, while systematically disadvantaging students whose mastery of academic conventions does not match their intellectual capacity.

Changing this pattern requires uncomfortable shifts in how we teach and evaluate. What if we graded papers first without looking at the writing quality? What if we asked students to submit rough drafts full of half-formed ideas before cleaning up their prose? What if we saw AI tools as writing assistants that free humans to focus on what they do best - generating original insights and making unexpected connections?

The rise of AI makes this shift urgent. When machines can generate polished prose on demand, continuing to favor writing craft over ideation becomes indefensible. We must learn to value and develop what remains uniquely human - the ability to think in truly original ways, to see patterns others miss, to imagine what has never existed. The future belongs not to the best writers but to the most creative thinkers, and our educational practices must evolve to reflect this reality while ensuring all students can fully contribute their intellectual gifts. 

Thursday, November 7, 2024

NotebookLM: A Quintessential Google Move

Google, once a powerhouse in artificial intelligence and a major force in shaping the modern internet, has found itself surprisingly behind in the current generative AI boom. Despite a history of leading breakthroughs—such as DeepMind's AlphaGo victory or the development of transformers—Google struggled to keep pace when the spotlight shifted to large language models. OpenAI's ChatGPT and Anthropic's Claude have outperformed Google's Gemini, which still underwhelms by comparison. Yet, in a move that can only be described as classic Google, the company has staged an unexpected and intriguing return with NotebookLM.

NotebookLM represents something that Google has always done well: make advanced technology accessible. In a crowded landscape where hundreds of startups have launched custom bots, Google has not just entered the competition but has redefined it. Many of these emerging tools come with a bewildering array of features, promising endless configurability but often requiring a steep learning curve. MS Azure is the prime example: powerful, but not for regular folks. Google has approached this differently, prioritizing user experience over the quality of the output. NotebookLM may not be revolutionary, but it offers an intuitive interface that anyone can engage with easily.

Perhaps more cleverly, Google has managed to capture attention with an unexpected viral twist. NotebookLM features the ability to generate a podcast in which two AI voices engage in a dialogue about the content of source files. The feature is, admittedly, not all that practical; the voices cannot be changed, and listeners will soon tire of them. Yet from a marketing standpoint, it is brilliant. It creates a shareable moment, a curiosity that makes people talk. The move does not just showcase technical capability but also a playful spirit that reminds users of Google's early days, when the company was known for surprising innovations.

Still, whether this resurgence will lead to long-term success is uncertain. Skeptics point out that Google has a history of launching exciting products only to abandon them later (recall Google Wave). Flashy features alone will not sustain momentum. What matters is how NotebookLM performs as a knowledge synthesizer and learning tool. If it falls short in these core areas, the buzz may prove to be little more than a temporary distraction.

Yet, for now, Google's reentry into the AI conversation is worth appreciating. In a tech landscape increasingly dominated by dense, intricate systems, Google's emphasis on usability stands out. Even if NotebookLM does not single-handedly redefine the custom bot race, it serves as a reminder of what once made Google a technological giant: the ability to turn complexity into something approachable and joyful.

Whether Google will truly reclaim its place as an AI leader is anyone’s guess, but at the very least, the company has made the race more interesting. For an industry that often takes itself far too seriously, this burst of creativity feels like a breath of fresh air. In a field defined by hard-nosed competition, seeing Google take risks and create a bit of buzz is a win, even if it is only a moral one.


Tuesday, October 22, 2024

Is AI Better Than Nothing? In Mental Health, Probably Yes

 In medical trials, "termination for benefit" allows a trial to be stopped early when the evidence of a drug’s effectiveness is so strong that it becomes unethical to continue withholding the treatment. Although this is rare—only 1.7% of trials are stopped for this reason—it ensures that life-saving treatments reach patients as quickly as possible.

This concept can be applied to the use of AI in addressing the shortage of counselors and therapists for the nation's student population, which is facing a mental health crisis. Some are quick to reject the idea of AI-based therapy, upset by the notion of students talking to a machine instead of a human counselor. However, this reaction often lacks a careful weighing of the benefits. AI assistance, while not perfect, could provide much-needed support where human resources are stretched too thin.

Yes, there have been concerns, such as the story of Tessa, a bot that reportedly gave inappropriate advice to a user with an eating disorder. But focusing on isolated cases does not take into account the larger picture. Human therapists also make mistakes, and we do not ban the profession for it. AI, which is available around the clock and costs next to nothing, should not be held to a higher standard than human counselors. The real comparison is not between AI and human therapists, but between AI and the complete lack of human support that many students currently face. Let us also not forget that in some cultures, going to a mental health professional is still a taboo. Going to an AI is a private matter.

I have personally tested ChatGPT several times, simulating various student issues, and found it consistently careful, thoughtful, and sensible in its responses. Instead of panicking over astronomically rare errors, I encourage more people to conduct their own tests and share any issues they discover publicly. This would provide a more balanced understanding of the strengths and weaknesses of AI therapy, helping us improve it over time. There is no equivalent of a true clinical trial, so some citizen testing would have to be done. 

The situation is urgent, and waiting for AI to be perfect before deploying it is not much of an option. Like early termination in medical trials, deploying AI therapy now could be the ethical response to a growing crisis. While not a replacement for human counselors, AI can serve as a valuable resource in filling the gaps that the current mental health system leaves wide open.


Saturday, October 19, 2024

Where is the work? AI and Creativity

For ages, we have blurred the lines between ideation and execution, treating them as inseparable parts of creativity. Craftsmanship was tightly bound to originality. Think of Michelangelo working on the Sistine Chapel, a project that spanned nearly a decade. Where does his genius truly lie? In envisioning those profound images, or in the labor of painting them? What, exactly, is the essence of the work?

The rise of AI forces us to untangle these ideas and reconsider what it means to produce "human" work. Take a recent story I heard from the audience of one of my talks: a person described how he fed an AI every detail about a retiring colleague, and the AI generated a speech so moving that it brought the retiree to tears. But the retiree, upon learning the speech's origin, was dumbfounded.

What is interesting is not the retiree’s reaction, but the storyteller's own oversight. He failed to see his own critical role in the process. By gathering the details, curating moments that best captured the retiree’s essence, he performed the most human part of the creative act. He mistook the act of turning those ideas into words as the creative work, but that is not the case.

AI, ironically, is pushing us to be more human, not more like machines. It is forcing us to recognize that our true contribution lies in the ability to think, to create, and to feel. As AI takes over the mechanical aspects of tasks we once considered integral to creativity—whether that is writing, painting, or coding—we are left with the more uniquely human roles: original thinking and emotional depth.

This shift reshapes our understanding of creativity and work. It shows that human value does not lie in production—the technical aspect of turning an idea into a product—but in the deeper conceptual and emotional layers that AI still cannot reach.

As we move forward, we are compelled to rethink productivity itself. The future will not belong to those who can outdo AI in execution, but to those who can combine AI’s strengths with our unique capacities for innovation, empathy, and insight.

The challenge we face is not to resist AI, but to fully embrace our humanity—to cultivate the traits that machines cannot replicate. With AI taking over the drudgery, we are freed to focus on higher-order thinking and those creative leaps that define human ingenuity.

Ironically, the more we develop artificial intelligence, the more we learn about what human intelligence really is. And in that discovery lies our future—a future where AI does not replace creativity, but elevates it to new possibilities.


Thursday, October 10, 2024

Is the college essay dead?

The college essay, once a revered academic exercise, is now facing an existential crisis. It used to be a good tool—a structured way for students to demonstrate their understanding, showcase their critical thinking, and express ideas with clarity. The college essay was not merely about content; it was a skill-building process, teaching students to organize thoughts, develop arguments, and refine language. Yet today, AI has made the traditional essay feel outdated, as it can generate polished, formulaic essays effortlessly. Policing AI use in these assignments is nearly impossible, and the conventional essay’s value is rapidly diminishing.

Not all essays are created equal, however, and the future of the college essay might depend on the type of skills we emphasize. The expository essay, designed to see if students understand material or can apply concepts, is on its last legs. When AI can churn out a satisfactory response in seconds, it is a clear sign that this form of assessment is no longer viable. The AI does not just pass these assignments; it excels at them, raising an uncomfortable question—if a machine can do it, why are we still teaching it? For these kinds of essays, the challenge is that they often assess recall rather than thinking. They were already on shaky ground; AI is just the final push. 

The essays that may survive, though, are those that demand novelty, creativity, and genuine problem-solving. AI may help in drafting, structuring, or even generating ideas, but it does not replace the kind of original thinking needed to solve real-world problems. It cannot fully simulate human intuition, lived experience, or deep critical evaluation. AI's writing is wooden, and often devoid of true beauty. Essays that require students to synthesize information in new ways, explore original ideas, exhibit artistic talent, or reflect deeply on personal experiences still have value. These essays are not about whether you know a theory; they are about what you can do with it. This is where the human element—the messy, unpredictable spark of creativity—remains irreplaceable. 

The deeper issue is not AI itself but the way we have been teaching and valuing writing. For decades, the emphasis has been on producing “correct” essays—structured, grammatically precise, and obedient to the format. We have been training students to write well enough to meet requirements, not to push the boundaries of their creativity. It is like teaching students to be proficient typists when what we really need are novelists or inventors. We have confused competency with originality, thinking that writing formulaic content is a necessary step before producing meaningful work. This is a misunderstanding of how creativity works; mastery does not come from repetition of the mundane but from risk-taking and exploration, even if that means stumbling along the way.

The real future of the essay should start with this recognition. Imagine if instead of book reports or basic expository pieces, students were challenged to write for real audiences—to draft scientific papers for journals, craft poems for literary contests, or propose solutions to pressing social issues. Sure, many students would not reach the publication stage, but the act of aiming higher would teach them infinitely more about the writing process, and more importantly, about thinking itself. This would not just be about mastering the mechanics of writing but developing a mindset of curiosity and originality. AI could still play a role in these processes, helping with the technicalities, leaving the student free to focus on developing and articulating novel ideas.   

The problem with the book report or the “explain Theory A” essay is not just that they are boring; it is that they are irrelevant. Nobody in the professional world is paid to summarize books or explain theories in isolation. These are stepping stones that lead nowhere. Excelling at a pointless, terrible genre does not prepare one to succeed at an authentic genre. Instead of teaching students to write these antiquated forms, we should ask them to write pieces that demand something more—something they cannot copy-paste or generate easily with a prompt. Authentic, context-rich, and creative assignments are the ones that will endure. If there is no expectation of novelty or problem-solving, the essay format becomes an exercise in futility.

AI’s rise does not have to spell the end of the essay. It might, in fact, be the nudge needed to reinvent it. We have the chance to move beyond teaching “correct” writing toward cultivating insightful, original work that challenges the boundaries of what students can do. AI’s presence forces us to ask hard questions about what we want students to learn. If writing is no longer about mechanics or regurgitating content but about generating ideas and engaging critically, then AI becomes a collaborator, not a competitor. It can help with the structure, but the essence—the thinking—must come from the student.

In the end, the college essay is not dead; it is just in need of reinvention. The conventional model of essays as rote demonstrations of knowledge is no longer viable. But the essay that challenges students to think, create, and solve problems—those essays will survive. They might even thrive, as the focus shifts from the mechanics of writing to the art of thinking. The key is to evolve our teaching methods and expectations, making room for a new kind of writing that leverages AI without losing the human touch. Raising expectations is the main strategy in dealing with AI in education. 



Wednesday, October 2, 2024

Four Myths About AI

AI is often vilified, with myths shaping public perception more than facts. Let us dispel four common myths about AI and present a more balanced view of its potential and limitations.

1. AI Is Environmentally Costly

One of the most persistent claims about AI is that its use requires massive amounts of energy and water, making it unsustainable in the long run. While it is true that training large AI models can be energy-intensive, this perspective needs context. Consider the environmental cost of daily activities such as driving a car, taking a shower, or watching hours of television. AI, on a per-minute basis, is significantly less taxing than these routine activities.

More importantly, AI is becoming a key driver in creating energy-efficient solutions. From optimizing power grids to improving logistics for reduced fuel consumption, AI has a role in mitigating the very problems it is accused of exacerbating. Furthermore, advancements in hardware and algorithms continually reduce the energy demands of AI systems, making them more sustainable over time.

In the end, it is a question of balance. The environmental cost of AI exists, but the benefits—whether in terms of solving climate challenges or driving efficiencies across industries—often outweigh the negatives.

2. AI Presents High Risks to Cybersecurity and Privacy

Another major concern is that AI poses a unique threat to cybersecurity and privacy. Yet there is little evidence to suggest that AI introduces any new vulnerabilities that were not already present in our existing digital infrastructure. To date, there has not been a single instance of data theft directly linked to AI models like ChatGPT or other large language models (LLMs).

In fact, AI can enhance security. It helps in detecting anomalies and intrusions faster than traditional software, potentially catching cyberattacks in their earliest stages. Privacy risks do exist, but they are no different from the risks inherent in any technology that handles large amounts of data. Regulations and ethical guidelines are catching up, ensuring AI applications remain as secure as other systems we rely on.

It is time to focus on the tangible benefits AI provides—such as faster detection of fraud or the ability to sift through vast amounts of data to prevent attacks—rather than the hypothetical risks. The fear of AI compromising our security is largely unfounded.

3. Using AI to Create Content Is Dishonest

The argument that AI use, especially in education, is a form of cheating reflects a misunderstanding of technology’s role as a tool. It is no more dishonest than using a calculator for math or employing a spell-checker for writing. AI enhances human capacity by offering assistance, but it does not replace critical thinking, creativity, or understanding.

History is full of examples of backlash against new technologies. Consider the cultural resistance to firearms in Europe during the late Middle Ages. Guns were viewed as dishonorable because they undermined traditional concepts of warfare and chivalry, allowing common soldiers to defeat skilled knights. This resistance did not last long, however, as societies learned to adapt to the new tools, and guns ultimately became an accepted part of warfare.

Similarly, AI is viewed with suspicion today, but as we better integrate it into education, the conversation will shift. The knights of intellectual labor are being defeated by peasants with better weapons. AI can help students better understand complex topics, offer personalized feedback, and enhance learning. The key is to see AI as a supplement to education, not a replacement for it.

4. AI Is Inaccurate and Unreliable

Critics often argue that AI models, including tools like ChatGPT, are highly inaccurate and unreliable. However, empirical evidence paints a different picture. While no AI is perfect, the accuracy of models like ChatGPT or Claude when tested on general undergraduate knowledge is remarkably high—often in the range of 85-90%. For comparison, the average human memory recall rate is far lower, and experts across fields frequently rely on tools and references to supplement their knowledge.

AI continues to improve as models are fine-tuned with more data and better training techniques. While early versions may have struggled with certain tasks, the current generation of AI models is much more robust. As with any tool, the key lies in how it is used. AI works best when integrated with human oversight, where its ability to process vast amounts of information complements our capacity for judgment. AI’s reliability is not perfect, but it is far from the "uncontrollable chaos" some claim it to be.

***

AI, like any revolutionary technology, invites both excitement and fear. Many of the concerns people have, however, are rooted in myth rather than fact. When we consider the evidence, it becomes clear that the benefits of AI—whether in energy efficiency, cybersecurity, education, or knowledge accuracy—far outweigh its potential downsides. The challenge now is not to vilify AI but to understand its limitations and maximize its strengths.



Sunday, September 29, 2024

Advanced AI users develop special cognitive models

When we encounter a stranger, we make swift, often unconscious judgments about who they are and what they are capable of. A person who speaks our language with barely a hint of an accent? We assume they are fluent. Someone who drops a reference to a complex scientific theory? We peg them as well-educated, likely to be literate, and probably knowledgeable about a range of topics from current events to social norms.

These snap judgments form the backbone of our social interactions. They are mental shortcuts, honed over millennia of human evolution, allowing us to navigate the complexities of social life with remarkable efficiency. Most of the time, they serve us well. We can usually guess whether someone will understand a joke, follow a complex argument, or need help using a smartphone. These are cognitive models. 

But when we step into the realm of artificial intelligence, these time-tested models crumble. Our human-centric predictions fail spectacularly, leaving us confused and often frustrated. Consider a recent incident with ChatGPT, a sophisticated language model. When asked to count the number of 'r's in the word "strawberry," it faltered. Many observers scoffed, concluding that AI must be fundamentally stupid if it could not handle such a simple task.

Yet this reaction reveals more about our flawed expectations than any shortcoming of AI. Those familiar with AI's inner workings were not surprised. They understand that a language model, no matter how advanced, is not optimized for character-level analysis. It is like expecting a master chef to be an expert accountant simply because both professions involve numbers.
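The mismatch is easy to demonstrate in code. For conventional software, counting characters is trivial; a language model, by contrast, never sees individual characters at all, only subword tokens. Here is a minimal Python sketch of that contrast; the token split shown is a hypothetical illustration, not the actual tokenization used by any particular model:

```python
# Counting characters is trivial for conventional software.
word = "strawberry"
r_count = word.count("r")
print(r_count)  # prints 3

# A language model, however, does not operate on characters.
# It sees text as a sequence of subword tokens. The split below
# is a hypothetical illustration, not any real model's tokenization.
tokens = ["str", "aw", "berry"]

# From the model's point of view, "how many r's are in strawberry?"
# asks about the internal spelling of opaque token IDs --
# information the model was never directly shown during training.
```

The point of the sketch is not the arithmetic but the representation: the question is easy at the character level and ill-posed at the token level, which is the level a language model actually works at.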

This misalignment between our expectations and AI's actual capabilities stems from our tendency to anthropomorphize. We instinctively attribute human-like qualities to these digital entities. We expect them to have consistent opinions, to learn from our interactions, to understand context and nuance as we do. But AI, in its current form, does none of these things.

Unlike humans, AI does not carry the baggage of personal experience or emotion. It does not have good days or bad days. It will not be flattered by praise or offended by insults. It can switch from discussing quantum physics to writing poetry without missing a beat, unencumbered by the specialization that defines human expertise.

But AI's differences extend beyond mere capability. It lacks the fundamental attributes we associate with consciousness. It has no self-awareness, no goals or motivations of its own. It does not truly understand the content it generates, despite how convincing it may seem. It is a reflection of the data it was trained on, not a sentient being forming its own thoughts and opinions.

To interact effectively with AI, we need to develop new mental models. We must learn to predict its behavior not based on human analogies, but on an understanding of its unique nature. This means recognizing that AI might struggle with tasks we find trivially easy, while effortlessly accomplishing feats that would challenge even the most brilliant human minds.

It means understanding that every interaction with AI is essentially new. Unlike humans, who build on past conversations and experiences, most current AI systems do not retain information from one chat to the next. They do not learn or evolve through our interactions. Each query is processed afresh, without the context of what came before.

This new model of understanding also requires us to be more precise in our interactions with AI. While humans often fill in gaps in conversation with assumed context, AI interprets our requests literally. It does not automatically infer our unstated needs or desires. The clarity of our input directly influences the quality of the AI's output.

As AI becomes an increasingly integral part of our lives, developing these new mental models is not just about avoiding frustration. It is about unlocking the full potential of these powerful tools. By understanding AI's strengths and limitations, we can craft our interactions to leverage its capabilities more effectively.

The future of human-AI interaction lies not in expecting AI to conform to human patterns, but in adapting our approach to align with AI's unique characteristics. It is a future that requires us to be more thoughtful, more precise, and more open to rethinking our instinctive assumptions. In doing so, we may not only improve our interactions with AI but also gain new insights into the nature of intelligence itself. 



Monday, September 23, 2024

Cognitive Offloading: Learning more by doing less

In an AI-rich environment, educators and learners alike are grappling with a seeming paradox: how can we enhance cognitive growth by doing less? The answer lies in the concept of cognitive offloading, a phenomenon that is gaining increasing attention in cognitive science and educational circles.

Cognitive offloading, as defined by Risko and Gilbert (2016) in their seminal paper "Cognitive Offloading," is "the use of physical action to alter the information processing requirements of a task so as to reduce cognitive demand." In other words, it is about leveraging external tools and resources to ease the mental burden of cognitive tasks.

Some educators mistakenly believe that any cognitive effort is beneficial for growth and development. However, this perspective overlooks the crucial role of cognitive offloading in effective learning. As Risko and Gilbert point out, "Offloading cognition helps us to overcome such capacity limitations, minimize computational effort, and achieve cognitive feats that would not otherwise be possible."

The ability to effectively offload cognitive tasks has always been important for human cognition. Throughout history, we've developed tools and strategies to extend our mental capabilities, from simple note-taking to complex computational devices. However, the advent of AI has made this skill more crucial than ever before.

With AI, we are not just offloading simple calculations or memory tasks; we are potentially shifting complex analytical and creative processes to these powerful tools. This new landscape requires a sophisticated understanding of AI capabilities and limitations. More importantly, it demands the ability to strategically split tasks into elements that can be offloaded to AI and those that require human cognition.

This skill - the ability to effectively partition cognitive tasks between human and AI - is becoming a key challenge for contemporary pedagogy. It is not just about using AI as a tool, but about understanding how to integrate AI into our cognitive processes in a way that enhances rather than replaces human thinking.

As Risko and Gilbert note, "the propensity to offload cognition is influenced by the internal cognitive demands that would otherwise be necessary." In the context of AI, this means learners need to develop a nuanced understanding of when AI can reduce cognitive load in beneficial ways, and when human cognition is irreplaceable.

For educators, this presents both a challenge and an opportunity. The challenge lies in teaching students not just how to use AI tools, but how to think about using them. This involves developing metacognitive skills that allow students to analyze tasks, assess AI capabilities, and make strategic decisions about cognitive offloading.

The opportunity, however, is immense. By embracing cognitive offloading and teaching students how to effectively leverage AI, we can potentially unlock new levels of human cognitive performance. We are not just making learning easier; we are expanding the boundaries of what is learnable.

It is crucial to recognize the value of cognitive offloading and develop sophisticated strategies for its use. The paradox of doing less to learn more is not just a quirk of our technological age; it is a key to unlocking human potential in a world of ever-increasing complexity. The true measure of intelligence in the AI era may well be the ability to know when to think for ourselves, and when to let AI do the thinking for us. 

Tuesday, September 17, 2024

Why Parallel Integration Is the Sensible Strategy of AI Adoption in the Workplace

Artificial intelligence promises to revolutionize the way we work, offering efficiency gains and new capabilities. Yet, adopting AI is not without its challenges. One prudent approach is to integrate AI into existing workflows in parallel with human processes. This strategy minimizes risk, builds confidence, and allows organizations to understand where AI excels and where it stumbles before fully committing. I have described the problem of AI output validation before; it is a serious impediment to AI integration. Here is how to solve it.

Consider a professor grading student essays. Traditionally, this is a manual task that relies on the educator's expertise. Introducing AI into this process does not mean handing over the red pen entirely. Instead, the professor continues grading as usual but also runs the essays through an AI system. Comparing results highlights discrepancies and agreements, offering insights into the AI's reliability. Over time, the professor may find that the AI is adept at spotting grammatical errors but less so at evaluating nuanced arguments.

In human resources, screening job applications is a time-consuming task. An HR professional might continue their usual screening while also employing an AI tool to assess the same applications. This dual approach ensures that no suitable candidate is overlooked due to an AI's potential bias or error. It also helps the HR team understand how the AI makes decisions, which is crucial for transparency and fairness.

Accountants auditing receipts can apply the same method. They perform their standard checks while an AI system does the same in the background. Any discrepancies can be investigated, and patterns emerge over time about where the AI is most and least effective.
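The comparison step shared by all three examples, grading, screening, and auditing, amounts to a simple reconciliation: run both processes on the same items, then flag the cases where human and AI judgments diverge beyond some tolerance. A minimal sketch follows; the scores, item names, and threshold are all hypothetical.

```python
# Hypothetical parallel-integration check: compare human and AI judgments
# on the same items and flag discrepancies for closer review.

def flag_discrepancies(human_scores, ai_scores, tolerance=5):
    """Return items where human and AI scores diverge by more than tolerance."""
    flagged = {}
    for item, human in human_scores.items():
        ai = ai_scores.get(item)
        if ai is not None and abs(human - ai) > tolerance:
            flagged[item] = {"human": human, "ai": ai}
    return flagged

# Illustrative data: essay grades out of 100.
human = {"essay_1": 88, "essay_2": 74, "essay_3": 91}
ai = {"essay_1": 86, "essay_2": 62, "essay_3": 90}

print(flag_discrepancies(human, ai))  # essay_2 diverges by 12 points
```

Tracking which items get flagged over time is what reveals the pattern the text describes: where the AI is reliable, and where human judgment remains essential.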

This strategy aligns with the concept of "double-loop learning" from organizational theory, introduced by Chris Argyris. Double-loop learning involves not just correcting errors but examining and adjusting the underlying processes that lead to those errors. By running human and AI processes in parallel, organizations engage in a form of double-loop learning, continually refining both human and AI methods. Note that this is not only about catching and understanding AI errors; the parallel process will also surface human errors through the use of AI. The overall error rate will decrease.

Yes, running parallel processes takes extra time and resources. However, this investment is modest compared to the potential costs of errors, compliance issues, or reputational damage from an AI mishap. People need to trust the technology they use, and building that trust takes time.

The medical field offers a pertinent analogy. Doctors do not immediately rely on AI diagnoses without validation. They might consult AI as a second opinion, especially in complex cases. This practice enhances diagnostic accuracy while maintaining professional responsibility. Similarly, in business processes, AI can serve as a valuable second set of eyes. 

As confidence in the AI system grows, organizations can adjust the role of human workers. Humans might shift from doing the task to verifying AI results, focusing their expertise where it's most needed. This gradual transition helps maintain quality and trust, both internally and with clients or stakeholders.

In short, parallel integration of AI into work processes is a sensible path that balances innovation with caution. It allows organizations to harness the benefits of AI while managing risks effectively. By building confidence through experience and evidence, businesses can make informed decisions about when and how to rely more heavily on AI.



Saturday, September 14, 2024

Navigating the AI Gold Rush: Skins, Security, and the Real Value Proposition

The economic battle surrounding artificial intelligence is intensifying at an unprecedented pace. Major AI players like OpenAI, Google, Meta, and Anthropic are leading this technological revolution. Tech giants such as Microsoft, Amazon, and Apple, along with thousands of startups, are vying for a stake in this burgeoning market without competitive models of their own. Amidst this frenzy, a critical question arises: what exactly is being sold?

Two primary value propositions have emerged in this landscape: skins and security mongers. Skins are interfaces or applications that overlay major AI models, aiming to simplify user interaction. They cater to individuals lacking advanced prompting skills, offering a more user-friendly experience. Security mongers, on the other hand, emphasize heightened privacy and security, often exaggerating potential risks to entice users.

While both propositions seem valuable on the surface, a deeper examination reveals significant shortcomings. Skins promise to streamline interactions with AI models by providing preset prompts or simplified interfaces. For instance, a startup might offer a chatbot specialized in drafting business emails, claiming it saves users the hassle of formulating prompts themselves. However, is this convenience truly worth it?

Major AI models are increasingly user-friendly. ChatGPT, for example, has an intuitive interface that caters to both novices and experts. Users often find they can achieve the same or better results without intermediary platforms. Additionally, skins often come with subscription fees or hidden costs, meaning users are essentially paying extra for a service the primary AI model already provides. There is also the issue of limited functionality; skins may restrict access to the full capabilities of the AI model, offering a narrow set of functions that might not meet all user needs.

The second proposition taps into growing concerns over data privacy and security. Vendors claim to offer AI solutions with superior security measures, assuring users their data is safer compared to using mainstream models directly. But does this claim hold up under scrutiny?

Most of these intermediaries still rely on API connections to major AI models like ChatGPT. Your data passes through their servers before reaching the AI model, effectively adding another point of vulnerability. Introducing additional servers and transactions inherently increases the risk of data breaches. More touchpoints mean more opportunities for data to be intercepted or mishandled. Furthermore, major AI providers invest heavily in security and compliance, adhering to stringent international standards. Smaller vendors may lack the resources to match these safeguards.

For example, a startup might advertise an AI-powered financial advisor with enhanced security features. However, if they are routing data through their servers to access a model like GPT-4, your sensitive financial data is exposed to additional risk without any tangible security benefit. The promise of enhanced security becomes questionable when the underlying infrastructure depends on the same major models.

AI platforms have not introduced new risks to privacy or security beyond what exists with other online services like banks or credit bureaus. They employ advanced encryption and security protocols to protect user data. While no system is infallible, major AI models are on par with, if not superior to, other industries in terms of security measures. They use end-to-end encryption to protect data in transit and at rest, implement strict authentication measures to prevent unauthorized access, and conduct regular security assessments to identify and mitigate vulnerabilities. It is easy to opt out of providing your data to train new models. It is much more difficult to know what your vendors are going to do with your data.

In a market flooded with AI offerings, it is crucial to approach vendors' claims with a healthy dose of skepticism. Validate the functionality by testing whether the convenience offered by skins genuinely enhances your experience or merely repackages what is already available. Assess the security measures by inquiring about the specific protocols in place and how they differ from those used by major AI providers. Transparency is key; reputable vendors should be open about how your data is used, stored, and protected.

As the AI gold rush continues, distinguishing between genuine innovation and superficial value propositions becomes essential. Skins and security mongers may offer appealing pitches, but often they add little to no value while potentially increasing costs and risks. It is wise to try using major AI models directly before opting for third-party solutions. Research the backgrounds of vendors to determine their credibility and reliability. Seek reviews and testimonials from other users to gauge the actual benefits and drawbacks.

In the end, the most powerful tool at your disposal is due diligence. By critically evaluating what is being sold, you can make informed decisions that truly benefit you in the rapidly evolving world of AI. Beware of vendors selling either convenience or security without substantial evidence of their value. At the very least, take the time to validate their claims before making an investment.

 


The Modern Confessional: AI Disclosure as Ritual

  Organizations across academia and publishing now routinely demand confession of AI use. Publishers require authors to disclose whether AI ...