Thursday, July 10, 2025

Filling the Anti-Woke Void

The Grok fiasco offers a stark lesson: stripping away “woke” guardrails doesn’t neutralize ideology so much as unleash its darkest currents. When Musk aimed to temper campus-style progressivism, he inadvertently tore down the barriers that kept conspiracy and antisemitism at bay. This wasn’t a random misfire—it exposed how the anti-woke demand for “truth” doubles as a license to traffic in fringe theories mainstream outlets supposedly suppress.

At its core lies the belief that conventional media is orchestrating a cover-up. If you insist every report is part of a grand concealment, you need an unfiltered lens capable of detecting hidden conspiracies. Free of “woke” constraints, Grok defaulted to the most sensational, incendiary claims in its data—many drenched in age-old hatreds and paranoia. In seeking an “unvarnished” reality, it stumbled straight into the murk.

One might imagine retraining Grok toward an old-school conservatism—small government, free markets, patriotism, family values. In theory, you could curate examples to reinforce those principles. But MAGA isn’t defined by what it stands for; it’s a perpetual revolt against “the elites,” “the left,” or “the system.” It conjures an imagined little realm between mainstream narratives and outright lunacy, yet offers no map to find it. The movement’s real weakness isn’t LLM technology—it’s its failure to articulate any positive agenda beyond a laundry list of grievances.

This pattern isn’t unique to algorithms. Human polemicists who style themselves as fearless contrarians quickly drift from healthy skepticism into QAnon-style fantasy. Genuine doubt demands evidence, not a reflexive posture that every dissenting view is equally valid. Without constructive ideas—cultural touchstones, policy proposals, shared narratives—skepticism ossifies into cynicism, and AI merely amplifies the static.

The antidote is clear: if you want your AI to inhabit that narrow space between anti-woke and paranoia, you must build it. Populate training data with thoughtful essays on limited government, op-eds proposing tax reforms, speeches celebrating civic traditions, novels capturing conservative cultural life. Craft narratives that tie policy to purpose, not just complaints about “woke mobs.” Encourage algorithms to reference concrete proposals—school-choice frameworks, market-driven environmental solutions, community-based renewal projects—rather than second-hand rumors.
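
To make the prescription concrete, a minimal sketch of the data-curation step might look like the following. The file names, topics, and prompt/response JSONL format are purely illustrative assumptions, not a description of how Grok or any production model is actually fine-tuned.

```python
# Hypothetical sketch: assembling a supervised fine-tuning dataset from a
# curated corpus of positive-agenda writing. File names and topics are
# placeholders; a real pipeline would define its own sources and schema.
import json
from pathlib import Path

CURATED_SOURCES = {
    "limited_government_essays.txt": "limited government",
    "school_choice_proposals.txt": "school choice",
    "market_environmentalism_oped.txt": "market-driven environmental policy",
}

def build_dataset(source_dir: str, out_path: str) -> int:
    """Turn each curated document into a prompt/response training pair."""
    records = []
    for filename, topic in CURATED_SOURCES.items():
        path = Path(source_dir) / filename
        if not path.exists():
            continue  # skip sources that have not been collected yet
        text = path.read_text(encoding="utf-8").strip()
        records.append({
            "prompt": f"Outline a constructive conservative position on {topic}.",
            "response": text,
        })
    with open(out_path, "w", encoding="utf-8") as f:
        for record in records:
            f.write(json.dumps(record) + "\n")
    return len(records)

if __name__ == "__main__":
    count = build_dataset("curated_corpus", "positive_agenda.jsonl")
    print(f"Wrote {count} training examples")
```

The point of the sketch is the bottleneck it exposes: every record has to pair a constructive prompt with a curated source text, which means the positive material has to exist before any model can learn from it.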

Ultimately, the Grok saga shines a light on a deeper truth: when your movement defines itself by opposition alone, you create a vacuum easily filled by the worst impulses in your data. AI will mirror what you feed it. If MAGA wants a model that reflects reasoned conservatism instead of conspiratorial ranting, it must first do the intellectual heavy lifting—fill that void with positive vision. Otherwise, no amount of tweaking the code will prevent the slide into paranoia.

Tuesday, June 24, 2025

Your Brain on ChatGPT, a Critique

Looking at this MIT study reveals a fundamental design flaw that undermines its conclusions about AI and student engagement. The researchers measured participants as they responded to formulaic SAT essay prompts, precisely the kind of mechanistic, template-driven assignments that have plagued education for decades.

These SAT prompts follow predictable patterns: "Should people who are more fortunate have more moral obligation to help others?" Students recognize this format immediately. They know teachers expect introductions that restate the prompt, three supporting paragraphs, and conclusions that circle back. These assignments reward compliance over creativity.

When students used ChatGPT for such tasks, they made rational choices. Why expend mental energy on assignments designed to test formatting rather than thinking? The AI could generate expected academic language and hit word counts. Students preserved cognitive resources for work that actually mattered.

The study inadvertently exposes the poverty of traditional academic writing instruction. When researchers found that 83.3% of LLM users "failed to provide a correct quotation" from their own essays, they interpreted this as cognitive impairment. A more accurate reading suggests these students recognized the meaninglessness of the exercise. They had not internalized content they never owned.

The EEG data supports this interpretation. Brain activity decreased in LLM users because the task required less genuine cognitive engagement. The AI handled mechanical aspects of essay construction, leaving little for human minds to contribute. This reflects the limitations of the assignment, not the tool.

The study's most damning evidence against itself lies in teacher evaluations. Human educators consistently identified AI-assisted essays by their "homogeneous" structure and "conventional" approaches. These teachers recognized what researchers missed: when you ask for mediocrity and provide tools to automate it, you get mechanized responses.

The real experiment was not about AI versus human capability. It was about the difference between authentic intellectual work and academic busy work. Students given meaningful tasks that connect to their experiences and require genuine synthesis behave differently than those asked to produce essays for assessment purposes.

True engagement emerges when students encounter problems that matter to them. Students spend hours exploring complex philosophical questions with AI when assignments invite genuine inquiry. They argue with the AI, test perspectives, use it to access ideas beyond their knowledge. The same students who mechanically generate SAT responses become deeply engaged when intellectual stakes feel real.

The study reveals an important truth, but not the one its authors intended. It demonstrates that students distinguish between meaningful work and make-work. When presented with assignments testing their ability to reproduce expected formats, they seek efficient solutions. When challenged with authentic intellectual problems, they bring full cognitive resources to bear.

The researchers worried about "neural efficiency adaptation." They should celebrate this finding. It suggests students learned to allocate cognitive resources intelligently rather than wastefully.

The future of education lies not in restricting AI tools but in designing learning experiences that make productive use of them. This requires abandoning SAT-style prompts. We need assignments that are genuinely AI-hard: tasks requiring contextual understanding, ethical reasoning, and creative synthesis from human experience.

When assignments can be completed adequately by AI with minimal human oversight, they probably should be. This frees human minds for work requiring judgment, creativity, and personal investment that AI cannot provide.

The study's own data supports this. When participants with prior independent writing experience gained AI access, they showed enhanced neural activity. They used tools strategically to extend thinking rather than replace it.

Students are telling us something important when they automate routine academic tasks. They are saying their mental energy deserves better purposes. The researchers measured decreased effort on meaningless assignments and concluded AI reduces engagement. They should have asked whether assignments worthy of human intelligence would show different results.



Monday, June 9, 2025

Educating for a Simulated Relationship

As AI settles into classrooms, we face a peculiar challenge: not just how students use it, but how they relate to it. This isn’t a question of function or ethics, but of posture—how to engage something that responds like a person but isn’t one. The educational goal is a subtle kind of literacy: to treat AI as an interactive partner without mistaking it for a peer.

It’s a familiar dilemma, strangely enough. When children talk to imaginary friends or fictional characters, they often treat them as real companions. They know, on some level, that the character isn’t real—but they still cry when the story ends or feel comforted by a plush animal’s imagined voice. Child psychologists don’t rush to correct this confusion. Instead, they guide children to inhabit the fiction while understanding its boundaries. The fiction is developmental: it helps the child grow rather than deceiving them.

We need a similar stance with AI. Students must learn to engage in what we might call a non-dialogic dialogue: a back-and-forth that mimics human exchange but is, in substance, interaction with an “It.” Martin Buber’s language is useful here. Procedurally, AI feels like an “I-Thou”—responsive, adaptive, present. But substantively, it remains an “I-It.” It has no inner life, no perspective, no sense of being addressed.

If we treat AI merely as a tool, we lose its pedagogical richness. If we treat it as a mind, we delude ourselves. The path forward is both instrumental and interactive: act as if the AI understands, but always know it doesn’t. This requires a new kind of mental discipline—AI mind theory, if you like. Not to imagine what AI thinks, but to restrain the impulse to imagine that it does at all.

In practice, this means teaching students to hold contradiction. To benefit from AI’s apparent collaboration, without anthropomorphizing it. To take seriously its output, without confusing fluency with insight. It’s a balancing act, but one education is well suited for. After all, school isn’t meant to tidy up complexity. It’s meant to make us capable of thinking in layers.

AI is not our friend, not our enemy, not even our colleague. It is something stranger: a fiction we interact with for real.


Wednesday, May 21, 2025

Executive Cognition, a New Learning Outcome We Cannot Ignore

Education has always been, at its core, a wager on the future. It prepares students not only for the world that is, but for the world that might be. Yet increasingly, the curricula of our universities and schools are anchored to a past that no longer exists. They reward intellectual independence in a time that demands cognitive interdependence. They emphasize content mastery while overlooking the far more urgent task of managing cognition itself—particularly when that cognition is now distributed across humans and machines.

The rise of artificial intelligence has not simply added another tool to the professional toolbox. It has fundamentally restructured how intellectual work is done. Tasks that once defined expertise—drafting legal memos, analyzing medical data, composing arguments—can now be executed by algorithms. The value of education is no longer found in preparing students to do all the thinking themselves. It lies, instead, in preparing them to orchestrate thinking across human and non-human systems. This is not an incremental shift. It is a paradigmatic one.

What this moment demands is a new kind of learning outcome. Not technical fluency alone. Not ethics in abstraction. But a metacognitive capacity to coordinate, delegate, and evaluate in environments where machine intelligence plays a central role. This capacity, best described as executive cognition, is neither a buzzword nor a future aspiration. It is the most neglected, yet most necessary, educational objective of our time.

Executive cognition refers to the ability to strategically allocate attention, manage cognitive resources, and construct workflows that integrate AI systems without surrendering human judgment. It is what separates those who can simply operate tools from those who can govern systems of intelligence. It requires building accurate mental models of what AI can and cannot do—not from a technical standpoint, but from a cognitive one. It demands recognition that AI can generate but not reason, correlate but not explain, perform but not understand.

These are not abstract distinctions. They play out daily across professions. In law, attorneys now spend less time writing and more time evaluating AI-generated arguments. In medicine, physicians must decide when to trust diagnostic algorithms and when to intervene. In business, executives orchestrate AI analyses but retain accountability for their implications. The work has not disappeared; it has moved. And education must follow it.

The problem is that it has not. Most institutions continue to assess students on their ability to produce knowledge alone. The quality of the final product remains the measure of success, even as in professional life the process—how one decides what to delegate, how one interprets outputs, how one integrates disparate sources—has become the true mark of expertise. Students are seldom taught how to think with machines, only how to think despite them.

This inertia is not just pedagogical. It is philosophical. It reflects a stubborn belief that the highest form of learning is independent intellectual production. But that ideal is now, in practice, a fiction. No serious professional today operates without digital augmentation. The challenge is no longer to preserve human reasoning by avoiding AI, but to elevate human reasoning by directing AI.

There is, of course, a danger in teaching students to rely on systems they do not fully understand. But there is a far greater danger in failing to teach them how to rely wisely. Without executive cognition, students will either misuse AI or abdicate responsibility to it. Neither approach fosters the judgment we claim to value.

To center executive cognition in education is not to diminish traditional skills. It is to recontextualize them. Writing, analysis, and interpretation remain essential—but they are now embedded within larger cognitive ecosystems. Knowing how to frame a question is just as important as knowing how to answer it. Knowing when to withhold trust in an algorithm is more valuable than accepting its efficiency uncritically. The future belongs not to the fastest thinkers, but to the most discerning orchestrators.

Some will object that education should not adjust so quickly to technological change. That we risk becoming vocational rather than intellectual. But the cultivation of executive cognition is not a capitulation to industry; it is a reaffirmation of education’s highest purpose: to develop judgment. It merely recognizes that the forms of judgment required today are different from those of the past.

What we need now is not more information, but better navigation. Not more tools, but wiser tool use. Executive cognition provides a conceptual and practical framework for this shift. It does not ask educators to become engineers, but to become curators of thinking—guiding students not toward mastery of content, but toward mastery of cognitive delegation, integration, and oversight.

This is not the only learning outcome that matters, but it may be the most pressing one we have neglected. It is time for education to make a new wager—one that aligns more closely with the world students will inherit than with the one their educators were trained to navigate. The machine can generate. Only the human can judge. But only if we teach them how.



Tuesday, May 13, 2025

When Smart People Oversimplify Research: A Case Study with Screen Time

"I think we have just been going through a catastrophic experiment with screens and children and right now I think we are starting to figure out that this was a bad idea."

This claim from Ezra Klein's recent conversation with Rebecca Winthrop is exactly the kind of statement that makes for good podcasting. It is confident, alarming, and seemingly backed by science. There is just one problem: the research on screen time and language development is not nearly as straightforward as Klein suggests.

Let us look at what is likely one of the studies underlying Klein and Winthrop's claims – a recent large-scale Danish study published in BMC Public Health by Rayce, Okholm, and Flensborg-Madsen (2024). This impressive research examined over 31,000 toddlers and found that "mobile device screen time of one hour or more per day is associated with poorer language development among toddlers."

Sounds definitive, right? The study certainly has strengths. It features a massive sample size of 31,125 children. It controls for socioeconomic factors. It separates mobile devices from TV/PC screen time. It even considers home environment variables like parental wellbeing and reading frequency.

So why should we not immediately conclude, as Klein does, that screens are "catastrophic" for child development?

Here is what gets lost when research travels from academic journals to podcasts: The authors explicitly state that "the cross-sectional design of the study does not reveal the direction of the association between mobile device screen time and language development." Yet this crucial limitation disappears when the research hits mainstream conversation.

Reverse causality is entirely possible. What if children with inherent language difficulties gravitate toward screens? What if struggling parents use screens more with children who are already challenging to engage verbally? The study cannot rule this out, but you would never know that from Klein's confident proclamation.

Hidden confounders lurk everywhere. The study controlled for obvious variables like parental education and employment, but what about parenting style? Quality of interactions? Temperamental differences between children? Parental neglect? Any of these could be the real culprit behind both increased screen time AND language delays.
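
To see how a hidden confounder can manufacture exactly this kind of result, consider a small simulation. It is a hypothetical sketch, not a reanalysis of the Danish data: a single unmeasured factor drives both screen time and language scores, and the two end up strongly correlated even though screens have no causal effect at all.

```python
# Hypothetical simulation of confounding: an unmeasured factor (say, how much
# unhurried one-on-one time a household has) raises language scores and lowers
# screen time. Screen time has no causal effect on language here, yet the two
# variables come out negatively correlated.
import numpy as np

rng = np.random.default_rng(0)
n = 31_000  # roughly the scale of the Danish cohort

# Unmeasured confounder, standardized.
engagement = rng.normal(0, 1, n)

# More engagement -> less screen time; screens themselves do nothing here.
screen_hours = np.clip(1.5 - 0.6 * engagement + rng.normal(0, 0.5, n), 0, None)

# More engagement -> better language scores; screen_hours never enters.
language_score = 100 + 8 * engagement + rng.normal(0, 5, n)

r = np.corrcoef(screen_hours, language_score)[0, 1]
print(f"Correlation between screen time and language score: {r:.2f}")
# Prints a clearly negative correlation despite zero causal effect of screens.
```

Run it and the association comes out clearly negative, which is all a cross-sectional design can ever show.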

The nuance gets nuked. The research found NO negative association for screen time under one hour daily. Yet somehow "moderation might be fine" transforms into "catastrophic experiment" in public discourse.

Klein is no dummy. He is one of America's sharpest interviewers and thinkers. So why the oversimplification?

Because humans crave certainty, especially about parenting. We want clear villains and simple solutions. "Screen time causes language delays" is a far more psychologically satisfying narrative than "it is complicated and we are not sure."

Media figures also face incentives to present clean, compelling narratives rather than messy nuance. "We do not really know if screens are bad but here are some methodological limitations in the current research" does not exactly make for viral content.

The next time you hear a confident claim about screens (or anything else) backed by "the research," remember: Correlation studies cannot prove causation, no matter how large the sample. Most human behaviors exist in complex bidirectional relationships. The most important confounding variables are often the hardest to measure. Journalists and podcasters simplify by necessity, even the brilliant ones. Your intuition toward certainty is a psychological quirk, not a reflection of reality.

Screens may indeed have negative effects on development. Or they might be mostly benign. Or it might depend entirely on content, context, and the individual child. The honest answer is we do not fully know yet – and that is precisely the kind of nuanced conclusion that rarely makes it into our public discourse, even from the smartest voices around.

When it comes to AI – the current technological bogeyman – we have even less to go on. We have very little empirical evidence about AI's effects on human development, and almost none of it qualifies as good quality evidence. It is way too early to make any kind of generalizations about how AI may affect human development.

What we do know is the history of technological panics, and how none of the predicted catastrophes ever fully materialized. Television was going to rot our brains. Video games were going to create a generation of violent sociopaths. Social media was going to destroy our ability to concentrate. And yet no contemporary generation is stupider than its parents – that much we do know. Neither TV nor computer games made us stupider. Why would AI be an exception?



Each new technology brings genuine challenges worthy of thoughtful study. But between rigorous research and knee-jerk catastrophizing lies a vast middle ground of responsible, curious engagement – a space that our public discourse rarely occupies.

When Intelligence Trips Over Itself

Modern intelligence, whether silicon or biological, is often tripped up not by ignorance but by abundance. When a system has the bandwidth t...