Thursday, April 24, 2025

An Executive Order That Misses the Point

There’s something oddly familiar about President Trump’s latest executive order on AI in education. It arrives with the usual pomp—task forces, public-private partnerships, shiny language about national competitiveness. It nods to professional development, student certifications, and the holy grail of “future readiness.” And yet, beneath the surface, it offers little that addresses the real crisis: the profound, structural disruption AI has already begun to unleash on the educational system.

At first glance, the executive order seems like a step in the right direction. Who would argue against preparing students for an AI-dominated future? The task force will do what task forces do: coordinate. Teachers will receive grants for AI training, and students might get badges or certificates that proclaim them “AI literate.” It all sounds terribly forward-thinking. But there’s a strange emptiness to the plan, like training firefighters to use a new hose while the forest burns around them.

What the order fundamentally fails to grasp is that AI is not merely coming—it is already here, and it has changed everything. The real problem is not a lack of exposure to AI tools, but the complete misalignment between existing educational structures and the cognitive shift AI demands. This isn’t a matter of professional development. It’s a matter of epistemology. What we teach, how we assess, and the very roles of teachers and students—these are all in flux.

A truly meaningful policy would start not with a national challenge or another conference keynote, but with funding for large-scale curriculum realignment. AI doesn’t just add another subject; it changes the assumptions under which all subjects are taught. Writing, for instance, is no longer merely a human-centered exercise in articulation—it is now entangled with generative tools that can produce text faster than students can think. The same is true for coding, design, even problem-solving. If students are using AI to generate answers, then we need to redesign assignments to emphasize process over product, collaboration over output, judgment over memorization. That’s not a tweak—it’s a reinvention.

And teachers? They're not just under-trained; they’re overwhelmed. They’re being asked to both maintain continuity and facilitate transformation, to adopt AI while resisting its most corrosive effects. Without time, resources, and genuine structural support, this is educational gaslighting: expecting miracles with nothing but webinars and a cheerful press release.

It would be tempting to chalk this up to Trumpian optics—another performance of leadership without the substance. But the failure runs deeper than that. The Biden administration, for all its technocratic polish, missed the mark too. There has been a bipartisan inability to understand the core disruption AI poses to education. This is not about helping kids “catch up” in reading and math. It is about redefining what catching up even means in a world where machines do much of the thinking.

The deeper pattern is this: a long-term habit of reforming education without understanding it. Policymakers continue to treat education as if it were a content delivery mechanism, easily reprogrammed for the next industrial wave. But education is not a transmission line—it’s an ecosystem of meaning, motivation, and identity. AI does not simply slot into that ecosystem. It changes its climate.

If the United States genuinely wants to lead in an AI-driven world, then it must do more than produce AI-savvy workers. It must invest in educators not as functionaries, but as architects of a new pedagogical order. That takes courage, not just coordination. It takes money, not just mandates.

So yes, the executive order gestures in the right direction. But it is moving far too slowly, and down the wrong road.



Thursday, April 17, 2025

Why Education Clings to Irrelevance

There’s a particular irony in the way schools prepare us for life by arming us with tools designed for a world that no longer exists. Latin, long dead yet stubbornly alive in syllabi. Algebra, elegantly abstract but only marginally helpful in daily decisions compared to statistics, which might actually help you avoid getting swindled on a mortgage. Even now, when artificial intelligence knocks at the schoolhouse door, the impulse is to double down on handwriting and closed-book tests, as if graphite and paranoia could hold off the machine age.

The problem isn’t just inertia. It’s something deeper—almost spiritual. Education doesn’t merely lag behind practicality. It seems to celebrate its irrelevance. Irrelevance isn’t a bug in the system. It’s the feature. Like a stubborn elder at the dinner table insisting that things were better “back in the day,” the institution of education resists change not just because it can, but because its authority is rooted in not changing.

To teach is, implicitly, to claim authority. But where does this authority come from? Not the future—that’s terra incognita. You can't be an expert in what hasn't happened yet. So the educator turns to the past, because the past has already been canonized. It has textbooks. It has footnotes. It has certainty. And nothing flatters the authority of a teacher more than certainty. Thus, to preserve this role, education must tie itself to a vision of life that is already archived.

There’s a paradox here: education is supposedly preparation for life, yet it often refuses to adapt to the life that is actually unfolding. It is conservative in the truest sense—not politically, but ontologically. It conserves the past. It prepares students for a utopia that once was, or, more precisely, one that someone once imagined had been. The classroom becomes a time capsule, not a launchpad.

This is not entirely wrongheaded. The future is uncertain, and while we might guess at the skills it will demand—data literacy, adaptability, collaborative creativity—we cannot guarantee them. So we retreat to what we can guarantee: the conjugation of Latin verbs, the proof of triangles, the essays written in blue ink. It's a strategy of safety cloaked as rigor. If we can’t foresee what will matter tomorrow, at least we can be very confident about what mattered yesterday.

And yet, this nostalgia has consequences. It breeds irrelevance not just in content, but in spirit. When students ask, “When will I ever use this?” and teachers respond with some forced scenario involving imaginary apples or train schedules, the real answer—“you probably won’t, but we’re doing it anyway”—lurks just beneath the surface. The curriculum becomes a kind of ritual, an educational incense burned in memory of older truths.

The arrival of AI only sharpens this tension. Faced with the destabilizing presence of machines that can write, summarize, solve, and simulate, many educators panic. But rather than adapting, they retreat further: the pencil becomes a moral statement, the closed classroom a sanctuary. There’s almost a theological quality to it, as if real learning must involve a kind of suffering, and anything too efficient is suspect.

It’s tempting to dismiss this all as folly, but maybe it reflects something deeply human. Our fear of irrelevance leads us to preserve what is already irrelevant, in hopes that it might make us relevant again. In this way, education mirrors its creators: creatures of habit, haunted by the past, nervous about the future, and always unsure where real authority should lie.

Perhaps that’s the real lesson schools teach, intentionally or not. That relevance is fragile, and that meaning is something we inherit before we can create it. 


Monday, April 7, 2025

Deep Research Is Still More of a Promise

The promise of deep research by AI tools like ChatGPT is simple: feed in a question, receive a nuanced, well-reasoned answer, complete with references and synthesis. And in some domains, it delivers admirably. The tool is at its best with media trends, tech news, and cultural analysis, sifting through the torrent of online commentary, news articles, blog posts, and social chatter to surface patterns and narratives. The real value here lies not just in the volume of data it processes, but in how the user frames the question. A clever, counterintuitive prompt can elicit insights that feel like genuine thought.

But the illusion shatters when the query turns academic. For scholarly literature reviews, this tool falters. It is not the fault of the software itself—there is no shortage of computational power or linguistic finesse. The limitation is upstream. Most academic journals sit behind expensive paywalls, historically inaccessible to large language models. The corpus they are trained on has lacked precisely the kind of data that matters most for rigorous research: peer-reviewed studies, meta-analyses, theoretical frameworks built over decades.

This, however, is beginning to change. In May 2024, Microsoft signed a $10 million deal with Taylor & Francis to license journal content for its Copilot AI. OpenAI, for its part, has secured a two-year agreement with The Associated Press and forged partnerships with European publishers like Axel Springer, Le Monde, and Prisa Media—giving ChatGPT a better grasp of reputable journalistic and scholarly content. Wiley joined the fray with a $23 million licensing deal to grant an unnamed AI developer access to its academic publishing portfolio. Even Elsevier, long a fortress of paywalled knowledge, is now channeling its scholarly data into AI collaborations.

These are significant moves. They mark a transition from aspiration to access. Once these agreements begin to show up in AI performance, the quality of output will change markedly. A tool that can both identify the pivotal paper and incorporate it meaningfully into its reasoning would be a true research assistant—something closer to intellectual augmentation than just computational summarization.

It is still early days. Scite, for now, remains stronger at pointing users to the right academic sources, even if its analytical output lacks flair. ChatGPT and its peers, in contrast, excel at synthesis but stumble when the raw material is lacking. The ideal tool is still on the horizon.

There is an irony here, nevertheless. AI, the most advanced information-processing technology ever built, has been running on the least rigorous parts of the internet. It has quoted tweets with ease but struggled to cite the peer-reviewed studies that ought to anchor serious inquiry. That condition is not permanent, though. It is, increasingly, a transitional one.

The future of AI in research will be determined not solely by engineering breakthroughs, but by access negotiations. With major publishers now at the table, the landscape is poised for a shift. For the user, the best strategy remains what it has always been: ask sharp questions. But soon, the machine’s answers may finally rest on deeper knowledge.


Executive Cognition, a New Learning Outcome We Cannot Ignore

Education has always been, at its core, a wager on the future. It prepares students not only for the world that is, but for the world that m...