Thursday, April 17, 2025

Why Education Clings to Irrelevance

There’s a particular irony in the way schools prepare us for life by arming us with tools designed for a world that no longer exists. Latin, long dead yet stubbornly alive in syllabi. Algebra, elegantly abstract but only marginally helpful in daily decisions compared to statistics, which might actually help you not get swindled by a mortgage. Even now, when artificial intelligence knocks at the schoolhouse door, the impulse is to double down on handwriting and closed-book tests, as if graphite and paranoia could hold off the machine age.

The problem isn’t just inertia. It’s something deeper—almost spiritual. Education doesn’t merely lag behind practicality. It seems to celebrate its irrelevance. Irrelevance isn’t a bug of the system. It’s the feature. Like a stubborn elder at the dinner table insisting that things were better “back in the day,” the institution of education resists change not just because it can, but because its authority is rooted in not changing.

To teach is, implicitly, to claim authority. But where does this authority come from? Not the future—that’s terra incognita. You can’t be an expert in what hasn’t happened yet. So the educator turns to the past, because the past has already been canonized. It has textbooks. It has footnotes. It has certainty. And nothing flatters the authority of a teacher more than certainty. Thus, to preserve this role, education must tie itself to a vision of life that is already archived.

There’s a paradox here: education is supposedly preparation for life, yet it often refuses to adapt to the life that is actually unfolding. It is conservative in the truest sense—not politically, but ontologically. It conserves the past. It prepares students for a utopia that once was, or more precisely, that someone once imagined was. The classroom becomes a time capsule, not a launchpad.

This is not entirely wrongheaded. The future is uncertain, and while we might guess at the skills it will demand—data literacy, adaptability, collaborative creativity—we cannot guarantee them. So we retreat to what we can guarantee: the conjugation of Latin verbs, the proofs of triangle geometry, the essays written in blue ink. It’s a strategy of safety cloaked as rigor. If we can’t foresee what will matter tomorrow, at least we can be very confident about what mattered yesterday.

And yet, this nostalgia has consequences. It breeds irrelevance not just in content, but in spirit. When students ask, “When will I ever use this?” and teachers respond with some forced scenario involving imaginary apples or train schedules, the real answer—“you probably won’t, but we’re doing it anyway”—lurks just beneath the surface. The curriculum becomes a kind of ritual, an educational incense burned in memory of older truths.

The arrival of AI only sharpens this tension. Faced with the destabilizing presence of machines that can write, summarize, solve, and simulate, many educators panic. But rather than adapting, they retreat further. The pencil becomes a moral statement. The closed classroom a sanctuary. There’s almost a theological quality to it, as if real learning must involve a kind of suffering, and anything too efficient is suspect.

It’s tempting to dismiss this all as folly, but maybe it reflects something deeply human. Our fear of irrelevance leads us to preserve what is already irrelevant, in hopes that it might make us relevant again. In this way, education mirrors its creators: creatures of habit, haunted by the past, nervous about the future, and always unsure where real authority should lie.

Perhaps that’s the real lesson schools teach, intentionally or not. That relevance is fragile, and that meaning is something we inherit before we can create it. 


Monday, April 7, 2025

Deep Research Is Still More of a Promise

The promise of deep research by AI tools like ChatGPT is simple: feed in a question, receive a nuanced, well-reasoned answer, complete with references and synthesis. And in some domains, it delivers admirably. It is at its best with media trends, tech news, and cultural analysis, sifting through the torrent of online commentary, news articles, blog posts, and social chatter to surface patterns and narratives. The real value here lies not just in the volume of data it processes, but in how the user frames the question. A clever, counterintuitive prompt can elicit insights that feel like genuine thought.

But the illusion shatters when the query turns academic. For scholarly literature reviews, the tool falters. This is not the fault of the software itself—there is no shortage of computational power or linguistic finesse. The limitation is upstream. Most academic journals sit behind expensive paywalls, historically inaccessible to large language models. The corpora these models are trained on have lacked precisely the kind of data that matters most for rigorous research: peer-reviewed studies, meta-analyses, theoretical frameworks built over decades.

This, however, is beginning to change. In May 2024, Microsoft signed a $10 million deal with Taylor & Francis to license journal content for its Copilot AI. OpenAI, for its part, has secured a two-year agreement with The Associated Press and forged partnerships with European publishers like Axel Springer, Le Monde, and Prisa Media—giving ChatGPT a better grasp of reputable journalistic content. Wiley joined the fray with a $23 million licensing deal granting an unnamed AI developer access to its academic publishing portfolio. Even Elsevier, long a fortress of paywalled knowledge, is now channeling its scholarly data into AI collaborations.

These are significant moves. They mark a transition from aspiration to access. Once these agreements begin to show up in AI performance, the quality of output will change markedly. A tool that could both identify the pivotal paper and incorporate it meaningfully into its reasoning would be a true research assistant—something closer to intellectual augmentation than mere computational summarization.

It is still early days. Scite, for now, remains stronger at pointing users to the right academic sources, even if its analytical output lacks flair. ChatGPT and its peers, in contrast, excel at synthesis but stumble when the raw material is lacking. The ideal tool is still on the horizon.

There is an irony, nevertheless. AI, the most advanced information-processing technology ever built, has been running on the least rigorous parts of the internet. It has quoted tweets with ease but struggled to cite the peer-reviewed studies that ought to anchor serious inquiry. That condition is not permanent, though; increasingly, it is a transitional one.

The future of AI in research will be determined not solely by engineering breakthroughs, but by access negotiations. With major publishers now at the table, the landscape is poised for a shift. For the user, the best strategy remains what it has always been: ask sharp questions. But soon, the machine’s answers may finally rest on deeper knowledge.

