Wednesday, August 27, 2025

Custom Bot Segregation and the Problem with a Hobbled Product

CSU’s adoption of ChatGPT Edu is, in many ways, a welcome move. The System has recognized that generative AI is no longer optional or experimental. It is part of the work students, researchers, and educators do across disciplines. Providing a dedicated version of the platform with institutional controls makes sense. But the way it has been implemented has led to a diminished version of what could have been a powerful tool.

The most immediate concern is the complete ban on third-party custom bots. Students and faculty cannot use them, and even more frustrating, they cannot share the ones they create beyond their own campus. The motivation is likely grounded in cybersecurity and privacy concerns. But the result is a flawed solution that restricts access to useful tools and blocks opportunities for creativity and professional development.

Some of the most valuable GPTs in use today come from third-party developers who specialize in specific domains. Bots that incorporate Wolfram, for instance, have become essential in areas like physics, engineering, and data science. ScholarAI and ScholarGPT are valuable research aids that are not easy to replicate. There are hundreds more potentially useful tools. Not having access to them on the CSU platform is not just a minor technical gap. It is an educational limitation.

The problem becomes even clearer when considering what students are allowed to do with their own work. If someone builds a custom GPT in a course project, they cannot share it publicly. There is no way to include it in a digital portfolio or present it to a potential employer. The result is that their work remains trapped inside the university’s system, unable to circulate or generate value beyond the classroom.

This limitation also weakens CSU’s ability to serve the public. Take, for example, an admissions advisor who wants to create a custom bot to help prospective or transfer students explore majors or understand credit transfers. The bot cannot be shared with anyone outside the CSU environment. In practice, the people who most need that information are blocked from using it. This cuts against the mission of outreach and access that most universities claim to support.

Faced with these limits, faculty and staff are left to find workarounds. Some of us, myself included, now juggle two accounts: one tied to CSU’s system and a personal one that allows access to third-party tools. We pay for the personal accounts out of pocket. This is not sustainable, and it introduces friction into the very work the platform was meant to support.

Higher education functions best when it remains open to the world. It thrives on collaboration across institutions, partnerships with industry, and the free exchange of ideas and tools. When platforms are locked down and creativity is siloed, that spirit is lost. We are left with a version of academic life that is narrower, more cautious, and less connected.

Of course, privacy and security matter. But so does trust in the people who make the university what it is. By preventing sharing and disabling custom bots, the policy sends a message that students and faculty cannot be trusted to use these tools responsibly. It puts caution ahead of creativity and treats containment as a form of care.

The solution is not difficult. Other platforms already support safer modes of sharing, such as read-only access, limited-time links, or approval systems. CSU could adopt similar measures and preserve both privacy and openness. What is needed is not better technology, but a shift in priorities.

Custom GPTs are not distractions. They are how people are beginning to build, explain, and share knowledge. If we expect students to thrive in that environment, they need access to the real tools of the present, not a constrained version from the past.



Saturday, August 23, 2025

The Start-up Advantage and the Plain Bot Paradox

In the gold rush to AI, start-ups seem, at first glance, to have the upper hand. They are unburdened by legacy infrastructure, free from the gravitational pull of yesterday’s systems, and unshackled by customer expectations formed in a pre-AI era. They can begin with a blank canvas and sketch directly in silicon, building products that assume AI not as an add-on, but as the core substrate. These AI-native approaches are unencumbered by the need to retrofit or translate—start-ups speak the native dialect of today’s machine learning systems, while incumbents struggle with costly accents.

In contrast, larger, established companies suffer from what could be called "retrofitting fatigue." Their products, honed over decades, rest on architectures that predate the transformer model. Introducing AI into such ecosystems isn’t like adding a module; it’s more akin to attempting a heart transplant on a marathon runner mid-race. Not only must the product work post-op, it must continue to serve a massive, often demanding, user base—an asset that is both their moat and their constraint.

Yet even as start-ups celebrate their greenfield momentum, they stumble into what we might call the plain bot paradox. No matter how clever the product, if the end-user can get equivalent value from a general-purpose AI like ChatGPT, what exactly is the start-up offering? The open secret in AI product development is this: it is easier than ever to build a “custom” bot that mimics almost any vertical-specific product. The problem is not technical feasibility. It’s differentiation.

A travel-planning bot? A productivity coach? A recruiter-screening assistant? All of these are delightful until a user realizes they can recreate something just as functional using a combination of ChatGPT and a few well-worded prompts. Or worse, that OpenAI or Anthropic might quietly roll out a built-in feature next week that wipes out an entire startup category—just as the “Learn with ChatGPT” feature recently did to a slew of bespoke AI tutoring tools. This isn’t disruption. It’s preemption.

The real kicker is that start-ups not only compete with each other but also with the very platforms they’re building on. This is like opening a coffee stand on a street where Starbucks has a legal right to install a kiosk next to you at any moment—and they already own the espresso machine.

So if start-ups risk commodification and incumbents risk inertia, is anyone safe? Some large companies attempt a third route: the internal start-up. Known in management lore as a “skunk works” team—originally a term coined at Lockheed to describe a renegade engineering group—these are designed to operate with the nimbleness of a start-up but the resources of a conglomerate. But even these in-house rebels face the plain bot paradox. They too must justify why their innovation can’t be replicated by a general AI and a plug-in. A sandboxed innovation team is still building castles on the same sand.

Which brings us to a more realistic and arguably wiser path forward for incumbents: don’t chase AI gimmicks, and certainly don’t just layer AI onto old products and call it transformation. (Microsoft, bless its heart, seems to be taking this route—slathering Copilot across its suite like a condiment, hoping it will make stale workflows taste fresh again.) Instead, the challenge is to imagine and invest in products that are both fundamentally new and fundamentally anchored in the company’s core assets—distribution, brand trust, proprietary data, deep domain expertise—things no plain bot can copy overnight.

For example, a bank doesn’t need to build yet another AI budgeting assistant. It needs to ask what role it can play in a world where money advice is free and instant. Perhaps the future product isn’t a dashboard, but a financial operating system deeply integrated with the bank’s own infrastructure—automated, secure, regulated, and impossible for a start-up to replicate without decades of licensing and customer trust.

In other words, companies must bet not on AI as a bolt-on feature, but on rethinking the problems they’re uniquely positioned to solve in an AI-saturated world. This might mean fewer moonshots and more thoughtful recalibrations. It might mean killing legacy products before customers are ready, or inventing new categories that make sense only if AI is taken for granted.

The trick, perhaps, is to act like a start-up but think like an incumbent. And for start-ups? To act like an incumbent long before they become one. Because in a world of rapidly generalizing intelligence, the question is not what can be built, but what can endure.



Tuesday, August 19, 2025

Why Agentic AI Is Not What They Say It Is

There is a lot of hype around agentic AI, systems that can take a general instruction, break it into steps, and carry it through without help. The appeal is obvious: less micromanagement, more automation. But in practice, it rarely delivers.

These systems operate unsupervised. If they make a small mistake early on, they carry it forward, step by step, without noticing. By the time the result surfaces, the damage is already baked in. It looks finished but is not useful.
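The compounding effect is easy to quantify. A toy calculation (my own illustration, not a measurement of any real system), assuming each step succeeds independently with the same probability:

```python
# Toy model of error compounding in an unsupervised agent pipeline.
# If each step succeeds with probability p and no step is ever
# reviewed, end-to-end success decays exponentially with length.

def end_to_end_success(p_step: float, n_steps: int) -> float:
    """Probability that every one of n_steps succeeds, unsupervised."""
    return p_step ** n_steps

# A 95%-reliable step sounds good. Chain twenty of them:
print(round(end_to_end_success(0.95, 20), 2))  # 0.36
```

Under those assumptions, a pipeline of twenty 95%-reliable steps finishes correctly only about a third of the time, which is why a result can look finished while quietly carrying an early mistake all the way through.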

Humans handle complexity differently. We correct course as we go. We spot inconsistencies, hesitate when something feels off, and adjust. That instinctive supervision, often invisible, is where most of the value lies. Not in brute output, but in the few moves that shape it.

The irony is that the more reliable and repeatable a task is, the less sense it makes to use AI. Traditional programming is better suited to predictable workflows. It is deterministic, transparent, and does not hallucinate. So if the steps are that well defined, why introduce a probabilistic system at all?

Where AI shines is in its flexibility, its ability to assist in murky, open-ended problems. But those are exactly the problems where full AI autonomy breaks down. The messier the task, the more essential human supervision becomes.

There is also cost. Agentic AI often burns through vast compute resources chasing a slightly misunderstood task. And once it is done, a human still has to step in and rerun it, burning through even more resources.

Yes, AI makes humans vastly more productive. But the idea that AI agents will soon replace humans overseeing AI feels wrong. At least I have not seen anything even remotely capable of doing so. Human supervision is not a weakness to be engineered away. It is where the human-machine blended intelligence actually happens.



Sunday, August 10, 2025

When Intelligence Trips Over Itself


Modern intelligence, whether silicon or biological, is often tripped up not by ignorance but by abundance. When a system has the bandwidth to entertain countless possibilities, it will often do so even when the problem demands only one. This is the problem of overthinking. It is not confined to anxious students before an exam or committees drafting endless reports. It now appears in machine intelligence too.

The pattern is the same. A large language model with vast parameters, trained on oceans of data, receives a simple task: write an email, solve an arithmetic puzzle, summarize a paragraph. It could apply the shortest path to the solution, but the surplus capacity tempts it into elaboration: building scaffolding for a hut, stacking analysis upon analysis until the original goal is obscured. The human version is familiar: the writer who takes three days to craft a birthday card, or the engineer who designs a spacecraft to carry groceries.

It was not supposed to be this way. The promise of AI “triage” was to select the right model for the right problem. A trivial query would go to a lightweight system, while a dense legal contract would be parsed by the heavyweight. In theory, this mirrors the human brain’s ability to recruit different mental resources depending on the task. In practice, if the triage itself is handled by a highly capable model, we are back where we started. A system too clever for its own good can overcomplicate the act of deciding how not to overcomplicate.

Before the release of the most advanced models, there was a certain blunt efficiency in older systems. They could not afford to waste cycles on ornate reasoning, so they didn’t. Just as a village carpenter without power tools cuts wood cleanly with a single saw, a smaller model works directly from inputs to outputs. The risk of convolution was minimal because convolution was beyond its means.

This limitation hints at a broader truth about intelligence: the ability to simplify is not a crude by-product of ignorance but a hallmark of mastery. Seeing the simple in the complex requires recognizing which details can be safely ignored without damaging the structure of the answer. It is a skill that mathematics prizes and that politics often lacks, where simple slogans are prized but seldom accurate.

Not all humans excel at this. Some are chronic complicators, capable of turning a dinner plan into a logistical nightmare. Others, whether through temperament or training, can cut to the core of a problem in minutes. This talent is partly instinctive, but it can be cultivated. It demands the discipline to resist showing off all that one knows, and the humility to accept that the shortest path might be the best one.

In education, this principle is often inverted. Students are rewarded for showing all their working, which is fine for demonstrating understanding but can entrench the habit of exhaustive thinking even when unnecessary. In technology, the same bias exists: “more features” is often sold as progress, even when each extra layer increases the chance of failure. The smartphone with twenty overlapping settings menus is no more “intelligent” than one with a handful of intuitive buttons.

The challenge for AI design is to embed this selective simplicity without crippling capacity. One approach is genuinely multi-tiered systems, where the triage mechanism is not a miniature genius in its own right but a deliberately constrained judge. Another is to train large models not only to produce accurate answers but also to value resource economy, much as humans learn to answer a question in an exam within the time limit.
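The "deliberately constrained judge" idea can be made concrete. A minimal sketch, entirely hypothetical: the router uses only cheap, transparent rules, so the act of deciding cannot itself become an exercise in overthinking. The model names and keyword list are invented for illustration.

```python
# Sketch of a deliberately constrained triage judge (hypothetical).
# Instead of asking a large model to route queries, use simple,
# inspectable heuristics that cannot themselves overcomplicate.

HEAVY_MARKERS = {"contract", "clause", "proof", "derive", "theorem"}

def triage(query: str) -> str:
    """Route a query to a model tier using fixed rules only."""
    words = [w.strip(".,?!").lower() for w in query.split()]
    # Rule 1: short queries with no technical markers -> light tier.
    if len(words) <= 12 and not any(w in HEAVY_MARKERS for w in words):
        return "light-model"
    # Rule 2: everything else -> heavyweight tier.
    return "heavy-model"

print(triage("What time is it in Tokyo?"))
# light-model
print(triage("Parse this dense legal contract and flag every clause."))
# heavy-model
```

The judge here is intentionally dumb: it cannot reason its way into elaborate routing decisions, which is precisely the point of keeping it out of the trap the essay describes.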

For individuals, the lesson is parallel. High mental horsepower can be as much a liability as an asset if it is allowed to run without restraint. Some of the most effective thinkers are those who know when to stop thinking. They can hold complexity in reserve, deploying it only when the problem justifies the cost. The rest of the time they rely on heuristics, rules of thumb, and the confidence that a rough answer now may be better than a perfect answer too late.

We live in an era that celebrates maximalism: bigger models, bigger data, bigger ideas. But as both humans and machines show, sometimes the smarter move is to shrink the frame. To stop not because you cannot go further, but because you have already gone far enough.



