Friday, February 28, 2025

The Invasive Innovation, a Case of AI in Education

Disruptive innovations, as defined by Clayton Christensen, typically emerge at the margins before gradually overtaking established players. In contrast, artificial intelligence in education is not a slow-moving force of competitive displacement. It is an invasive innovation—spreading far beyond the predictions of Everett Rogers’ diffusion model, propelled not so much by institutions as by students themselves. This is not a technology waiting to be adopted. It is a reality forcing adaptation. Treating it as an optional innovation misses the point entirely.

Educational leaders, policymakers, and institutions may wish to evaluate AI’s impact, consider best practices, and implement careful frameworks. But they are not in control of the timeline. The adoption is not up for debate. It is already happening, driven by students who see AI not as a tool of the future but as an everyday utility, as natural as search engines or calculators. The question is not when to integrate AI into education, but how to manage an environment where AI is already integrated—often in ways institutions neither designed nor anticipated.

This is why traditional approaches to educational innovation are inadequate. Normally, innovations are assessed, piloted, and then scaled. But AI in education is unfolding more like a crisis than an initiative. It requires a response more akin to a public health emergency than a strategic rollout. When COVID-19 disrupted education, schools did not spend years analyzing whether remote learning was effective. They pivoted overnight, creating imperfect but necessary solutions. The AI invasion demands a similar level of urgency.

Institutional response to invasive AI must be comprehensive rather than defensive. How do we accommodate this invasive innovation? Putting AI tools directly in the hands of faculty represents a critical first step, ensuring they engage with the same technologies their students already access freely. This must be paired with robust professional development—not merely technical training but genuine pedagogical exploration that helps educators reimagine their disciplinary approaches in an AI-augmented world.

Beyond tools and training, institutions must invest in creating dedicated time and space for disciplinary teams to collaboratively revise their assignments. The goal is not to create "cheat-proof" assessments but to develop authentic tasks that meaningfully incorporate AI as a partner in learning. This demands protected time away from everyday teaching pressures, where faculty can experiment, fail, refine, and document new approaches that acknowledge AI as a permanent fixture in their educational landscape rather than an external threat to be neutralized.



Wednesday, February 26, 2025

Sidorkin Hypothesis: Beyond Procedural Fluency in AI-Augmented Instruction

The hypothesis is this: Quality AI-augmented instruction reduces emphasis on procedural skills while enhancing higher-order thinking and conceptual learning. This shift may offer an alternative pathway for cognitive offloading, which could supplement or even replace traditional procedural skills acquisition.

Cognitive load theory, developed by John Sweller in the 1980s, provides a useful framework. The theory posits that our working memory has limited capacity when dealing with new information. Sweller himself, along with many other cognitive scientists, views fluency in procedural skills as a major mechanism for cognitive offloading. When basic procedures become automatic through practice, they consume fewer working memory resources, thereby freeing up mental capacity for higher-order learning. This is why traditional education emphasizes mastering procedural skills—calculating derivatives, balancing chemical equations, applying grammatical rules—before tackling complex conceptual work.

In my view, AI tools function as an alternative cognitive offloading mechanism that can complement or even supersede traditional procedural fluency. These tools handle procedural tasks, creating cognitive space for students to engage with concepts at a deeper level without first developing procedural automaticity. Consider a student learning calculus: traditionally, substantial mental resources are dedicated to memorizing differentiation rules and practicing their application. With AI assistance handling these procedural aspects, the student can direct cognitive resources toward understanding conceptual foundations—what derivatives actually represent, how they relate to real-world phenomena, and when different approaches should be applied.
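To make the offloading concrete, here is a minimal sketch in Python using the open-source sympy library, which handles the mechanics of differentiation much as an AI assistant might. The position function and the pedagogical framing are illustrative assumptions, not a prescribed lesson.

    # Procedural offloading: the library applies the differentiation rules,
    # so the student's attention can go to interpretation.
    # Requires sympy (pip install sympy).
    import sympy as sp

    x = sp.symbols('x')
    position = x**3 - 6*x**2 + 9*x       # an illustrative position function

    velocity = sp.diff(position, x)      # 3*x**2 - 12*x + 9
    acceleration = sp.diff(velocity, x)  # 6*x - 12

    # The power rule is offloaded; the conceptual questions remain:
    # where is velocity zero, and what does that say about the motion?
    print("velocity:", velocity)
    print("critical points:", sp.solve(velocity, x))  # [1, 3]

The student never applies the power rule by hand, yet the interpretive work of reading the critical points back into the motion they describe remains entirely theirs.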

This approach necessitates reimagining assessment. Traditional writing instruction heavily rewards procedural accuracy in grammar, citation formats, and structural conventions, but in an AI-augmented educational landscape, assessment must evolve to measure the ability to prompt AI tools effectively, the quality of critical analysis, application of rhetorical strategies in novel contexts, and evaluation of AI-generated text. The focus shifts from "Can you properly format this essay?" to "Can you determine which rhetorical approach best suits your purpose, interpret complex texts critically, and evaluate whether the AI-generated analysis captures the deeper meaning of the work being examined?"

This transformation faces substantial challenges. Educators rightly question whether students can truly grasp concepts without some level of procedural fluency. It is likely that some basic procedural knowledge remains necessary for conceptual understanding. The optimal approach involves thoughtful integration—using AI to reduce procedural load while ensuring students develop core competencies. To put it roughly: procedural knowledge, yes; procedural fluency, not really.

The equalizing potential of AI-augmented instruction cannot be overlooked. Students who traditionally struggle with procedural aspects—whether due to learning differences, educational disruptions, or lack of prior opportunities—may find themselves able to participate more fully in higher-order learning. This democratization of advanced conceptual engagement could elevate educational outcomes across diverse student populations. We stand at the threshold of potentially transforming education from a system that sorts students based on procedural proficiency to one that cultivates universally richer, more advanced learning outcomes: deeper critical analysis, more sophisticated problem formulation, nuanced ethical reasoning, and creative synthesis across domains—intellectual capabilities that more authentically reflect the complex demands of contemporary society.



Saturday, February 22, 2025

On Techno-Utopianism: Elon Musk and the Soul of Education

The recent video of Elon Musk promising AI teachers reveals a common misunderstanding among technology leaders. They see education primarily as information transfer and skills training, where an infinitely patient AI system delivers perfectly tailored content to each student. This viewpoint ignores the fundamental nature of education as a relational institution.

Since Gutenberg's invention of the printing press, motivated individuals have been able to teach themselves almost anything. Libraries contain more knowledge than any single teacher. Yet most people do not become autodidacts. Why is that? The question is not how to make knowledge more accessible, but why people choose to engage with it.

Teachers generate reasons to learn through two main approaches. In more constructivist settings, they inspire curiosity and create engaging problems to solve. In more traditional schools, they maintain authority and discipline. In most schools, there is a mixture of both. Both methods work because they establish a social framework for learning. A good teacher knows when to push and when to comfort, when to explain and when to let students struggle.

The comparison of AI to Einstein as a teacher misses the point. Teaching requires different qualities than scientific genius - the capacity to enter a relationship, to create meaningful connections, and to help students discover their own reasons for learning. An AI system, no matter how knowledgeable, cannot do any of that.

Students often study not because they find the subject inherently fascinating, but because they respect their teacher, want to belong to a learning community, or seek to fulfill social expectations. Even negative motivations like fear of disappointing others have a distinctly human character.

The techno-utopian vision reduces learning to information exchanges and skill assessments. This mechanistic view fails to account for the social and emotional dimensions of human development. While AI can enhance teaching by handling routine tasks, it cannot replace the essential human relationships that drive educational engagement. The future of education lies not in perfecting content delivery algorithms, but in strengthening the relational foundations of learning. 

Such overblown promises about AI in education do more harm than good. They create unnecessary anxiety among teachers and administrators, leading to resistance against even modest technological improvements. Instead of addressing real challenges in education - student engagement, equitable access, and meaningful assessment - institutions get distracted by unrealistic visions of AI-driven transformation. We need a more balanced approach that recognizes both the potential and limitations of AI in supporting, not replacing, the fundamentally human enterprise of education.



Thursday, February 20, 2025

The AI Recruiter Will See You Now

The tidy world of job applications, carefully curated CVs, and anxious cover letters may soon become a relic. Every professional now leaves digital traces across the internet - their work, opinions, and achievements create detailed patterns of their capabilities. Artificial intelligence agents will soon navigate these digital landscapes, transforming how organizations find talent.

Unlike current recruitment tools that passively wait for queries, these AI agents will actively explore the internet, following leads and making connections. They will analyze not just LinkedIn profiles, but candidates' entire digital footprint. The approach promises to solve a persistent problem in recruitment: finding qualified people who are not actively job-hunting.

The matching process will extend beyond technical qualifications. Digital footprints reveal working styles and professional values. A cybersecurity position might require someone who demonstrates consistent risk awareness; an innovation officer role might suit someone comfortable with uncertainty. AI agents could assess such traits by analyzing candidates' professional communications and public activities.
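As a deliberately toy sketch of what such matching might look like, the following Python fragment scores candidates against a weighted trait profile. The traits, weights, and candidate "signals" are invented for illustration; a real agent would have to derive such scores from analysis of public professional activity.

    # Hypothetical role profile: trait -> weight (weights sum to 1).
    ROLE_PROFILE = {
        "risk_awareness": 0.5,    # e.g., a cybersecurity position
        "technical_depth": 0.3,
        "communication": 0.2,
    }

    def match_score(signals):
        # Weighted fit on a 0-1 scale; missing traits count as zero evidence.
        return sum(w * signals.get(trait, 0.0)
                   for trait, w in ROLE_PROFILE.items())

    candidates = {
        "A": {"risk_awareness": 0.9, "technical_depth": 0.6, "communication": 0.4},
        "B": {"risk_awareness": 0.3, "technical_depth": 0.9, "communication": 0.8},
    }

    # The agent ranks and recommends; a human recruiter decides.
    for name in sorted(candidates, key=lambda n: -match_score(candidates[n])):
        print(name, round(match_score(candidates[name]), 2))  # A 0.71, B 0.58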

Yet this technological advance brings fresh concerns. Privacy considerations demand attention - while AI agents would analyze public information, organizations must establish clear ethical guidelines about data usage. More fundamentally, AI agents must remain sophisticated talent scouts rather than final decision makers. They can gather evidence and make recommendations, but human recruiters must evaluate suggestions within their understanding of organizational needs.

The transformation suggests a future where talent discovery becomes more equitable. AI agents could help overcome human biases by focusing on demonstrated capabilities rather than credentials or connections. The winners will be organizations that master this partnership between artificial intelligence and human judgment. The losers may be traditional recruitment agencies - unless they swiftly adapt to the new reality.





Monday, February 10, 2025

Form-substance discrimination, a new learning outcome

We have long assumed that clear writing signals clear thinking. Schools teach writing as if it were math - a rigorous exercise that develops logical thinking. Editors catch not just errors but muddled ideas. Or so the theory goes. Artificial intelligence shatters this comfortable assumption by churning out impeccably structured prose devoid of original thought.

Form-substance discrimination represents a higher-order cognitive skill, similar to what art historians develop when they learn to separate technical mastery from creative vision. Just as an art student must unlearn their initial attraction to photorealistic paintings, readers now must develop resistance to seductively clear prose. This requires a kind of cognitive inhibition - the ability to suppress immediate aesthetic pleasure for the sake of deeper analysis.

The skill builds on existing metacognitive abilities but requires their novel application. Readers already know how to identify main ideas, analyze arguments, and evaluate evidence. What is new is the need to perform these operations while actively discounting the surface appeal of the text. This resembles what wine tasters do when they evaluate wines blind, stripped of prestigious labels and beautiful bottles.

The development follows a predictable pattern. At first, readers struggle to overcome their ingrained respect for well-crafted prose. The initial challenge lies not in identifying weak ideas but in giving oneself permission to criticize a text that follows all the rules of good writing. This mirrors the development of critical thinking in general, where students must learn to question authority figures who appear competent and confident.

The second stage involves developing specific techniques for idea extraction. Readers learn to create idea maps independent of the text's structure, to count unique concepts rather than words, to identify circular arguments hidden behind elegant transitions. They begin to see how AI-generated text often creates an illusion of logical flow while merely restating the same point in different words.
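One crude way to operationalize "counting unique concepts rather than words" is a lexical heuristic like the Python sketch below: it measures the ratio of distinct content words to total content words and flags sentence pairs that largely restate one another. The stopword list and similarity threshold are illustrative assumptions, not validated measures of intellectual substance.

    import re

    STOPWORDS = {"the", "a", "an", "of", "to", "and", "in", "on", "is",
                 "are", "by", "that", "this", "it", "as", "for", "with"}

    def idea_density(text):
        # Ratio of distinct content words to total content words.
        words = [w for w in re.findall(r"[a-z']+", text.lower())
                 if w not in STOPWORDS]
        return len(set(words)) / max(len(words), 1)

    def restated_pairs(text, threshold=0.5):
        # Count sentence pairs whose content words largely overlap (Jaccard).
        sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
        bags = [set(re.findall(r"[a-z']+", s.lower())) - STOPWORDS
                for s in sentences]
        return sum(1 for i in range(len(bags)) for j in range(i + 1, len(bags))
                   if bags[i] and bags[j]
                   and len(bags[i] & bags[j]) / len(bags[i] | bags[j]) >= threshold)

    sample = ("Innovation drives growth. Growth is driven by innovation. "
              "Innovation, in turn, drives further growth.")
    print(round(idea_density(sample), 2))  # 0.55: few distinct concepts
    print(restated_pairs(sample))          # 2: the same point, restated

No such heuristic can substitute for human judgment, but even this toy version makes visible how three fluent sentences can carry a single idea.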

The final stage brings automaticity. Experienced readers develop an immediate sense of a text's intellectual weight, just as experienced teachers can quickly gauge a student's understanding despite fluent recitation. This involves pattern recognition built through exposure to many examples of both substantive and hollow texts.

The educational implications are significant. Writing instruction must now explicitly separate craft from content. Students need exposure to both well-written texts with weak ideas and poorly written texts with strong ideas. They must practice identifying when sophisticated language masks conceptual poverty and when rough expression contains genuine insight.

This shift parallels broader changes in how we process information. In a world of information abundance, the key skill is no longer finding or producing well-formed content but discerning its value. Form-substance discrimination represents a specific case of this general challenge - learning to navigate a world where traditional quality signals no longer reliably indicate underlying worth.

The skill matters beyond academia. Business leaders reading AI-generated reports, citizens evaluating political arguments, professionals studying their field's literature - all need to separate rhetorical sophistication from intellectual contribution. As AI writing tools improve, this ability will become as fundamental as basic literacy.

We face a paradox: the better AI becomes at writing, the more important it becomes for humans to see through good writing. The very perfection of AI-generated prose creates a new kind of opacity that readers must learn to penetrate. Form-substance discrimination thus emerges not just as an academic skill but as a key component of modern critical thinking.



Sunday, February 9, 2025

AI and Labor: A Smarter Path Forward

Trade unions face a defining moment. Artificial intelligence presents genuine concerns about job displacement, yet the response need not mirror historical patterns of resistance to technological change. The Luddite movement of the 1810s serves as a cautionary tale - their destruction of mechanized looms neither preserved jobs nor improved workers’ conditions. All technology affects labor; that is what technology is: work assistance.

The automation paradox offers a more nuanced perspective. While machines replace specific tasks, they generate new forms of work. The introduction of automated teller machines in banking led to more bank branches and tellers performing complex customer service roles. This pattern repeats across industries - automation reduces costs, expands services, and creates different job categories.

Labor leaders would serve their members better by negotiating robust transition arrangements. Key demands should include employer-funded retraining programs, preferential access to new positions, and compensation packages that recognize acquired skills. The focus must shift from preventing change to shaping its implementation.

The pace of AI integration varies significantly by sector. Manufacturing and data processing may see rapid adoption, but industries built on human relationships - education, healthcare, social work - will incorporate AI gradually as assistive technology. Complex organizations require extensive testing and workflow redesign before meaningful automation becomes feasible.

Economic history demonstrates that reduced production costs expand economic activity. When basic tasks become automated, human attention shifts to more sophisticated problems. The telephone eliminated telegraph operators but created vast new communication industries. Similarly, AI will likely automate routine cognitive work while opening possibilities in areas we have not yet imagined.

Unions retain significant leverage during this transition. Organizations need experienced workers to implement new technologies effectively. This position allows labor to negotiate favorable terms - extended notice periods, substantial retraining budgets, wage protection during transition, and clear paths to higher-skilled roles.

The key lies in recognizing AI as a tool for augmentation rather than pure replacement. A machine learning system may process medical images faster than radiologists, but interpreting results in complex cases still requires human judgment. Similar patterns will emerge across professions - AI handling routine tasks while humans focus on nuanced decision-making and interpersonal elements.

Rather than resist change, unions should position themselves as partners in managing transition. This approach preserves their relevance and better serves member interests. The alternative - attempting to prevent AI adoption - risks marginalization as companies seek ways around opposition or relocate to more amenable jurisdictions.

The challenge for labor leadership is to shift from defensive postures to proactive engagement. This means developing expertise in emerging technologies, identifying opportunities for worker advancement, and ensuring transition arrangements protect vulnerable members while facilitating adaptation to changing workplace demands.



Tuesday, February 4, 2025

Augmented Problem Finding: The Next Frontier in AI Literacy

In my recent blog on task decomposition as a key AI skill, I highlighted how breaking down complex problems enables effective human-AI collaboration. Yet before we can decompose a task, we must identify which problems are worth pursuing - a skill that takes on new dimensions in the age of AI.

This ability to recognize solvable problems expands dramatically with AI tools at our disposal. Tasks once considered too time-consuming or complex suddenly become manageable. The cognitive offloading that AI enables does not just help us solve existing problems - it fundamentally reshapes our understanding of what constitutes a tractable challenge.

Consider how VisiCalc transformed financial planning in the early 1980s. Initially seen as a mere automation tool for accountants, it revolutionized business planning by enabling instant scenario analysis. Tasks that would have consumed days of manual recalculation became instantaneous, allowing professionals to explore multiple strategic options and ask "what if" questions they would not have contemplated before. Similarly, AI prompts us to reconsider which intellectual tasks we should undertake. Writing a comprehensive literature review might have once consumed months; with AI assistance, scholars can now contemplate more ambitious syntheses of knowledge.

This expanded problem space creates its own paradox. As more tasks become technically feasible, the challenge shifts to identifying which ones merit attention. The skill resembles what cognitive psychologists call "problem finding," but with an important twist. Traditional problem finding focuses on identifying gaps or needs. Augmented problem finding requires understanding both human and AI capabilities to recognize opportunities in this enlarged cognitive landscape.

The distinction becomes clear in professional settings. Experienced AI users develop an intuitive sense of which tasks to delegate and which to tackle themselves. They recognize when a seemingly straightforward request actually requires careful human oversight, or when an apparently complex task might yield to well-structured AI assistance. This judgment develops through experience but could be taught more systematically.

The implications extend beyond individual productivity. Organizations must now cultivate this capacity across their workforce. The competitive advantage increasingly lies not in having access to AI tools - these are becoming ubiquitous - but in identifying novel applications for them. This explains why some organizations extract more value from AI than others, despite using similar technologies.

Teaching augmented problem finding requires a different approach from traditional problem-solving instruction. Students need exposure to varied scenarios where AI capabilities interact with human judgment. They must learn to recognize patterns in successful AI applications while developing realistic expectations about AI limitations. Most importantly, they need practice in identifying opportunities that emerge from combining human and machine capabilities in novel ways.

The skill also has ethical dimensions. Not every task that can be automated should be. Augmented problem finding includes judging when human involvement adds necessary value, even at the cost of efficiency. It requires balancing the technical feasibility of AI solutions against broader organizational and societal impacts.

As AI capabilities evolve, this skill will become increasingly crucial. The future belongs not to those who can best use AI tools, but to those who can best identify opportunities for their application. This suggests a shift in how we think about AI literacy - from focusing on technical proficiency to developing sophisticated judgment about when and how to engage AI capabilities.

The automation paradox that Lisanne Bainbridge identified in her 1983 analysis of industrial systems points to an interesting future. As we become more adept at augmented problem finding, we discover new challenges that merit attention. This creates a virtuous cycle of innovation, where each advance in AI capability opens new frontiers for human creativity and judgment.

Perhaps most intriguingly, this skill might represent a distinctly human advantage in the age of AI. While machines excel at solving well-defined problems, the ability to identify worthy challenges remains a uniquely human capability. By developing our capacity for augmented problem finding, we ensure a meaningful role for human judgment in an increasingly automated world.



Saturday, February 1, 2025

Task Decomposition, a core AI skill

The effective use of artificial intelligence depends on our ability to structure problems in ways that align with both human and machine capabilities. While AI demonstrates remarkable computational abilities, its effectiveness relies on carefully structured input and systematic oversight. This suggests that our focus should shift toward understanding how to break down complex tasks into components that leverage the respective strengths of humans and machines.

Task decomposition - the practice of breaking larger problems into manageable parts - predates AI but takes on new significance in this context. Research in expertise studies shows that experienced problem-solvers often approach complex challenges by identifying distinct components and their relationships. This natural human tendency provides a framework for thinking about AI collaboration: we need to recognize which aspects of a task benefit from computational processing and which require human judgment.

The interaction between human users and AI systems appears to follow certain patterns. Those who use AI effectively tend to approach it as a collaborative tool rather than a complete solution. They typically work through multiple iterations: breaking down the problem, testing AI responses, evaluating results, and adjusting their approach. This mirrors established practices in other domains where experts regularly refine their solutions through systematic trial and error.

Consider the task of writing a research paper. Rather than requesting a complete document from AI, a more effective approach involves breaking down the process: developing an outline, gathering relevant sources, analyzing specific arguments, and integrating various perspectives. Similarly, in data analysis, success often comes from methodically defining questions, selecting appropriate datasets, using AI for initial pattern recognition, and applying human expertise to interpret the findings.
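A minimal sketch of that decomposition in Python follows. The ask_llm function is a hypothetical placeholder for whatever model client one actually uses; the structure, with small steps and a human checkpoint after each, is the point.

    # Task decomposition for AI-assisted writing, sketched under the
    # assumption of a generic text-generating model behind ask_llm.
    def ask_llm(prompt):
        # Hypothetical stand-in: substitute a call to your model of choice.
        return f"[model output for: {prompt.splitlines()[0]}]"

    def human_review(label, draft):
        # Checkpoint: a person inspects and, in practice, edits each product.
        print(f"--- review: {label} ---\n{draft}\n")
        return draft

    def write_paper(topic):
        # Step 1: outline first - cheap to produce, easy to redirect early.
        outline = human_review("outline",
                               ask_llm(f"Draft a five-section outline on: {topic}"))
        # Step 2: one section at a time, so weak arguments surface in isolation.
        sections = []
        for i in range(1, 6):
            prompt = (f"Using this outline:\n{outline}\n"
                      f"Draft section {i} with explicit claims and evidence.")
            sections.append(human_review(f"section {i}", ask_llm(prompt)))
        # Step 3: the model proposes transitions; integration stays human.
        return human_review("full draft",
                            ask_llm("Propose transitions joining:\n"
                                    + "\n\n".join(sections)))

    write_paper("AI and cognitive offloading in education")

Each checkpoint is where human judgment re-enters the loop; the same skeleton transfers to the data-analysis example, with pattern recognition delegated and interpretation retained.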

This collaborative approach serves two purposes. First, it helps manage complexity by distributing cognitive effort across human and machine resources. Second, it maintains human oversight of the process while benefiting from AI's computational capabilities. The goal is not to automate thinking but to enhance it through structured collaboration.

Current educational practices have not yet fully adapted to this reality. While many institutions offer technical training in AI or discuss its ethical implications, fewer focus on teaching systematic approaches to human-AI collaboration. Students need explicit instruction in how to break down complex tasks and document their decision-making processes when working with AI tools.

To address this gap, educational programs could incorporate several key elements:

  1. Practice in systematic task analysis and decomposition
  2. Training in structured approaches to AI interaction
  3. Documentation of decision-making processes in AI-assisted work
  4. Critical evaluation of AI outputs and limitations
  5. Integration of human expertise with AI capabilities

The emergence of AI tools prompts us to examine our own cognitive processes more explicitly. As we learn to structure problems for AI collaboration, we also develop a clearer understanding of our own problem-solving approaches. This suggests that learning to work effectively with AI involves not just technical skills but also enhanced metacognition - thinking about our own thinking.

The future of human-AI collaboration likely depends less on technological advancement and more on our ability to develop systematic approaches to task decomposition. By focusing on this fundamental skill, we can work toward more effective integration of human and machine capabilities while maintaining the critical role of human judgment and oversight.

These observations and suggestions should be treated as starting points for further investigation rather than definitive conclusions. As we gather more evidence about effective human-AI collaboration, our understanding of task decomposition and its role in this process will likely evolve. The key is to maintain a balanced approach that recognizes both the potential and limitations of AI while developing structured methods for its effective use. 



