Wednesday, March 12, 2025

The Modern Confessional: AI Disclosure as Ritual

 

Organizations across academia and publishing now routinely demand confession of AI use. Publishers require authors to disclose whether AI tools assisted in manuscript preparation. Funding agencies insert checkboxes for AI utilization. Academic journals add disclosure statements to submission forms. None adequately explain their rationale or how this information shapes evaluation.

This peculiar practice reveals our creation of a new moral domain around AI use in knowledge production. The requirement to disclose functions precisely as Michel Foucault described confessional practices in "The History of Sexuality." Foucault argued that confession itself produces the very notion of sin it purports to address. The act of requiring disclosure creates the impression of transgression where none inherently exists.

Medieval confession did not merely document pre-existing sins - it manufactured them through the very apparatus of confession. Similarly, disclosure requirements around AI use manufacture a transgressive quality around technologies that have no inherent moral valence.

The mechanics operate almost identically. Both create categories of behavior requiring special scrutiny. Both position authority figures as arbiters of acceptability. The confessing subject experiences this manufactured transgression viscerally - the academic disclosing AI use feels compelled toward contrition without clear understanding of what offense they have committed.

Authors find themselves in impossible positions, uncertain how much assistance constitutes meaningful use. Does grammar checking through Microsoft Editor count? What about Grammarly's suggestions? The lack of clear standards transforms disclosure into guesswork.

Rather than focusing on tools, we might evaluate outputs based on established academic standards regardless of production methods. This approach acknowledges that quality, originality and intellectual integrity manifest in final products, not production processes. Technical assistance has always existed across academic work - from statistical software to citation managers to editorial help from colleagues.

Current disclosure requirements function primarily as modern confession, manufacturing transgression through the very apparatus designed to reveal it. By recognizing this dynamic, we might reimagine our approach to technological assistance in ways that foster genuine integrity rather than performative disclosure.


Friday, February 28, 2025

The Invasive Innovation: A Case of AI in Education

Disruptive innovations, as defined by Clayton Christensen, typically emerge at the margins before gradually overtaking established players. In contrast, artificial intelligence in education is not a slow-moving force of competitive displacement. It is an invasive innovation—spreading far beyond the predictions of Everett Rogers’ diffusion model, propelled not so much by institutions as by students themselves. This is not a technology waiting to be adopted. It is a reality forcing adaptation. Treating it as an optional innovation misses the point entirely.

Educational leaders, policymakers, and institutions may wish to evaluate AI’s impact, consider best practices, and implement careful frameworks. But they are not in control of the timeline. The adoption is not up for debate. It is already happening, driven by students who see AI not as a tool of the future but as an everyday utility, as natural as search engines or calculators. The question is not when to integrate AI into education, but how to manage an environment where AI is already integrated—often in ways institutions neither designed nor anticipated.

This is why traditional approaches to educational innovation are inadequate. Normally, innovations are assessed, piloted, and then scaled. But AI in education is unfolding more like a crisis than an initiative. It requires a response more akin to a public health emergency than a strategic rollout. When COVID-19 disrupted education, schools did not spend years analyzing whether remote learning was effective. They pivoted overnight, creating imperfect but necessary solutions. The AI invasion demands a similar level of urgency.

Institutional response to invasive AI must be comprehensive rather than defensive. How do we accommodate this invasive innovation? Putting AI tools directly in the hands of faculty represents a critical first step, ensuring they engage with the same technologies their students already access freely. This must be paired with robust professional development—not merely technical training but genuine pedagogical exploration that helps educators reimagine their disciplinary approaches in an AI-augmented world.

Beyond tools and training, institutions must invest in creating dedicated time and space for disciplinary teams to collaboratively revise their assignments. The goal is not to create "cheat-proof" assessments but to develop authentic tasks that meaningfully incorporate AI as a partner in learning. This demands protected time away from everyday teaching pressures, where faculty can experiment, fail, refine, and document new approaches that acknowledge AI as a permanent fixture in their educational landscape rather than an external threat to be neutralized.



Wednesday, February 26, 2025

Sidorkin Hypothesis: Beyond Procedural Fluency in AI-Augmented Instruction

The hypothesis is this: Quality AI-augmented instruction reduces emphasis on procedural skills while enhancing higher-order thinking and conceptual learning. This shift may offer an alternative pathway for cognitive offloading, which could supplement or even replace traditional procedural skills acquisition.

Cognitive load theory, developed by John Sweller in the 1980s, provides a useful framework. The theory posits that our working memory has limited capacity when dealing with new information. Sweller himself, along with many other cognitive scientists, views fluency in procedural skills as a major mechanism for cognitive offloading. When basic procedures become automatic through practice, they consume fewer working memory resources, thereby freeing up mental capacity for higher-order learning. This is why traditional education emphasizes mastering procedural skills—calculating derivatives, balancing chemical equations, applying grammatical rules—before tackling complex conceptual work.

In my view, AI tools function as an alternative cognitive offloading mechanism that can complement or even supersede traditional procedural fluency. These tools handle procedural tasks, creating cognitive space for students to engage with concepts at a deeper level without first developing procedural automaticity. Consider a student learning calculus: traditionally, substantial mental resources are dedicated to memorizing differentiation rules and practicing their application. With AI assistance handling these procedural aspects, the student can direct cognitive resources toward understanding conceptual foundations—what derivatives actually represent, how they relate to real-world phenomena, and when different approaches should be applied.
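To make the offloading concrete, here is a minimal sketch - my illustration, not part of the original argument - using Python's sympy library. The symbolic engine performs the rule-based differentiation (the procedural step), while the closing comment points to the interpretive questions the student's freed-up working memory can address; the position function is an arbitrary example.

```python
# A minimal sketch of procedural offloading: the symbolic engine applies
# the differentiation rules so the learner can focus on interpretation.
# The position function below is an arbitrary example.
import sympy as sp

x = sp.symbols('x')
position = x**3 - 6*x**2 + 9*x        # position as a function of time x

velocity = sp.diff(position, x)       # the procedural step, offloaded to the tool
stationary = sp.solve(velocity, x)    # points where the derivative vanishes

print(f"f(x)  = {position}")
print(f"f'(x) = {velocity}")
print(f"f'(x) = 0 at x = {stationary}")
# The conceptual work stays with the student: what do these stationary
# points mean for the motion the function describes, and why?
```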

This approach necessitates reimagining assessment. Traditional writing instruction heavily rewards procedural accuracy in grammar, citation formats, and structural conventions, but in an AI-augmented educational landscape, assessment must evolve to measure the ability to prompt AI tools effectively, the quality of critical analysis, application of rhetorical strategies in novel contexts, and evaluation of AI-generated text. The focus shifts from "Can you properly format this essay?" to "Can you determine which rhetorical approach best suits your purpose, interpret complex texts critically, and evaluate whether the AI-generated analysis captures the deeper meaning of the work being examined?"

This transformation faces substantial challenges. Educators rightfully question whether students can truly grasp concepts without some level of procedural fluency. It is likely that some basic procedural knowledge remains necessary for conceptual understanding. The optimal approach involves thoughtful integration—using AI to reduce procedural load while ensuring students develop core competencies. To put it roughly: procedural knowledge, yes; procedural fluency, not really.

The equalizing potential of AI-augmented instruction cannot be overlooked. Students who traditionally struggle with procedural aspects—whether due to learning differences, educational disruptions, or lack of prior opportunities—may find themselves able to participate more fully in higher-order learning. This democratization of advanced conceptual engagement could elevate educational outcomes across diverse student populations. We stand at the threshold of potentially transforming education from a system that sorts students based on procedural proficiency to one that cultivates universally richer, more advanced learning outcomes: deeper critical analysis, more sophisticated problem formulation, nuanced ethical reasoning, and creative synthesis across domains—intellectual capabilities that more authentically reflect the complex demands of contemporary society.



Saturday, February 22, 2025

On Techno-Utopianism: Elon Musk and the Soul of Education

The recent video of Elon Musk promising AI teachers reveals a common misunderstanding among technology leaders. They see education primarily as information transfer and skills training, where an infinitely patient AI system delivers perfectly tailored content to each student. This viewpoint ignores the fundamental nature of education as a relational institution.

Since Gutenberg's invention of the printing press, motivated individuals have been able to teach themselves almost anything. Libraries contain more knowledge than any single teacher. Yet most people do not turn into autodidacts. Why is that? The question is not how to make knowledge more accessible, but why people choose to engage with it.

Teachers generate reasons to learn through two main approaches. In more constructivist settings, they inspire curiosity and create engaging problems to solve. In more traditional schools, they maintain authority and discipline. In most schools, there is a mixture of both. Both methods work because they establish a social framework for learning. A good teacher knows when to push and when to comfort, when to explain and when to let students struggle.

The comparison of AI to Einstein as a teacher misses the point. Teaching requires different qualities than scientific genius - the capacity to enter a relationship, to create meaningful connections, and to help students discover their own reasons for learning. An AI system, no matter how knowledgeable, cannot do any of that.

Students often study not because they find the subject inherently fascinating, but because they respect their teacher, want to belong to a learning community, or seek to fulfill social expectations. Even negative motivations like fear of disappointing others have a distinctly human character.

The techno-utopian vision reduces learning to information exchanges and skill assessments. This mechanistic view fails to account for the social and emotional dimensions of human development. While AI can enhance teaching by handling routine tasks, it cannot replace the essential human relationships that drive educational engagement. The future of education lies not in perfecting content delivery algorithms, but in strengthening the relational foundations of learning. 

Such overblown promises about AI in education do more harm than good. They create unnecessary anxiety among teachers and administrators, leading to resistance against even modest technological improvements. Instead of addressing real challenges in education - student engagement, equitable access, and meaningful assessment - institutions get distracted by unrealistic visions of AI-driven transformation. We need a more balanced approach that recognizes both the potential and limitations of AI in supporting, not replacing, the fundamentally human enterprise of education.



Thursday, February 20, 2025

The AI Recruiter Will See You Now

The tidy world of job applications, carefully curated CVs and anxious cover letters may soon become a relic. Every professional now leaves digital traces across the internet - their work, opinions, and achievements create detailed patterns of their capabilities. Artificial Intelligence agents will soon navigate these digital landscapes, transforming how organizations find talent.

Unlike current recruitment tools that passively wait for queries, these AI agents will actively explore the internet, following leads and making connections. They will analyze not just LinkedIn profiles, but candidates' entire digital footprint. The approach promises to solve a persistent problem in recruitment: finding qualified people who are not actively job-hunting.

The matching process will extend beyond technical qualifications. Digital footprints reveal working styles and professional values. A cybersecurity position might require someone who demonstrates consistent risk awareness; an innovation officer role might suit someone comfortable with uncertainty. AI agents could assess such traits by analyzing candidates' professional communications and public activities.

Yet this technological advance brings fresh concerns. Privacy considerations demand attention - while AI agents would analyze public information, organizations must establish clear ethical guidelines about data usage. More fundamentally, AI agents must remain sophisticated talent scouts rather than final decision makers. They can gather evidence and make recommendations, but human recruiters must evaluate suggestions within their understanding of organizational needs.

The transformation suggests a future where talent discovery becomes more equitable. AI agents could help overcome human biases by focusing on demonstrated capabilities rather than credentials or connections. The winners will be organizations that master this partnership between artificial intelligence and human judgment. The losers may be traditional recruitment agencies - unless they swiftly adapt to the new reality.





Monday, February 10, 2025

Form-Substance Discrimination: A New Learning Outcome

We have long assumed that clear writing signals clear thinking. Schools teach writing as if it were math - a rigorous exercise that develops logical thinking. Editors catch not just errors but muddled ideas. Or so the theory goes. Artificial intelligence shatters this comfortable assumption by churning out impeccably structured prose devoid of original thought.

Form-substance discrimination represents a higher-order cognitive skill, similar to what art historians develop when they learn to separate technical mastery from creative vision. Just as an art student must unlearn their initial attraction to photorealistic paintings, readers now must develop resistance to seductively clear prose. This requires a kind of cognitive inhibition - the ability to suppress immediate aesthetic pleasure for the sake of deeper analysis.

The skill builds on existing metacognitive abilities but requires their novel application. Readers already know how to identify main ideas, analyze arguments, and evaluate evidence. What is new is the need to perform these operations while actively discounting the surface appeal of the text. This resembles what wine tasters do when they evaluate wines blind, stripped of prestigious labels and beautiful bottles.

The development follows a predictable pattern. At first, readers struggle to overcome their ingrained respect for well-crafted prose. The initial challenge lies not in identifying weak ideas but in giving oneself permission to criticize a text that follows all the rules of good writing. This mirrors the development of critical thinking in general, where students must learn to question authority figures who appear competent and confident.

The second stage involves developing specific techniques for idea extraction. Readers learn to create idea maps independent of the text's structure, to count unique concepts rather than words, to identify circular arguments hidden behind elegant transitions. They begin to see how AI-generated text often creates an illusion of logical flow while merely restating the same point in different words.
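These second-stage techniques can be caricatured in code. The sketch below is a deliberately crude illustration of mine, not a method proposed here: it treats distinct content words as a rough proxy for unique concepts, and flags sentence pairs with high word overlap as possible restatements. The stopword list, the 0.6 threshold, and the sample text are all arbitrary assumptions for demonstration.

```python
# A crude, illustrative heuristic for "counting unique concepts rather than
# words" and spotting restatement. Stopwords and thresholds are arbitrary.
import re
from itertools import combinations

STOPWORDS = {
    "the", "a", "an", "and", "or", "but", "of", "to", "in", "on", "is",
    "are", "was", "were", "it", "this", "that", "as", "for", "with", "by",
}

def content_words(text: str) -> list[str]:
    """Lowercased words with stopwords removed."""
    words = re.findall(r"[a-z']+", text.lower())
    return [w for w in words if w not in STOPWORDS]

def concept_density(text: str) -> float:
    """Distinct content words per content word - a stand-in for 'unique concepts'."""
    words = content_words(text)
    return len(set(words)) / len(words) if words else 0.0

def restated_pairs(text: str, threshold: float = 0.6) -> list[tuple[str, str]]:
    """Sentence pairs whose content-word overlap (Jaccard) exceeds threshold."""
    sentences = [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]
    pairs = []
    for s1, s2 in combinations(sentences, 2):
        a, b = set(content_words(s1)), set(content_words(s2))
        if a and b and len(a & b) / len(a | b) > threshold:
            pairs.append((s1, s2))
    return pairs

sample = ("Innovation drives growth. Innovation clearly drives growth. "
          "Markets reward novel ideas.")
print(f"concept density: {concept_density(sample):.2f}")
print("restated pairs:", restated_pairs(sample))
```

Real texts would defeat such a heuristic quickly; the point is only that counting concepts rather than words is an operation distinct from admiring the prose.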

The final stage brings automaticity. Experienced readers develop an immediate sense of a text's intellectual weight, just as experienced teachers can quickly gauge a student's understanding despite fluent recitation. This involves pattern recognition built through exposure to many examples of both substantive and hollow texts.

The educational implications are significant. Writing instruction must now explicitly separate craft from content. Students need exposure to both well-written texts with weak ideas and poorly written texts with strong ideas. They must practice identifying when sophisticated language masks conceptual poverty and when rough expression contains genuine insight.

This shift parallels broader changes in how we process information. In a world of information abundance, the key skill is no longer finding or producing well-formed content but discerning its value. Form-substance discrimination represents a specific case of this general challenge - learning to navigate a world where traditional quality signals no longer reliably indicate underlying worth.

The skill matters beyond academia. Business leaders reading AI-generated reports, citizens evaluating political arguments, professionals studying their field's literature - all need to separate rhetorical sophistication from intellectual contribution. As AI writing tools improve, this ability will become as fundamental as basic literacy.

We face a paradox: the better AI becomes at writing, the more important it becomes for humans to see through good writing. The very perfection of AI-generated prose creates a new kind of opacity that readers must learn to penetrate. Form-substance discrimination thus emerges not just as an academic skill but as a key component of modern critical thinking.



Sunday, February 9, 2025

AI and Labor: A Smarter Path Forward

Trade unions face a defining moment. Artificial intelligence presents genuine concerns about job displacement, yet the response need not mirror historical patterns of resistance to technological change. The Luddite movement of the 1810s serves as a cautionary tale - their destruction of mechanized looms neither preserved jobs nor improved workers' conditions. All technology affects labor; that is what technology is: work assistance.

The automation paradox offers a more nuanced perspective. While machines replace specific tasks, they generate new forms of work. The introduction of automated teller machines in banking led to more bank branches and tellers performing complex customer service roles. This pattern repeats across industries - automation reduces costs, expands services, and creates different job categories.

Labor leaders would serve their members better by negotiating robust transition arrangements. Key demands should include employer-funded retraining programs, preferential access to new positions, and compensation packages that recognize acquired skills. The focus must shift from preventing change to shaping its implementation.

The pace of AI integration varies significantly by sector. Manufacturing and data processing may see rapid adoption, but industries built on human relationships - education, healthcare, social work - will incorporate AI gradually as assistive technology. Complex organizations require extensive testing and workflow redesign before meaningful automation becomes feasible.

Economic history demonstrates that reduced production costs expand economic activity. When basic tasks become automated, human attention shifts to more sophisticated problems. The telephone eliminated telegraph operators but created vast new communication industries. Similarly, AI will likely automate routine cognitive work while opening possibilities in areas we have not yet imagined.

Unions retain significant leverage during this transition. Organizations need experienced workers to implement new technologies effectively. This position allows labor to negotiate favorable terms - extended notice periods, substantial retraining budgets, wage protection during transition, and clear paths to higher-skilled roles.

The key lies in recognizing AI as a tool for augmentation rather than pure replacement. A machine learning system may process medical images faster than radiologists, but interpreting results in complex cases still requires human judgment. Similar patterns will emerge across professions - AI handling routine tasks while humans focus on nuanced decision-making and interpersonal elements.

Rather than resist change, unions should position themselves as partners in managing transition. This approach preserves their relevance and better serves member interests. The alternative - attempting to prevent AI adoption - risks marginalization as companies seek ways around opposition or relocate to more amenable jurisdictions.

The challenge for labor leadership is to shift from defensive postures to proactive engagement. This means developing expertise in emerging technologies, identifying opportunities for worker advancement, and ensuring transition arrangements protect vulnerable members while facilitating adaptation to changing workplace demands.


