Monday, July 29, 2024

AI is an Amateur Savant

Most people who use AI think it is great in general but believe it does not grasp their area of specialization very well. As an applied philosopher, I create intellectual tools to help others think through their problems. I find AI excellent at clarifying and explaining ideas, but it has never generated an original idea worth writing about. I have yet to see reports from others in any discipline that AI has independently produced groundbreaking ideas.

AI can handle large amounts of data and provide coherent, accurate responses across various fields. This ability is comparable to a well-informed amateur who has a broad understanding but lacks deep expertise. AI can recount historical facts, explain scientific principles, and offer legal insights based on data patterns, yet it falls short in deeper, more nuanced analysis.

In my case, AI can assist by summarizing existing theories or offering possible objections or additional arguments. However, it lacks the ability to generate a genuinely novel idea. I use it a lot, and not once has it produced anything of the sort. This limitation stems from its reliance on pre-existing data and patterns, preventing it from achieving the level of innovation that human professionals bring to their fields. Some believe that this limitation will soon be overcome, but I do not think so. It seems to be an intrinsic limitation, a function of the way AI is trained.

Professionals/experts, whether in philosophy, medicine, or history, possess a depth of understanding developed through extensive education and practical experience. They apply complex methodologies, critical thinking, and ethical considerations that AI cannot replicate. A doctor considers the patient's history and unique implications of treatments, while a professional historian places events within a broader socio-cultural context. AI, despite its capabilities, often misses these subtleties. It is, in some sense, a savant: a fast, amazing, but inexperienced thinker.

The gap between a capable amateur and a professional/expert might seem small, especially from the point of view of the amateur. However, it is huge and is rooted in the depth of expertise, critical thinking, and the ability to judge that professionals possess; it is a function of intellect, experience, and education. This gap is where educators should look to adapt the curriculum.

In education, we should focus on that gap between the amateur and the professional, conceptualize it as the ultimate learning outcome, and then build new skill ladders to climb there. Students need to understand and conquer the gap between AI and a professional expert. These meta-AI skills are our true north. AI can support this learning process by providing clear explanations and diverse perspectives, but it cannot replace the nuanced understanding and innovation that human professionals offer.


Wednesday, July 24, 2024

What percentage of my text is AI-generated?

Go ahead, ask me the question. However, I would in turn ask you to specify which of the following kinds of assistance from AI you are interested in.  

  1. Distilling information into summaries
  2. Revamping and recasting content
  3. Polishing grammar, spelling, and punctuation
  4. Sparking ideas and crafting titles
  5. Conjuring additional arguments or perspectives
  6. Spotting potential counterarguments or objections
  7. Constructing and organizing content
  8. Juxtaposing points from multiple sources
  9. Scrutinizing and refining existing content
  10. Demystifying complex ideas or jargon
  11. Architecting outlines and organizational structures
  12. Fashioning examples or illustrations
  13. Tailoring content for different audiences or formats
  14. Forging hooks or attention-grabbing openings
  15. Sculpting strong conclusions or call-to-actions
  16. Unearthing relevant quotes or citations
  17. Decoding concepts in simpler terms
  18. Fleshing out brief points or ideas
  19. Trimming verbose text
  20. Honing clarity and coherence
  21. Smoothing the flow between paragraphs or sections
  22. Concocting metaphors or analogies
  23. Verifying and authenticating information
  24. Proposing synonyms or alternative phrasing
  25. Pinpointing and eliminating redundancies
  26. Diversifying sentence variety and structure
  27. Maintaining consistency in tone and style
  28. Aligning content with specific style guides
  29. Devising keywords for SEO optimization
  30. Assembling bullet points or numbered lists
  31. Bridging sections with appropriate transitions
  32. Flagging areas that need more elaboration
  33. Accentuating key takeaways or main points
  34. Formulating questions for further exploration
  35. Contextualizing with background information
  36. Envisioning visual elements or data representations
  37. Detecting potential areas of bias or subjectivity
  38. Inventing catchy titles or headlines
  39. Streamlining the logical flow of arguments
  40. Boosting text engagement and persuasiveness
  41. Rooting out and rectifying logical fallacies
  42. Imagining hypothetical scenarios or case studies
  43. Illuminating alternative perspectives on a topic
  44. Weaving in storytelling elements
  45. Uncovering gaps in research or argumentation
  46. Producing counterexamples or rebuttals
  47. Bolstering weak arguments
  48. Harmonizing tense and voice inconsistencies
  49. Composing topic sentences for paragraphs
  50. Integrating data or statistics effectively
  51. Devising analogies to explain complex concepts
  52. Injecting humor or wit
  53. Eradicating passive voice usage
  54. Compiling topic-specific vocabulary lists
  55. Enhancing paragraph transitions
  56. Untangling run-on sentences
  57. Articulating thesis statements or main arguments
  58. Infusing content with sensory details
  59. Resolving dangling modifiers
  60. Conceiving potential research questions
  61. Incorporating rhetorical devices
  62. Rectifying pronoun inconsistencies
  63. Anticipating potential counterarguments
  64. Embedding anecdotes effectively
  65. Mending comma splices
  66. Drafting potential interview questions
  67. Sprinkling in cultural references
  68. Correcting subject-verb agreement errors
  69. Designing potential survey questions
  70. Adorning text with figurative language
  71. Repositioning misplaced modifiers
  72. Brainstorming potential titles for sections or chapters
  73. Integrating expert opinions
  74. Paring down wordiness
  75. Exploring potential subtopics
  76. Weaving in statistical data
  77. Eliminating tautologies
  78. Coining potential taglines or slogans
  79. Embedding historical context
  80. Untangling mixed metaphors
  81. Developing potential FAQs and answers
  82. Incorporating scientific terminology
  83. Fixing split infinitives
  84. Generating potential discussion points
  85. Blending in technical jargon
  86. Expunging clichés
  87. Crafting potential calls-to-action
  88. Inserting industry-specific terms
  89. Replacing euphemisms
  90. Extracting potential pullout quotes
  91. Interweaving mathematical concepts
  92. Eliminating redundant phrasing
  93. Compiling potential glossary terms and definitions
  94. Introducing philosophical concepts
  95. Standardizing formatting
  96. Curating potential appendix content
  97. Incorporating legal terminology
  98. Clarifying ambiguous pronouns
  99. Cataloging potential index terms
  100. Synthesizing interdisciplinary perspectives
  101. Writing a long list of AI uses for content generation



Saturday, July 20, 2024

The Three Wave Strategy of AI Implementation

Whether it's a university, a tech giant, a manufacturing company, a public utility, or a government agency, the complexity of operations can be overwhelming. To illustrate this point, Claude and I have generated a list of over 1,150 workflows typical for a large university, many of which can be further broken down into smaller, more specific processes.

Given this complexity, the question arises: Where do we start with AI implementation? The answer lies in a strategic, phased approach that considers the unique characteristics of each workflow and the organization's readiness for AI adoption.

The First Wave: Low-Hanging Fruit

The initial phase of AI implementation should focus on what we call the "low-hanging fruit" - workflows that meet three crucial criteria:

  1. Self-evident quality: The output quality is immediately obvious and doesn't require complex evaluation.
  2. Single-person control: The workflow is typically managed or executed by one individual.
  3. Ready-made AI tools: The process can be enhanced using existing AI tools without requiring specialized development - either by using one of the primary LLMs directly or by building a custom bot.

These criteria help identify areas where AI can quickly and effectively augment human efforts, improving efficiency and potentially enhancing the quality of service provided. Based on these criteria, here's a priority list of workflows that could be considered for the first wave of AI implementation. These are just examples:

  1. Student services
    • Student and prospective student advising of all kinds
  2. Resume and Cover Letter Review (Career Services)
    • Offering individual resume critiques
    • Assisting with cover letter development
  3. Academic Policy Development and Enforcement (Academic Affairs)
    • Drafting and revising academic policies
  4. Health Education and Outreach (Health and Wellness Services)
    • Creating and distributing health education materials
  5. Sustainability Education and Outreach (Sustainability and Environmental Initiatives)
    • Creating sustainability guides and resources for campus community
  6. Digital Marketing and Social Media Management (University Communications and Marketing)
    • Creating and curating content for various platforms
  7. Grant Proposal Development and Submission (Research and Innovation)
    • Assisting faculty with proposal writing
  8. Financial Aid Counseling (Financial Aid and Scholarships)
    • Providing one-on-one counseling sessions
    • Offering debt management and financial literacy education
  9. Alumni Communications (Alumni Relations and Development)
    • Producing alumni magazines and newsletters
  10. Scholarly Communications (Library Services)
    • Supporting faculty in publishing and copyright issues
    • Providing guidance on research impact metrics
  11. International Student and Scholar Services (International Programs and Global Engagement)
    • Providing immigration advising and document processing

This first wave serves multiple purposes. It demonstrates the proof of principle, making more stakeholders comfortable with AI integration. It also helps build internal expertise and confidence in working with AI technologies. These early successes can pave the way for more ambitious implementations in the future.

The Second Wave: Tackling Costly Workflows

Once the organization has gained experience and confidence from the first wave, it can move on to more complex and costly workflows. These are typically processes that involve significant labor, occur frequently, and have a broad scope of impact on the organization. However, it is crucial to narrow down this list based on feasibility and readiness for AI implementation.

For instance, while teaching is undoubtedly one of the most labor-intensive and impactful processes in a university, we do not yet have sufficient knowledge of how to make it significantly more efficient through AI. Some processes, like teaching, may never be fully optimized by AI because of their inherently relational nature.

Note that this is also an opportunity to review major workflows; they have often evolved over the years and are far from ideally efficient. AI can help review these workflows and recommend ways to streamline them. And of course, AI can be integrated into actually doing the work.

The Third Wave: Enterprise-Level Solutions

Only after successfully navigating the first two waves should an organization consider enterprise-level AI solutions. These solutions have the potential to radically redefine the organization's core operations, placing AI at the center of its processes. This level of integration requires a deep understanding of AI capabilities, a clear vision of the organization's future, and a robust infrastructure to support AI-driven operations. Most importantly, it requires specialized tools and a high level of security.

The Timeline and Exceptions

This phased approach to AI implementation is not a quick process. For most large, complex organizations, it could take a couple of decades to fully realize the potential of AI across all workflows. However, there are exceptions. Some businesses with simpler and fewer workflows, such as narrowly specialized customer service operations, may be able to leapfrog straight into the third wave, especially if they have prior experience with AI technologies.

But these are the exceptions rather than the rule. For the majority of organizations, the path to comprehensive AI implementation requires a well-thought-out strategy, clear priorities, and a focus on building confidence and expertise over time.

Integrating AI into a complex organization's workflows is a marathon, not a sprint. It asks for patience, strategic thinking, and a willingness to learn and adapt. The key is to approach this journey with a clear strategy, well-defined priorities, and a commitment to building internal AI expertise. 

[Image: Abstract painting of waves]

Wednesday, July 17, 2024

AI is not going to implement itself, but governments can help

The AI hype has passed, and the overexcited futurists' voices are mercifully fading away. We're now entering a practical era where AI is leveraged to boost productivity in businesses, non-profits, and public organizations. This shift brings a sobering realization: AI integration requires a meticulous, pragmatic approach to build reliable and trustworthy systems. It's a lot of work and requires some strategy.

When a single person manages a well-defined workflow, integrating AI is relatively straightforward. It's easy to incorporate AI tools like ChatGPT or Claude to assist with ad copy, reports, or applications. The beauty of these scenarios lies in their simplicity - the user acts as both operator and quality controller, immediately judging the output's effectiveness.

However, the story changes dramatically when we shift to multi-user workflows or more complex processes, where both inputs and outputs are more of a collective responsibility. I recently spoke with an Accounts Payable team who posed a challenging question: "Yes, we can see that AI can help review travel claims, but can you guarantee it's going to be 100% accurate?" I couldn't provide that guarantee; I don't have time to conduct a hundred tests, and I don't even have access to a hundred travel reports. They emphasized their need for completely audit-proof outcomes. This conversation highlighted the trust issues that arise when moving from AI enthusiasts to skeptics in larger organizations. And organizations should have a healthy group of skeptics to remain viable.

I've also recently been a fly on the wall during discussions between healthcare executives and a U.S. lawmaker. The executives explained that each AI-assisted medical procedure needs validation, which is expensive and often duplicated across multiple hospital systems. This challenge extends beyond healthcare. For instance, when using AI to crunch data in any organization, we need to understand its reliability in analyzing large datasets, cleaning them, and handling outliers.

The problem is that no private institution can conduct the kind of comprehensive testing and validation needed to establish trust in AI systems across various industries. We cannot seriously trust the claims of startups trying to sell a specialized product to an industry or a government organization. And it is not clear how a hypothetical private validation service would monetize such an endeavor.

This is where I believe government involvement becomes crucial. Instead of obsessing over deepfakes and ethics, this is what governments should be doing. Governments can collaborate with industry experts to develop standardized benchmarks for AI reliability and performance. They could establish certification programs that act as quality marks, assuring users that AI systems have undergone rigorous testing. Moreover, government funding could support businesses, NGOs, and government agencies in conducting extensive AI testing, especially benefiting smaller organizations lacking the necessary resources.

In my view, public-private partnerships are key to navigating these challenges. By leveraging expertise from both sectors, we can develop robust testing frameworks and create dependable AI systems. This approach would pave the way for more efficient and innovative workflows across industries, ensuring that the benefits of AI are realized while maintaining trust and reliability. 

Monday, July 15, 2024

Effort in Learning: The Good, the Bad, and the AI Advantage

Many educators argue that AI makes learning too easy, suggesting that students need to apply effort to truly learn. This perspective, however, confuses the notion of effort with the process of learning itself. The belief that every kind of effort leads to learning overlooks a significant aspect of cognitive psychology: the nature and impact of cognitive load.

Cognitive load theory, developed by John Sweller, offers a crucial framework for understanding how students learn. It posits that the human brain has a limited capacity for processing information. Sweller distinguished between three types of cognitive load: intrinsic, extraneous, and germane. Intrinsic cognitive load is inherent to the task itself. For instance, solving a complex mathematical problem has a high intrinsic load due to the complexity of the content. Germane cognitive load, on the other hand, refers to the mental resources devoted to processing, construction, and automation of schemas, which are structures that help solve problems within a specific domain. 

The most problematic, however, is extraneous cognitive load. This type of load is not related to the task but to the way information is presented or to the extraneous demands placed on learners. High extraneous cognitive load can distract and stunt learning, making it harder for students to engage meaningfully with the material. For example, a poorly designed textbook that requires constant cross-referencing can add unnecessary cognitive load, detracting from the student's ability to learn. A terrible lecture or a busy-work assignment does the same. If you think that every effort by a student is valuable, you are a hazer, not a teacher.

The challenge, therefore, is not to eliminate all effort but to ensure that the effort students exert is directed towards productive ends. In other words, we need to reduce extraneous cognitive load and increase germane cognitive load. The true aim is to leverage AI to enhance germane cognitive load, directly aiding in the acquisition of schemas necessary for solving discipline-specific problems.

Every academic discipline has core problems that students are expected to solve by the end of their programs. The first step is to mercilessly strip the language of learning outcomes of wishy-washy jargon and focus on these fundamental problems. By identifying these top-level problems, educators can better understand the sequences of skills and knowledge students need to acquire.

Once these core problems are identified, it is crucial to examine how professionals in the field solve them. This involves a detailed analysis of the mental schemas that experts use. Schemas are cognitive structures that allow individuals to organize and interpret information. They enable professionals to recognize patterns, make decisions, and solve problems efficiently. For example, a doctor has schemas for diagnosing illnesses based on symptoms and test results, while an engineer has schemas for designing structures that withstand specific stresses. It is also important to understand whether the field itself is changing - whether professionals already solve those problems with AI, or will be doing so soon.

AI can play a pivotal role in helping students develop these schemas. These technologies can identify where a student is struggling and provide targeted support, ensuring that cognitive resources are directed towards germane learning activities rather than being wasted on extraneous tasks.

To achieve this, we need to revisit the basic principles of instructional design. While these principles remain fundamentally the same, they require new thinking in light of AI capabilities. Instructional design should focus on reducing extraneous cognitive load by simplifying the learning environment and minimizing distractions. Simultaneously, it should increase germane cognitive load by providing challenging and meaningful tasks that promote the construction of schemas.

Moreover, educators need to recognize where cognitive load is not useful and should focus exclusively on the germane kind. This might mean redesigning courses to incorporate AI tools that can automate routine tasks, provide instant feedback, and offer complex, real-world problems for students to solve. Such an approach ensures that students are engaged in deep, meaningful learning activities rather than busywork.

Ad summam, the integration of AI in education is not about making learning easier in a superficial sense. It is about making learning more effective by ensuring that students' cognitive resources are directed towards activities that genuinely promote understanding and skill acquisition. By focusing on germane cognitive load and leveraging AI to support instructional design, we can create learning environments that foster deep, meaningful learning and prepare students to solve the complex problems of their disciplines. This calls for a rigorous rethinking of educational practices and a commitment to harnessing AI's potential to enhance, rather than hinder, the learning process.


Tuesday, July 9, 2024

AI-Positive Pedagogy: Navigating the Great Disruption

AI has disrupted the educational landscape. This disruption threatens the established sequence of skill development, from simple to mid-range to higher-level skills, by eroding traditional curriculum principles, particularly in the realm of student activities and assessment. As a profession, we face a critical decision: limit AI use or develop an AI-positive pedagogy.

While limiting AI use may seem tempting, it is ultimately unfeasible and fails to prepare students for the AI-infused world they will live in. Attempting to enforce strict limitations on AI use is not only impractical but also fails to acknowledge the potential benefits that AI can bring to education.

The only plausible path forward is to adopt a new pedagogy that harnesses the power of AI for the benefit of our students. This involves a shift towards authentic, discipline-specific assessments that mirror real-world applications of AI within various fields. By focusing on how AI is used in different disciplines, educators can create assessments that evaluate students' ability to effectively utilize AI tools in relevant contexts.

AI-positive pedagogy emphasizes the cultivation of higher-order thinking skills, such as prompt engineering and discerning thinking. Prompt engineering involves crafting effective queries and instructions for AI systems, while discerning thinking emphasizes the critical evaluation of AI-generated information and the ability to make informed decisions by combining AI insights with human judgment. These meta-AI skills are crucial for students to navigate and thrive in an AI-populated world.

AI-positive pedagogy should prepare students to work effectively in environments where human and artificial intelligence coexist and complement each other. By fostering skills in collaborating with AI systems, understanding the strengths of both human and artificial intelligence, and developing strategies for distributed problem-solving, educators can equip students to succeed in the AI-infused workplace.

The shift towards AI-positive pedagogy is well-rooted in past pedagogy and curriculum theory. Educators have long prioritized conceptual and higher-level skills over mechanical and procedural knowledge. The disruption caused by AI may serve as a catalyst for educators to finally achieve what they have been striving for over the past century.

As we embrace AI-positive pedagogy, it is essential to re-evaluate the assumption that all effort leads to learning. Cognitive load theory suggests that learning can be optimized by managing the three types of cognitive load: intrinsic (inherent complexity of the learning material), extraneous (caused by ineffective instructional design), and germane (effort required to process and construct mental schemas). In the context of AI-positive pedagogy, this involves using AI tools to provide appropriate support and scaffolding as learners progress from lower-level to higher-level skills, while minimizing extraneous load and promoting germane load. Not all loss of effort by students is bad. If we are honest, much of learning work is extraneous, busy, or compliance/submission work anyway.

By investigating the limits and structure of leapfrogging - skipping over mid-range skills to move from basic literacies and numeracies to creative, theoretical, and critical thinking - educators can harness the power of AI to accelerate student growth.

To develop a robust AI-positive pedagogy, educators and cognitive psychologists must collaborate to investigate how students interact with and perceive AI tools - alone or under a teacher's guidance. This research should focus on understanding the mental models students develop when engaging with AI, and how these models differ from those associated with other educational tools. By exploring students' cognitive processes, researchers can identify the unique challenges and opportunities presented by AI in the learning environment.

It is also crucial to examine the emotional and motivational factors that influence students' engagement with AI tools. Understanding how students' attitudes, beliefs, and self-efficacy impact their willingness to adopt and effectively use AI in their learning can inform the design of AI-positive pedagogical strategies.

In addition to investigating student cognition and affect, researchers should also explore the social and cultural dimensions of AI use in education. This includes examining how AI tools can be leveraged to promote collaborative learning, foster inclusive learning environments, and bridge educational inequities.

To build a comprehensive AI-positive pedagogy, researchers and educators must also develop and validate practices for integrating AI into various disciplines and educational contexts. This involves creating guidelines for the use of AI in education, as well as establishing professional development programs to support educators in effectively implementing AI-positive pedagogical strategies.

The development of an evidence-based AI-positive pedagogy requires a concerted effort from the educational community. By investing in basic research, collaboration, and innovation, we can harness the potential of AI to transform education and empower students to thrive in an AI-infused world.


Wednesday, June 26, 2024

On AI Shaming

Here is a new thing: AI shaming. It is a practice where individuals accuse others of using artificial intelligence to generate written content, as if such an act were inherently deceitful or somehow sinful. How fascinating, the very premise of it.

This phenomenon reveals itself to be a fairly common logical fallacy. It is a summative dismissive argument, with a dash of ad hominem (ad machinam?) for flavor. One might wonder why the method of creation should overshadow the substance of the content. After all, we don't dismiss the works of authors who switched from quills to typewriters, do we?

There's a certain Foucauldian quality to this practice, if one may be permitted a touch of theoryhead's indulgence. By attempting to draw a sharp line between 'acceptable' and 'unacceptable' means of content creation, some seem to be engaging in a subtle power play. It is almost as if they're trying to sell their unfamiliarity with newer technologies as a virtue. It is a rather clever way of elevating the game at which one excels, and putting down a game at which one fails. While an understandable inclination, it is still a folly.

For those of us who have embraced these new tools, such accusations are about as concerning as a light drizzle on a summer day - which is to say, entirely expected and hardly worth mentioning. If anything, it provides a certain amusement to observe the lengths to which some will go to maintain the status quo and their privileged little spot in it.

However, there is a more sobering concern to consider. While thick-skinned people like me might brush off such criticisms with a raised eyebrow, younger, more impressionable ones might internalise this arbitrary stigma. It would be a shame if the next generation felt compelled to hide their technological proficiency out of fear of Luddites' bullying.

As these AI tools inevitably become more sophisticated and ubiquitous, perhaps we might redirect our energy towards more productive ends. Instead of engaging in this curious form of digital fingerpointing, we could focus on the responsible and creative use of these technologies. After all, the ideas expressed within content will always be more intriguing than the means by which they were transcribed.

To those who persist in AI shaming: by all means, knock yourself out. Your dedication to this cause is admirable, if somewhat perplexing. Just don't be too surprised if the rest of us seem a bit distracted - we'll be busy adapting to the future while you're perfecting your finger-wagging techniques.

P.S. This text, according to QuillBot, is 0% AI-generated; however, I wrote it with Claude :)

How to bullshit about bullshit

Take a look at the paper "ChatGPT is bullshit" by Michael Townsen Hicks, James Humphries, and Joe Slater. Other than a catchy title, the paper has a few problems of its own.

1. Focus on terminology over utility:

The paper spends considerable effort arguing that the outputs of large language models (LLMs) like ChatGPT should be classified as "bullshit" rather than "hallucinations." While this may be an interesting philosophical distinction, it arguably does little to advance our understanding of the practical implications and applications of these technologies. The authors could have devoted more attention to analyzing the actual utility and potential use cases of LLMs, which would likely be more valuable to policymakers and the public.

2. Limited practical insights:

By focusing primarily on categorizing the nature of LLM outputs, the paper misses an opportunity to provide meaningful insights into how these technologies can be effectively and responsibly utilized. A more constructive approach might involve examining specific contexts where LLMs prove useful and where they fall short, rather than broadly labeling their output as "bullshit."

3. Potential for misdirection:

While the authors argue that the term "hallucinations" is misleading, replacing it with "bullshit" may not necessarily lead to a clearer understanding of LLM behavior. Both terms are metaphorical and could potentially misguide readers about the true nature and capabilities of these systems. A more technical and nuanced explanation of how LLMs function and their limitations might be more informative.

4. Overlooking nuance:

The paper seems to take a binary approach – either LLMs are concerned with truth (which they argue against) or they are producing "bullshit." This oversimplification may overlook the nuanced ways in which LLMs can be useful for various tasks, even if they don't have an inherent concern for truth in the way humans do.

5. Lack of actionable recommendations:

While the authors critique the use of the term "hallucinations," they don't offer clear, actionable recommendations for how to better communicate about LLMs to policymakers and the public. A more constructive approach would be to propose specific ways to educate stakeholders about the capabilities and limitations of these technologies.

6. Missing the broader context:

By focusing narrowly on the philosophical categorization of LLM outputs, the paper misses an opportunity to discuss the broader implications of these technologies on society, economy, and various industries. A more comprehensive analysis of the impact and potential of LLMs would likely be more valuable to readers.

Monday, June 24, 2024

Can observational learning work with AI?

Can humans learn new problem-solving strategies simply by observing AI at work? Following Bandura's theory of observational learning, the idea may not be as far-fetched as it first appears.

When humans watch AI systems solve problems or generate text, they naturally construct mental models of the underlying processes. These models, while factually wrong (AI thinking is very different from ours), can nonetheless prove useful. It is imagining yourself performing the task that can be beneficial.

Consider a person observing an AI system summarise a lengthy academic paper. The human observer cannot directly perceive the AI's internal computations. Instead, the observer likely imagines themselves performing the task, focusing on key sentences, identifying main themes, and connecting key ideas.

This mental model, though inaccurate in representing the AI's actual mechanisms, may still enhance the observer's own summarisation skills. They might, for instance, learn to pay closer attention to introductory and concluding paragraphs, or to look for repeated phrases that signal important concepts.

Observing AI failures can be particularly instructive. When an AI system produces an erroneous or nonsensical output, it often reveals the limitations of its approach. A human observer, reflecting on these errors, might develop a more nuanced understanding of the problem at hand and devise novel strategies to overcome the AI's shortcomings.

For example, watching an AI struggle with a complex logical reasoning task might prompt a human to break the problem down into smaller, more manageable steps. This approach, inspired by the AI's limitations, could prove valuable even in contexts where AI is not involved.

To test this hypothesis rigorously, consider an experiment:

1. Select a diverse set of problem-solving tasks, ranging from creative writing to mathematical reasoning.

2. Divide participants into three groups:

  •  a) An observation group that watches AI systems attempt these tasks, including both successes and failures.
  •  b) A practice group that attempts the tasks themselves without AI involvement.
  •  c) A control group that engages in unrelated activities.

3. After the observation or practice period, test all participants on a new set of similar problems.

4. Compare the performance of the three groups, paying particular attention to novel problem-solving strategies employed by the observation group.

5. Conduct follow-up interviews to explore participants' thought processes and any conscious attempts to apply AI-inspired techniques.
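For step 4, here is a minimal, hypothetical sketch of how the group comparison might be analyzed, assuming each participant receives a numeric post-test score. The group sizes, score distributions, and the choice of an ANOVA with pairwise follow-ups are illustrative assumptions on my part, not part of the design above.

```python
# Hypothetical analysis sketch for the three-group experiment (step 4).
# All numbers are simulated placeholders; a real study would use collected scores.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Simulated post-test scores (0-100) for 30 participants per group.
observation = rng.normal(loc=72, scale=10, size=30)  # watched AI attempt the tasks
practice    = rng.normal(loc=75, scale=10, size=30)  # practiced the tasks themselves
control     = rng.normal(loc=65, scale=10, size=30)  # engaged in unrelated activities

# One-way ANOVA: do the group means differ at all?
f_stat, p_value = stats.f_oneway(observation, practice, control)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Pairwise Welch t-tests against the observation group
# (a real analysis would add a multiple-comparison correction).
for name, group in [("practice", practice), ("control", control)]:
    t, p = stats.ttest_ind(observation, group, equal_var=False)
    print(f"observation vs {name}: t = {t:.2f}, p = {p:.4f}")
```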

Friday, June 21, 2024

Can We Learn Without Struggle in the Age of AI?

I've been pondering a question: What if our traditional understanding of cognitive growth is too narrow? We've long held onto the idea that real learning comes from struggle, from pushing against our limits, from grappling with challenges just beyond our current abilities. But what if that's not the whole story?

I'm starting to wonder if growth - real, meaningful cognitive development - might not always need the strong challenges we've assumed were necessary. And this thought has become particularly relevant as we enter the new world of AI-assisted learning.

Many of our theories about learning and development are rooted in the idea of conflict or tension. Vygotsky's Zone of Proximal Development, for instance, emphasizes the space between what a learner can do without help and what they can do with guidance. Piaget talked about cognitive dissonance as a driver of development. These Hegelian/Marxist theories have shaped how we think about education for decades.

But here's what I'm pondering: What if growth can happen more... gently? What if it can occur through exposure, through interaction, through a kind of cognitive osmosis that doesn't necessarily involve struggle or challenge? And importantly, what if this gentler form of learning is still deeply social and relational?

There's a lot of hand-wringing in educational circles about AI tools like ChatGPT. The worry is that by providing ready answers, these tools will short-circuit the learning process. Students won't have to struggle, so they won't really learn. I have definitely been expressing these concerns in my previous blogs. 

But I'm not so sure anymore. Let me float a hypothesis: What if AI-assisted learning doesn't dampen growth, but instead provides a different kind of cognitive experience that can still lead to meaningful development? And what if this experience, rather than being isolating, actually opens up new avenues for social learning and collaboration?

Here's an analogy that's been helpful for me in thinking about this. Remember when GPS first became widely available? There were concerns that people would never learn to navigate cities anymore, that we'd lose our sense of spatial awareness. And yet, most of us who use GPS regularly still develop a pretty good understanding of the cities we live in and visit. We might learn differently - perhaps more slowly, or with less detail - but we do learn, without all the frustrations of trying to read a map while driving or memorizing multiple turns (left, second right, soft left again...). City driving is probably safer, and we did not get more stupid.

The GPS doesn't prevent us from learning; it provides a different context for learning. We're not struggling with paper maps, but we're still processing spatial information, making connections, building mental models of our environment.

Could AI-assisted learning work in a similar way? Sure, students might get quick answers or produce a quick text with an AI without much effort, which feels somehow wrong (the nature of these feelings is a subject for separate consideration). But that doesn't mean they stop thinking or interacting. They will start wondering how to get better answers and produce better outputs. They will begin to notice patterns in the AI's responses. They will start to question or critique what the AI produces. That's what the human brain has evolved to do.

Moreover, this process doesn't happen in isolation. Students will discuss their AI interactions with peers, compare outputs, collaboratively explore how to improve results. It becomes a form of social play - experimenting, sharing discoveries, building on each other's ideas. The AI becomes a tool for social learning, not a replacement for it.

In other words, the presence of AI doesn't eliminate cognitive work or social interaction - it might just shift their nature. And who's to say that this new form of cognitive and social engagement is any less valuable than the old? 

Now, I'm not saying we should throw out everything we know about learning and development. Challenge and struggle certainly have their place. But I am suggesting that maybe we need to broaden our understanding of how growth happens, recognizing that it can occur through gentler, more playful forms of social interaction and exploration.

Perhaps there's a whole spectrum of cognitive experiences that can lead to growth, ranging from intense challenge to gentle, collaborative exposure. Maybe AI-assisted learning falls somewhere on this spectrum - not replacing traditional forms of learning, but adding new dimensions to how we can develop cognitively and socially.

This is all hypothetical, of course. We're in new territory with AI, and it will take time and research to understand its full impact on learning and development. But I think it's worth considering that our assumptions about how growth happens might need updating, particularly in how we view the social aspects of learning.

What do you think? Is it possible that we've overemphasized challenge in our understanding of learning? Could AI-assisted learning open up new pathways for cognitive development and social learning that we haven't fully appreciated yet? I am just back from a conference, where we had many early childhood educators. In that tribe, there is a lot more talk about play than about forcing growth. Maybe that's why I am having these second thoughts about growth. 

I don't have definitive answers, but I think these are questions worth exploring as we navigate this new landscape of learning. After all, if there's one thing we know for sure about human cognition, it's that it's endlessly adaptable and intrinsically social. Perhaps it's time for our theories about learning to adapt as well, embracing a more collaborative, playful vision of growth in the age of AI.

Monday, June 10, 2024

Testing AI once does not make you an expert

I heard of a professor who asked ChatGPT to write a profile of himself, only to discover inaccuracies and decide that AI is unsuitable for education. Instead of reflecting on why he is not sufficiently famous, the professor blamed the AI. This reaction is like boycotting all cars after driving an old Soviet-made Lada. Dismissing AI entirely based on a couple of lazy interactions is a classic example of the overgeneralization fallacy.

Before hastily testing and dismissing, one would be well served to read about the known limitations of AI, particularly when it comes to generating content about individuals who are not well-known. AI can "hallucinate" details and citations, creating a misleading picture of reality.

The key is to approach AI with a spirit of curiosity and creativity, exploring its strengths and weaknesses through multiple tests and scenarios. By focusing on what works rather than fixating on what does not, we can begin to appreciate AI for what it is—a tool with potential that takes some skill and experience to unlock.

Also, think about the risk to your reputation. If you are saying, "I tried, and it is crap," you are also dismissing all those other people who found it valuable as gullible fools. The failure to see that the joke is on you is a test of your hubris, and that kind of test works on just one try.

Thursday, June 6, 2024

Using AI without losing control

As AI becomes increasingly capable of performing tasks that were once the exclusive domain of humans, it no longer makes sense to spend a lot of time teaching skills that AI can do better and faster. Just as we largely replaced the teaching of manual drafting with computer-aided design (CAD), we must now reassess the curriculum and remove skills that have been rendered obsolete by AI.

The way we define curriculum reflects our understanding of what it means to be human, a definition that has evolved significantly over the centuries. As machines have become more advanced, skills previously valued and developed by humans have gradually been entrusted to machines.

However, the logic behind this shift is also changing. It is no longer just a matter of what machines can do better, as they seem to be excelling at almost everything. Instead, it is about what we, as humans, choose to retain, enjoy, and feel compelled to do. It is less about competition with machines and more about the arrangement of power and authority. To maintain our human authority, we must continue to perform certain tasks.

One of the most important of these tasks is the ability to set goals and make value judgments about what should or should not be done. This is a complex skill that requires a comprehensive understanding of the world, both in its physical and social aspects, as well as the wisdom to make decisions when faced with competing values and the inherent uncertainty of the future. As AI increasingly mediates our interactions, the ability to determine the goals and direction of these interactions becomes even more crucial.

Equally important is the ability to work with AI to achieve our objectives. This process, sometimes referred to as "prompt engineering," involves iterative interaction and refinement to obtain the most accurate, appropriate, and helpful outputs from AI. Beyond technical skills, this requires robust critical thinking to assess the truthfulness and value of AI-generated content. In essence, it is the skill of editing, not merely for style but for substance, constantly asking: Is this useful? Does it improve upon what already exists? Does it positively contribute to the overall human knowledge and well-being?

These core competencies should shape the curriculum of the future: goal-setting, value judgment, critical discernment, and the ability to collaborate with AI to achieve human objectives. The specifics will evolve as AI capabilities advance, and there will be variations across disciplines.

However, the fundamental challenge remains the same: in a world increasingly shaped by artificial intelligence, we must rededicate ourselves to cultivating human intelligence in its fullest and most distinctively human expressions. Only then can we ensure that the tremendous power of AI serves to elevate humanity rather than diminish it.

Tuesday, May 21, 2024

"First try with AI"; On the advantages of organic learning

Some people advocate for structured training programs and dedicated time for AI learning, but a more organic approach is more effective and efficient.

The secret to successfully incorporating AI into your work is to simply start using it for your next task. Rather than setting aside special time for AI learning, dive right in and explore how AI can assist you in your current projects. Need to do something? Write a memo, a long email, a letter, a grant proposal? "First Try with AI."

What do you have to lose? The worst-case scenario is that you waste a little time if AI proves unhelpful for that particular task. However, in most cases, you will discover its usefulness and potential to save you some time, even if it doesn't complete the task entirely.

It's important to recognize that AI never does everything for you. Only the most mindless, bureaucratic, compliance-related content may be primarily handled by AI. However, for the majority of tasks, you will intuitively learn the right mix of human and AI ingredients to create the best soup. This organic learning process allows you to understand the strengths and limitations of AI within the context of your specific work.

There is nothing wrong with taking courses to learn about AI, but it is worth noting that assignments in such courses often lack authenticity. Those are "pretend tasks." Even after completing a course, you would still need to learn how to transfer your new skills into real-world contexts. In contrast, an organic approach to AI learning allows you to immediately apply your knowledge within the context of your work, resulting in a more motivated, deeper, and faster learning experience.

As you gradually incorporate AI into your daily tasks, you will naturally develop a better understanding of when and how to leverage its capabilities, and where to mitigate its shortcomings. This hands-on, contextual learning approach will not only help you become more proficient in using AI but also enable you to identify new opportunities for its application within your organization.

For educational contexts, we know there is a strong correlation between instructors personally using AI and them allowing students to use it in class. We don't trust things we do not understand, which explains the unreasonably strong worries about cheating. There will be no classroom use without the personal use by instructors first. Once teachers start using it for their own purposes, their anxiety levels go down, and their creativity frees up to invent classroom uses. 

Monday, May 13, 2024

Turnitin Is Selling us Snake Oil, or Why AI Detection Cannot Work

The notion of measuring "AI-generated text" as a fixed percentage of an academic submission is fundamentally flawed. This metric implies a homogeneous substance, akin to measuring the alcohol content in a beverage. However, my recent survey suggests that academic integrity associated with AI use is far from homogeneous. The survey asked educators to evaluate the ethical implications of using AI for twelve different tasks in writing an academic paper, ranging from researching to brainstorming to editing to actually writing full sections.

The findings revealed significant variance in responses. While many respondents were comfortable with AI aiding in brainstorming ideas, they expressed reservations or outright disapproval of AI writing entire paragraphs or papers. This disparity underscores a critical issue: there is no consensus in the academic profession on what constitutes acceptable AI assistance in learning. More strikingly, within each individual's responses, there was considerable variation in how different AI uses were assessed.

Consider the implications of a tool like Turnitin reporting "50% AI-generated" content. What does this figure actually represent? It lacks context about how the AI-generated content was incorporated. For instance, a paper could be largely original, with only minor edits made by AI at the end, potentially showing a high percentage of AI contribution. Conversely, a student might contribute minimally to an essentially AI-written paper, making slight modifications to reduce the AI-detected percentage. Both scenarios could yield vastly different percentages, yet the ethical implications are markedly divergent.

The pursuit of better detection technology misses the point. The issue is not with the detection capabilities but with the construct itself. The very idea of "AI-generated text" as a unified concept is problematic. Just as a depression inventory measures various symptoms that converge on the underlying construct of depression, our methods for evaluating AI in academic work must recognize the diverse and context-dependent nature of its use. The current approach, which treats all AI contributions as equivalent, is akin to judging a book's genre by counting its words. I wish Turnitin and other commercial "AI detectors" would show just a little more integrity and stop selling us snake oil. They surely know that their claims are bogus, because AI-generated text is not a valid construct to be measured.

Instead of focusing obsessively on detecting AI-generated content, we need to shift our perspective. We should expect and require students to use AI as part of their learning process. The challenge then becomes developing assignments that not only measure the content knowledge but also the meta-AI skills and competencies necessary to navigate and leverage these tools effectively. This approach acknowledges the complexity of AI's applications and ensures it is used responsibly, promoting a learning environment that respects both the potential and the limitations of artificial intelligence.

Monday, May 6, 2024

In Education, AI is an emergency

On one hand, AI presents an exhilarating leap forward, a kind of magic wand that promises to transform how we learn and teach. On the other hand, beneath this glam surface lies a grittier reality—one where the very essence of learning could be at risk.

In education, the core value lies in the process itself. The act of wrestling with ideas, constructing arguments, and stumbling over complex problems is where true learning happens. If a student turns to AI to write an essay, they might technically meet the assignment's requirements, but they've bypassed the intellectual struggle critical to internalizing knowledge. This worry has only deepened in the wake of the pandemic, which already strained educational norms and exposed glaring disparities. Introducing AI into this mix feels like throwing a wrench into an already delicate machine, risking the dilution of the educational experience and fostering a generation more adept at using tools than thinking independently.

Addressing this is no minor feat. It is not about rejecting AI's benefits outright, but rather about steering its use with a careful hand. Educators must become architects of a new curriculum that anticipates AI's influence and actively incorporates it in enriching ways. Perhaps this means designing projects where AI is expected to be used, but the projects are still challenging and generate growth.

However, such a transformative approach to curriculum development is a colossal task, varied across academic disciplines and levels of education. Educators need robust support systems, time to experiment and innovate, and backing from policies that understand and address these unique challenges. Governments and educational leaders must be partners in crafting policies that nurture educationally effective and responsible AI use.

As I reflect on this development, I am struck by the immensity of the challenge before us. It is not just about adapting to a new tool; it is about redefining the very foundations of how we teach and learn. It is about finding a way to harness the power of AI without sacrificing the soul of education. This is a journey that will require bold experimentation, deep collaboration, and a willingness to embrace the unknown. But it is a journey we must undertake, for the stakes are too high to ignore. The future of education hangs in the balance, and it is up to us to shape it with wisdom, courage, and a steadfast commitment to the human experience of learning.

Friday, May 3, 2024

Public Money, Private Glory?

At tech events, where AI CEOs bask in adoration, there's a conspicuous absence in the narrative: the role of public funding and research. These technologies haven't just sprung up through the ingenuity and perseverance of a few brilliant minds; they're the culmination of substantial public investment. Yet, you'd be hard-pressed to hear a word of thanks to taxpayers or governments at these glittering presentations.

The problem with this omission is twofold. Firstly, it promotes a misleading story of technological development—one where breakthroughs seem to happen through sheer brilliance rather than collaborative, incremental progress supported by public funding. This narrative can skew public perception, suggesting that technological advancement might somehow spontaneously occur without structured support. It makes the process seem more magical than methodical, glossing over the reality that innovation is usually more marathon than sprint, and certainly not a solo race.

Secondly, this narrative concentrates excessive admiration—and thus influence—in the hands of tech leaders. Celebrated as visionary and almost superhuman, these individuals often come to wield significant power, not just over their companies but within society itself. Yet, while they may be exceptional in their fields, they frequently lack broad education in social sciences and humanities, or experience in broader human affairs, areas crucial for understanding the implications of the technologies they unleash. This can lead to decisions that prioritize innovation over social impact considerations or public welfare, a risky imbalance.

The superstar culture in technology isn't just an issue of misrepresentation. It has practical consequences, potentially leading policymakers and the public to undervalue the importance of ongoing governmental support for research. If tech advancements are viewed as products of individual genius rather than results of public investment and collaboration, governments and voters might feel justified in cutting funds to these areas, mistakenly believing the private sector will fill the gap. This could slow innovation and shift the global tech landscape, especially towards countries that maintain robust public funding for research.

Acknowledging the role of public funding in technology isn't about diminishing the achievements of tech leaders—it's about painting a more complete and accurate picture of innovation. This more nuanced understanding could foster better-informed decisions regarding funding, education, and policy, ensuring the ecosystem that nurtures new technologies remains dynamic and well-supported.

Ultimately, recognizing the collective contributions to technological advancements isn't just about giving credit where it’s due. It's about ensuring a balanced narrative that neither idolizes the individual innovator nor underestimates the foundational role of public investment. By correcting this imbalance, we can encourage a more sustainable, equitable approach to technology development—one that's grounded in reality and attentive to the broader implications of rapid technological change.

Tuesday, April 23, 2024

AI revolution minus massive unemployment

The conversation on AI often revolves around efficiency and cost reduction, typically translating into fewer jobs. However, a pivotal shift in perspective—from cutting workforce to enhancing and expanding workforce capabilities—can redefine the role of AI in the corporate world. This approach not only preserves jobs but also adds significant value to customer experiences and broadens the spectrum of services and products a company can offer. 

The traditional method of dealing with technological disruption—laying off workers and hiring new ones with the necessary skills—is not only a waste of human capital but also disregards the cultural knowledge embedded within an organization's existing workforce. Retraining keeps people within the organization, allowing them to shift roles while retaining and applying their invaluable understanding of the company's ethos and operations in new ways.

The first step in a proactive workforce transformation strategy is to map out the anticipated skills and roles that will be in demand. This is not just about foreseeing the obsolescence of certain skills but identifying emerging opportunities where AI can augment human capabilities. For instance, with the rise of AI-driven analytics, there is a growing need for professionals who can interpret and leverage these insights into strategic decisions, enhancing business intelligence far beyond current levels.

Once future needs are mapped, the next step is to develop a compelling incentive structure for retraining. Traditional models of employee development often rely on mandatory training sessions that might not align with personal or immediate business goals. Instead, companies should offer tailored learning pathways that align with career progression and personal growth, supported by incentives such as bonuses, career advancement opportunities, and recognition programs. This approach not only motivates employees to embrace retraining but also aligns their development with the strategic goals of the organization.

With AI's capacity to handle repetitive and mundane tasks, employees can redirect their efforts towards more complex, creative, and meaningful work. This shift enables businesses to expand their service offerings or enhance their product features, adding significant value to what customers receive. For example, financial advisors, freed from the tedium of data analysis by AI tools, can focus on crafting bespoke investment strategies that cater to the intricate preferences and needs of their clients. Similarly, customer service representatives can use insights generated by AI to provide personalized service experiences, thereby increasing customer satisfaction and loyalty.

AI not only optimizes existing processes but also opens new avenues for innovation. For instance, in the healthcare sector, AI can manage diagnostic data with high efficiency, which allows healthcare providers to extend their services into preventive health management and personalized medicine, areas that were previously limited by resource constraints. In the retail sector, AI-enhanced data analysis can lead to the creation of highly personalized shopping experiences, with recommendations and services tailored to the individual preferences of each customer, transforming standard shopping into curated personal shopping experiences.

For successful implementation, organizations must foster a culture that views AI as a tool for empowerment rather than a threat to employment. Leadership should communicate clearly about the ways AI will be used to enhance job roles and the benefits it will bring to both employees and the company. Regular feedback loops should be established to adjust training programs based on both employee input and evolving industry demands, ensuring that retraining remains relevant and aligned with market realities.

By focusing on retraining the workforce to harness AI effectively, businesses can transform potential disruptions into opportunities for growth and innovation. This approach not only preserves jobs but also enhances them, adding unprecedented value to the company and its customers, and paving the way for a future where human ingenuity and artificial intelligence work hand in hand to achieve more than was ever possible before.

Monday, April 22, 2024

The Disruptive Potential of AI: Lessons from Clayton Christensen's Theory

As AI continues to make inroads into various industries, it is easy to dismiss its current shortcomings and remain complacent. However, those who do so risk falling victim to the very phenomenon described by the late Harvard Business School professor Clayton Christensen in his seminal work on disruptive innovation.

Christensen's theory posits that disruptive technologies often start at the bottom of the market, offering inferior performance compared to incumbent solutions. However, these technologies are typically cheaper and more accessible, allowing them to gain a foothold among less demanding customers. Over time, as the technology improves, it begins to meet the needs of more sophisticated users, eventually displacing the incumbent players entirely.

The parallels with AI are striking. Today, we may scoff at awkward AI-generated movies featuring characters with anatomical oddities or primitive music engines churning out cliched tunes. However, it would be foolish to assume that these technologies will not improve. Just as the early smartphones were no match for desktop computers, the AI of today is merely a stepping stone to more advanced systems that will rival and surpass human capabilities in various domains.

The rapid pace of investment in AI only serves to underscore this point. With billions of dollars pouring into research and development, the march of progress is inexorable. While the exact timeline remains uncertain, it is clear that AI will continue to evolve at a brisk pace, transforming industries and reshaping the nature of work itself.

In light of this reality, policymakers and leaders in government and philanthropy would be wise to start planning for a future in which the skills demanded by the job market are in a constant state of flux. Rather than clinging to the status quo, we must embrace the disruptive potential of AI and invest in education and training programs that will equip workers with the adaptability and resilience needed to thrive in an era of rapid technological change.

To ignore the lessons of Clayton Christensen's theory would be to court disaster. The question is not whether AI will disrupt our world, but rather how we will rise to meet the challenges and opportunities it presents. By proactively preparing for this future, we can ensure that the benefits of AI are widely shared and that no one is left behind in the great transformations to come. 

Sunday, April 21, 2024

The Rise of ReAIding: "I did not read it, but I understand it"

With the advent of generative AI, we witness the emergence of a special kind of writing that I call "wraiting" in my book. However, I now see that it will also cause a radical shift in how we engage with all forms of text, be it literature, non-fiction, or scholarly works. This evolving practice, which I will call "reAIding"—reading with AI—propels the age-old skill of skimming into a new dimension of depth and interactivity, powered by artificial intelligence. Imagine that instead of reading about Socrates in Plato, you could talk to Socrates directly.

Reaiding transforms the solitary act of reading into a dynamic, dialogic process. Just reading AI-generated CliffsNotes is not at all what I mean. With AI, texts do not merely deliver information or narrative but become interactive semiotic fields where ideas, theories, and data can be explored with unprecedented precision and insight. This method extends far beyond literary texts to non-fiction and scholarly articles, covering both theoretical and empirical research. Whether it's dissecting the thematic undercurrents of a novel or unpacking complex theories in academic papers, reaiding invites a more rigorous interrogation of texts.

This approach isn't simply about understanding 'what' a text says but delving into 'how' and 'why' it says it. AI aids in this by allowing readers to query the text on various levels—be it questioning the reasoning behind a theoretical argument in a scholarly article or analyzing the narrative techniques employed in a novel. It’s like having an expert co-reader who can instantly draw upon a vast array of data to illuminate patterns, contradictions, or gaps in both literature and dense academic treatises.
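To make this concrete, here is a minimal sketch of what a reaiding session could look like in code. It assumes the OpenAI Python SDK (openai>=1.0) with an API key in the environment; the model name, file name, and prompts are illustrative placeholders rather than a prescription, and the whole text is assumed to fit in the model's context window.

```python
# A minimal "reaiding" sketch: interrogating a text in dialogue rather than
# merely summarizing it. Assumes the OpenAI Python SDK and OPENAI_API_KEY;
# the file path and model name below are placeholders.
from openai import OpenAI

client = OpenAI()

# Load the text to be read dialogically (a story, article, or paper).
with open("chekhov_selected.txt", encoding="utf-8") as f:
    source_text = f.read()

def reaid(question: str) -> str:
    """Ask an analytic question about the loaded text, not just 'what does it say'."""
    response = client.chat.completions.create(
        model="gpt-4o",  # any capable chat model would do here
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a co-reader. Answer questions about the text below, "
                    "cite specific passages, and flag where you are inferring "
                    "rather than quoting.\n\n" + source_text
                ),
            },
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

# 'How' and 'why' questions, not just 'what':
print(reaid("How does the narrator's irony shape our view of the main character?"))
print(reaid("What assumptions about class does the story leave unexamined?"))
```

The point of the sketch is the shape of the interaction: the reader supplies the probing questions and judges the answers; the model supplies fast recall and pattern-spotting over the text.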

Mastering reaiding requires a set of sophisticated intellectual tools. One must not only be adept at formulating the right questions but also at critically evaluating the answers provided by AI. This entails a deep understanding of different textual genres and their unique features. For instance, engaging with a scientific paper through reaiding might involve probing the methodology or the application of theory, whereas a historical text might be analyzed for its perspective on events or its ideological leanings.

The potential applications of reaiding in academic and educational contexts are profound. Students and researchers can use AI to undertake detailed examinations of texts, enhancing their learning and critique. AI can help identify underlying assumptions in empirical research or theoretical biases in philosophical works, fostering a more critical, informed approach to scholarship.

Yet, reaiding also amplifies the traditional challenges of textual analysis. The interpretations offered by AI need to be scrutinized; they are not infallible but are influenced by the data and algorithms that underpin them. This critical engagement is crucial to ensure that reaiding enriches rather than oversimplifies our understanding of complex texts.

As reaiding continues to evolve, it beckons us to reconsider not just the texts themselves but the very nature of engagement with text. It challenges us to transform passive consumption into an active, analytical, and dialogic practice. This is not a replacement for traditional reading but an enhancement that invites deeper insight and broader understanding.

To those intrigued by the possibilities of reaiding, I extend an invitation to explore this new form of textual interaction through a bot I built around the selected works of Anton Chekhov. Imagine what it can do if it becomes ten times better. And it will, soon.

Saturday, April 13, 2024

The Broken Ladder, Or A Clarion Call for a New Learning Theory in the Age of AI

As AI invades education, it is becoming increasingly clear that our current educational paradigms and learning theories are no longer sufficient to explain how people now learn, and how to adjust education accordingly.

Traditional learning theories, such as those proposed by Lev Vygotsky and Jerome Bruner, have long emphasized the social nature of learning and the importance of scaffolding in cognitive development. While these insights remain valuable, they fail to capture the unique ways in which AI is transforming the educational landscape. Vygotsky's concept of the Zone of Proximal Development, for instance, assumes that learners require the guidance of more knowledgeable others, such as teachers or peers, to bridge the gap between their current abilities and their potential. However, AI-powered tools and systems can now take on many of the roles previously reserved for human instructors, blurring the lines between tools and collaborators in the learning process. Learning theorists assumed that the instructor has a choice over which tools to bring into instruction and which to leave out. AI, however, imposes itself on instruction whether we want it or not.

Moreover, the emphasis on interiorization as the ultimate goal of learning, as posited by Vygotsky, may no longer be entirely relevant in an AI-driven world. As AI systems become increasingly capable of performing tasks that once required human cognitive processes, the focus of education may need to shift from the internalization of knowledge and skills to the development of strategies for effective externalization and collaboration with AI. In other words, the aim of education shifts from an individual learner to a symbiosis of a human and a machine.  

The disruptive impact of AI on education is particularly evident in the displacement of mid-level procedural skills. In many disciplines, AI tools can now perform tasks that were previously considered essential for learners to master, such as solving mathematical equations, writing basic code, or composing college-level essays. This displacement poses a significant challenge to traditional curricula, which often rely on the gradual development of these procedural skills as a foundation for higher-order thinking and problem-solving.

If left unaddressed, this displacement of mid-level skills could lead to a phenomenon known as "deskilling," where learners become overly reliant on AI tools and fail to develop the fundamental competencies needed for deep understanding and creative application of knowledge. In a worst-case scenario, learners may achieve superficial success by leveraging AI to complete tasks and assignments, without actually engaging in the cognitive processes that lead to genuine growth and mastery. They may never arrive at higher-order skills like creativity, originality, critical thinking, and discernment.

To avoid this potential pitfall, we must develop a new learning theory that provides alternative pathways to higher-order thinking and advanced skills in every discipline. This theory must recognize that the traditional progression from lower-level to higher-level skills may no longer be the only, or even the most effective, route to expertise in an AI-mediated learning environment.

Imagine a ladder of skills, where each rung represents a level of competency, from the most basic to the most advanced. Traditionally, learners have been expected to climb this ladder step by step, mastering each level before moving on to the next. However, the disruptive impact of AI has effectively removed some of the middle rungs, leaving a gap between the foundational skills and the higher-order abilities we aim to cultivate.

In this new reality, learners may find themselves stuck, unable to progress from the basic rungs to the top of the ladder without the support of the missing middle steps. Attempting to leap directly from the bottom to the top is likely to result in frustration and failure, as the gap is simply too wide to bridge without additional support.

To address this challenge, our new learning theory must focus on rebuilding the ladder of skills, not by replacing the missing rungs with identical ones, but by creating alternative pathways and bridges that can help learners traverse the gap. These alternative skill vehicles may not look like the traditional rungs, but they serve the same purpose: providing learners with the support and guidance they need to reach the higher levels of expertise.

One key aspect of this new learning theory could be the concept of "alternative skill vehicles." Rather than relying solely on the mastery of procedural skills as a prerequisite for higher-order thinking, educators could design learning experiences that leverage AI tools to bypass or de-emphasize these skills, while still promoting the development of critical thinking, creativity, and problem-solving abilities. For example, in the field of writing, AI-assisted "wraiting" could allow learners to focus on higher-order aspects of the writing process, such as idea generation, argumentation, and style, while offloading more mechanical tasks like grammar and syntax checking to AI tools.
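As one illustration of such an alternative skill vehicle, here is a hedged sketch of a "mechanics-only" writing assistant: the model is constrained to fixing grammar, spelling, and punctuation while leaving the writer's ideas, structure, and voice untouched. It reuses the same assumed OpenAI Python SDK setup as the earlier sketch; the model name and instructions are placeholders, not a finished classroom tool.

```python
# A sketch of offloading surface mechanics to AI so the learner keeps
# ownership of ideas, argument, and style. Assumes the OpenAI Python SDK
# (openai>=1.0) and OPENAI_API_KEY; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()

MECHANICS_ONLY = (
    "Correct grammar, spelling, and punctuation in the student's draft. "
    "Do not add ideas, reorder paragraphs, or change word choice beyond "
    "what is needed to fix an error. Return only the corrected draft."
)

def polish_mechanics(draft: str) -> str:
    """Return the draft with surface errors fixed and everything else intact."""
    response = client.chat.completions.create(
        model="gpt-4o",  # any capable chat model
        messages=[
            {"role": "system", "content": MECHANICS_ONLY},
            {"role": "user", "content": draft},
        ],
    )
    return response.choices[0].message.content

# The learner drafts freely, then runs the mechanical pass at the end:
print(polish_mechanics("Their is three reason why the argument fail."))
```

The design choice matters more than the code: the constraint in the instructions is what turns a general-purpose model into a vehicle that de-emphasizes a displaced mid-level skill without doing the higher-order work for the learner.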

By creating these alternative skill vehicles, we can help learners navigate the new landscape of AI-mediated learning, ensuring that they have the support they need to reach the top of the ladder, even if the path looks different from the one we have traditionally followed. 

Another crucial component of a new learning theory for the age of AI would be the cultivation of "blended intelligence." This concept recognizes that the future of learning and work will involve the seamless integration of human and machine capabilities, and that learners must develop the skills and strategies needed to effectively collaborate with AI systems. Rather than viewing AI as a threat to human intelligence, a blended intelligence approach seeks to harness the complementary strengths of humans and machines, creating a symbiotic relationship that enhances the potential of both.

Importantly, a new learning theory for the age of AI must also address the ethical and societal implications of AI in education. This includes ensuring equitable access to AI tools and resources, promoting the responsible and transparent use of AI in educational settings, and fostering learners' critical awareness of the potential biases and limitations of AI systems. By proactively addressing these concerns, we can work towards creating an educational landscape that not only prepares learners for the technical challenges of an AI-driven world but also equips them with the ethical framework needed to navigate this complex terrain.

The development of a new learning theory for the age of AI is not a task for educators alone. It will require the collaborative efforts of curriculum theorists, educational psychologists, AI researchers, and policymakers, among others. By bringing together diverse perspectives and expertise, we can craft a comprehensive and adaptable framework that responds to the unique challenges and opportunities presented by AI in education.

The imperative for this new learning theory is clear. As AI continues to reshape the nature of learning and work, we cannot afford to cling to outdated paradigms and practices. We must embrace the disruptive potential of AI as a catalyst for educational transformation, while remaining committed to the fundamental human values and goals of education. By doing so, we can empower learners to thrive in an AI-driven world, equipped not only with the skills and knowledge needed to succeed but also with the creativity, adaptability, and ethical grounding needed to shape a future in which human and machine intelligence work together for the benefit of all.
