Tuesday, July 9, 2024

AI-Positive Pedagogy: Navigating the Great Disruption

AI has disrupted the educational landscape. This disruption threatens the established sequence of skill development, from simple to mid-range to higher-level skills, by eroding traditional curriculum principles, particularly in the realm of student activities and assessment. As a profession, we face a critical decision: limit AI use or develop an AI-positive pedagogy.

While limiting AI use may seem tempting, it is ultimately infeasible and fails to prepare students for the AI-infused world they will inhabit. Strict limitations are not only impractical to enforce; they also ignore the genuine benefits AI can bring to education.

The only plausible path forward is to adopt a new pedagogy that harnesses the power of AI for the benefit of our students. This involves a shift towards authentic, discipline-specific assessments that mirror real-world applications of AI within various fields. By focusing on how AI is used in different disciplines, educators can create assessments that evaluate students' ability to use AI tools effectively in relevant contexts.

AI-positive pedagogy emphasizes the cultivation of higher-order thinking skills, such as prompt engineering and discerning thinking. Prompt engineering involves crafting effective queries and instructions for AI systems, while discerning thinking emphasizes the critical evaluation of AI-generated information and the ability to make informed decisions by combining AI insights with human judgment. These meta-AI skills are crucial for students to navigate and thrive in an AI-populated world.
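
To make prompt engineering concrete, here is a minimal sketch of iterative refinement in Python. It assumes the OpenAI Python client and an illustrative model name; the prompts and the ask() helper are hypothetical examples, not a prescribed classroom exercise.

# Minimal illustration of iterative prompt refinement using the OpenAI
# Python client. The model name and prompts are placeholders.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask(prompt: str) -> str:
    """Send one prompt to the model and return its text response."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# First attempt: a vague request usually yields a vague answer.
draft = ask("Explain photosynthesis.")

# Refined attempt: audience, scope, format, and constraints are explicit.
refined = ask(
    "Explain photosynthesis to a 9th-grade biology class in under 150 words. "
    "Use one everyday analogy, define 'chlorophyll', and end with a question "
    "students could investigate on their own."
)

# Discerning thinking happens outside the model: compare the two outputs,
# check them against a trusted source, and decide what, if anything, to keep.
print(draft, "\n---\n", refined)

The point of the sketch is the pedagogical move, not the particular library: the refined prompt encodes audience, length, and evaluation criteria, and the judgment about the result remains the human, higher-order skill.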

AI-positive pedagogy should prepare students to work effectively in environments where human and artificial intelligence coexist and complement each other. By fostering skills in collaborating with AI systems, understanding the strengths of both human and artificial intelligence, and developing strategies for distributed problem-solving, educators can equip students to succeed in the AI-infused workplace.

The shift towards AI-positive pedagogy is well-rooted in past pedagogy and curriculum theory. Educators have long prioritized conceptual and higher-level skills over mechanical and procedural knowledge. The disruption caused by AI may serve as a catalyst for educators to finally achieve what they have been striving for over the past century.

As we embrace AI-positive pedagogy, it is essential to re-evaluate the assumption that all effort leads to learning. Cognitive Load Theory suggests that learning can be optimized by managing the three types of cognitive load: intrinsic (the inherent complexity of the learning material), extraneous (caused by ineffective instructional design), and germane (the effort required to process and construct mental schemas). In the context of AI-positive pedagogy, this means using AI tools to provide appropriate support and scaffolding as learners progress from lower-level to higher-level skills, while minimizing extraneous load and promoting germane load. Not all loss of student effort is bad; if we are honest, much of learning work is extraneous, busy, or compliance/submission work anyway.

By investigating the limits and structure of leapfrogging - skipping over mid-range skills to move from basic literacies and numeracies to creative, theoretical, and critical thinking - educators can harness the power of AI to accelerate student growth.

To develop a robust AI-positive pedagogy, educators and cognitive psychologists must collaborate to investigate how students interact with and perceive AI tools - on their own or under a teacher's guidance. This research should focus on understanding the mental models students develop when engaging with AI, and how these models differ from those associated with other educational tools. By exploring students' cognitive processes, researchers can identify the unique challenges and opportunities presented by AI in the learning environment.

It is also crucial to examine the emotional and motivational factors that influence students' engagement with AI tools. Understanding how students' attitudes, beliefs, and self-efficacy impact their willingness to adopt and effectively use AI in their learning can inform the design of AI-positive pedagogical strategies.

In addition to investigating student cognition and affect, researchers should also explore the social and cultural dimensions of AI use in education. This includes examining how AI tools can be leveraged to promote collaborative learning, foster inclusive learning environments, and bridge educational inequities.

To build a comprehensive AI-positive pedagogy, researchers and educators must also develop and validate practices for integrating AI into various disciplines and educational contexts. This involves creating guidelines for the use of AI in education, as well as establishing professional development programs to support educators in effectively implementing AI-positive pedagogical strategies.

The development of an evidence-based AI-positive pedagogy requires a concerted effort from the educational community. By investing in basic research, collaboration, and innovation, we can harness the potential of AI to transform education and empower students to thrive in an AI-infused world.


Wednesday, June 26, 2024

On AI Shaming

Here is a new thing: AI shaming. It is the practice of accusing others of using artificial intelligence to generate written content, as if such an act were inherently deceitful or somehow sinful. How fascinating, the very premise of it.

This phenomenon reveals itself to be a fairly common logical fallacy: a dismissal based on how the text was produced rather than on what it says, with a dash of ad hominem (ad machinam?) for flavor. One might wonder why the method of creation should overshadow the substance of the content. After all, we don't dismiss the works of authors who switched from quills to typewriters, do we?

There's a certain Foucauldian quality to this practice, if one may be permitted a touch of theoryhead's indulgence. By attempting to draw a sharp line between 'acceptable' and 'unacceptable' means of content creation, some seem to be engaging in a subtle power play. It is almost as if they're trying to sell their unfamiliarity with newer technologies as a virtue. It is a rather clever way of elevating the game at which one excels and putting down the game at which one fails. While an understandable inclination, it is still a folly.

For those of us who have embraced these new tools, such accusations are about as concerning as a light drizzle on a summer day - which is to say, entirely expected and hardly worth mentioning. If anything, it provides a certain amusement to observe the lengths to which some will go to maintain the status quo and their privileged little spot in it.

However, there is a more sobering concern to consider. While thick-skinned people like me might brush off such criticisms with a raised eyebrow, younger, more impressionable ones might internalise this arbitrary stigma. It would be a shame if the next generation felt compelled to hide their technological proficiency out of fear of Luddites' bullying.

As these AI tools inevitably become more sophisticated and ubiquitous, perhaps we might redirect our energy towards more productive ends. Instead of engaging in this curious form of digital finger-pointing, we could focus on the responsible and creative use of these technologies. After all, the ideas expressed within content will always be more intriguing than the means by which they were transcribed.

To those who persist in AI shaming: by all means, knock yourself out. Your dedication to this cause is admirable, if somewhat perplexing. Just don't be too surprised if the rest of us seem a bit distracted - we'll be busy adapting to the future while you're perfecting your finger-wagging techniques.

P.S. This text, according to QuillBot, is 0% AI-generated; however, I wrote it with Claude :)

How to bullshit about bullshit

Take a look at the paper "ChatGPT is bullshit" by Michael Townsen Hicks, James Humphries, and Joe Slater. Other than a catchy title, the paper has a few problems of its own.

1. Focus on terminology over utility:

The paper spends considerable effort arguing that the outputs of large language models (LLMs) like ChatGPT should be classified as "bullshit" rather than "hallucinations." While this may be an interesting philosophical distinction, it arguably does little to advance our understanding of the practical implications and applications of these technologies. The authors could have devoted more attention to analyzing the actual utility and potential use cases of LLMs, which would likely be more valuable to policymakers and the public.

2. Limited practical insights:

By focusing primarily on categorizing the nature of LLM outputs, the paper misses an opportunity to provide meaningful insights into how these technologies can be effectively and responsibly utilized. A more constructive approach might involve examining specific contexts where LLMs prove useful and where they fall short, rather than broadly labeling their output as "bullshit."

3. Potential for misdirection:

While the authors argue that the term "hallucinations" is misleading, replacing it with "bullshit" may not necessarily lead to a clearer understanding of LLM behavior. Both terms are metaphorical and could potentially misguide readers about the true nature and capabilities of these systems. A more technical and nuanced explanation of how LLMs function and their limitations might be more informative.

4. Overlooking nuance:

The paper seems to take a binary approach – either LLMs are concerned with truth (which they argue against) or they are producing "bullshit." This oversimplification may overlook the nuanced ways in which LLMs can be useful for various tasks, even if they don't have an inherent concern for truth in the way humans do.

5. Lack of actionable recommendations:

While the authors critique the use of the term "hallucinations," they don't offer clear, actionable recommendations for how to better communicate about LLMs to policymakers and the public. A more constructive approach would be to propose specific ways to educate stakeholders about the capabilities and limitations of these technologies.

6. Missing the broader context:

By focusing narrowly on the philosophical categorization of LLM outputs, the paper misses an opportunity to discuss the broader implications of these technologies on society, economy, and various industries. A more comprehensive analysis of the impact and potential of LLMs would likely be more valuable to readers.

Monday, June 24, 2024

Can observational learning work with AI?

Can humans learn new problem-solving strategies simply by observing AI at work? Following Bandura's theory of observational learning, it may not be as far-fetched as it first appears.

When humans watch AI systems solve problems or generate text, they naturally construct mental models of the underlying processes. These models, while factually wrong (AI "thinking" is very different from ours), can nonetheless prove useful. It is the act of imagining yourself performing the task that can be beneficial.

Consider a person observing an AI system summarise a lengthy academic paper. The human observer cannot directly perceive the AI's internal computations. Instead, the observer likely imagines themselves performing the task, focusing on key sentences, identifying main themes, and connecting key ideas.

This mental model, though inaccurate in representing the AI's actual mechanisms, may still enhance the observer's own summarisation skills. They might, for instance, learn to pay closer attention to introductory and concluding paragraphs, or to look for repeated phrases that signal important concepts.

Observing AI failures can be particularly instructive. When an AI system produces an erroneous or nonsensical output, it often reveals the limitations of its approach. A human observer, reflecting on these errors, might develop a more nuanced understanding of the problem at hand and devise novel strategies to overcome the AI's shortcomings.

For example, watching an AI struggle with a complex logical reasoning task might prompt a human to break the problem down into smaller, more manageable steps. This approach, inspired by the AI's limitations, could prove valuable even in contexts where AI is not involved.

To test this hypothesis rigorously, consider an experiment:

1. Select a diverse set of problem-solving tasks, ranging from creative writing to mathematical reasoning.

2. Divide participants into three groups:

  •  a) An observation group that watches AI systems attempt these tasks, including both successes and failures.
  •  b) A practice group that attempts the tasks themselves without AI involvement.
  •  c) A control group that engages in unrelated activities.

3. After the observation or practice period, test all participants on a new set of similar problems.

4. Compare the performance of the three groups, paying particular attention to novel problem-solving strategies employed by the observation group (a rough sketch of this comparison appears after the list).

5. Conduct follow-up interviews to explore participants' thought processes and any conscious attempts to apply AI-inspired techniques.
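
To illustrate how the step-4 comparison might be run, here is a rough sketch that assumes each participant's post-test performance has been reduced to a single numeric score; the scores below are invented placeholders, and SciPy's one-way ANOVA stands in for whatever analysis the actual design would call for.

# Rough sketch of the step-4 group comparison. The scores are placeholder
# values, not real data.
from scipy import stats

observation = [14, 17, 15, 19, 16]   # watched AI attempts, successes and failures
practice    = [13, 15, 14, 16, 15]   # attempted the tasks themselves
control     = [10, 12, 11, 13, 12]   # engaged in unrelated activities

# One-way ANOVA: is there any difference in mean performance across the groups?
f_stat, p_value = stats.f_oneway(observation, practice, control)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

# A significant omnibus test would justify pairwise follow-ups (e.g., Tukey's
# HSD); the step-5 interviews would then address which strategies, if any,
# the observation group borrowed from watching the AI.

The quantitative comparison only shows whether observation helped; the interviews carry the weight of showing how, which is where any AI-inspired strategies would surface.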

Friday, June 21, 2024

Can We Learn Without Struggle in the Age of AI?

I've been pondering a question: What if our traditional understanding of cognitive growth is too narrow? We've long held onto the idea that real learning comes from struggle, from pushing against our limits, from grappling with challenges just beyond our current abilities. But what if that's not the whole story?

I'm starting to wonder if growth - real, meaningful cognitive development - might not always need the strong challenges we've assumed were necessary. And this thought has become particularly relevant as we enter the new world of AI-assisted learning.

Many of our theories about learning and development are rooted in the idea of conflict or tension. Vygotsky's Zone of Proximal Development, for instance, emphasizes the space between what a learner can do without help and what they can do with guidance. Piaget described cognitive disequilibrium as a driver of development. These Hegelian/Marxist theories have shaped how we think about education for decades.

But here's what I'm pondering: What if growth can happen more... gently? What if it can occur through exposure, through interaction, through a kind of cognitive osmosis that doesn't necessarily involve struggle or challenge? And importantly, what if this gentler form of learning is still deeply social and relational?

There's a lot of hand-wringing in educational circles about AI tools like ChatGPT. The worry is that by providing ready answers, these tools will short-circuit the learning process. Students won't have to struggle, so they won't really learn. I have definitely been expressing these concerns in my previous blogs. 

But I'm not so sure anymore. Let me float a hypothesis: What if AI-assisted learning doesn't dampen growth, but instead provides a different kind of cognitive experience that can still lead to meaningful development? And what if this experience, rather than being isolating, actually opens up new avenues for social learning and collaboration?

Here's an analogy that's been helpful for me in thinking about this. Remember when GPS first became widely available? There were concerns that people would no longer learn to navigate cities, that we'd lose our sense of spatial awareness. And yet, most of us who use GPS regularly still develop a pretty good understanding of the cities we live in and visit. We might learn differently - perhaps more slowly, or with less detail - but we do learn, without all the frustration of trying to read a map while driving or memorizing a string of turns (left, second right, soft left again...). City driving is probably safer, and we did not get more stupid.

The GPS doesn't prevent us from learning; it provides a different context for learning. We're not struggling with paper maps, but we're still processing spatial information, making connections, building mental models of our environment.

Could AI-assisted learning work in a similar way? Sure, students might get quick answers or produce a quick text with an AI without much effort, which feels somehow wrong (the nature of these feelings deserves separate consideration). But that doesn't mean they stop thinking or interacting. They will start wondering how to get better answers and produce better outputs. They will begin to notice patterns in the AI's responses. They will start to question or critique what the AI produces. That is what the human brain has evolved to do.

Moreover, this process doesn't happen in isolation. Students will discuss their AI interactions with peers, compare outputs, collaboratively explore how to improve results. It becomes a form of social play - experimenting, sharing discoveries, building on each other's ideas. The AI becomes a tool for social learning, not a replacement for it.

In other words, the presence of AI doesn't eliminate cognitive work or social interaction - it might just shift their nature. And who's to say that this new form of cognitive and social engagement is any less valuable than the old? 

Now, I'm not saying we should throw out everything we know about learning and development. Challenge and struggle certainly have their place. But I am suggesting that maybe we need to broaden our understanding of how growth happens, recognizing that it can occur through gentler, more playful forms of social interaction and exploration.

Perhaps there's a whole spectrum of cognitive experiences that can lead to growth, ranging from intense challenge to gentle, collaborative exposure. Maybe AI-assisted learning falls somewhere on this spectrum - not replacing traditional forms of learning, but adding new dimensions to how we can develop cognitively and socially.

This is all hypothetical, of course. We're in new territory with AI, and it will take time and research to understand its full impact on learning and development. But I think it's worth considering that our assumptions about how growth happens might need updating, particularly in how we view the social aspects of learning.

What do you think? Is it possible that we've overemphasized challenge in our understanding of learning? Could AI-assisted learning open up new pathways for cognitive development and social learning that we haven't fully appreciated yet? I am just back from a conference, where we had many early childhood educators. In that tribe, there is a lot more talk about play than about forcing growth. Maybe that's why I am having these second thoughts about growth. 

I don't have definitive answers, but I think these are questions worth exploring as we navigate this new landscape of learning. After all, if there's one thing we know for sure about human cognition, it's that it's endlessly adaptable and intrinsically social. Perhaps it's time for our theories about learning to adapt as well, embracing a more collaborative, playful vision of growth in the age of AI.

Monday, June 10, 2024

Testing AI once does not make you an expert

I heard of a professor who asked ChatGPT to write a profile of himself, only to discover inaccuracies and decide that AI is unsuitable for education. Instead of reflecting on why he is not sufficiently famous, the professor blamed the AI. This reaction is like boycotting all cars after driving an old Soviet-made Lada. Dismissing AI entirely based on a couple of lazy interactions is a classic example of the overgeneralization fallacy.

Before hastily testing and dismissing, one would be well served to read about the known limitations of AI, particularly when it comes to generating content about individuals who are not well-known. AI can "hallucinate" details and citations, creating a misleading picture of reality.

The key is to approach AI with a spirit of curiosity and creativity, exploring its strengths and weaknesses through multiple tests and scenarios. By focusing on what works rather than fixating on what does not, we can begin to appreciate AI for what it is—a tool with potential that takes some skill and experience to unlock.

Also, think about the risk to your reputation. If you are saying, "I tried, and it is crap," you are also dismissing all those other people who found it valuable as gullible fools. The failure to see that the joke is on you is a test of your hubris, and that kind of test works on just one try.

Thursday, June 6, 2024

Using AI without losing control

As AI becomes increasingly capable of performing tasks that were once the exclusive domain of humans, it no longer makes sense to spend a lot of time teaching skills that AI can do better and faster. Just as we largely replaced the teaching of manual drafting with computer-aided design (CAD), we must now reassess the curriculum and remove skills that have been rendered obsolete by AI.

The way we define curriculum reflects our understanding of what it means to be human, a definition that has evolved significantly over the centuries. As machines have become more advanced, skills previously valued and developed by humans have gradually been entrusted to machines.

However, the logic behind this shift is also changing. It is no longer just a matter of what machines can do better, as they seem to be excelling at almost everything. Instead, it is about what we, as humans, choose to retain, enjoy, and feel compelled to do. It is less about competition with machines and more about the arrangement of power and authority. To maintain our human authority, we must continue to perform certain tasks.

One of the most important of these tasks is the ability to set goals and make value judgments about what should or should not be done. This is a complex skill that requires a comprehensive understanding of the world, both in its physical and social aspects, as well as the wisdom to make decisions when faced with competing values and the inherent uncertainty of the future. As AI increasingly mediates our interactions, the ability to determine the goals and direction of these interactions becomes even more crucial.

Equally important is the ability to work with AI to achieve our objectives. This process, sometimes referred to as "prompt engineering," involves iterative interaction and refinement to obtain the most accurate, appropriate, and helpful outputs from AI. Beyond technical skills, this requires robust critical thinking to assess the truthfulness and value of AI-generated content. In essence, it is the skill of editing, not merely for style but for substance, constantly asking: Is this useful? Does it improve upon what already exists? Does it positively contribute to the overall human knowledge and well-being?
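
As a schematic illustration of that editing loop - the AI drafts, the human judges substance, and the cycle repeats - here is a small sketch in Python; generate() and human_review() are invented stand-ins for whatever AI tool and review process are actually in use.

# Schematic sketch of an AI-assisted editing loop. Both helpers below are
# placeholders: a real setup would call an actual AI tool and a real reviewer.
def generate(prompt: str) -> str:
    # Stand-in for a call to the AI tool of choice.
    return f"[draft produced for: {prompt}]"

def human_review(draft: str) -> tuple[bool, str]:
    # Stand-in for the human judgment: Is this useful? Does it improve on
    # what already exists? Does it add to knowledge and well-being?
    return False, "Tighten the argument and check the sources."

def refine(goal: str, rounds: int = 3) -> str:
    prompt = goal
    draft = generate(prompt)
    for _ in range(rounds):
        accepted, feedback = human_review(draft)
        if accepted:
            break
        prompt = f"{goal}\nRevise the previous draft. Reviewer feedback: {feedback}"
        draft = generate(prompt)
    return draft

print(refine("A one-page explainer on photosynthesis for 9th graders"))

The code is trivial on purpose: the loop itself is easy to automate, but the acceptance criteria inside human_review() are exactly the goal-setting and value judgments that the curriculum should keep in human hands.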

These core competencies should shape the curriculum of the future: goal-setting, value judgment, critical discernment, and the ability to collaborate with AI to achieve human objectives. The specifics will evolve as AI capabilities advance, and there will be variations across disciplines.

However, the fundamental challenge remains the same: in a world increasingly shaped by artificial intelligence, we must rededicate ourselves to cultivating human intelligence in its fullest and most distinctively human expressions. Only then can we ensure that the tremendous power of AI serves to elevate humanity rather than diminish it.

Do AI bots deceive?

The paper, Frontier Models are Capable of In-Context Scheming, arrives at a time when fears about AI’s potential for deception are increasi...