
Saturday, October 19, 2024

Where is the work? AI and Creativity

For ages, we have blurred the lines between ideation and execution, treating them as inseparable parts of creativity. Craftsmanship was tightly bound to originality. Think of Michelangelo working on the Sistine Chapel, a project that spanned nearly a decade. Where does his genius truly lie? In envisioning those profound images, or in the labor of painting them? What, exactly, is the essence of the work?

The rise of AI forces us to untangle these ideas and reconsider what it means to produce "human" work. Take a recent story I heard from the audience at one of my talks: a person described how he fed an AI every detail about a retiring colleague, and the AI generated a speech so moving that it brought the retiree to tears. But the retiree, upon learning the speech's origin, was dumbfounded.

What is interesting is not the retiree’s reaction, but the storyteller's own oversight. He failed to see his own critical role in the process. By gathering the details and curating the moments that best captured the retiree’s essence, he performed the most human part of the creative act. He mistook the act of turning those ideas into words for the creative work, but that is not the case.

AI, ironically, is pushing us to be more human, not more like machines. It is forcing us to recognize that our true contribution lies in the ability to think, to create, and to feel. As AI takes over the mechanical aspects of tasks we once considered integral to creativity—whether that is writing, painting, or coding—we are left with the more uniquely human roles: original thinking and emotional depth.

This shift reshapes our understanding of creativity and work. It shows that human value does not lie in production—the technical aspect of turning an idea into a product—but in the deeper conceptual and emotional layers that AI still cannot reach.

As we move forward, we are compelled to rethink productivity itself. The future will not belong to those who can outdo AI in execution, but to those who can combine AI’s strengths with our unique capacities for innovation, empathy, and insight.

The challenge we face is not to resist AI, but to fully embrace our humanity—to cultivate the traits that machines cannot replicate. With AI taking over the drudgery, we are freed to focus on higher-order thinking and those creative leaps that define human ingenuity.

Ironically, the more we develop artificial intelligence, the more we learn about what human intelligence really is. And in that discovery lies our future—a future where AI does not replace creativity, but elevates it to new possibilities.


Sunday, September 29, 2024

Advanced AI users develop special cognitive models

When we encounter a stranger, we make swift, often unconscious judgments about who they are and what they are capable of. A person who speaks our language with barely a hint of an accent? We assume they are fluent. Someone who drops a reference to a complex scientific theory? We peg them as well-educated, likely to be literate, and probably knowledgeable about a range of topics from current events to social norms.

These snap judgments form the backbone of our social interactions. They are mental shortcuts, honed over millennia of human evolution, allowing us to navigate the complexities of social life with remarkable efficiency. Most of the time, they serve us well. We can usually guess whether someone will understand a joke, follow a complex argument, or need help using a smartphone. These are cognitive models. 

But when we step into the realm of artificial intelligence, these time-tested models crumble. Our human-centric predictions fail spectacularly, leaving us confused and often frustrated. Consider a recent incident with ChatGPT, a sophisticated language model. When asked to count the number of 'r's in the word "strawberry," it faltered. Many observers scoffed, concluding that AI must be fundamentally stupid if it couldn't handle such a simple task.

Yet this reaction reveals more about our flawed expectations than any shortcoming of AI. Those familiar with AI's inner workings were not surprised. They understand that a language model, no matter how advanced, is not optimized for character-level analysis. It is like expecting a master chef to be an expert accountant simply because both professions involve numbers.
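To make the contrast concrete, here is a minimal Python sketch: counting characters is a one-line operation for ordinary code, while a language model receives subword tokens. The token split shown is only a plausible illustration, since the real segmentation depends on the specific tokenizer.

word = "strawberry"
print(word.count("r"))   # -> 3: trivial at the character level

# A language model never sees individual characters; it receives subword tokens.
# This split is illustrative only, not the output of any particular tokenizer.
tokens = ["str", "aw", "berry"]
print(tokens)            # the letter 'r' is buried inside opaque units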

This misalignment between our expectations and AI's actual capabilities stems from our tendency to anthropomorphize. We instinctively attribute human-like qualities to these digital entities. We expect them to have consistent opinions, to learn from our interactions, to understand context and nuance as we do. But AI, in its current form, does none of these things.

Unlike humans, AI does not carry the baggage of personal experience or emotion. It does not have good days or bad days. It will not be flattered by praise or offended by insults. It can switch from discussing quantum physics to writing poetry without missing a beat, unencumbered by the specialization that defines human expertise.

But AI's differences extend beyond mere capability. It lacks the fundamental attributes we associate with consciousness. It has no self-awareness, no goals or motivations of its own. It does not truly understand the content it generates, despite how convincing it may seem. It is a reflection of the data it was trained on, not a sentient being forming its own thoughts and opinions.

To interact effectively with AI, we need to develop new mental models. We must learn to predict its behavior not based on human analogies, but on an understanding of its unique nature. This means recognizing that AI might struggle with tasks we find trivially easy, while effortlessly accomplishing feats that would challenge even the most brilliant human minds.

It means understanding that every interaction with AI is essentially new. Unlike humans, who build on past conversations and experiences, most current AI systems do not retain information from one chat to the next. They do not learn or evolve through our interactions. Each query is processed afresh, without the context of what came before.

This new model of understanding also requires us to be more precise in our interactions with AI. While humans often fill in gaps in conversation with assumed context, AI interprets our requests literally. It does not automatically infer our unstated needs or desires. The clarity of our input directly influences the quality of the AI's output.

As AI becomes an increasingly integral part of our lives, developing these new mental models is not just about avoiding frustration. It is about unlocking the full potential of these powerful tools. By understanding AI's strengths and limitations, we can craft our interactions to leverage its capabilities more effectively.

The future of human-AI interaction lies not in expecting AI to conform to human patterns, but in adapting our approach to align with AI's unique characteristics. It is a future that requires us to be more thoughtful, more precise, and more open to rethinking our instinctive assumptions. In doing so, we may not only improve our interactions with AI but also gain new insights into the nature of intelligence itself. 



Monday, September 23, 2024

Cognitive Offloading: Learning more by doing less

In an AI-rich environment, educators and learners alike are grappling with a seeming paradox: how can we enhance cognitive growth by doing less? The answer lies in the concept of cognitive offloading, a phenomenon that is gaining increasing attention in cognitive science and educational circles.

Cognitive offloading, as defined by Risko and Gilbert (2016) in their seminal paper "Cognitive Offloading," is "the use of physical action to alter the information processing requirements of a task so as to reduce cognitive demand." In other words, it is about leveraging external tools and resources to ease the mental burden of cognitive tasks.

Some educators mistakenly believe that any cognitive effort is beneficial for growth and development. However, this perspective overlooks the crucial role of cognitive offloading in effective learning. As Risko and Gilbert point out, "Offloading cognition helps us to overcome such capacity limitations, minimize computational effort, and achieve cognitive feats that would not otherwise be possible."

The ability to effectively offload cognitive tasks has always been important for human cognition. Throughout history, we've developed tools and strategies to extend our mental capabilities, from simple note-taking to complex computational devices. However, the advent of AI has made this skill more crucial than ever before.

With AI, we are not just offloading simple calculations or memory tasks; we are potentially shifting complex analytical and creative processes to these powerful tools. This new landscape requires a sophisticated understanding of AI capabilities and limitations. More importantly, it demands the ability to strategically split tasks into elements that can be offloaded to AI and those that require human cognition.

This skill - the ability to effectively partition cognitive tasks between human and AI - is becoming a key challenge for contemporary pedagogy. It is not just about using AI as a tool, but about understanding how to integrate AI into our cognitive processes in a way that enhances rather than replaces human thinking.

As Risko and Gilbert note, "the propensity to offload cognition is influenced by the internal cognitive demands that would otherwise be necessary." In the context of AI, this means learners need to develop a nuanced understanding of when AI can reduce cognitive load in beneficial ways, and when human cognition is irreplaceable.

For educators, this presents both a challenge and an opportunity. The challenge lies in teaching students not just how to use AI tools, but how to think about using them. This involves developing metacognitive skills that allow students to analyze tasks, assess AI capabilities, and make strategic decisions about cognitive offloading.

The opportunity, however, is immense. By embracing cognitive offloading and teaching students how to effectively leverage AI, we can potentially unlock new levels of human cognitive performance. We are not just making learning easier; we are expanding the boundaries of what is learnable.

It is crucial to recognize the value of cognitive offloading and develop sophisticated strategies for its use. The paradox of doing less to learn more is not just a quirk of our technological age; it is a key to unlocking human potential in a world of ever-increasing complexity. The true measure of intelligence in the AI era may well be the ability to know when to think for ourselves, and when to let AI do the thinking for us. 

Friday, August 23, 2024

Filling Voids, Not Replacing Human Experts

The debate over artificial intelligence replacing human experts often centers on a binary question: Can AI do a better job than a human? This framing is understandable but overly simplistic. The reality is that in many contexts, the competition is not between AI and people—it is between AI and nothing at all. When viewed through this lens, the value of AI becomes clearer. It is not about pitting machines against human expertise; it is about addressing the voids left by a lack of available service.

Consider healthcare, particularly in underserved areas. It is a truism that a qualified doctor’s advice is better than anything an AI could provide. But what if you live in a rural village where the nearest doctor is hundreds of miles away? Or in a developing country where medical professionals are stretched thin? Suddenly, the prospect of AI-driven medical advice does not seem like a compromise; it feels like a lifeline. While AI lacks the nuanced judgment of an experienced physician, it can provide basic diagnostics, suggest treatments, or alert patients to symptoms that warrant urgent attention. In such scenarios, AI does not replace a doctor—it replaces the silence of inaccessibility with something, however imperfect.

Another case in point is mental health counseling. In many parts of the world, even in affluent countries, mental health services are woefully inadequate. Students at universities often face wait times ranging from weeks to months just to speak with a counselor. During that limbo, the option to interact with an AI, even one with obvious limitations, can be a critical stopgap. It is not about AI outperforming a trained therapist but offering a form of support when no other is available. It can provide coping strategies, lend a sympathetic ear, or guide someone to emergency services. Here, AI does not replace therapy; it provides something valuable in the absence of timely human support.

Education offers another case for AI’s gap-filling potential. Tutoring is an essential resource, but access to quality tutors is often limited, mainly because it is expensive. Universities might offer tutoring services, but they are frequently understaffed or employ peer tutors. Office hours with professors or teaching assistants can be similarly constrained. AI can step into this void. Chatting with an AI about a difficult concept or problem set might not equal the depth of understanding gained from a one-on-one session with a human tutor, but it is unquestionably better than struggling alone. AI does not compete with tutors; it extends their reach into spaces they cannot physically or temporally cover.

The same logic applies to a range of other fields. Legal advice, financial planning, career coaching—all are areas where AI has the potential to add significant value, not by outstripping human expertise but by offering something in environments where professional advice is out of reach. Imagine a low-income individual navigating legal complexities without the means to hire an attorney. An AI could provide at least basic guidance, clarify legal jargon, and suggest possible actions. All of this must, of course, come with proper disclaimers. It is not a substitute for legal representation, but it is a world better than the alternative: no help at all.

In embracing this non-competing stance, we shift the narrative. The role of AI is not to replace human experts but to step in where human services are scarce or nonexistent. The true potential of AI lies in its ability to democratize access to essential services that many people currently go without. When AI is viewed as a bridge rather than a rival, its utility becomes much more evident. AI does not have to be better than a person to be valuable; it just needs to be better than the void it fills.



Monday, August 19, 2024

The Right to Leapfrog: Redefining Educational Equity in the Age of AI

AI’s potential in education is clear, particularly in how it can assist students who struggle with traditional learning methods. It is broadly accepted that AI can help bridge gaps in cognitive skills, whether due to dyslexia, ADHD, or other neurodiverse conditions. Yet, the utility of AI should not be confined to specific diagnoses. Insights from decades of implementing the Response to Intervention (RTI) framework reveal that regardless of the underlying cause—be it neurodiversity, trauma, or socioeconomic factors—the type of support needed by struggling students remains remarkably consistent. If AI can aid students with reading difficulties, why not extend its benefits to others facing different but equally challenging obstacles? Equity demands that AI’s advantages be made accessible to all who need them, regardless of the origin of their challenges.

This brings us to a deeper issue: the rigid and often unjust link between procedural and conceptual knowledge. Traditionally, lower-level skills like spelling, grammar, and arithmetic have been treated as prerequisites for advancing to higher-order thinking. The prevailing notion is that one must first master these basics before moving on to creativity, critical thinking, or original thought. However, this linear progression is more a product of tradition than necessity. AI now offers us the chance to reconsider this approach. Students should have the right to leapfrog over certain lower-level skills directly into higher-order cognitive functions, bypassing unnecessary barriers.

Predictably, this notion encounters resistance. Rooted in the Protestant work ethic is the belief that one must toil through the basics before earning the right to engage in more sophisticated intellectual activities. This ethic, which equates hard work on mundane tasks with moral worth, is deeply ingrained in our educational systems. However, in an age where AI can handle many of these lower-level tasks, this mindset seems increasingly obsolete. Insisting that all students must follow the same sequence of skills before advancing to higher-order thinking is not just misguided; it is a relic of a bygone era. If AI enables students to engage meaningfully with complex ideas and creative thinking from the start, we should embrace that opportunity rather than constrain it with outdated dogma.

The implications of this shift are significant. If we recognize the right to leapfrog over certain skills, we must also acknowledge that traditional educational hierarchies need to be re-examined. Skills like spelling and grammar, while valuable, should no longer be gatekeepers for students who excel in critical thinking and creativity but struggle with procedural details. AI offers a way to reimagine educational equity, allowing students to focus on their strengths rather than being held back by their weaknesses. Rather than forcing everyone to climb the same cognitive ladder, we can enable each student to leap to the level that aligns with their abilities, creating a more personalized and equitable educational experience.

This rethinking of educational equity challenges deeply rooted assumptions. The belief that hard work on the basics is necessary for higher-level achievement is pervasive, but it is not supported by evidence. In reality, cognitive development is driven more by engagement with complex ideas than by rote mastery of procedural skills. AI provides the tools to focus on these higher-order skills earlier in a student's education, without the traditional prerequisite of mastering lower-order tasks.

Moreover, the concept of “deskilling” is not new. Throughout history, humanity has continually adapted to technological advances, acquiring new skills while allowing others to fade into obscurity. Today, few people can track animals or make shoes from animal skin—skills that were once essential for survival. Even the ability to harness a horse, once a common necessity, is now a rare skill. While some may lament these losses, they are also a reminder that as society evolves, so too must our educational priorities. Just as technological advancements have rendered certain skills obsolete, AI is reshaping the skills that are most relevant today.

As we move forward, educators must rethink how learning experiences are designed. Rather than viewing AI as merely a tool for accommodating deficits, we should see it as a means of expanding possibilities for all students. By enabling learners to bypass certain skills that are no longer essential in an AI-driven world, we can better align education with the demands of the 21st century. This is about acknowledging that the path to learning does not have to be the same for everyone. In a world where AI can democratize access to higher-level cognitive tasks, the right to leapfrog is not just a possibility—it is a necessity for equitable education. 


Friday, August 9, 2024

Authorship, Automation, and Answerability

In the ongoing debate about the ethical use of AI, two main concerns stand out—one superficial and one profound. The first concern, often highlighted, is about the authenticity of authorship, with fears that AI-generated content might mislead us about who the true author is. However, this worry is largely misguided. It stems from a historically limited, Western-centric notion of authorship that blurs the line between the origin of ideas and the craft of their representation.

Take the legacy of Steve Jobs. He wasn’t celebrated for personally assembling each iPhone, but for his vision and design that brought the device to life. In our industrial world, the act of making things is not inherently authorial—designing them is. Why should it be any different with text, code, or images? If I designed this text, and used advanced tools to produce it, why am I not still the author? The shock many feel towards AI’s ability to generate content is akin to the upheaval experienced by 19th-century bootmakers during the Industrial Revolution. Automation has simply extended its reach into the realms of writing, coding, and art. The craftsmanship is replaced by automation, but the core principle remains: take pride in the ideas, not in the mechanics of their production. There is no inherent authorship in the latter.

But here’s where Mikhail Bakhtin’s notion of answerability helps our understanding of the true ethical stakes. While responsibility is often about fulfilling obligations or being held accountable after the fact, answerability is about our ongoing, active engagement with the world and the people in it. It is not just about who gets credit for the content; it is about recognizing that every action, every word, and every piece of AI-generated content occurs within a web of relationships. We are answerable to others because our creations—whether authored by human hands or machine algorithms—affect them.

The real concern, then, lies in the issue of answerability. AI-generated content often appears polished, convincing, and ready for immediate consumption. This creates a dangerous temptation to release such content into the world without thorough scrutiny. Here is where the ethical stakes rise significantly. AI may produce work that looks and sounds credible, but this does not guarantee that it is unbiased, meaningful, or truthful. It may be garbage polluting the infosphere at best, or an outright harmful fake at worst. The ease of content creation does not absolve us of the responsibility to ensure its quality and integrity, and more importantly, it doesn’t free us from the answerability we have to the world around us.

This is the message we need to instill in our students, professionals, and anyone working with AI: you are still accountable and answerable for what you produce, even if a machine does the heavy lifting. Releasing AI-generated content without critical evaluation is akin to conjuring a spell without understanding its consequences. Like a magician wielding powerful but unpredictable magic, or a novice driver behind the wheel of a truck instead of a bicycle, the stakes have been raised. The tools at our disposal are more potent than ever, and with that power comes a heightened level of answerability.

In essence, the ethical debate surrounding AI should not be about the authorship of the craft but about the integrity and impact of the output. The real challenge is ensuring that what we create with these advanced tools is not only innovative but also responsible and answerable. As we continue to integrate AI into more aspects of our lives, we must focus less on who—or what—authored the content and more on the ethical implications of releasing it into the world. This is where the true ethical discourse lies, and it is here that our attention should be firmly fixed.


Thursday, August 8, 2024

The Cognitive Leap Theory

With the arrival of AI, education is experiencing a profound shift, one that requires a rethinking of how we design and implement learning activities. This shift is captured in the cognitive leap theory, which posits that AI is not just an add-on to traditional education but a transformative force that redefines the learning process itself. The Cognitive Leap theory is a core part of a larger AI-positive pedagogy framework.

Traditionally, educational activities have been structured around the original or revised Bloom’s Taxonomy, a framework that organizes cognitive skills from basic recall of facts (Remember) to higher-order skills like Evaluation and Creation. While Bloom’s pyramid was often interpreted as a sequential progression, Bloom himself never insisted on a strict hierarchy. In fact, with the integration of AI into the classroom, the importance of these skills is being rebalanced. The higher-order skills, particularly those involving critical evaluation, are gaining prominence in ways that were previously unimaginable.

In an AI-positive pedagogical approach, the focus shifts from merely applying and analyzing information—tasks typically associated with mid-level cognitive engagement—to critically evaluating and improving AI-generated outputs. This represents a significant cognitive leap. Instead of simply completing tasks, students are now challenged to scrutinize AI outputs for accuracy, bias, and effectiveness in communication. This shift not only fosters deeper cognitive engagement but also prepares students to navigate the complex landscape of AI-driven information.

A key component of this approach is the development of meta-AI skills. These skills encompass the ability to formulate effective (rich) inquiries or prompts for AI, to inject original ideas into these prompts, and, crucially, to critically assess the AI’s responses. This assessment is not a one-time task but part of an iterative loop where students evaluate, re-prompt, and refine until the output meets a high standard of quality. This process not only sharpens their analytical skills but also enhances their creative abilities, as they learn to think critically about the inputs and outputs of AI systems.
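To make that iterative loop concrete, here is a schematic Python sketch. Every callable in it is a hypothetical placeholder (no particular AI service or API is implied); it is meant only to show the shape of the evaluate, re-prompt, and refine cycle.

from typing import Callable

def refine_with_ai(
    initial_prompt: str,
    ask_model: Callable[[str], str],        # hypothetical: send a prompt, receive a draft
    critique: Callable[[str], str],         # the student's evaluation: accuracy, bias, clarity
    meets_standard: Callable[[str], bool],  # does the evaluation judge the draft good enough?
    max_rounds: int = 5,
) -> str:
    # Evaluate, re-prompt, and refine until the output meets the standard or rounds run out.
    prompt = initial_prompt
    draft = ask_model(prompt)
    for _ in range(max_rounds):
        notes = critique(draft)
        if meets_standard(notes):
            break
        prompt = prompt + "\n\nRevise the draft above. Reviewer notes: " + notes
        draft = ask_model(prompt)
    return draft

The point of the sketch is that the human supplies the judgment at every pass; the model only produces drafts.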

Moreover, the traditional view that learning progresses linearly through Bloom’s Taxonomy is being upended. In the AI-enhanced classroom, evaluation and creation are no longer the endpoints of learning but are increasingly becoming the starting points. Students must begin by evaluating AI-generated content and then proceed to improve it, a process that requires a deep understanding of context, an awareness of potential biases, and the ability to communicate effectively. This reordering of cognitive priorities is at the heart of the cognitive leap theory, which emphasizes that the future of education lies in teaching students not just to perform tasks but to engage in higher-order thinking at every stage of the learning process.

The implications of this shift are serious. Educators must rethink how they design assignments, moving away from traditional task-based assessments toward activities that challenge students to evaluate and improve upon AI-generated outputs. This requires a new kind of pedagogy, one that is flexible, iterative, and deeply engaged with the possibilities and limitations of AI.

By reimagining the role of higher-order thinking skills and emphasizing the critical evaluation of AI outputs, we can prepare students for a future where cognitive engagement is more important than ever. This is not just about adapting to new technology; it is about transforming the way we think about learning itself. 


Monday, July 29, 2024

AI is an Amateur Savant

Most people who use AI think it is great in general but believe it does not grasp their area of specialization very well. As an applied philosopher, I create intellectual tools to help others think through their problems. I find AI excellent at clarifying and explaining ideas, but it has never generated an original idea worth writing about. I have yet to see reports from others in any discipline that AI has independently produced groundbreaking ideas.

AI can handle large amounts of data and provide coherent, accurate responses across various fields. This ability is comparable to a well-informed amateur who has a broad understanding but lacks deep expertise. AI can recount historical facts, explain scientific principles, and offer legal insights based on data patterns, yet it falls short in deeper, more nuanced analysis.

In my case, AI can assist by summarizing existing theories or offering possible objections or additional arguments. However, it lacks the ability to generate a genuinely novel idea. I use it a lot, and not once has it produced anything like that. This limitation stems from its reliance on pre-existing data and patterns, preventing it from achieving the level of innovation that human professionals bring to their fields. Some believe that this limitation will soon be overcome, but I do not think so. It appears to be an intrinsic limitation, a function of the way AI is trained.

Professionals/experts, whether in philosophy, medicine, or history, possess a depth of understanding developed through extensive education and practical experience. They apply complex methodologies, critical thinking, and ethical considerations that AI cannot replicate. A doctor considers the patient's history and unique implications of treatments, while a professional historian places events within a broader socio-cultural context. AI, despite its capabilities, often misses these subtleties. It is, in some sense, a savant: a fast, amazing, but inexperienced thinker.

The gap between a capable amateur and a professional/expert might seem small, especially from the point of view of the amateur. However, it is huge and is rooted in the depth of expertise, critical thinking, and the ability to judge that professionals possess; it is a function of intellect, experience, and education. This gap is where educators should look to adapt the curriculum.

In education, we should focus on that gap between the amateur and the professional and conceptualize it as the ultimate learning outcome, then build new skill ladders to climb there. Students need to understand and conquer the gap between AI and a professional expert. These meta-AI skills are our true North. AI can support this learning process by providing clear explanations and diverse perspectives, but it cannot replace the nuanced understanding and innovation that human professionals offer.


Monday, July 15, 2024

Effort in Learning: The Good, the Bad, and the AI Advantage

Many educators argue that AI makes learning too easy, suggesting that students need to apply effort to truly learn. This perspective, however, confuses the notion of effort with the process of learning itself. The belief that every kind of effort leads to learning overlooks a significant aspect of cognitive psychology: the nature and impact of cognitive load.

Cognitive load theory, developed by John Sweller, offers a crucial framework for understanding how students learn. It posits that the human brain has a limited capacity for processing information. Sweller distinguished between three types of cognitive load: intrinsic, extraneous, and germane. Intrinsic cognitive load is inherent to the task itself. For instance, solving a complex mathematical problem has a high intrinsic load due to the complexity of the content. Germane cognitive load, on the other hand, refers to the mental resources devoted to processing, construction, and automation of schemas, which are structures that help solve problems within a specific domain. 

The most problematic, however, is extraneous cognitive load. This type of load is not related to the task but to the way information is presented or to the extraneous demands placed on learners. High extraneous cognitive load can distract and stunt learning, making it harder for students to engage meaningfully with the material. For example, a poorly designed textbook that requires constant cross-referencing can add unnecessary cognitive load, detracting from the student's ability to learn. A terrible lecture or busy-work assignments do the same. If you think that every effort by a student is valuable, you are a hazer, not a teacher.

The challenge, therefore, is not to eliminate all effort but to ensure that the effort students exert is directed towards productive ends. In other words, we need to reduce extraneous cognitive load and increase germane cognitive load. The true aim is to leverage AI to enhance germane cognitive load, directly aiding in the acquisition of schemas necessary for solving discipline-specific problems.

Every academic discipline has core problems that students are expected to solve by the end of their programs. The first step is to mercilessly clean the language of learning outcomes from wishy-washy jargon and focus on these fundamental problems. By identifying these top-level problems, educators can better understand the sequences of skills and knowledge students need to acquire.

Once these core problems are identified, it is crucial to examine how professionals in the field solve them. This involves a detailed analysis of the mental schemas that experts use. Schemas are cognitive structures that allow individuals to organize and interpret information. They enable professionals to recognize patterns, make decisions, and solve problems efficiently. For example, a doctor has schemas for diagnosing illnesses based on symptoms and test results, while an engineer has schemas for designing structures that withstand specific stresses. It is also important to understand whether the field is changing and professionals already solve those problems with AI, or will soon be doing so.

AI can play a pivotal role in helping students develop these schemas. These technologies can identify where a student is struggling and provide targeted support, ensuring that cognitive resources are directed towards germane learning activities rather than being wasted on extraneous tasks.

To achieve this, we need to revisit the basic principles of instructional design. While these principles remain fundamentally the same, they require new thinking in light of AI capabilities. Instructional design should focus on reducing extraneous cognitive load by simplifying the learning environment and minimizing distractions. Simultaneously, it should increase germane cognitive load by providing challenging and meaningful tasks that promote the construction of schemas.

Moreover, educators need to recognize where cognitive load is not useful and should focus exclusively on the germane kind. This might mean redesigning courses to incorporate AI tools that can automate routine tasks, provide instant feedback, and offer complex, real-world problems for students to solve. Such an approach ensures that students are engaged in deep, meaningful learning activities rather than busywork.

Ad summam, the integration of AI in education is not about making learning easier in a superficial sense. It is about making learning more effective by ensuring that students' cognitive resources are directed towards activities that genuinely promote understanding and skill acquisition. By focusing on germane cognitive load and leveraging AI to support instructional design, we can create learning environments that foster deep, meaningful learning and prepare students to solve the complex problems of their disciplines. This calls for a rigorous rethinking of educational practices and a commitment to harnessing AI's potential to enhance, rather than hinder, the learning process.


Tuesday, July 9, 2024

AI-Positive Pedagogy: Navigating the Great Disruption

AI has disrupted the educational landscape. This disruption threatens the established sequence of skill development, from simple to mid-range to higher-level skills, by eroding traditional curriculum principles, particularly in the realm of student activities and assessment. As a profession, we face a critical decision: limit AI use or develop an AI-positive pedagogy.

While limiting AI use may seem tempting, it is ultimately unfeasible and fails to prepare students for the AI-infused world they will live in. Attempting to enforce strict limitations on AI use is not only impractical but also fails to acknowledge the potential benefits that AI can bring to education.

The only plausible path forward is to adapt a new pedagogy to harness the power of AI for the benefit of our students. This involves a shift towards authentic, discipline-specific assessments that mirror real-world applications of AI within various fields. By focusing on how AI is used in different disciplines, educators can create assessments that evaluate students' ability to effectively utilize AI tools in relevant contexts.

AI-positive pedagogy emphasizes the cultivation of higher-order thinking skills, such as prompt engineering and discerning thinking. Prompt engineering involves crafting effective queries and instructions for AI systems, while discerning thinking emphasizes the critical evaluation of AI-generated information and the ability to make informed decisions by combining AI insights with human judgment. These meta-AI skills are crucial for students to navigate and thrive in an AI-populated world.

AI-positive pedagogy should prepare students to work effectively in environments where human and artificial intelligence coexist and complement each other. By fostering skills in collaborating with AI systems, understanding the strengths of both human and artificial intelligence, and developing strategies for distributed problem-solving, educators can equip students to succeed in the AI-infused workplace.

The shift towards AI-positive pedagogy is well-rooted in past pedagogy and curriculum theory. Educators have long prioritized conceptual and higher-level skills over mechanical and procedural knowledge. The disruption caused by AI may serve as a catalyst for educators to finally achieve what they have been striving for over the past century.

As we embrace AI-positive pedagogy, it is essential to re-evaluate the assumption that all effort leads to learning. Cognitive Load Theory suggests that learning can be optimized by managing the three types of cognitive load: intrinsic (inherent complexity of the learning material), extraneous (caused by ineffective instructional design), and germane (effort required to process and construct mental schemas). In the context of AI-positive pedagogy, this involves using AI tools to provide appropriate support and scaffolding as learners progress from lower-level to higher-level skills, while minimizing extraneous load and promoting germane load. Not all loss of effort by students is bad. If we are honest, much of learning work is extraneous, busy, or compliance/submission work anyway.

By investigating the limits and structure of leapfrogging - skipping over mid-range skills to move from basic literacies and numeracies to creative, theoretical, and critical thinking - educators can harness the power of AI to accelerate student growth.

To develop a robust AI-positive pedagogy, educators and cognitive psychologists must collaborate to investigate how students interact with and perceive AI tools - alone or under a teacher's guidance. This research should focus on understanding the mental models students develop when engaging with AI, and how these models differ from those associated with other educational tools. By exploring students' cognitive processes, researchers can identify the unique challenges and opportunities presented by AI in the learning environment.

It is also crucial to examine the emotional and motivational factors that influence students' engagement with AI tools. Understanding how students' attitudes, beliefs, and self-efficacy impact their willingness to adopt and effectively use AI in their learning can inform the design of AI-positive pedagogical strategies.

In addition to investigating student cognition and affect, researchers should also explore the social and cultural dimensions of AI use in education. This includes examining how AI tools can be leveraged to promote collaborative learning, foster inclusive learning environments, and bridge educational inequities.

To build a comprehensive AI-positive pedagogy, researchers and educators must also develop and validate practices for integrating AI into various disciplines and educational contexts. This involves creating guidelines for the use of AI in education, as well as establishing professional development programs to support educators in effectively implementing AI-positive pedagogical strategies.

The development of an evidence-based AI-positive pedagogy requires a concerted effort from the educational community. By investing in basic research, collaboration, and innovation, we can harness the potential of AI to transform education and empower students to thrive in an AI-infused world.


Wednesday, June 26, 2024

On AI Shaming

Here is a new thing: AI shaming. It is a practice where individuals accuse others of using artificial intelligence to generate written content, as if such an act were inherently deceitful or somehow sinful. How fascinating, the very premise of it.

This phenomenon reveals itself to be a fairly common logical fallacy. It is a summative dismissive argument, with a dash of ad hominem (ad machinam?) for flavor. One might wonder why the method of creation should overshadow the substance of the content. After all, we don't dismiss the works of authors who switched from quills to typewriters, do we?

There's a certain Foucauldian quality to this practice, if one may be permitted a touch of theoryhead's indulgence. By attempting to draw a sharp line between 'acceptable' and 'unacceptable' means of content creation, some seem to be engaging in a subtle power play. It is almost as if they're trying to sell their unfamiliarity with newer technologies as a virtue. It is a rather clever way of elevating the game at which one excels and putting down the game at which one fails. While an understandable inclination, it is still a folly.

For those of us who have embraced these new tools, such accusations are about as concerning as a light drizzle on a summer day - which is to say, entirely expected and hardly worth mentioning. If anything, it provides a certain amusement to observe the lengths to which some will go to maintain the status quo and their privileged little spot in it.

However, there is a more sobering concern to consider. While thick-skinned people like me might brush off such criticisms with a raised eyebrow, younger, more impressionable ones might internalize this arbitrary stigma. It would be a shame if the next generation felt compelled to hide their technological proficiency out of fear of Luddites' bullying.

As these AI tools inevitably become more sophisticated and ubiquitous, perhaps we might redirect our energy towards more productive ends. Instead of engaging in this curious form of digital finger-pointing, we could focus on the responsible and creative use of these technologies. After all, the ideas expressed within content will always be more intriguing than the means by which they were transcribed.

To those who persist in AI shaming: by all means, knock yourselves out. Your dedication to this cause is admirable, if somewhat perplexing. Just don't be too surprised if the rest of us seem a bit distracted - we'll be busy adapting to the future while you're perfecting your finger-wagging techniques.

P.S. This text, according to QuillBot, is 0% AI-generated; however, I wrote it with Claude :)

How to bullshit about bullshit

Take a look at the paper "ChatGPT is bullshit," by Michael Townsen Hicks, James Humphries, and Joe Slater. Catchy title aside, the paper has a few problems of its own.

1. Focus on terminology over utility:

The paper spends considerable effort arguing that the outputs of large language models (LLMs) like ChatGPT should be classified as "bullshit" rather than "hallucinations." While this may be an interesting philosophical distinction, it arguably does little to advance our understanding of the practical implications and applications of these technologies. The authors could have devoted more attention to analyzing the actual utility and potential use cases of LLMs, which would likely be more valuable to policymakers and the public.

2. Limited practical insights:

By focusing primarily on categorizing the nature of LLM outputs, the paper misses an opportunity to provide meaningful insights into how these technologies can be effectively and responsibly utilized. A more constructive approach might involve examining specific contexts where LLMs prove useful and where they fall short, rather than broadly labeling their output as "bullshit."

3. Potential for misdirection:

While the authors argue that the term "hallucinations" is misleading, replacing it with "bullshit" may not necessarily lead to a clearer understanding of LLM behavior. Both terms are metaphorical and could potentially misguide readers about the true nature and capabilities of these systems. A more technical and nuanced explanation of how LLMs function and their limitations might be more informative.

4. Overlooking nuance:

The paper seems to take a binary approach – either LLMs are concerned with truth (which they argue against) or they are producing "bullshit." This oversimplification may overlook the nuanced ways in which LLMs can be useful for various tasks, even if they don't have an inherent concern for truth in the way humans do.

5. Lack of actionable recommendations:

While the authors critique the use of the term "hallucinations," they don't offer clear, actionable recommendations for how to better communicate about LLMs to policymakers and the public. A more constructive approach would be to propose specific ways to educate stakeholders about the capabilities and limitations of these technologies.

6. Missing the broader context:

By focusing narrowly on the philosophical categorization of LLM outputs, the paper misses an opportunity to discuss the broader implications of these technologies on society, economy, and various industries. A more comprehensive analysis of the impact and potential of LLMs would likely be more valuable to readers.

Monday, June 24, 2024

Can observational learning work with AI?

Can humans learn new problem-solving strategies simply by observing AI at work? Following Bandura's theory of observational learning, it may not be as far-fetched as it at first appears.

When humans watch AI systems solve problems or generate text, they naturally construct mental models of the underlying processes. These models, while factually wrong (AI thinking is very different from ours), can nonetheless prove useful. It is the act of imagining yourself performing the task that can be beneficial.

Consider a person observing an AI system summarise a lengthy academic paper. The human observer cannot directly perceive the AI's internal computations. Instead, the observer likely imagines themselves performing the task, focusing on key sentences, identifying main themes, and connecting key ideas.

This mental model, though inaccurate in representing the AI's actual mechanisms, may still enhance the observer's own summarisation skills. They might, for instance, learn to pay closer attention to introductory and concluding paragraphs, or to look for repeated phrases that signal important concepts.

Observing AI failures can be particularly instructive. When an AI system produces an erroneous or nonsensical output, it often reveals the limitations of its approach. A human observer, reflecting on these errors, might develop a more nuanced understanding of the problem at hand and devise novel strategies to overcome the AI's shortcomings.

For example, watching an AI struggle with a complex logical reasoning task might prompt a human to break the problem down into smaller, more manageable steps. This approach, inspired by the AI's limitations, could prove valuable even in contexts where AI is not involved.

To test this hypothesis rigorously, consider an experiment:

1. Select a diverse set of problem-solving tasks, ranging from creative writing to mathematical reasoning.

2. Divide participants into three groups:

  •  a) An observation group that watches AI systems attempt these tasks, including both successes and failures.
  •  b) A practice group that attempts the tasks themselves without AI involvement.
  •  c) A control group that engages in unrelated activities.

3. After the observation or practice period, test all participants on a new set of similar problems.

4. Compare the performance of the three groups, paying particular attention to novel problem-solving strategies employed by the observation group (see the analysis sketch after this list).

5. Conduct follow-up interviews to explore participants' thought processes and any conscious attempts to apply AI-inspired techniques.
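For step 4, the group comparison could be run as a simple between-groups analysis. The Python sketch below uses simulated placeholder scores and a one-way ANOVA; all numbers are invented for illustration, not real study data.

import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)
n = 40  # hypothetical number of participants per group

# Simulated post-test scores (0-100); the group means are arbitrary placeholders.
observation = rng.normal(loc=72, scale=10, size=n)  # watched AI successes and failures
practice = rng.normal(loc=70, scale=10, size=n)     # attempted the tasks unaided
control = rng.normal(loc=65, scale=10, size=n)      # unrelated activities

f_stat, p_value = f_oneway(observation, practice, control)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# A significant F would warrant pairwise follow-ups, e.g., observation vs. practice.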

Friday, June 21, 2024

Can We Learn Without Struggle in the Age of AI?

I've been pondering a question: What if our traditional understanding of cognitive growth is too narrow? We've long held onto the idea that real learning comes from struggle, from pushing against our limits, from grappling with challenges just beyond our current abilities. But what if that's not the whole story?

I'm starting to wonder if growth - real, meaningful cognitive development - might not always need the strong challenges we've assumed were necessary. And this thought has become particularly relevant as we enter the new world of AI-assisted learning.

Many of our theories about learning and development are rooted in the idea of conflict or tension. Vygotsky's Zone of Proximal Development, for instance, emphasizes the space between what a learner can do without help and what they can do with guidance. Piaget talked about cognitive disequilibrium as a driver of development. These Hegelian/Marxist theories have shaped how we think about education for decades.

But here's what I'm pondering: What if growth can happen more... gently? What if it can occur through exposure, through interaction, through a kind of cognitive osmosis that doesn't necessarily involve struggle or challenge? And importantly, what if this gentler form of learning is still deeply social and relational?

There's a lot of hand-wringing in educational circles about AI tools like ChatGPT. The worry is that by providing ready answers, these tools will short-circuit the learning process. Students won't have to struggle, so they won't really learn. I have definitely been expressing these concerns in my previous blogs. 

But I'm not so sure anymore. Let me float a hypothesis: What if AI-assisted learning doesn't dampen growth, but instead provides a different kind of cognitive experience that can still lead to meaningful development? And what if this experience, rather than being isolating, actually opens up new avenues for social learning and collaboration?

Here's an analogy that's been helpful for me in thinking about this. Remember when GPS first became widely available? There were concerns that people would never learn to navigate cities anymore, that we'd lose our sense of spatial awareness. And yet, most of us who use GPS regularly still develop a pretty good understanding of the cities we live in and visit. We might learn differently - perhaps more slowly, or with less detail - but we do learn, without all the frustration of trying to read a map while driving or to memorize multiple turns (left, second right, soft left again...). City driving is probably safer, and we did not get more stupid.

The GPS doesn't prevent us from learning; it provides a different context for learning. We're not struggling with paper maps, but we're still processing spatial information, making connections, building mental models of our environment.

Could AI-assisted learning work in a similar way? Sure, students might get quick answers or produce a quick text with an AI without much effort, which feels somehow wrong (the nature of these feelings deserves separate consideration). But that doesn't mean they stop thinking or interacting. They will start wondering how to get better answers and produce better outputs. They will begin to notice patterns in the AI's responses. They will start to question or critique what the AI produces. That's what the human brain has evolved to do.

Moreover, this process doesn't happen in isolation. Students will discuss their AI interactions with peers, compare outputs, collaboratively explore how to improve results. It becomes a form of social play - experimenting, sharing discoveries, building on each other's ideas. The AI becomes a tool for social learning, not a replacement for it.

In other words, the presence of AI doesn't eliminate cognitive work or social interaction - it might just shift their nature. And who's to say that this new form of cognitive and social engagement is any less valuable than the old? 

Now, I'm not saying we should throw out everything we know about learning and development. Challenge and struggle certainly have their place. But I am suggesting that maybe we need to broaden our understanding of how growth happens, recognizing that it can occur through gentler, more playful forms of social interaction and exploration.

Perhaps there's a whole spectrum of cognitive experiences that can lead to growth, ranging from intense challenge to gentle, collaborative exposure. Maybe AI-assisted learning falls somewhere on this spectrum - not replacing traditional forms of learning, but adding new dimensions to how we can develop cognitively and socially.

This is all hypothetical, of course. We're in new territory with AI, and it will take time and research to understand its full impact on learning and development. But I think it's worth considering that our assumptions about how growth happens might need updating, particularly in how we view the social aspects of learning.

What do you think? Is it possible that we've overemphasized challenge in our understanding of learning? Could AI-assisted learning open up new pathways for cognitive development and social learning that we haven't fully appreciated yet? I am just back from a conference, where we had many early childhood educators. In that tribe, there is a lot more talk about play than about forcing growth. Maybe that's why I am having these second thoughts about growth. 

I don't have definitive answers, but I think these are questions worth exploring as we navigate this new landscape of learning. After all, if there's one thing we know for sure about human cognition, it's that it's endlessly adaptable and intrinsically social. Perhaps it's time for our theories about learning to adapt as well, embracing a more collaborative, playful vision of growth in the age of AI.

Thursday, June 6, 2024

Using AI without losing control

As AI becomes increasingly capable of performing tasks that were once the exclusive domain of humans, it no longer makes sense to spend a lot of time teaching skills that AI can perform better and faster. Just as we largely replaced the teaching of manual drafting with computer-aided design (CAD), we must now reassess the curriculum and remove skills that have been rendered obsolete by AI.

The way we define curriculum reflects our understanding of what it means to be human, a definition that has evolved significantly over the centuries. As machines have become more advanced, skills previously valued and developed by humans have gradually been entrusted to machines.

However, the logic behind this shift is also changing. It is no longer just a matter of what machines can do better, as they seem to be excelling at almost everything. Instead, it is about what we, as humans, choose to retain, enjoy, and feel compelled to do. It is less about competition with machines and more about the arrangement of power and authority. To maintain our human authority, we must continue to perform certain tasks.

One of the most important of these tasks is the ability to set goals and make value judgments about what should or should not be done. This is a complex skill that requires a comprehensive understanding of the world, both in its physical and social aspects, as well as the wisdom to make decisions when faced with competing values and the inherent uncertainty of the future. As AI increasingly mediates our interactions, the ability to determine the goals and direction of these interactions becomes even more crucial.

Equally important is the ability to work with AI to achieve our objectives. This process, sometimes referred to as "prompt engineering," involves iterative interaction and refinement to obtain the most accurate, appropriate, and helpful outputs from AI. Beyond technical skills, this requires robust critical thinking to assess the truthfulness and value of AI-generated content. In essence, it is the skill of editing, not merely for style but for substance, constantly asking: Is this useful? Does it improve upon what already exists? Does it positively contribute to the overall human knowledge and well-being?

These core competencies should shape the curriculum of the future: goal-setting, value judgment, critical discernment, and the ability to collaborate with AI to achieve human objectives. The specifics will evolve as AI capabilities advance, and there will be variations across disciplines.

However, the fundamental challenge remains the same: in a world increasingly shaped by artificial intelligence, we must rededicate ourselves to cultivating human intelligence in its fullest and most distinctively human expressions. Only then can we ensure that the tremendous power of AI serves to elevate humanity rather than diminish it.

Monday, April 22, 2024

The Disruptive Potential of AI: Lessons from Clayton Christensen's Theory

As AI continues to make inroads into various industries, it is easy to dismiss its current shortcomings and remain complacent. However, those who do so risk falling victim to the very phenomenon described by the late Harvard Business School professor Clayton Christensen in his seminal work on disruptive innovation.

Christensen's theory posits that disruptive technologies often start at the bottom of the market, offering inferior performance compared to incumbent solutions. However, these technologies are typically cheaper and more accessible, allowing them to gain a foothold among less demanding customers. Over time, as the technology improves, it begins to meet the needs of more sophisticated users, eventually displacing the incumbent players entirely.

The parallels with AI are striking. Today, we may scoff at awkward AI-generated movies featuring characters with anatomical oddities or primitive music engines churning out cliched tunes. However, it would be foolish to assume that these technologies will not improve. Just as the early smartphones were no match for desktop computers, the AI of today is merely a stepping stone to more advanced systems that will rival and surpass human capabilities in various domains.

The rapid pace of investment in AI only serves to underscore this point. With billions of dollars pouring into research and development, the march of progress is inexorable. While the exact timeline remains uncertain, it is clear that AI will continue to evolve at a brisk pace, transforming industries and reshaping the nature of work itself.

In light of this reality, policymakers and leaders in government and philanthropy would be wise to start planning for a future in which the skills demanded by the job market are in a constant state of flux. Rather than clinging to the status quo, we must embrace the disruptive potential of AI and invest in education and training programs that will equip workers with the adaptability and resilience needed to thrive in an era of rapid technological change.

To ignore the lessons of Clayton Christensen's theory would be to court disaster. The question is not whether AI will disrupt our world, but rather how we will rise to meet the challenges and opportunities it presents. By proactively preparing for this future, we can ensure that the benefits of AI are widely shared and that no one is left behind in the great transformations to come. 

Sunday, April 21, 2024

The Rise of ReAIding: "I did not read it, but I understand it"

With the advent of generative AI, we witness the emergence of a special kind of writing that I call "wraiting" in my book. However, I now see that it will cause a radical shift in how we engage with all forms of text, be it literature, non-fiction, or scholarly works. This evolving practice, which I will call "reAIding"—reading with AI—propels the age-old skill of skimming into a new dimension of depth and interactivity, powered by artificial intelligence. Imagine that instead of reading about Socrates in Plato, you could talk to Socrates directly.

Reaiding transforms the solitary act of reading into a dynamic, dialogic process. Just reading AI-generated CliffsNotes is not at all what I mean. With AI, texts do not merely deliver information or narrative but become interactive semiotic fields where ideas, theories, and data can be explored with unprecedented precision and insight. This method extends far beyond literary texts to non-fiction and scholarly articles, covering both theoretical and empirical research. Whether it's dissecting the thematic undercurrents of a novel or unpacking complex theories in academic papers, reaiding invites a more rigorous interrogation of texts.

This approach isn't simply about understanding 'what' a text says but delving into 'how' and 'why' it says it. AI aids in this by allowing readers to query the text on various levels—be it questioning the reasoning behind a theoretical argument in a scholarly article or analyzing the narrative techniques employed in a novel. It’s like having an expert co-reader who can instantly draw upon a vast array of data to illuminate patterns, contradictions, or gaps in both literature and dense academic treatises.
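As a purely illustrative sketch, a reaiding session might look something like the following. The `ask_model` callable is again a hypothetical stand-in for any conversational AI, and the sample questions are only examples of the layered queries a reader might pose.

```python
# A toy sketch of a "reaiding" session: the reader interrogates a text through an AI.
# `ask_model` is a hypothetical callable (prompt -> text).

def reaid(ask_model, text, questions):
    """Interrogate a text through AI: not just 'what' it says, but 'how' and 'why'."""
    answers = {}
    for question in questions:
        prompt = (
            "Text under discussion:\n"
            f"{text}\n\n"
            f"Reader's question: {question}\n"
            "Answer with reference to specific passages, and note any uncertainty."
        )
        answers[question] = ask_model(prompt)
    return answers

# Examples of the layered questions a reaider might pose to an article or a novel:
sample_questions = [
    "What assumptions does the central argument rest on?",
    "Where is the evidence weakest, and why?",
    "What narrative or rhetorical techniques shape the reader's response?",
]
```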

Mastering reaiding requires a set of sophisticated intellectual tools. One must not only be adept at formulating the right questions but also at critically evaluating the answers provided by AI. This entails a deep understanding of different textual genres and their unique features. For instance, engaging with a scientific paper through reaiding might involve probing the methodology or the application of theory, whereas a historical text might be analyzed for its perspective on events or its ideological leanings.

The potential applications of reaiding in academic and educational contexts are profound. Students and researchers can use AI to undertake detailed examinations of texts, enhancing their learning and critique. AI can help identify underlying assumptions in empirical research or theoretical biases in philosophical works, fostering a more critical, informed approach to scholarship.

Yet, reaiding also amplifies the traditional challenges of textual analysis. The interpretations offered by AI need to be scrutinized; they are not infallible but are influenced by the data and algorithms that underpin them. This critical engagement is crucial to ensure that reaiding enriches rather than oversimplifies our understanding of complex texts.

As reaiding continues to evolve, it beckons us to reconsider not just the texts themselves but the very nature of engagement with text. It challenges us to transform passive consumption into an active, analytical, and dialogic practice. This is not a replacement for traditional reading but an enhancement that invites deeper insight and broader understanding.

To those intrigued by the possibilities of reaiding, I extend an invitation to explore this new form of textual interaction through a bot I built around the selected works of Anton Chekhov. Imagine what it can do if it becomes ten times better. And it will, soon.

Saturday, April 13, 2024

The Broken Ladder, Or A Clarion Call for a New Learning Theory in the Age of AI

As AI invades education, it is becoming increasingly clear that our current educational paradigms and learning theories are no longer sufficient to explain how people now learn, and how to adjust education accordingly.

Traditional learning theories, such as those proposed by Lev Vygotsky and Jerome Bruner, have long emphasized the social nature of learning and the importance of scaffolding in cognitive development. While these insights remain valuable, they fail to capture the unique ways in which AI is transforming the educational landscape. Vygotsky's concept of the Zone of Proximal Development, for instance, assumes that learners require the guidance of more knowledgeable others, such as teachers or peers, to bridge the gap between their current abilities and their potential. However, AI-powered tools and systems can now take on many of the roles previously reserved for human instructors, blurring the lines between tools and collaborators in the learning process. Learning theorists also assumed that the instructor chooses which tools to bring into instruction and which to leave out. AI, however, imposes itself on instruction whether we want it or not.

Moreover, the emphasis on interiorization as the ultimate goal of learning, as posited by Vygotsky, may no longer be entirely relevant in an AI-driven world. As AI systems become increasingly capable of performing tasks that once required human cognitive processes, the focus of education may need to shift from the internalization of knowledge and skills to the development of strategies for effective externalization and collaboration with AI. In other words, the aim of education shifts from an individual learner to a symbiosis of a human and a machine.  

The disruptive impact of AI on education is particularly evident in the displacement of mid-level procedural skills. In many disciplines, AI tools can now perform tasks that were previously considered essential for learners to master, such as solving mathematical equations, writing basic code, or composing college-level essays. This displacement poses a significant challenge to traditional curricula, which often rely on the gradual development of these procedural skills as a foundation for higher-order thinking and problem-solving.

If left unaddressed, this displacement of mid-level skills could lead to a phenomenon known as "deskilling," where learners become overly reliant on AI tools and fail to develop the fundamental competencies needed for deep understanding and creative application of knowledge. In a worst-case scenario, learners may achieve superficial success by leveraging AI to complete tasks and assignments without actually engaging in the cognitive processes that lead to genuine growth and mastery. They may never arrive at higher-order skills like creativity, originality, critical thinking, and discernment.

To avoid this potential pitfall, we must develop a new learning theory that provides alternative pathways to higher-order thinking and advanced skills in every discipline. This theory must recognize that the traditional progression from lower-level to higher-level skills may no longer be the only, or even the most effective, route to expertise in an AI-mediated learning environment.

Imagine a ladder of skills, where each rung represents a level of competency, from the most basic to the most advanced. Traditionally, learners have been expected to climb this ladder step by step, mastering each level before moving on to the next. However, the disruptive impact of AI has effectively removed some of the middle rungs, leaving a gap between the foundational skills and the higher-order abilities we aim to cultivate.

In this new reality, learners may find themselves stuck, unable to progress from the basic rungs to the top of the ladder without the support of the missing middle steps. Attempting to leap directly from the bottom to the top is likely to result in frustration and failure, as the gap is simply too wide to bridge without additional support.

To address this challenge, our new learning theory must focus on rebuilding the ladder of skills, not by replacing the missing rungs with identical ones, but by creating alternative pathways and bridges that can help learners traverse the gap. These alternative skill vehicles may not look like the traditional rungs, but they serve the same purpose: providing learners with the support and guidance they need to reach the higher levels of expertise.

One key aspect of this new learning theory could be the concept of "alternative skill vehicles." Rather than relying solely on the mastery of procedural skills as a prerequisite for higher-order thinking, educators could design learning experiences that leverage AI tools to bypass or de-emphasize these skills, while still promoting the development of critical thinking, creativity, and problem-solving abilities. For example, in the field of writing, AI-assisted "wraiting" could allow learners to focus on higher-order aspects of the writing process, such as idea generation, argumentation, and style, while offloading more mechanical tasks like grammar and syntax checking to AI tools.
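As a hypothetical illustration of such a vehicle, the division of labor in AI-assisted "wraiting" could be built directly into how the tool is used: the learner owns the thesis and the arguments, and the AI is confined to surface mechanics. The `ask_model` callable below is, as before, an assumed stand-in for any conversational AI; the prompt wording is only a sketch.

```python
# A sketch of one "alternative skill vehicle": the learner supplies the higher-order
# work (thesis, arguments), and the AI handles only grammar and syntax.
# `ask_model` is a hypothetical callable (prompt -> text).

def wraite(ask_model, thesis, argument_points):
    """The learner owns ideas and structure; the AI is asked only to smooth the prose."""
    outline = "\n".join(f"{i + 1}. {point}" for i, point in enumerate(argument_points))
    request = (
        f"Thesis (the student's own; do not change its meaning): {thesis}\n"
        "Argument outline (the student's own; keep the order and the claims):\n"
        f"{outline}\n\n"
        "Render this as connected prose. Correct grammar and syntax only; "
        "do not add new arguments or evidence."
    )
    return ask_model(request)
```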

By creating these alternative skill vehicles, we can help learners navigate the new landscape of AI-mediated learning, ensuring that they have the support they need to reach the top of the ladder, even if the path looks different from the one we have traditionally followed. 

Another crucial component of a new learning theory for the age of AI would be the cultivation of "blended intelligence." This concept recognizes that the future of learning and work will involve the seamless integration of human and machine capabilities, and that learners must develop the skills and strategies needed to effectively collaborate with AI systems. Rather than viewing AI as a threat to human intelligence, a blended intelligence approach seeks to harness the complementary strengths of humans and machines, creating a symbiotic relationship that enhances the potential of both.

Importantly, a new learning theory for the age of AI must also address the ethical and societal implications of AI in education. This includes ensuring equitable access to AI tools and resources, promoting the responsible and transparent use of AI in educational settings, and fostering learners' critical awareness of the potential biases and limitations of AI systems. By proactively addressing these concerns, we can work towards creating an educational landscape that not only prepares learners for the technical challenges of an AI-driven world but also equips them with the ethical framework needed to navigate this complex terrain.

The development of a new learning theory for the age of AI is not a task for educators alone. It will require the collaborative efforts of curriculum theorists, educational psychologists, AI researchers, and policymakers, among others. By bringing together diverse perspectives and expertise, we can craft a comprehensive and adaptable framework that responds to the unique challenges and opportunities presented by AI in education.

The imperative for this new learning theory is clear. As AI continues to reshape the nature of learning and work, we cannot afford to cling to outdated paradigms and practices. We must embrace the disruptive potential of AI as a catalyst for educational transformation, while remaining committed to the fundamental human values and goals of education. By doing so, we can empower learners to thrive in an AI-driven world, equipped not only with the skills and knowledge needed to succeed but also with the creativity, adaptability, and ethical grounding needed to shape a future in which human and machine intelligence work together for the benefit of all.

Sunday, March 17, 2024

The Honest Conversation on AI in Education We're Not Having

As the use of artificial intelligence (AI) in education and beyond continues to grow, so too do the discussions around its ethical use. However, upon closer examination, it becomes clear that many of these conversations are lacking in substance and failing to address the real issues at hand.

Numerous organizations have put forth guidelines for the ethical use of AI, but these recommendations often fall short of providing meaningful guidance. Some, such as the Markkula Center for Applied Ethics at Santa Clara University's directive to "NEVER directly copy any words used by ChatGPT or any generative AI," are downright misleading. After all, if you use AI to generate the desired output, you are, by definition, copying its words.

Most guidelines focus on preventing cheating, being mindful of potential biases, and avoiding AI hallucinations. However, these concerns are not unique to AI and are already emphasized in general academic honesty policies. The Internet has long been full of biased and misleading information, and media literacy has been a must for decades. So why the need for new, AI-specific guidelines?

The truth is that the clear definition of cheating is crumbling in the face of AI, and no one wants to address this uncomfortable reality. Clearly, the lazy-prompt practice is wrong: copying instructions from a syllabus and submitting the AI output as one's own work. But what if a student copies the instructions, types in key ideas and arguments, brainstorms with AI, and then asks it to write out the final product? Is this still cheating? What if the idea is actually brilliant? The answer depends on the skill being assessed. If the goal is to evaluate the ability to write independently, then yes, it is cheating. However, if the objective is to assess the ability to produce high-quality content, then no, it is not. Let's not pretend things are clear-cut; they are not.

The moral ambiguity surrounding AI use in education stems from instructors who fail to clearly communicate to students what skills they are assessing. Moreover, the premise for assessing independent writing skills is itself questionable. In an AI-driven future, who will need this skill? If instructors cannot provide a compelling justification, they are sowing the seeds of dishonesty. One cannot demand ethics from others while turning a blind eye to one's own ethical role. It is a two-way street in the educational relationship, as it is in any other.

Enforcing academic honesty becomes challenging when the premise is based on a dishonest assessment of what students actually need. Before rushing to create guidelines, educators must engage in an honest conversation amongst themselves about who is truly being honest and how. 

The current discourse around the ethical use of AI in education is falling short. Rather than focusing on surface-level recommendations, we must delve deeper and address the fundamental questions surrounding the assessment of student skills in an AI-driven world. Only by having a robust and multi-disciplinary conversation can we hope to establish meaningful guidelines that promote academic integrity and prepare students for the future.

Tuesday, March 5, 2024

Let's hallucinate, take the kids along

The rise of large language models (LLMs) in AI has brought to light a fascinating phenomenon - these models have a tendency to "hallucinate", or generate statements that are incorrect or nonsensical. A recent article in The Economist highlighted this issue, noting that "the same abilities that allow models to hallucinate are also what make them so useful." Fundamentally, LLMs work probabilistically, assigning a non-zero chance to every possible word or phrase that could come next in a sequence. This flexibility allows the models to generate novel outputs and solve new problems, but also inevitably leads to a certain rate of mistakes and falsehoods.
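A toy example, with made-up words and scores rather than a real model, can show how this works: every candidate continuation keeps a non-zero probability, so the sampler usually picks the likely word but occasionally picks an unlikely one, which is where both novelty and error come from.

```python
import math
import random

def softmax(scores, temperature=1.0):
    """Turn raw scores into probabilities; higher temperature flattens the distribution."""
    exps = [math.exp(s / temperature) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Made-up vocabulary and scores standing in for a model's next-word preferences.
candidates = ["Paris", "Lyon", "Berlin", "the Moon"]
scores = [5.0, 2.0, 1.0, -2.0]

probabilities = softmax(scores, temperature=1.2)
next_word = random.choices(candidates, weights=probabilities, k=1)[0]

print(list(zip(candidates, [round(p, 3) for p in probabilities])))
print("sampled continuation:", next_word)  # usually "Paris", occasionally something odd
```

Lowering the temperature makes the output more predictable but also more conservative; raising it increases both the novelty and the rate of mistakes.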

Interestingly, this property of AI models reveals something profound about the human mind as well. Our remarkable ability to imagine, create and solve problems is inextricably linked to our capacity for error. Just like LLMs, human thinking is fundamentally generative and probabilistic - we are constantly making predictions and filling in gaps based on prior knowledge and contextual cues. And in doing so, we inevitably make mistakes, jumping to conclusions and seeing patterns where none exist. In a sense, "hallucination" is a built-in feature of human cognition, not a bug.

This insight has important implications for how we approach education and learning. Too often, our educational systems are overly focused on eliminating errors and inculcating "correct" answers. While accuracy is certainly important, an excessive emphasis on being right all the time can stifle creativity and limit our ability to generate novel ideas and solutions. To truly tap into the power of the human mind, we need to create space for productive mistakes and flights of imagination.

So perhaps we should spend less time trying to prevent students from ever being wrong, and more time teaching them how to recover from errors, distinguish fact from fantasy, and harness their imaginative abilities in positive ways. By embracing a bit of beneficial "hallucination", we may actually enhance our ability to discover truth and expand the boundaries of human knowledge. The key is striking the right balance - letting our minds roam free, while also exercising our critical faculties to rein in our fantasies when needed. In this way, we can learn from the foibles of AI to better understand and cultivate the powers of our own marvelous minds.
