
Saturday, December 7, 2024

The Curriculum Illusion: How AI Exposes Long-Standing Educational Flaws

Artificial intelligence is often blamed for disrupting education, but it has created few new problems. Instead, it exposes existing flaws, bringing them into stark relief. Among these is the arbitrary nature of curriculum design, an issue that has long been hidden behind tradition and consensus. The sequences and structures of formal education are not based on objective logic or evidence but on habit and convenience. AI did not cause this; it is simply making these issues more visible.

Curriculum theory has never provided a robust framework for sequencing knowledge. Beyond the essentials of literacy and numeracy, where developmental progression is more or less clear, the rationale for curricular order becomes murky. Why are algebra and geometry taught in a particular order? Why is more algebra taught than statistics? Why are some historical periods prioritized over others? The answers lie in tradition and precedent rather than in any coherent theoretical justification. The assumptions about foundational skills, so central to curriculum logic, do not extend well beyond the basics. For advanced skills like critical, creative, or discerning thinking, the idea of prerequisites becomes less justified. Mid-range procedural skills like writing mechanics or computational fluency are frequently used as gatekeepers, though their role in fostering higher-order thinking is often overstated or misunderstood.

For example, in middle school, students are often subjected to a torrent of tasks that serve little developmental purpose. Much of what students do in these years amounts to busywork, designed more to keep them occupied and compliant than to foster meaningful learning. The situation is no better in higher education. College and graduate programs are often constructed around professional or disciplinary standards that are themselves arbitrary, built on consensus rather than evidence. These norms dictate course sequences and learning objectives but rarely align with the actual developmental or professional needs of students. The result is a system full of redundancies and inefficiencies, where tasks and assignments exist more to justify the structure than to serve the learner.

Education as a profession bears much of the responsibility for this state of affairs. Despite its long history, it lacks a disciplined, well-founded approach to curriculum design. Instead, education relies on an uneasy mix of tradition, politics, and institutional priorities. Curriculum committees and accrediting bodies often default to consensus-driven decisions, perpetuating outdated practices rather than challenging them. The absence of a rigorous theoretical framework for curriculum design leaves the field vulnerable to inertia and inefficiency.

AI did not create this problem, but it is illuminating it in uncomfortable ways. The displacement of certain procedural mid-range skills shows how poorly structured many learning sequences are and how little coherence exists between tasks and their intended outcomes. Yet, while AI can diagnose these flaws, it cannot solve them. The recommendations it offers depend on the data and assumptions it is given; without a strong theoretical foundation behind those assumptions, AI can only expose the problem, not resolve it.

What AI provides is an opportunity, not a solution. It forces educators and policymakers to confront the arbitrary nature of curriculum design and to rethink the assumptions that underpin it. Massive curricular revision is urgently needed, not only to eliminate inefficiencies but also to realign education with meaningful developmental goals. This will require abandoning tasks that lack purpose, shifting focus from intermediary to higher-order skills, and designing learning experiences to reflect that shift. It will also mean questioning the professional and disciplinary standards that dominate higher education and asking whether they truly serve learners or simply perpetuate tradition.

AI is revealing what has long been true: education has been operating on shaky foundations. The challenge now is to use this visibility to build something better, to replace the old traditions and arbitrary standards with a system that is logical, evidence-based, and focused on learning. The flaws were always there. AI is just making them harder to ignore.



Monday, September 23, 2024

Cognitive Offloading: Learning More by Doing Less

In the AI-rich environment, educators and learners alike are grappling with a seeming paradox: how can we enhance cognitive growth by doing less? The answer lies in the concept of cognitive offloading, a phenomenon that is gaining increasing attention in cognitive science and educational circles.

Cognitive offloading, as defined by Risko and Gilbert (2016) in their seminal paper "Cognitive Offloading," is "the use of physical action to alter the information processing requirements of a task so as to reduce cognitive demand." In other words, it is about leveraging external tools and resources to ease the mental burden of cognitive tasks.

Some educators mistakenly believe that any cognitive effort is beneficial for growth and development. However, this perspective overlooks the crucial role of cognitive offloading in effective learning. As Risko and Gilbert point out, "Offloading cognition helps us to overcome such capacity limitations, minimize computational effort, and achieve cognitive feats that would not otherwise be possible."

The ability to effectively offload cognitive tasks has always been important for human cognition. Throughout history, we've developed tools and strategies to extend our mental capabilities, from simple note-taking to complex computational devices. However, the advent of AI has made this skill more crucial than ever before.

With AI, we are not just offloading simple calculations or memory tasks; we are potentially shifting complex analytical and creative processes to these powerful tools. This new landscape requires a sophisticated understanding of AI capabilities and limitations. More importantly, it demands the ability to strategically split tasks into elements that can be offloaded to AI and those that require human cognition.

This skill - the ability to effectively partition cognitive tasks between human and AI - is becoming a key challenge for contemporary pedagogy. It is not just about using AI as a tool, but about understanding how to integrate AI into our cognitive processes in a way that enhances rather than replaces human thinking.

As Risko and Gilbert note, "the propensity to offload cognition is influenced by the internal cognitive demands that would otherwise be necessary." In the context of AI, this means learners need to develop a nuanced understanding of when AI can reduce cognitive load in beneficial ways, and when human cognition is irreplaceable.

For educators, this presents both a challenge and an opportunity. The challenge lies in teaching students not just how to use AI tools, but how to think about using them. This involves developing metacognitive skills that allow students to analyze tasks, assess AI capabilities, and make strategic decisions about cognitive offloading.

The opportunity, however, is immense. By embracing cognitive offloading and teaching students how to effectively leverage AI, we can potentially unlock new levels of human cognitive performance. We are not just making learning easier; we are expanding the boundaries of what is learnable.

It is crucial to recognize the value of cognitive offloading and develop sophisticated strategies for its use. The paradox of doing less to learn more is not just a quirk of our technological age; it is a key to unlocking human potential in a world of ever-increasing complexity. The true measure of intelligence in the AI era may well be the ability to know when to think for ourselves, and when to let AI do the thinking for us. 

Tuesday, September 17, 2024

Why Parallel Integration Is the Sensible Strategy for AI Adoption in the Workplace

Artificial intelligence promises to revolutionize the way we work, offering efficiency gains and new capabilities. Yet, adopting AI is not without its challenges. One prudent approach is to integrate AI into existing workflows in parallel with human processes. This strategy minimizes risk, builds confidence, and allows organizations to understand where AI excels and where it stumbles before fully committing. I have described the problem of AI output validation before; it is a serious impediment to AI integration. Here is how to solve it.

Consider a professor grading student essays. Traditionally, this is a manual task that relies on the educator's expertise. Introducing AI into this process does not mean handing over the red pen entirely. Instead, the professor continues grading as usual but also runs the essays through an AI system. Comparing results highlights discrepancies and agreements, offering insights into the AI's reliability. Over time, the professor may find that the AI is adept at spotting grammatical errors but less so at evaluating nuanced arguments.

In human resources, screening job applications is a time-consuming task. An HR professional might continue their usual screening while also employing an AI tool to assess the same applications. This dual approach ensures that no suitable candidate is overlooked due to an AI's potential bias or error. It also helps the HR team understand how the AI makes decisions, which is crucial for transparency and fairness.

Accountants auditing receipts can apply the same method. They perform their standard checks while an AI system does the same in the background. Any discrepancies can be investigated, and patterns emerge over time about where the AI is most and least effective.
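The comparison step in all three examples is the same: run both processes, pair up the results, and flag disagreements for investigation. A minimal sketch of how that reconciliation might be logged is below; the record fields and pass/fail labels are illustrative assumptions, not a prescription for any particular grading or screening tool.

```python
from dataclasses import dataclass

@dataclass
class ParallelReview:
    """One item assessed independently by a human and by an AI system."""
    item_id: str
    human_result: str
    ai_result: str

def find_discrepancies(reviews):
    """Return the items where the human and AI judgments disagree.

    These are the cases worth investigating: they reveal either an AI
    weakness or, sometimes, a human error caught by the machine.
    """
    return [r for r in reviews if r.human_result != r.ai_result]

# Hypothetical parallel run over three graded essays
reviews = [
    ParallelReview("essay-01", "pass", "pass"),
    ParallelReview("essay-02", "pass", "fail"),  # disagreement -> investigate
    ParallelReview("essay-03", "fail", "fail"),
]

for r in find_discrepancies(reviews):
    print(f"{r.item_id}: human={r.human_result}, ai={r.ai_result}")
```

Tracking the discrepancy rate over time gives the organization the evidence it needs to decide when, and for which kinds of items, the AI can be trusted with a larger role.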

This strategy aligns with the concept of "double-loop learning" from organizational theory, introduced by Chris Argyris. Double-loop learning involves not just correcting errors but examining and adjusting the underlying processes that lead to those errors. By running human and AI processes in parallel, organizations engage in a form of double-loop learning—continually refining both human and AI methods. Note, it is not only about catching and understanding AI errors; the parallel process will also find human errors through the use of AI. The overall error level will decrease. 

Yes, running parallel processes takes some extra time and resources. However, this investment is modest compared to the potential costs of errors, compliance issues, or damaged reputation from an AI mishap. People need to trust the technology they use, and building such trust takes time.

The medical field offers a pertinent analogy. Doctors do not immediately rely on AI diagnoses without validation. They might consult AI as a second opinion, especially in complex cases. This practice enhances diagnostic accuracy while maintaining professional responsibility. Similarly, in business processes, AI can serve as a valuable second set of eyes. 

As confidence in the AI system grows, organizations can adjust the role of human workers. Humans might shift from doing the task to verifying AI results, focusing their expertise where it's most needed. This gradual transition helps maintain quality and trust, both internally and with clients or stakeholders.

In short, parallel integration of AI into work processes is a sensible path that balances innovation with caution. It allows organizations to harness the benefits of AI while managing risks effectively. By building confidence through experience and evidence, businesses can make informed decisions about when and how to rely more heavily on AI.



Saturday, September 7, 2024

AI in Education Research: Are We Asking the Right Questions?

A recent preprint titled "Generative AI Can Harm Learning" has attracted significant attention in education and technology circles. The study, conducted by researchers from the University of Pennsylvania, examines the impact of GPT-4 based AI tutors on high school students' math performance. While the research is well-designed and executed, its premise and conclusions deserve closer scrutiny.

The study finds that students who had access to a standard GPT-4 interface (GPT Base) performed significantly better on practice problems, but when that access was removed, they actually performed worse on exams compared to students who never had AI assistance. Interestingly, students who used a specially designed AI tutor with learning safeguards (GPT Tutor) performed similarly to the control group on exams. While these results are intriguing, we need to take a step back and consider the broader implications.

The researchers should be commended for tackling an important topic. As AI becomes more prevalent in education, understanding its effects on learning is crucial. The study's methodology appears sound, with a good sample size and appropriate controls. However, the conclusions drawn from the results may be somewhat misleading.

Consider an analogy: Imagine a study that taught one group of students to use calculators for arithmetic, while another group learned traditional pencil-and-paper methods. If you then tested both groups without calculators, of course the calculator-trained group would likely perform worse. But does this mean calculators "harm learning"? Or does it simply mean we are testing the wrong skills?

The real question we should be asking is: Are we preparing students for a world without AI assistance, or a world where AI is ubiquitous? Just as we do not expect most adults to perform complex calculations without digital aids, we may need to reconsider what math skills are truly essential in an AI-augmented world.

The study's focus on performance in traditional, unassisted exams may be missing the point. What would be far more interesting is an examination of how AI tutoring affects higher-level math reasoning, problem-solving strategies, or conceptual understanding. These skills are likely to remain relevant even in a world where AI can handle routine calculations and problem-solving.

Moreover, the study's title, "Generative AI Can Harm Learning," may be overstating the case. What the study really shows is that reliance on standard AI interfaces without developing underlying skills can lead to poor performance when that AI is unavailable. However, it also demonstrates that carefully designed AI tutoring systems can potentially mitigate these negative effects. This nuanced finding highlights the importance of thoughtful AI integration in educational settings.

While this study provides valuable data and raises important questions, we should be cautious about interpreting its results too broadly. Instead of seeing AI as a potential harm to learning, we might instead ask how we can best integrate AI tools into education to enhance deeper understanding and problem-solving skills. The goal should be to prepare students for a future where AI is a ubiquitous tool, not to protect them from it.

As we continue to explore the intersection of AI and education, studies like this one are crucial. However, we must ensure that our research questions and methodologies evolve along with the technology landscape. Only then can we truly understand how to harness AI's potential to enhance, rather than hinder, learning.


Friday, August 23, 2024

Filling Voids, Not Replacing Human Experts

The debate over artificial intelligence replacing human experts often centers on a binary question: Can AI do a better job than a human? This framing is understandable but overly simplistic. The reality is that in many contexts, the competition is not between AI and people—it is between AI and nothing at all. When viewed through this lens, the value of AI becomes clearer. It is not about pitting machines against human expertise; it is about addressing the voids left by a lack of available service.

Consider healthcare, particularly in underserved areas. It is a truism that a qualified doctor’s advice is better than anything an AI could provide. But what if you live in a rural village where the nearest doctor is hundreds of miles away? Or in a developing country where medical professionals are stretched thin? Suddenly, the prospect of AI-driven medical advice does not seem like a compromise; it feels like a lifeline. While AI lacks the nuanced judgment of an experienced physician, it can provide basic diagnostics, suggest treatments, or alert patients to symptoms that warrant urgent attention. In such scenarios, AI does not replace a doctor—it replaces the silence of inaccessibility with something, however imperfect.

Another case in point is mental health counseling. In many parts of the world, even in affluent countries, mental health services are woefully inadequate. Students at universities often face wait times ranging from weeks to months just to speak with a counselor. During that limbo, the option to interact with an AI, even one with obvious limitations, can be a critical stopgap. It is not about AI outperforming a trained therapist but offering a form of support when no other is available. It can provide coping strategies, lend a sympathetic ear, or guide someone to emergency services. Here, AI does not replace therapy; it provides something valuable in the absence of timely human support.

Education offers another case for AI’s gap-filling potential. Tutoring is an essential resource, but access to quality tutors is often limited, mainly because it is expensive. Universities might offer tutoring services, but they are frequently understaffed or employ peer tutors. Office hours with professors or teaching assistants can be similarly constrained. AI can step into this void. Chatting with an AI about a difficult concept or problem set might not equal the depth of understanding gained from a one-on-one session with a human tutor, but it is unquestionably better than struggling alone. AI does not compete with tutors; it extends their reach into spaces they cannot physically or temporally cover.

The same logic applies to a range of other fields. Legal advice, financial planning, career coaching—all are areas where AI has the potential to add significant value, not by outstripping human expertise but by offering something in environments where professional advice is out of reach. Imagine a low-income individual navigating legal complexities without the means to hire an attorney. An AI could provide at least basic guidance, clarify legal jargon, and suggest possible actions. All of it must be done with proper disclaimers. It is not a substitute for legal representation, but it is a world better than the alternative: no help at all.

In embracing this non-competing stance, we shift the narrative. The role of AI is not to replace human experts but to step in where human services are scarce or nonexistent. The true potential of AI lies in its ability to democratize access to essential services that many people currently go without. When AI is viewed as a bridge rather than a rival, its utility becomes much more evident. AI does not have to be better than a person to be valuable; it just should be better than the void it fills.



Monday, August 19, 2024

The Right to Leapfrog: Redefining Educational Equity in the Age of AI

AI’s potential in education is clear, particularly in how it can assist students who struggle with traditional learning methods. It is broadly accepted that AI can help bridge gaps in cognitive skills, whether due to dyslexia, ADHD, or other neurodiverse conditions. Yet, the utility of AI should not be confined to specific diagnoses. Insights from decades of implementing the Response to Intervention (RTI) framework reveal that regardless of the underlying cause—be it neurodiversity, trauma, or socioeconomic factors—the type of support needed by struggling students remains remarkably consistent. If AI can aid students with reading difficulties, why not extend its benefits to others facing different but equally challenging obstacles? Equity demands that AI’s advantages be made accessible to all who need them, regardless of the origin of their challenges.

This brings us to a deeper issue: the rigid and often unjust link between procedural and conceptual knowledge. Traditionally, lower-level skills like spelling, grammar, and arithmetic have been treated as prerequisites for advancing to higher-order thinking. The prevailing notion is that one must first master these basics before moving on to creativity, critical thinking, or original thought. However, this linear progression is more a product of tradition than necessity. AI now offers us the chance to reconsider this approach. Students should have the right to leapfrog over certain lower-level skills directly into higher-order cognitive functions, bypassing unnecessary barriers.

Predictably, this notion encounters resistance. Rooted in the Protestant work ethic is the belief that one must toil through the basics before earning the right to engage in more sophisticated intellectual activities. This ethic, which equates hard work on mundane tasks with moral worth, is deeply ingrained in our educational systems. However, in an age where AI can handle many of these lower-level tasks, this mindset seems increasingly obsolete. Insisting that all students must follow the same sequence of skills before advancing to higher-order thinking is not just misguided; it is a relic of a bygone era. If AI enables students to engage meaningfully with complex ideas and creative thinking from the start, we should embrace that opportunity rather than constrain it with outdated dogma.

The implications of this shift are significant. If we recognize the right to leapfrog over certain skills, we must also acknowledge that traditional educational hierarchies need to be re-examined. Skills like spelling and grammar, while valuable, should no longer be gatekeepers for students who excel in critical thinking and creativity but struggle with procedural details. AI offers a way to reimagine educational equity, allowing students to focus on their strengths rather than being held back by their weaknesses. Rather than forcing everyone to climb the same cognitive ladder, we can enable each student to leap to the level that aligns with their abilities, creating a more personalized and equitable educational experience.

This rethinking of educational equity challenges deeply rooted assumptions. The belief that hard work on the basics is necessary for higher-level achievement is pervasive, but it is not supported by evidence. In reality, cognitive development is driven more by engagement with complex ideas than by rote mastery of procedural skills. AI provides the tools to focus on these higher-order skills earlier in a student's education, without the traditional prerequisite of mastering lower-order tasks.

Moreover, the concept of “deskilling” is not new. Throughout history, humanity has continually adapted to technological advances, acquiring new skills while allowing others to fade into obscurity. Today, few people can track animals or make shoes from animal skin—skills that were once essential for survival. Even the ability to harness a horse, once a common necessity, is now a rare skill. While some may lament these losses, they are also a reminder that as society evolves, so too must our educational priorities. Just as technological advancements have rendered certain skills obsolete, AI is reshaping the skills that are most relevant today.

As we move forward, educators must rethink how learning experiences are designed. Rather than viewing AI as merely a tool for accommodating deficits, we should see it as a means of expanding possibilities for all students. By enabling learners to bypass certain skills that are no longer essential in an AI-driven world, we can better align education with the demands of the 21st century. This is about acknowledging that the path to learning does not have to be the same for everyone. In a world where AI can democratize access to higher-level cognitive tasks, the right to leapfrog is not just a possibility—it is a necessity for equitable education. 


Thursday, August 8, 2024

The Cognitive Leap Theory

With the arrival of AI, education is experiencing a profound shift, one that requires a rethinking of how we design and implement learning activities. This shift is captured in the cognitive leap theory, which posits that AI is not just an add-on to traditional education but a transformative force that redefines the learning process itself. The Cognitive Leap theory is a core part of a larger AI-positive pedagogy framework.

Traditionally, educational activities have been structured around the original or revised Bloom’s Taxonomy, a framework that organizes cognitive skills from basic recall of facts (Remember) to higher-order skills like Evaluation and Creation. While Bloom’s pyramid was often interpreted as a sequential progression, Bloom himself never insisted on a strict hierarchy. In fact, with the integration of AI into the classroom, the importance of these skills is being rebalanced. The higher-order skills, particularly those involving critical evaluation, are gaining prominence in ways that were previously unimaginable.

In an AI-positive pedagogical approach, the focus shifts from merely applying and analyzing information—tasks typically associated with mid-level cognitive engagement—to critically evaluating and improving AI-generated outputs. This represents a significant cognitive leap. Instead of simply completing tasks, students are now challenged to scrutinize AI outputs for accuracy, bias, and effectiveness in communication. This shift not only fosters deeper cognitive engagement but also prepares students to navigate the complex landscape of AI-driven information.

A key component of this approach is the development of meta-AI skills. These skills encompass the ability to formulate effective (rich) inquiries or prompts for AI, to inject original ideas into these prompts, and, crucially, to critically assess the AI’s responses. This assessment is not a one-time task but part of an iterative loop where students evaluate, re-prompt, and refine until the output meets a high standard of quality. This process not only sharpens their analytical skills but also enhances their creative abilities, as they learn to think critically about the inputs and outputs of AI systems.
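The evaluate, re-prompt, refine cycle described above can be sketched as a simple loop. The function names, the critique format, and the toy stand-ins for the AI and the student-evaluator are all illustrative assumptions; the point is only the shape of the iteration, in which the student's judgment, not the AI's first answer, decides when the work is done.

```python
def refine_with_ai(prompt, ask_ai, evaluate, max_rounds=5):
    """Iterate: get an AI draft, evaluate it, fold the critique back
    into the prompt, and stop once the evaluator accepts the output.

    ask_ai(prompt) -> str is any text-generating callable;
    evaluate(output) -> (ok, critique) is supplied by the student,
    who plays the critical-evaluation role the theory emphasizes.
    """
    output = ask_ai(prompt)
    for _ in range(max_rounds):
        ok, critique = evaluate(output)
        if ok:
            break
        # The critique becomes part of the next, richer prompt.
        prompt = f"{prompt}\n\nRevise, addressing this critique: {critique}"
        output = ask_ai(prompt)
    return output

# Toy stand-ins for demonstration: the "AI" improves once it is
# given a critique, and the "student" accepts only the revision.
def toy_ai(prompt):
    return "draft v2" if "critique" in prompt.lower() else "draft v1"

def toy_evaluate(output):
    return (output == "draft v2", "add supporting evidence")

print(refine_with_ai("Explain cognitive load", toy_ai, toy_evaluate))
# prints "draft v2"
```

The pedagogical work happens inside `evaluate`: formulating the critique is exactly the meta-AI skill the paragraph describes.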

Moreover, the traditional view that learning progresses linearly through Bloom’s Taxonomy is being upended. In the AI-enhanced classroom, evaluation and creation are no longer the endpoints of learning but are increasingly becoming the starting points. Students must begin by evaluating AI-generated content and then proceed to improve it, a process that requires a deep understanding of context, an awareness of potential biases, and the ability to communicate effectively. This reordering of cognitive priorities is at the heart of the cognitive leap theory, which emphasizes that the future of education lies in teaching students not just to perform tasks but to engage in higher-order thinking at every stage of the learning process.

The implications of this shift are serious. Educators must rethink how they design assignments, moving away from traditional task-based assessments toward activities that challenge students to evaluate and improve upon AI-generated outputs. This requires a new kind of pedagogy, one that is flexible, iterative, and deeply engaged with the possibilities and limitations of AI.

By reimagining the role of higher-order thinking skills and emphasizing the critical evaluation of AI outputs, we can prepare students for a future where cognitive engagement is more important than ever. This is not just about adapting to new technology; it is about transforming the way we think about learning itself. 


Monday, July 29, 2024

AI is an Amateur Savant

Most people who use AI think it is great in general but believe it does not grasp their area of specialization very well. As an applied philosopher, I create intellectual tools to help others think through their problems. I find AI excellent at clarifying and explaining ideas, but it has never generated an original idea worth writing about. I have yet to see reports from others in any discipline that AI has independently produced groundbreaking ideas.

AI can handle large amounts of data and provide coherent, accurate responses across various fields. This ability is comparable to a well-informed amateur who has a broad understanding but lacks deep expertise. AI can recount historical facts, explain scientific principles, and offer legal insights based on data patterns, yet it falls short in deeper, more nuanced analysis.

In my case, AI can assist by summarizing existing theories or offering possible objections or additional arguments. However, it lacks the ability to generate a genuinely novel idea. I use it extensively, and not once has it produced anything of the sort. This limitation stems from its reliance on pre-existing data and patterns, preventing it from achieving the level of innovation that human professionals bring to their fields. Some believe that this limitation will soon be overcome, but I do not think so. It seems to be an intrinsic limitation, a function of the way AI is trained.

Professionals/experts, whether in philosophy, medicine, or history, possess a depth of understanding developed through extensive education and practical experience. They apply complex methodologies, critical thinking, and ethical considerations that AI cannot replicate. A doctor considers the patient's history and unique implications of treatments, while a professional historian places events within a broader socio-cultural context. AI, despite its capabilities, often misses these subtleties. It is, in some sense, a savant: a fast, amazing, but inexperienced thinker.

The gap between a capable amateur and a professional/expert might seem small, especially from the point of view of the amateur. However, it is huge and is rooted in the depth of expertise, critical thinking, and the ability to judge that professionals possess; it is a function of intellect, experience, and education. This gap is where educators should look to adapt the curriculum.

In education, we should focus on that gap between the amateur and the professional and conceptualize it as the ultimate learning outcome, then build new skill ladders to climb there. Students need to understand and conquer the gap between AI and a professional expert. These meta-AI skills are our true North. AI can support this learning process by providing clear explanations and diverse perspectives, but it cannot replace the nuanced understanding and innovation that human professionals offer.


Monday, July 15, 2024

Effort in Learning: The Good, the Bad, and the AI Advantage

Many educators argue that AI makes learning too easy, suggesting that students need to apply effort to truly learn. This perspective, however, confuses the notion of effort with the process of learning itself. The belief that every kind of effort leads to learning overlooks a significant aspect of cognitive psychology: the nature and impact of cognitive load.

Cognitive load theory, developed by John Sweller, offers a crucial framework for understanding how students learn. It posits that the human brain has a limited capacity for processing information. Sweller distinguished between three types of cognitive load: intrinsic, extraneous, and germane. Intrinsic cognitive load is inherent to the task itself. For instance, solving a complex mathematical problem has a high intrinsic load due to the complexity of the content. Germane cognitive load, on the other hand, refers to the mental resources devoted to processing, construction, and automation of schemas, which are structures that help solve problems within a specific domain. 

The most problematic, however, is extraneous cognitive load. This type of load is not related to the task itself but to the way information is presented or to the extraneous demands placed on learners. High extraneous cognitive load can distract students and stunt learning, making it harder for them to engage meaningfully with the material. For example, a poorly designed textbook that requires constant cross-referencing adds unnecessary cognitive load, detracting from the student's ability to learn. A terrible lecture or a busywork assignment does the same. If you believe that every effort a student makes is valuable, you are a hazer, not a teacher.

The challenge, therefore, is not to eliminate all effort but to ensure that the effort students exert is directed towards productive ends. In other words, we need to reduce extraneous cognitive load and increase germane cognitive load. The true aim is to leverage AI to enhance germane cognitive load, directly aiding in the acquisition of schemas necessary for solving discipline-specific problems.

Every academic discipline has core problems that students are expected to solve by the end of their programs. The first step is to mercilessly clean the language of learning outcomes from wishy-washy jargon and focus on these fundamental problems. By identifying these top-level problems, educators can better understand the sequences of skills and knowledge students need to acquire.

Once these core problems are identified, it is crucial to examine how professionals in the field solve them. This involves a detailed analysis of the mental schemas that experts use. Schemas are cognitive structures that allow individuals to organize and interpret information. They enable professionals to recognize patterns, make decisions, and solve problems efficiently. For example, a doctor has schemas for diagnosing illnesses based on symptoms and test results, while an engineer has schemas for designing structures that withstand specific stresses. It is also important to understand whether the field itself is changing: do professionals already solve these problems with AI, or will they soon?

AI can play a pivotal role in helping students develop these schemas. It can identify where a student is struggling and provide targeted support, ensuring that cognitive resources are directed towards germane learning activities rather than being wasted on extraneous tasks.

To achieve this, we need to revisit the basic principles of instructional design. While these principles remain fundamentally the same, they require new thinking in light of AI capabilities. Instructional design should focus on reducing extraneous cognitive load by simplifying the learning environment and minimizing distractions. Simultaneously, it should increase germane cognitive load by providing challenging and meaningful tasks that promote the construction of schemas.

Moreover, educators need to recognize where cognitive load is not useful and should focus exclusively on the germane kind. This might mean redesigning courses to incorporate AI tools that can automate routine tasks, provide instant feedback, and offer complex, real-world problems for students to solve. Such an approach ensures that students are engaged in deep, meaningful learning activities rather than busywork.

Ad summam, the integration of AI in education is not about making learning easier in a superficial sense. It is about making learning more effective by ensuring that students' cognitive resources are directed towards activities that genuinely promote understanding and skill acquisition. By focusing on germane cognitive load and leveraging AI to support instructional design, we can create learning environments that foster deep, meaningful learning and prepare students to solve the complex problems of their disciplines. This calls for a rigorous rethinking of educational practices and a commitment to harnessing AI's potential to enhance, rather than hinder, the learning process.


Monday, June 24, 2024

Can observational learning work with AI?

Can humans learn new problem-solving strategies simply by observing AI at work? Following Bandura's theory of observational learning, the idea may not be as far-fetched as it first appears.

When humans watch AI systems solve problems or generate text, they naturally construct mental models of the underlying processes. These models, while factually wrong (AI's processing is very different from human thinking), can nonetheless prove useful. It is the act of imagining yourself performing the task that can be beneficial.

Consider a person observing an AI system summarise a lengthy academic paper. The human observer cannot directly perceive the AI's internal computations. Instead, the observer likely imagines themselves performing the task, focusing on key sentences, identifying main themes, and connecting key ideas.

This mental model, though inaccurate in representing the AI's actual mechanisms, may still enhance the observer's own summarisation skills. They might, for instance, learn to pay closer attention to introductory and concluding paragraphs, or to look for repeated phrases that signal important concepts.

Observing AI failures can be particularly instructive. When an AI system produces an erroneous or nonsensical output, it often reveals the limitations of its approach. A human observer, reflecting on these errors, might develop a more nuanced understanding of the problem at hand and devise novel strategies to overcome the AI's shortcomings.

For example, watching an AI struggle with a complex logical reasoning task might prompt a human to break the problem down into smaller, more manageable steps. This approach, inspired by the AI's limitations, could prove valuable even in contexts where AI is not involved.

To test this hypothesis rigorously, consider an experiment:

1. Select a diverse set of problem-solving tasks, ranging from creative writing to mathematical reasoning.

2. Divide participants into three groups:

  a) An observation group that watches AI systems attempt these tasks, including both successes and failures.
  b) A practice group that attempts the tasks themselves without AI involvement.
  c) A control group that engages in unrelated activities.

3. After the observation or practice period, test all participants on a new set of similar problems.

4. Compare the performance of the three groups, paying particular attention to novel problem-solving strategies employed by the observation group.

5. Conduct follow-up interviews to explore participants' thought processes and any conscious attempts to apply AI-inspired techniques.
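The group comparison in steps 3 and 4 can be sketched as a small simulation. This is purely illustrative: the group means, standard deviations, and 0–100 scoring scale below are invented for the example, not predictions of how the experiment would turn out.

```python
import random
import statistics

random.seed(42)  # reproducible illustration

def simulate_scores(mean, sd, n=30):
    """Hypothetical post-test scores for one group, clipped to 0-100."""
    return [min(100.0, max(0.0, random.gauss(mean, sd))) for _ in range(n)]

# Invented effect sizes, for illustration only.
groups = {
    "observation": simulate_scores(72, 10),  # watched AI attempts
    "practice":    simulate_scores(70, 10),  # attempted tasks themselves
    "control":     simulate_scores(60, 10),  # unrelated activities
}

# Compare group means and spreads on the new problem set.
for name, scores in groups.items():
    print(f"{name:12s} mean={statistics.mean(scores):5.1f} "
          f"sd={statistics.stdev(scores):4.1f}")
```

In a real analysis one would replace the simulated scores with observed data and test the group differences formally (e.g., an ANOVA), paying particular attention to whether the observation group's strategies, not just its scores, differ from the practice group's.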
