Wednesday, October 2, 2024

Four Myths About AI

AI is often vilified, with myths shaping public perception more than facts. Let us dispel four common myths about AI and present a more balanced view of its potential and limitations.

1. AI Is Environmentally Costly

One of the most persistent claims about AI is that its use requires massive amounts of energy and water, making it unsustainable in the long run. While it is true that training large AI models can be energy-intensive, this perspective needs context. Consider the environmental cost of daily activities such as driving a car, taking a shower, or watching hours of television. AI, on a per-minute basis, is significantly less taxing than these routine activities.

More importantly, AI is becoming a key driver in creating energy-efficient solutions. From optimizing power grids to improving logistics for reduced fuel consumption, AI has a role in mitigating the very problems it is accused of exacerbating. Furthermore, advancements in hardware and algorithms continually reduce the energy demands of AI systems, making them more sustainable over time.

In the end, it is a question of balance. The environmental cost of AI exists, but the benefits—whether in terms of solving climate challenges or driving efficiencies across industries—often outweigh the negatives.

2. AI Presents High Risks to Cybersecurity and Privacy

Another major concern is that AI poses a unique threat to cybersecurity and privacy. Yet there is little evidence to suggest that AI introduces any new vulnerabilities that were not already present in our existing digital infrastructure. To date, there has not been a single instance of data theft directly linked to AI models like ChatGPT or other large language models (LLMs).

In fact, AI can enhance security. It helps in detecting anomalies and intrusions faster than traditional software, potentially catching cyberattacks in their earliest stages. Privacy risks do exist, but they are no different from the risks inherent in any technology that handles large amounts of data. Regulations and ethical guidelines are catching up, ensuring AI applications remain as secure as other systems we rely on.

It is time to focus on the tangible benefits AI provides—such as faster detection of fraud or the ability to sift through vast amounts of data to prevent attacks—rather than the hypothetical risks. The fear of AI compromising our security is largely unfounded.

3. Using AI to Create Content Is Dishonest

The argument that AI use, especially in education, is a form of cheating reflects a misunderstanding of technology’s role as a tool. It is no more dishonest than using a calculator for math or employing a spell-checker for writing. AI enhances human capacity by offering assistance, but it does not replace critical thinking, creativity, or understanding.

History is full of examples of backlash against new technologies. Consider the cultural resistance to firearms in Europe during the late Middle Ages. Guns were viewed as dishonorable because they undermined traditional concepts of warfare and chivalry, allowing common soldiers to defeat skilled knights. This resistance did not last long, however, as societies learned to adapt to the new tools, and guns ultimately became an accepted part of warfare.

Similarly, AI is viewed with suspicion today, but as we better integrate it into education, the conversation will shift. The knights of intellectual labor are being defeated by peasants with better weapons. AI can help students better understand complex topics, offer personalized feedback, and enhance learning. The key is to see AI as a supplement to education, not a replacement for it.

4. AI Is Inaccurate and Unreliable

Critics often argue that AI models, including tools like ChatGPT, are highly inaccurate and unreliable. However, empirical evidence paints a different picture. While no AI is perfect, the accuracy of models like ChatGPT or Claude when tested on general undergraduate knowledge is remarkably high—often in the range of 85-90%. For comparison, unaided human recall of comparable factual material is typically far lower, and experts across fields routinely rely on tools and references to supplement their knowledge.

AI continues to improve as models are fine-tuned with more data and better training techniques. While early versions may have struggled with certain tasks, the current generation of AI models is much more robust. As with any tool, the key lies in how it is used. AI works best when integrated with human oversight, where its ability to process vast amounts of information complements our capacity for judgment. AI’s reliability is not perfect, but it is far from the "uncontrollable chaos" some claim it to be.

***

AI, like any revolutionary technology, invites both excitement and fear. Many of the concerns people have, however, are rooted in myth rather than fact. When we consider the evidence, it becomes clear that the benefits of AI—whether in energy efficiency, cybersecurity, education, or knowledge accuracy—far outweigh its potential downsides. The challenge now is not to vilify AI but to understand its limitations and maximize its strengths.



Sunday, September 29, 2024

Advanced AI users develop special cognitive models

When we encounter a stranger, we make swift, often unconscious judgments about who they are and what they are capable of. A person who speaks our language with barely a hint of an accent? We assume they are fluent. Someone who drops a reference to a complex scientific theory? We peg them as well-educated, likely to be literate, and probably knowledgeable about a range of topics from current events to social norms.

These snap judgments form the backbone of our social interactions. They are mental shortcuts, honed over millennia of human evolution, allowing us to navigate the complexities of social life with remarkable efficiency. Most of the time, they serve us well. We can usually guess whether someone will understand a joke, follow a complex argument, or need help using a smartphone. These are cognitive models. 

But when we step into the realm of artificial intelligence, these time-tested models crumble. Our human-centric predictions fail spectacularly, leaving us confused and often frustrated. Consider a recent incident with ChatGPT, a sophisticated language model. When asked to count the number of 'r's in the word "strawberry," it faltered. Many observers scoffed, concluding that AI must be fundamentally stupid if it couldn't handle such a simple task.

Yet this reaction reveals more about our flawed expectations than any shortcoming of AI. Those familiar with AI's inner workings were not surprised. They understand that a language model, no matter how advanced, is not optimized for character-level analysis. It is like expecting a master chef to be an expert accountant simply because both professions involve numbers.

This misalignment between our expectations and AI's actual capabilities stems from our tendency to anthropomorphize. We instinctively attribute human-like qualities to these digital entities. We expect them to have consistent opinions, to learn from our interactions, to understand context and nuance as we do. But AI, in its current form, does none of these things.

Unlike humans, AI does not carry the baggage of personal experience or emotion. It does not have good days or bad days. It will not be flattered by praise or offended by insults. It can switch from discussing quantum physics to writing poetry without missing a beat, unencumbered by the specialization that defines human expertise.

But AI's differences extend beyond mere capability. It lacks the fundamental attributes we associate with consciousness. It has no self-awareness, no goals or motivations of its own. It does not truly understand the content it generates, despite how convincing it may seem. It is a reflection of the data it was trained on, not a sentient being forming its own thoughts and opinions.

To interact effectively with AI, we need to develop new mental models. We must learn to predict its behavior not based on human analogies, but on an understanding of its unique nature. This means recognizing that AI might struggle with tasks we find trivially easy, while effortlessly accomplishing feats that would challenge even the most brilliant human minds.

It means understanding that every interaction with AI is essentially new. Unlike humans, who build on past conversations and experiences, most current AI systems do not retain information from one chat to the next. They do not learn or evolve through our interactions. Each query is processed afresh, without the context of what came before.
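This statelessness is easy to see in code. The sketch below is a generic illustration, not any particular vendor's API: `call_model` is a hypothetical stand-in for a chat-completion endpoint, and the point is that the model receives only what the client sends, so "memory" exists only because the client resends the whole transcript with every turn.

```python
# Stateless chat, illustrated: the model sees only the messages included
# in each request. To simulate memory, the client resends the transcript.
# `call_model` is a placeholder, not a real API.

def call_model(messages):
    # A real implementation would send `messages` to a chat API.
    return f"(reply based on {len(messages)} message(s))"

history = []

def chat(user_text):
    history.append({"role": "user", "content": user_text})
    reply = call_model(history)  # the full transcript goes out every time
    history.append({"role": "assistant", "content": reply})
    return reply

chat("My name is Alex.")
print(chat("What is my name?"))  # prints "(reply based on 3 message(s))"
```

If the client sent only the latest question instead of `history`, the model would have no trace of the earlier exchange: the continuity lives entirely on the client's side.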

This new model of understanding also requires us to be more precise in our interactions with AI. While humans often fill in gaps in conversation with assumed context, AI interprets our requests literally. It does not automatically infer our unstated needs or desires. The clarity of our input directly influences the quality of the AI's output.

As AI becomes an increasingly integral part of our lives, developing these new mental models is not just about avoiding frustration. It is about unlocking the full potential of these powerful tools. By understanding AI's strengths and limitations, we can craft our interactions to leverage its capabilities more effectively.

The future of human-AI interaction lies not in expecting AI to conform to human patterns, but in adapting our approach to align with AI's unique characteristics. It is a future that requires us to be more thoughtful, more precise, and more open to rethinking our instinctive assumptions. In doing so, we may not only improve our interactions with AI but also gain new insights into the nature of intelligence itself. 



Monday, September 23, 2024

Cognitive Offloading: Learning more by doing less

In today's AI-rich environment, educators and learners alike are grappling with a seeming paradox: how can we enhance cognitive growth by doing less? The answer lies in the concept of cognitive offloading, a phenomenon that is gaining increasing attention in cognitive science and educational circles.

Cognitive offloading, as defined by Risko and Gilbert (2016) in their seminal paper "Cognitive Offloading," is "the use of physical action to alter the information processing requirements of a task so as to reduce cognitive demand." In other words, it is about leveraging external tools and resources to ease the mental burden of cognitive tasks.

Some educators mistakenly believe that any cognitive effort is beneficial for growth and development. However, this perspective overlooks the crucial role of cognitive offloading in effective learning. As Risko and Gilbert point out, "Offloading cognition helps us to overcome such capacity limitations, minimize computational effort, and achieve cognitive feats that would not otherwise be possible."

The ability to effectively offload cognitive tasks has always been important for human cognition. Throughout history, we have developed tools and strategies to extend our mental capabilities, from simple note-taking to complex computational devices. However, the advent of AI has made this skill more crucial than ever before.

With AI, we are not just offloading simple calculations or memory tasks; we are potentially shifting complex analytical and creative processes to these powerful tools. This new landscape requires a sophisticated understanding of AI capabilities and limitations. More importantly, it demands the ability to strategically split tasks into elements that can be offloaded to AI and those that require human cognition.

This skill, the ability to effectively partition cognitive tasks between human and AI, is becoming a key challenge for contemporary pedagogy. It is not just about using AI as a tool, but about understanding how to integrate AI into our cognitive processes in a way that enhances rather than replaces human thinking.

As Risko and Gilbert note, "the propensity to offload cognition is influenced by the internal cognitive demands that would otherwise be necessary." In the context of AI, this means learners need to develop a nuanced understanding of when AI can reduce cognitive load in beneficial ways, and when human cognition is irreplaceable.

For educators, this presents both a challenge and an opportunity. The challenge lies in teaching students not just how to use AI tools, but how to think about using them. This involves developing metacognitive skills that allow students to analyze tasks, assess AI capabilities, and make strategic decisions about cognitive offloading.

The opportunity, however, is immense. By embracing cognitive offloading and teaching students how to effectively leverage AI, we can potentially unlock new levels of human cognitive performance. We are not just making learning easier; we are expanding the boundaries of what is learnable.

It is crucial to recognize the value of cognitive offloading and develop sophisticated strategies for its use. The paradox of doing less to learn more is not just a quirk of our technological age; it is a key to unlocking human potential in a world of ever-increasing complexity. The true measure of intelligence in the AI era may well be the ability to know when to think for ourselves, and when to let AI do the thinking for us. 

Tuesday, September 17, 2024

Why Parallel Integration Is the Sensible Strategy for AI Adoption in the Workplace

Artificial intelligence promises to revolutionize the way we work, offering efficiency gains and new capabilities. Yet, adopting AI is not without its challenges. One prudent approach is to integrate AI into existing workflows in parallel with human processes. This strategy minimizes risk, builds confidence, and allows organizations to understand where AI excels and where it stumbles before fully committing. I have described the problem of AI output validation before; it is a serious impediment to AI integration. Here is how to solve it.

Consider a professor grading student essays. Traditionally, this is a manual task that relies on the educator's expertise. Introducing AI into this process does not mean handing over the red pen entirely. Instead, the professor continues grading as usual but also runs the essays through an AI system. Comparing results highlights discrepancies and agreements, offering insights into the AI's reliability. Over time, the professor may find that the AI is adept at spotting grammatical errors but less so at evaluating nuanced arguments.

In human resources, screening job applications is a time-consuming task. An HR professional might continue their usual screening while also employing an AI tool to assess the same applications. This dual approach ensures that no suitable candidate is overlooked due to an AI's potential bias or error. It also helps the HR team understand how the AI makes decisions, which is crucial for transparency and fairness.

Accountants auditing receipts can apply the same method. They perform their standard checks while an AI system does the same in the background. Any discrepancies can be investigated, and patterns emerge over time about where the AI is most and least effective.

This strategy aligns with the concept of "double-loop learning" from organizational theory, introduced by Chris Argyris. Double-loop learning involves not just correcting errors but examining and adjusting the underlying processes that lead to those errors. By running human and AI processes in parallel, organizations engage in a form of double-loop learning—continually refining both human and AI methods. Note that it is not only about catching and understanding AI errors; the parallel process will also surface human errors through the use of AI. The overall error rate will decrease.
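The parallel workflow described in the examples above can be sketched in a few lines: every item is scored by both the human process and the AI process, agreements build confidence, and discrepancies become the cases worth investigating. This is a minimal illustration with made-up essay scores and an arbitrary tolerance, not a reference to any particular grading or auditing system.

```python
# Parallel integration sketch: compare human and AI assessments of the
# same items, flag disagreements for review, and track the agreement rate.
# All item names, scores, and the tolerance are hypothetical.

def compare_assessments(human_scores, ai_scores, tolerance=5):
    """Pair human and AI scores by item ID; flag pairs that disagree."""
    flagged = []
    agreements = 0
    for item_id, human in human_scores.items():
        ai = ai_scores.get(item_id)
        if ai is None:
            continue  # the AI produced no score for this item
        if abs(human - ai) <= tolerance:
            agreements += 1
        else:
            flagged.append((item_id, human, ai))
    rate = agreements / max(agreements + len(flagged), 1)
    return flagged, rate

human = {"essay-1": 88, "essay-2": 72, "essay-3": 95}
ai = {"essay-1": 85, "essay-2": 55, "essay-3": 97}

flagged, rate = compare_assessments(human, ai)
print(flagged)          # [('essay-2', 72, 55)] — only essay-2 exceeds the tolerance
print(round(rate, 2))   # 0.67
```

Over many batches, the flagged cases reveal exactly where the AI is weakest (and, sometimes, where the human grader slipped), which is the evidence base for deciding when to shift humans from doing the task to verifying AI results.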

Yes, running parallel processes takes extra time and resources. However, this investment is modest compared to the potential costs of errors, compliance issues, or reputational damage from an AI mishap. People need to trust the technology they use, and building such trust takes time.

The medical field offers a pertinent analogy. Doctors do not immediately rely on AI diagnoses without validation. They might consult AI as a second opinion, especially in complex cases. This practice enhances diagnostic accuracy while maintaining professional responsibility. Similarly, in business processes, AI can serve as a valuable second set of eyes. 

As confidence in the AI system grows, organizations can adjust the role of human workers. Humans might shift from doing the task to verifying AI results, focusing their expertise where it is most needed. This gradual transition helps maintain quality and trust, both internally and with clients or stakeholders.

In short, parallel integration of AI into work processes is a sensible path that balances innovation with caution. It allows organizations to harness the benefits of AI while managing risks effectively. By building confidence through experience and evidence, businesses can make informed decisions about when and how to rely more heavily on AI.



Saturday, September 14, 2024

Navigating the AI Gold Rush: Skins, Security, and the Real Value Proposition

The economic battle surrounding artificial intelligence is intensifying at an unprecedented pace. Major AI players like OpenAI, Google, Meta, and Anthropic are leading this technological revolution. Tech giants such as Microsoft, Amazon, and Apple, along with thousands of startups, are vying for a stake in this burgeoning market without developing competitive models of their own. Amidst this frenzy, a critical question arises: what exactly is being sold?

Two primary value propositions have emerged in this landscape: skins and security mongers. Skins are interfaces or applications that overlay major AI models, aiming to simplify user interaction. They cater to individuals lacking advanced prompting skills, offering a more user-friendly experience. Security mongers, on the other hand, emphasize heightened privacy and security, often exaggerating potential risks to entice users.

While both propositions seem valuable on the surface, a deeper examination reveals significant shortcomings. Skins promise to streamline interactions with AI models by providing preset prompts or simplified interfaces. For instance, a startup might offer a chatbot specialized in drafting business emails, claiming it saves users the hassle of formulating prompts themselves. However, is this convenience truly worth it?

Major AI models are increasingly user-friendly. ChatGPT, for example, has an intuitive interface that caters to both novices and experts. Users often find they can achieve the same or better results without intermediary platforms. Additionally, skins often come with subscription fees or hidden costs, meaning users are essentially paying extra for a service the primary AI model already provides. There is also the issue of limited functionality; skins may restrict access to the full capabilities of the AI model, offering a narrow set of functions that might not meet all user needs.

The second proposition taps into growing concerns over data privacy and security. Vendors claim to offer AI solutions with superior security measures, assuring users their data is safer compared to using mainstream models directly. But does this claim hold up under scrutiny?

Most of these intermediaries still rely on API connections to major AI models like ChatGPT. Your data passes through their servers before reaching the AI model, effectively adding another point of vulnerability. Introducing additional servers and transactions inherently increases the risk of data breaches. More touchpoints mean more opportunities for data to be intercepted or mishandled. Furthermore, major AI providers invest heavily in security and compliance, adhering to stringent international standards. Smaller vendors may lack the resources to match these safeguards.

For example, a startup might advertise an AI-powered financial advisor with enhanced security features. However, if they are routing data through their servers to access a model like GPT-4, your sensitive financial data is exposed to additional risk without any tangible security benefit. The promise of enhanced security becomes questionable when the underlying infrastructure depends on the same major models.

AI platforms have not introduced new risks to privacy or security beyond what exists with other online services like banks or credit bureaus. They employ advanced encryption and security protocols to protect user data. While no system is infallible, major AI models are on par with, if not superior to, other industries in terms of security measures. They use end-to-end encryption to protect data in transit and at rest, implement strict authentication measures to prevent unauthorized access, and conduct regular security assessments to identify and mitigate vulnerabilities. It is easy to opt out of providing your data to train new models. It is much more difficult to know what your vendors are going to do with your data.

In a market flooded with AI offerings, it is crucial to approach vendors' claims with a healthy dose of skepticism. Validate the functionality by testing whether the convenience offered by skins genuinely enhances your experience or merely repackages what is already available. Assess the security measures by inquiring about the specific protocols in place and how they differ from those used by major AI providers. Transparency is key; reputable vendors should be open about how your data is used, stored, and protected.

As the AI gold rush continues, distinguishing between genuine innovation and superficial value propositions becomes essential. Skins and security mongers may offer appealing pitches, but often they add little to no value while potentially increasing costs and risks. It is wise to try using major AI models directly before opting for third-party solutions. Research the backgrounds of vendors to determine their credibility and reliability. Seek reviews and testimonials from other users to gauge the actual benefits and drawbacks.

In the end, the most powerful tool at your disposal is due diligence. By critically evaluating what is being sold, you can make informed decisions that truly benefit you in the rapidly evolving world of AI. Beware of vendors selling either convenience or security without substantial evidence of their value. At the very least, take the time to validate their claims before making an investment.



Thursday, September 12, 2024

The Stealth AI Adoption

In modern workplaces, a quiet trend is taking hold: employees are secretly adopting artificial intelligence tools to enhance their work. Whether it is writing, designing, coding, or creating content, many are leveraging AI without informing their bosses. This “stealth AI adoption” is likely more widespread than managers realize.

Consider Alex, a software developer at a bustling tech firm. To streamline his coding process, Alex uses an AI assistant that can generate snippets of code in seconds. This tool not only saves him hours each week but also allows him to tackle more complex projects. However, Alex keeps this AI helper under wraps. Why? He has two choices: use the extra time for personal activities or take on additional work to appear more productive than his peers. There is no real incentive to admit the use of AI. In some shops, the cybersecurity team will come after you if you confess.

This hidden use of AI offers clear benefits for employees. Saving a few hours each week is tempting, whether for personal pursuits or to discreetly boost one’s workload. As a result, many organizations might be underestimating how extensively AI is being integrated into daily tasks.

Productivity can be measured in two ways: doing the same work with fewer people or doing more with the same number. The latter is a healthier, more sustainable approach. To achieve true success, organizations should aim to do more with their existing workforce rather than cutting staff. However, the stealth adoption of AI complicates this goal.

When employees use AI tools without disclosure, organizations miss out on opportunities to harness these technologies strategically. Without knowing how AI is being utilized, companies cannot provide proper training or integrate AI into their workflows effectively. This fragmented approach can lead to missed efficiency gains and a lack of cohesive progress.

To foster a productive and innovative environment, companies need to build trust with their employees. Here is how:

  1. Reassure Employees: Let your team know that adopting AI will not lead to layoffs. Emphasize that AI is a tool to help them do their jobs better, not a replacement for their roles. In unionized environments, a conversation with labor leaders would be wise. 

  2. Create Incentives for Disclosure: Encourage employees to share the AI tools they are using by offering rewards or recognition. This transparency can help management understand how AI is being integrated and identify best practices.

  3. Do More with the Same People: Focus on expanding the scope of work and fostering innovation rather than cutting positions. This approach not only boosts morale but also drives the organization forward.

By building trust and creating a supportive environment, organizations can turn stealth AI adoption into a strategic advantage. Employees will feel comfortable sharing their AI discoveries, allowing organizations to implement these tools effectively and sustainably.

As we move further into the AI-driven era, organizations must address this hidden trend. Encouraging transparency about AI tools and developing clear strategies for their use can ensure that productivity gains are real and sustainable. Until then, the silent spread of AI will keep reshaping workplaces, one undisclosed tool at a time. 



Saturday, September 7, 2024

AI in Education Research: Are We Asking the Right Questions?

A recent preprint titled "Generative AI Can Harm Learning" has attracted significant attention in education and technology circles. The study, conducted by researchers from the University of Pennsylvania, examines the impact of GPT-4 based AI tutors on high school students' math performance. While the research is well-designed and executed, its premise and conclusions deserve closer scrutiny.

The study finds that students who had access to a standard GPT-4 interface (GPT Base) performed significantly better on practice problems, but when that access was removed, they actually performed worse on exams compared to students who never had AI assistance. Interestingly, students who used a specially designed AI tutor with learning safeguards (GPT Tutor) performed similarly to the control group on exams. While these results are intriguing, we need to take a step back and consider the broader implications.

The researchers should be commended for tackling an important topic. As AI becomes more prevalent in education, understanding its effects on learning is crucial. The study's methodology appears sound, with a good sample size and appropriate controls. However, the conclusions drawn from the results may be somewhat misleading.

Consider an analogy: Imagine a study that taught one group of students to use calculators for arithmetic, while another group learned traditional pencil-and-paper methods. If you then tested both groups without calculators, of course the calculator-trained group would likely perform worse. But does this mean calculators "harm learning"? Or does it simply mean we are testing the wrong skills?

The real question we should be asking is: Are we preparing students for a world without AI assistance, or a world where AI is ubiquitous? Just as we do not expect most adults to perform complex calculations without digital aids, we may need to reconsider what math skills are truly essential in an AI-augmented world.

The study's focus on performance in traditional, unassisted exams may be missing the point. What would be far more interesting is an examination of how AI tutoring affects higher-level math reasoning, problem-solving strategies, or conceptual understanding. These skills are likely to remain relevant even in a world where AI can handle routine calculations and problem-solving.

Moreover, the study's title, "Generative AI Can Harm Learning," may be overstating the case. What the study really shows is that reliance on standard AI interfaces without developing underlying skills can lead to poor performance when that AI is unavailable. However, it also demonstrates that carefully designed AI tutoring systems can potentially mitigate these negative effects. This nuanced finding highlights the importance of thoughtful AI integration in educational settings.

While this study provides valuable data and raises important questions, we should be cautious about interpreting its results too broadly. Instead of seeing AI as a potential harm to learning, we might instead ask how we can best integrate AI tools into education to enhance deeper understanding and problem-solving skills. The goal should be to prepare students for a future where AI is a ubiquitous tool, not to protect them from it.

As we continue to explore the intersection of AI and education, studies like this one are crucial. However, we must ensure that our research questions and methodologies evolve along with the technology landscape. Only then can we truly understand how to harness AI's potential to enhance, rather than hinder, learning.


Thursday, August 29, 2024

Why Newsom should veto SB 1047

The Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047) might appear as a forward-thinking approach to regulating AI, but it overlooks a crucial reality: we lack the infrastructure to implement its provisions effectively. While some companies will inevitably claim they can audit AI systems and evaluate safety protocols, their motivations will often be driven by profit rather than genuine expertise.

Moreover, the burdens imposed by this bill will disproportionately affect smaller developers, particularly those on college campuses or within startups, who simply cannot afford the additional costs. This will stifle innovation, further entrenching the dominance of large tech companies and discouraging new entrants from participating in the AI landscape.

Before implementing such heavy-handed regulations, California must first focus on developing clear standards and building the capacity to enforce them. Without this groundwork, the bill will do more harm than good, leading to increased monopolization and a chilling effect on the very innovation it seeks to protect. The Governor should veto this bill and advocate for a more measured, phased approach that prioritizes the development of standards and capacity before regulation.

Friday, August 23, 2024

Filling Voids, Not Replacing Human Experts

The debate over artificial intelligence replacing human experts often centers on a binary question: Can AI do a better job than a human? This framing is understandable but overly simplistic. The reality is that in many contexts, the competition is not between AI and people—it is between AI and nothing at all. When viewed through this lens, the value of AI becomes clearer. It is not about pitting machines against human expertise; it is about addressing the voids left by a lack of available service.

Consider healthcare, particularly in underserved areas. It is a truism that a qualified doctor’s advice is better than anything an AI could provide. But what if you live in a rural village where the nearest doctor is hundreds of miles away? Or in a developing country where medical professionals are stretched thin? Suddenly, the prospect of AI-driven medical advice does not seem like a compromise; it feels like a lifeline. While AI lacks the nuanced judgment of an experienced physician, it can provide basic diagnostics, suggest treatments, or alert patients to symptoms that warrant urgent attention. In such scenarios, AI does not replace a doctor—it replaces the silence of inaccessibility with something, however imperfect.

Another case in point is mental health counseling. In many parts of the world, even in affluent countries, mental health services are woefully inadequate. Students at universities often face wait times ranging from weeks to months just to speak with a counselor. During that limbo, the option to interact with an AI, even one with obvious limitations, can be a critical stopgap. It is not about AI outperforming a trained therapist but offering a form of support when no other is available. It can provide coping strategies, lend a sympathetic ear, or guide someone to emergency services. Here, AI does not replace therapy; it provides something valuable in the absence of timely human support.

Education offers another case for AI’s gap-filling potential. Tutoring is an essential resource, but access to quality tutors is often limited, mainly because it is expensive. Universities might offer tutoring services, but they are frequently understaffed or employ peer tutors. Office hours with professors or teaching assistants can be similarly constrained. AI can step into this void. Chatting with an AI about a difficult concept or problem set might not equal the depth of understanding gained from a one-on-one session with a human tutor, but it is unquestionably better than struggling alone. AI does not compete with tutors; it extends their reach into spaces they cannot physically or temporally cover.

The same logic applies to a range of other fields. Legal advice, financial planning, career coaching—all are areas where AI has the potential to add significant value, not by outstripping human expertise but by offering something in environments where professional advice is out of reach. Imagine a low-income individual navigating legal complexities without the means to hire an attorney. An AI could provide at least basic guidance, clarify legal jargon, and suggest possible actions. All of this, of course, must come with proper disclaimers. It is not a substitute for legal representation, but it is a world better than the alternative: no help at all.

In embracing this non-competing stance, we shift the narrative. The role of AI is not to replace human experts but to step in where human services are scarce or nonexistent. The true potential of AI lies in its ability to democratize access to essential services that many people currently go without. When AI is viewed as a bridge rather than a rival, its utility becomes much more evident. AI does not have to be better than a person to be valuable; it just has to be better than the void it fills.



Monday, August 19, 2024

The Right to Leapfrog: Redefining Educational Equity in the Age of AI

AI’s potential in education is clear, particularly in how it can assist students who struggle with traditional learning methods. It is broadly accepted that AI can help bridge gaps in cognitive skills, whether due to dyslexia, ADHD, or other neurodiverse conditions. Yet, the utility of AI should not be confined to specific diagnoses. Insights from decades of implementing the Response to Intervention (RTI) framework reveal that regardless of the underlying cause—be it neurodiversity, trauma, or socioeconomic factors—the type of support needed by struggling students remains remarkably consistent. If AI can aid students with reading difficulties, why not extend its benefits to others facing different but equally challenging obstacles? Equity demands that AI’s advantages be made accessible to all who need them, regardless of the origin of their challenges.

This brings us to a deeper issue: the rigid and often unjust link between procedural and conceptual knowledge. Traditionally, lower-level skills like spelling, grammar, and arithmetic have been treated as prerequisites for advancing to higher-order thinking. The prevailing notion is that one must first master these basics before moving on to creativity, critical thinking, or original thought. However, this linear progression is more a product of tradition than necessity. AI now offers us the chance to reconsider this approach. Students should have the right to leapfrog over certain lower-level skills directly into higher-order cognitive functions, bypassing unnecessary barriers.

Predictably, this notion encounters resistance. Rooted in the Protestant work ethic is the belief that one must toil through the basics before earning the right to engage in more sophisticated intellectual activities. This ethic, which equates hard work on mundane tasks with moral worth, is deeply ingrained in our educational systems. However, in an age where AI can handle many of these lower-level tasks, this mindset seems increasingly obsolete. Insisting that all students must follow the same sequence of skills before advancing to higher-order thinking is not just misguided; it is a relic of a bygone era. If AI enables students to engage meaningfully with complex ideas and creative thinking from the start, we should embrace that opportunity rather than constrain it with outdated dogma.

The implications of this shift are significant. If we recognize the right to leapfrog over certain skills, we must also acknowledge that traditional educational hierarchies need to be re-examined. Skills like spelling and grammar, while valuable, should no longer be gatekeepers for students who excel in critical thinking and creativity but struggle with procedural details. AI offers a way to reimagine educational equity, allowing students to focus on their strengths rather than being held back by their weaknesses. Rather than forcing everyone to climb the same cognitive ladder, we can enable each student to leap to the level that aligns with their abilities, creating a more personalized and equitable educational experience.

This rethinking of educational equity challenges deeply rooted assumptions. The belief that hard work on the basics is necessary for higher-level achievement is pervasive, but it is not supported by evidence. In reality, cognitive development is driven more by engagement with complex ideas than by rote mastery of procedural skills. AI provides the tools to focus on these higher-order skills earlier in a student's education, without the traditional prerequisite of mastering lower-order tasks.

Moreover, the concept of “deskilling” is not new. Throughout history, humanity has continually adapted to technological advances, acquiring new skills while allowing others to fade into obscurity. Today, few people can track animals or make shoes from animal skin—skills that were once essential for survival. Even the ability to harness a horse, once a common necessity, is now a rare skill. While some may lament these losses, they are also a reminder that as society evolves, so too must our educational priorities. Just as technological advancements have rendered certain skills obsolete, AI is reshaping the skills that are most relevant today.

As we move forward, educators must rethink how learning experiences are designed. Rather than viewing AI as merely a tool for accommodating deficits, we should see it as a means of expanding possibilities for all students. By enabling learners to bypass certain skills that are no longer essential in an AI-driven world, we can better align education with the demands of the 21st century. This is about acknowledging that the path to learning does not have to be the same for everyone. In a world where AI can democratize access to higher-level cognitive tasks, the right to leapfrog is not just a possibility—it is a necessity for equitable education. 


Friday, August 9, 2024

Authorship, Automation, and Answerability

In the ongoing debate about the ethical use of AI, two main concerns stand out—one superficial and one profound. The first concern, often highlighted, is about the authenticity of authorship, with fears that AI-generated content might mislead us about who the true author is. However, this worry is largely misguided. It stems from a historically limited, Western-centric notion of authorship that blurs the line between the origin of ideas and the craft of their representation.

Take the legacy of Steve Jobs. He wasn’t celebrated for personally assembling each iPhone, but for his vision and design that brought the device to life. In our industrial world, the act of making things is not inherently authorial—designing them is. Why should it be any different with text, code, or images? If I designed this text, and used advanced tools to produce it, why am I not still the author? The shock many feel towards AI’s ability to generate content is akin to the upheaval experienced by 19th-century bootmakers during the Industrial Revolution. Automation has simply extended its reach into the realms of writing, coding, and art. The craftsmanship is replaced by automation, but the core principle remains: take pride in the ideas, not in the mechanics of their production. There is no inherent authorship in the latter.

But here’s where Mikhail Bakhtin’s notion of answerability helps our understanding of the true ethical stakes. While responsibility is often about fulfilling obligations or being held accountable after the fact, answerability is about our ongoing, active engagement with the world and the people in it. It is not just about who gets credit for the content; it is about recognizing that every action, every word, and every piece of AI-generated content occurs within a web of relationships. We are answerable to others because our creations—whether authored by human hands or machine algorithms—affect them.

The real concern, then, lies in the issue of answerability. AI-generated content often appears polished, convincing, and ready for immediate consumption. This creates a dangerous temptation to release such content into the world without thorough scrutiny. Here is where the ethical stakes rise significantly. AI may produce work that looks and sounds credible, but this does not guarantee that it is unbiased, meaningful, or truthful. It may be garbage polluting the infosphere at best, or an outright harmful fake at worst. The ease of content creation does not absolve us of the responsibility to ensure its quality and integrity, and more importantly, it does not free us from the answerability we have to the world around us.

This is the message we need to instill in our students, professionals, and anyone working with AI: you are still accountable and answerable for what you produce, even if a machine does the heavy lifting. Releasing AI-generated content without critical evaluation is akin to conjuring a spell without understanding its consequences. Like a magician wielding powerful but unpredictable magic, or a novice driver behind the wheel of a truck instead of a bicycle, the stakes have been raised. The tools at our disposal are more potent than ever, and with that power comes a heightened level of answerability.

In essence, the ethical debate surrounding AI should not be about the authorship of the craft but about the integrity and impact of the output. The real challenge is ensuring that what we create with these advanced tools is not only innovative but also responsible and answerable. As we continue to integrate AI into more aspects of our lives, we must focus less on who—or what—authored the content and more on the ethical implications of releasing it into the world. This is where the true ethical discourse lies, and it is here that our attention should be firmly fixed.


Thursday, August 8, 2024

The Cognitive Leap Theory

With the arrival of AI, education is experiencing a profound shift, one that requires a rethinking of how we design and implement learning activities. This shift is captured in the cognitive leap theory, which posits that AI is not just an add-on to traditional education but a transformative force that redefines the learning process itself. The Cognitive Leap theory is a core part of a larger AI-positive pedagogy framework.

Traditionally, educational activities have been structured around the original or revised Bloom’s Taxonomy, a framework that organizes cognitive skills from basic recall of facts (Remember) to higher-order skills like Evaluation and Creation. While Bloom’s pyramid was often interpreted as a sequential progression, Bloom himself never insisted on a strict hierarchy. In fact, with the integration of AI into the classroom, the importance of these skills is being rebalanced. The higher-order skills, particularly those involving critical evaluation, are gaining prominence in ways that were previously unimaginable.

In an AI-positive pedagogical approach, the focus shifts from merely applying and analyzing information—tasks typically associated with mid-level cognitive engagement—to critically evaluating and improving AI-generated outputs. This represents a significant cognitive leap. Instead of simply completing tasks, students are now challenged to scrutinize AI outputs for accuracy, bias, and effectiveness in communication. This shift not only fosters deeper cognitive engagement but also prepares students to navigate the complex landscape of AI-driven information.

A key component of this approach is the development of meta-AI skills. These skills encompass the ability to formulate effective (rich) inquiries or prompts for AI, to inject original ideas into these prompts, and, crucially, to critically assess the AI’s responses. This assessment is not a one-time task but part of an iterative loop where students evaluate, re-prompt, and refine until the output meets a high standard of quality. This process not only sharpens their analytical skills but also enhances their creative abilities, as they learn to think critically about the inputs and outputs of AI systems.
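The iterative loop described above can be sketched in a few lines of Python. This is an illustrative sketch only: `ask_model` and `meets_standard` are hypothetical stand-ins for a real LLM call and for the student's own critical evaluation, which in practice cannot be reduced to an automated check.

```python
def ask_model(prompt: str) -> str:
    # Placeholder: in practice this would call an LLM API.
    return f"Draft answer to: {prompt}"

def meets_standard(draft: str) -> bool:
    # Placeholder for the student's evaluation step: checking the
    # output for accuracy, bias, and effectiveness of communication.
    return len(draft) > 40

def refine(prompt: str, max_rounds: int = 3) -> str:
    """Evaluate, re-prompt, and refine until the output passes review."""
    draft = ask_model(prompt)
    for _ in range(max_rounds):
        if meets_standard(draft):
            break
        # Re-prompt: fold the critique and prior draft into the next inquiry.
        prompt = f"{prompt}\nImprove this draft: {draft}"
        draft = ask_model(prompt)
    return draft
```

The point of the sketch is the shape of the loop, not the placeholders: evaluation sits at the start of each cycle, not at the end of the assignment.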

Moreover, the traditional view that learning progresses linearly through Bloom’s Taxonomy is being upended. In the AI-enhanced classroom, evaluation and creation are no longer the endpoints of learning but are increasingly becoming the starting points. Students must begin by evaluating AI-generated content and then proceed to improve it, a process that requires a deep understanding of context, an awareness of potential biases, and the ability to communicate effectively. This reordering of cognitive priorities is at the heart of the cognitive leap theory, which emphasizes that the future of education lies in teaching students not just to perform tasks but to engage in higher-order thinking at every stage of the learning process.

The implications of this shift are serious. Educators must rethink how they design assignments, moving away from traditional task-based assessments toward activities that challenge students to evaluate and improve upon AI-generated outputs. This requires a new kind of pedagogy, one that is flexible, iterative, and deeply engaged with the possibilities and limitations of AI.

By reimagining the role of higher-order thinking skills and emphasizing the critical evaluation of AI outputs, we can prepare students for a future where cognitive engagement is more important than ever. This is not just about adapting to new technology; it is about transforming the way we think about learning itself. 


Thursday, August 1, 2024

Meet Jinni, a Universal Assistant Bot

On a busy campus with 30,000 students and hundreds of faculty and staff, managing everyday tasks and emergencies can be tricky. Imagine a universal bot, named Jinni, designed to assist everyone with whatever they need to get done. Here’s a glimpse into how this could transform daily life on campus.

Take Dr. Nguyen, for instance. A junior professor with a packed schedule, she was just invited to present at a conference in Milan but wasn't sure how to get funding. She turned to Jinni.
"Good afternoon, Professor Nguyen. What do you need today?" Jinni asked.
"I want to attend a conference in Milan. Can I get support?" she inquired.

Jinni quickly scanned the institutional website and the financial data warehouse and responded, "In your College, it takes a request from your Associate Dean. There is still some travel budget left, but you need to hurry. However, if it’s not a peer-reviewed conference and you’re not presenting, I wouldn't bother—the College's policy does not allow for this."

It added, "If you’d rather tell me the details about the conference and upload the invitation letter, I can file the request for you. Or, you can follow the link and do it yourself."

Professor Nguyen appreciated the options and the clarity, and chose to upload her details, letting Jinni handle the rest. Within a minute, Jinni said, "Done, you should hear from the dean's office within a week. I already checked your eligibility and recommended that the Associate Dean approve."

Then there was Mr. Thompson, a new staff member who discovered a puddle in the lobby after a rainy night. He pulled out his phone and described the situation to Jinni.

"You need to file an urgent facilities request. Here’s the link. Would you like me to file one for you? If yes, take a picture of the puddle," Jinni offered. "But if it’s really bad, you may want to call them. Do you want me to dial?"

Mr. Thompson opted for the latter, and within moments, Jinni had connected him to the facilities team.

Finally, there was Jose, a student who had missed the course drop deadline because of a bad flu. Anxious and unsure what to do, he asked Jinni for help. 

"Sorry to hear you’ve been sick, Jose. Yes, there is a petition you can file with the Registrar," Jinni replied. "I can do it for you, but I need a few more details. Do you have a note from your doctor? If not, you should get it first, then take a picture of it for me. If you used the Campus Health Center, I can contact them for you to request documentation. I will then write and submit the petition on your behalf. I will also need a few details - which class, the instructor's name, when you got sick, etc." Jose was relieved to find a straightforward solution to his problem and began to answer Jinni's questions one by one.

The technology to create a universal agent bot like Jinni is not yet on the open market, but all the elements already exist as prototypes. More advanced customizable AI models, trained on extensive and diverse datasets, are essential to handle such tasks. More active, agentic AI also exists; it can file and submit forms, not just find them. But even a system that could simply find and interpret policies and procedures, and point users to the right forms, would already be a huge step forward.
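As a toy illustration of the simplest piece of such a system, a bot like Jinni would first need to route a free-form request to the right policy area. The sketch below uses keyword matching purely for demonstration; the intents, keywords, and policy texts are all invented, and a real implementation would use an LLM classifier over actual institutional documents.

```python
# Invented policy snippets standing in for a real institutional knowledge base.
POLICIES = {
    "travel": "Conference travel requires a request from your Associate Dean.",
    "facilities": "Urgent issues: file a facilities request or call the team.",
    "registrar": "Missed deadlines require a petition with documentation.",
}

# Naive keyword routing; a production bot would classify intent with an LLM.
KEYWORDS = {
    "travel": ["conference", "travel", "funding"],
    "facilities": ["leak", "puddle", "broken", "repair"],
    "registrar": ["drop", "deadline", "petition", "withdraw"],
}

def route_request(message: str) -> str:
    """Match a request to a policy area, or hand off to a human."""
    text = message.lower()
    for intent, words in KEYWORDS.items():
        if any(word in text for word in words):
            return f"[{intent}] {POLICIES[intent]}"
    return "I couldn't match that to a policy; connecting you to a human."

print(route_request("I missed the course drop deadline"))
```

Even this crude routing layer captures the "point users to the right forms" step; the harder agentic steps (checking eligibility, filing forms) would sit behind it.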

Simplifying and streamlining the hundreds of procedures that any complex organization develops is certainly possible, but we know of few examples of such successful transformations. The next best thing is to use AI to help people navigate those procedures. This will lower barriers for all and reduce transactional costs.


Monday, July 29, 2024

AI is an Amateur Savant

Most people who use AI think it is great in general but believe it does not grasp their area of specialization very well. As an applied philosopher, I create intellectual tools to help others think through their problems. I find AI excellent at clarifying and explaining ideas, but it has never generated an original idea worth writing about. I have yet to see reports from others in any discipline that AI has independently produced groundbreaking ideas.

AI can handle large amounts of data and provide coherent, accurate responses across various fields. This ability is comparable to a well-informed amateur who has a broad understanding but lacks deep expertise. AI can recount historical facts, explain scientific principles, and offer legal insights based on data patterns, yet it falls short in deeper, more nuanced analysis.

In my case, AI can assist by summarizing existing theories or offering possible objections or additional arguments. However, it lacks the ability to generate a genuinely novel idea. I use it a lot, and not once has it produced anything of the kind. This limitation stems from its reliance on pre-existing data and patterns, preventing it from achieving the level of innovation that human professionals bring to their fields. Some believe this limitation will soon be overcome, but I do not think so. It seems to be an intrinsic limitation, a function of the way AI is trained.

Professionals/experts, whether in philosophy, medicine, or history, possess a depth of understanding developed through extensive education and practical experience. They apply complex methodologies, critical thinking, and ethical considerations that AI cannot replicate. A doctor considers the patient's history and unique implications of treatments, while a professional historian places events within a broader socio-cultural context. AI, despite its capabilities, often misses these subtleties. It is, in some sense, a savant: a fast, amazing, but inexperienced thinker.

The gap between a capable amateur and a professional/expert might seem small, especially from the point of view of the amateur. However, it is huge and is rooted in the depth of expertise, critical thinking, and the ability to judge that professionals possess; it is a function of intellect, experience, and education. This gap is where educators should look to adapt the curriculum.

In education, we should focus on that gap between the amateur and the professional and conceptualize it as the ultimate learning outcome, then build new skill ladders to climb there. Students need to understand and conquer the gap between AI and a professional expert. These meta-AI skills are our true north. AI can support this learning process by providing clear explanations and diverse perspectives, but it cannot replace the nuanced understanding and innovation that human professionals offer.


Wednesday, July 24, 2024

What percentage of my text is AI-generated?

Go ahead, ask me the question. However, I would in turn ask you to specify which of the following kinds of assistance from AI you are interested in.  

  1. Distilling information into summaries
  2. Revamping and recasting content
  3. Polishing grammar, spelling, and punctuation
  4. Sparking ideas and crafting titles
  5. Conjuring additional arguments or perspectives
  6. Spotting potential counterarguments or objections
  7. Constructing and organizing content
  8. Juxtaposing points from multiple sources
  9. Scrutinizing and refining existing content
  10. Demystifying complex ideas or jargon
  11. Architecting outlines and organizational structures
  12. Fashioning examples or illustrations
  13. Tailoring content for different audiences or formats
  14. Forging hooks or attention-grabbing openings
  15. Sculpting strong conclusions or call-to-actions
  16. Unearthing relevant quotes or citations
  17. Decoding concepts in simpler terms
  18. Fleshing out brief points or ideas
  19. Trimming verbose text
  20. Honing clarity and coherence
  21. Smoothing the flow between paragraphs or sections
  22. Concocting metaphors or analogies
  23. Verifying and authenticating information
  24. Proposing synonyms or alternative phrasing
  25. Pinpointing and eliminating redundancies
  26. Diversifying sentence variety and structure
  27. Maintaining consistency in tone and style
  28. Aligning content with specific style guides
  29. Devising keywords for SEO optimization
  30. Assembling bullet points or numbered lists
  31. Bridging sections with appropriate transitions
  32. Flagging areas that need more elaboration
  33. Accentuating key takeaways or main points
  34. Formulating questions for further exploration
  35. Contextualizing with background information
  36. Envisioning visual elements or data representations
  37. Detecting potential areas of bias or subjectivity
  38. Inventing catchy titles or headlines
  39. Streamlining the logical flow of arguments
  40. Boosting text engagement and persuasiveness
  41. Rooting out and rectifying logical fallacies
  42. Imagining hypothetical scenarios or case studies
  43. Illuminating alternative perspectives on a topic
  44. Weaving in storytelling elements
  45. Uncovering gaps in research or argumentation
  46. Producing counterexamples or rebuttals
  47. Bolstering weak arguments
  48. Harmonizing tense and voice inconsistencies
  49. Composing topic sentences for paragraphs
  50. Integrating data or statistics effectively
  51. Devising analogies to explain complex concepts
  52. Injecting humor or wit
  53. Eradicating passive voice usage
  54. Compiling topic-specific vocabulary lists
  55. Enhancing paragraph transitions
  56. Untangling run-on sentences
  57. Articulating thesis statements or main arguments
  58. Infusing content with sensory details
  59. Resolving dangling modifiers
  60. Conceiving potential research questions
  61. Incorporating rhetorical devices
  62. Rectifying pronoun inconsistencies
  63. Anticipating potential counterarguments
  64. Embedding anecdotes effectively
  65. Mending comma splices
  66. Drafting potential interview questions
  67. Sprinkling in cultural references
  68. Correcting subject-verb agreement errors
  69. Designing potential survey questions
  70. Adorning text with figurative language
  71. Repositioning misplaced modifiers
  72. Brainstorming potential titles for sections or chapters
  73. Integrating expert opinions
  74. Paring down wordiness
  75. Exploring potential subtopics
  76. Weaving in statistical data
  77. Eliminating tautologies
  78. Coining potential taglines or slogans
  79. Embedding historical context
  80. Untangling mixed metaphors
  81. Developing potential FAQs and answers
  82. Incorporating scientific terminology
  83. Fixing split infinitives
  84. Generating potential discussion points
  85. Blending in technical jargon
  86. Expunging clichés
  87. Crafting potential calls-to-action
  88. Inserting industry-specific terms
  89. Replacing euphemisms
  90. Extracting potential pullout quotes
  91. Interweaving mathematical concepts
  92. Eliminating redundant phrasing
  93. Compiling potential glossary terms and definitions
  94. Introducing philosophical concepts
  95. Standardizing formatting
  96. Curating potential appendix content
  97. Incorporating legal terminology
  98. Clarifying ambiguous pronouns
  99. Cataloging potential index terms
  100. Synthesizing interdisciplinary perspectives
  101. Writing a long list of AI uses for content generation



Saturday, July 20, 2024

The Three Wave Strategy of AI Implementation

Whether it's a university, a tech giant, a manufacturing company, a public utility, or a government agency, the complexity of operations can be overwhelming. To illustrate this point, Claude and I have generated a list of over 1,150 workflows typical for a large university, many of which can be further broken down into smaller, more specific processes.

Given this complexity, the question arises: Where do we start with AI implementation? The answer lies in a strategic, phased approach that considers the unique characteristics of each workflow and the organization's readiness for AI adoption.

The First Wave: Low-Hanging Fruit

The initial phase of AI implementation should focus on what we call the "low-hanging fruit" - workflows that meet three crucial criteria:

  1. Self-evident quality: The output quality is immediately obvious and doesn't require complex evaluation.
  2. Single-person control: The workflow is typically managed or executed by one individual.
  3. Ready-made AI tools: The process can be enhanced with existing AI tools (either one of the primary LLMs or a custom bot) without requiring specialized development.
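As an illustration, the three criteria above can be treated as a simple screening score for ranking candidate workflows. The workflow names and the 0/1 scores below are invented for the example; in practice, each rating would come from stakeholders who know the workflow.

```python
# Hypothetical candidate workflows rated 0/1 against the three criteria.
workflows = [
    {"name": "Resume review", "self_evident": 1, "single_person": 1, "ready_tools": 1},
    {"name": "Travel claim audit", "self_evident": 0, "single_person": 0, "ready_tools": 1},
    {"name": "Grant proposal drafting", "self_evident": 1, "single_person": 1, "ready_tools": 1},
]

def first_wave_score(workflow: dict) -> int:
    # Each criterion contributes equally; a score of 3 marks low-hanging fruit.
    return (workflow["self_evident"]
            + workflow["single_person"]
            + workflow["ready_tools"])

# Rank candidates: the top of the list is the first wave.
for w in sorted(workflows, key=first_wave_score, reverse=True):
    print(w["name"], first_wave_score(w))
```

The value of even a crude score like this is that it forces the selection conversation onto explicit criteria rather than organizational politics.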

These criteria help identify areas where AI can quickly and effectively augment human efforts, improving efficiency and potentially enhancing the quality of service provided. Based on these criteria, here's a priority list of workflows that could be considered for the first wave of AI implementation. These are just examples:

  1. Student services
    • Student and prospective student advising of all kinds
  2. Resume and Cover Letter Review (Career Services)
    • Offering individual resume critiques
    • Assisting with cover letter development
  3. Academic Policy Development and Enforcement (Academic Affairs)
    • Drafting and revising academic policies
  4. Health Education and Outreach (Health and Wellness Services)
    • Creating and distributing health education materials
  5. Sustainability Education and Outreach (Sustainability and Environmental Initiatives)
    • Creating sustainability guides and resources for campus community
  6. Digital Marketing and Social Media Management (University Communications and Marketing)
    • Creating and curating content for various platforms
  7. Grant Proposal Development and Submission (Research and Innovation)
    • Assisting faculty with proposal writing
  8. Financial Aid Counseling (Financial Aid and Scholarships)
    • Providing one-on-one counseling sessions
    • Offering debt management and financial literacy education
  9. Alumni Communications (Alumni Relations and Development)
    • Producing alumni magazines and newsletters
  10. Scholarly Communications (Library Services)
    • Supporting faculty in publishing and copyright issues
    • Providing guidance on research impact metrics
  11. International Student and Scholar Services (International Programs and Global Engagement)
    • Providing immigration advising and document processing

This first wave serves multiple purposes. It demonstrates the proof of principle, making more stakeholders comfortable with AI integration. It also helps build internal expertise and confidence in working with AI technologies. These early successes can pave the way for more ambitious implementations in the future.

The Second Wave: Tackling Costly Workflows

Once the organization has gained experience and confidence from the first wave, it can move on to more complex and costly workflows. These are typically processes that involve significant labor, occur frequently, and have a broad scope of impact on the organization. However, it is crucial to narrow down this list based on feasibility and readiness for AI implementation.

For instance, while teaching is undoubtedly one of the most labor-intensive and impactful processes in a university, we do not yet have sufficient knowledge of how to make it significantly more efficient through AI. Some processes, like teaching, may never be fully optimized by AI because of their inherently relational nature.

Note that this is also an opportunity to review major workflows; they have often evolved over the years and are far from ideal efficiency. AI can help review these workflows and recommend streamlining. And of course, AI can be integrated into actually doing the work.

The Third Wave: Enterprise-Level Solutions

Only after successfully navigating the first two waves should an organization consider enterprise-level AI solutions. These solutions have the potential to radically redefine the organization's core operations, placing AI at the center of its processes. This level of integration requires a deep understanding of AI capabilities, a clear vision of the organization's future, and a robust infrastructure to support AI-driven operations. Most importantly, it requires specialized tools and a high level of security.

The Timeline and Exceptions

This phased approach to AI implementation is not a quick process. For most large, complex organizations, it could take a couple of decades to fully realize the potential of AI across all workflows. However, there are exceptions. Some businesses with simpler and fewer workflows, such as narrowly specialized customer service operations, may be able to leapfrog straight into the third wave, especially if they have prior experience with AI technologies.

But these are the exceptions rather than the rule. For the majority of organizations, the path to comprehensive AI implementation requires a well-thought-out strategy, clear priorities, and a focus on building confidence and expertise over time.

Integrating AI into a complex organization's workflows is a marathon, not a sprint. It demands patience, strategic thinking, and a willingness to learn and adapt. The key is to approach this journey with a clear strategy, well-defined priorities, and a commitment to building internal AI expertise.


Wednesday, July 17, 2024

AI is not going to implement itself, but governments can help

The AI hype has passed, and the overexcited futurists' voices are mercifully fading away. We're now entering a practical era where AI is leveraged to boost productivity in businesses, non-profits, and public organizations. This shift brings a sobering realization: AI integration requires a meticulous, pragmatic approach to build reliable and trustworthy systems. It's a lot of work and requires some strategy.

When a single person manages a well-defined workflow, integrating AI is relatively straightforward. It's easy to incorporate AI tools like ChatGPT or Claude to assist with ad copy, reports, or applications. The beauty of these scenarios lies in their simplicity - the user acts as both operator and quality controller, immediately judging the output's effectiveness.

However, the story changes dramatically when we shift to multi-user workflows or more complex processes, where both inputs and outputs are more of a collective responsibility. I recently spoke with an Accounts Payable team who posed a challenging question: "Yes, we can see that AI can help review travel claims, but can you guarantee it's going to be 100% accurate?" I couldn't provide that guarantee; I don't have time to conduct a hundred tests, and I don't even have access to a hundred travel reports. They emphasized their need for completely audit-proof outcomes. This conversation highlighted the trust issues that arise when moving from AI enthusiasts to skeptics in larger organizations. And organizations should have a healthy group of skeptics to remain viable.
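The "hundred tests" objection points to something automatable: given even a modest set of claims whose correct outcome is already known, a small evaluation harness can measure accuracy and surface every disagreement for audit. A minimal sketch, in which `review_claim` is a hypothetical placeholder for whatever AI reviewer is being evaluated:

```python
# Minimal evaluation harness: measure a reviewer's accuracy against
# claims whose correct verdict is already known, and keep every
# disagreement for human audit.
# `review_claim` is a hypothetical stand-in for the AI under test.

def review_claim(claim: dict) -> str:
    # Placeholder rule; a real harness would call the AI system here
    # and map its response to an "approve"/"flag" verdict.
    return "flag" if claim["amount"] > claim["per_diem_limit"] else "approve"

def evaluate(reviewer, labeled_claims):
    """Return (accuracy, disagreements) over labeled test cases."""
    disagreements = []
    correct = 0
    for claim, expected in labeled_claims:
        verdict = reviewer(claim)
        if verdict == expected:
            correct += 1
        else:
            disagreements.append((claim, expected, verdict))
    return correct / len(labeled_claims), disagreements

labeled = [
    ({"amount": 120, "per_diem_limit": 100}, "flag"),
    ({"amount": 80, "per_diem_limit": 100}, "approve"),
    ({"amount": 300, "per_diem_limit": 100}, "flag"),
]
accuracy, misses = evaluate(review_claim, labeled)
print(f"accuracy: {accuracy:.0%}, disagreements: {len(misses)}")
```

No harness yields a 100% guarantee, but it turns "trust me" into a number plus a reviewable list of failure cases - exactly what an audit-minded team asks for.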

I've also recently been a fly on the wall during discussions between healthcare executives and a U.S. lawmaker. The executives explained that each AI-assisted medical procedure needs validation, which is expensive and often duplicated across multiple hospital systems. This challenge extends beyond healthcare. For instance, when using AI to crunch data in any organization, we need to understand its reliability in analyzing large datasets, cleaning them, and handling outliers.
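One concrete way to probe reliability on the outlier question is to compare an AI assistant's cleaning decisions against a deterministic baseline, so that every disagreement becomes a case to inspect. A sketch, assuming the classic interquartile-range (IQR) rule as that baseline:

```python
# Deterministic IQR outlier rule, usable as a baseline for auditing
# an AI assistant's data-cleaning decisions: any value the AI flags
# (or keeps) in disagreement with this rule warrants a closer look.
import statistics

def iqr_outliers(values, k=1.5):
    """Return values outside the fences [Q1 - k*IQR, Q3 + k*IQR]."""
    q1, _, q3 = statistics.quantiles(values, n=4)
    iqr = q3 - q1
    low, high = q1 - k * iqr, q3 + k * iqr
    return [v for v in values if v < low or v > high]

data = [10, 12, 11, 13, 12, 11, 10, 95]  # one obvious outlier
print(iqr_outliers(data))
```

The baseline is not the ground truth; it is a cheap, reproducible reference point that makes the AI's behavior on a dataset auditable rather than anecdotal.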

The problem is that no private institution can conduct the kind of comprehensive testing and validation needed to establish trust in AI systems across various industries. We cannot seriously trust the claims of startups trying to sell a specialized product to an industry or a government organization. It is also unclear how a hypothetical private validation service would monetize such an endeavor.

This is where I believe government involvement becomes crucial. Instead of obsessing over deepfakes and ethics, this is what governments should be doing. Governments can collaborate with industry experts to develop standardized benchmarks for AI reliability and performance. They could establish certification programs that act as quality marks, assuring users that AI systems have undergone rigorous testing. Moreover, government funding could support businesses, NGOs, and government agencies in conducting extensive AI testing, especially benefiting smaller organizations lacking the necessary resources.

In my view, public-private partnerships are key to navigating these challenges. By leveraging expertise from both sectors, we can develop robust testing frameworks and create dependable AI systems. This approach would pave the way for more efficient and innovative workflows across industries, ensuring that the benefits of AI are realized while maintaining trust and reliability. 

Monday, July 15, 2024

Effort in Learning: The Good, the Bad, and the AI Advantage

Many educators argue that AI makes learning too easy, suggesting that students need to apply effort to truly learn. This perspective, however, confuses the notion of effort with the process of learning itself. The belief that every kind of effort leads to learning overlooks a significant aspect of cognitive psychology: the nature and impact of cognitive load.

Cognitive load theory, developed by John Sweller, offers a crucial framework for understanding how students learn. It posits that the human brain has a limited capacity for processing information. Sweller distinguished between three types of cognitive load: intrinsic, extraneous, and germane. Intrinsic cognitive load is inherent to the task itself. For instance, solving a complex mathematical problem has a high intrinsic load due to the complexity of the content. Germane cognitive load, on the other hand, refers to the mental resources devoted to processing, construction, and automation of schemas, which are structures that help solve problems within a specific domain. 

The most problematic, however, is extraneous cognitive load. This type of load is not related to the task but to the way information is presented or to the extraneous demands placed on learners. High extraneous cognitive load can distract and stunt learning, making it harder for students to engage meaningfully with the material. For example, a poorly designed textbook that requires constant cross-referencing can add unnecessary cognitive load, detracting from the student's ability to learn. A terrible lecture or busy-work assignments do the same. If you think that every effort by a student is valuable, you are a hazer, not a teacher.

The challenge, therefore, is not to eliminate all effort but to ensure that the effort students exert is directed towards productive ends. In other words, we need to reduce extraneous cognitive load and increase germane cognitive load. The true aim is to leverage AI to enhance germane cognitive load, directly aiding in the acquisition of schemas necessary for solving discipline-specific problems.

Every academic discipline has core problems that students are expected to solve by the end of their programs. The first step is to mercilessly clean the language of learning outcomes from wishy-washy jargon and focus on these fundamental problems. By identifying these top-level problems, educators can better understand the sequences of skills and knowledge students need to acquire.

Once these core problems are identified, it is crucial to examine how professionals in the field solve them. This involves a detailed analysis of the mental schemas that experts use. Schemas are cognitive structures that allow individuals to organize and interpret information. They enable professionals to recognize patterns, make decisions, and solve problems efficiently. For example, a doctor has schemas for diagnosing illnesses based on symptoms and test results, while an engineer has schemas for designing structures that withstand specific stresses. It is also important to understand whether the field is changing and professionals already solve those problems with AI, or will be doing so soon.

AI can play a pivotal role in helping students develop these schemas. These technologies can identify where a student is struggling and provide targeted support, ensuring that cognitive resources are directed towards germane learning activities rather than being wasted on extraneous tasks.

To achieve this, we need to revisit the basic principles of instructional design. While these principles remain fundamentally the same, they require new thinking in light of AI capabilities. Instructional design should focus on reducing extraneous cognitive load by simplifying the learning environment and minimizing distractions. Simultaneously, it should increase germane cognitive load by providing challenging and meaningful tasks that promote the construction of schemas.

Moreover, educators need to recognize where cognitive load is not useful and should focus exclusively on the germane kind. This might mean redesigning courses to incorporate AI tools that can automate routine tasks, provide instant feedback, and offer complex, real-world problems for students to solve. Such an approach ensures that students are engaged in deep, meaningful learning activities rather than busywork.

Ad summam, the integration of AI in education is not about making learning easier in a superficial sense. It is about making learning more effective by ensuring that students' cognitive resources are directed towards activities that genuinely promote understanding and skill acquisition. By focusing on germane cognitive load and leveraging AI to support instructional design, we can create learning environments that foster deep, meaningful learning and prepare students to solve the complex problems of their disciplines. This calls for a rigorous rethinking of educational practices and a commitment to harnessing AI's potential to enhance, rather than hinder, the learning process.


Tuesday, July 9, 2024

AI-Positive Pedagogy: Navigating the Great Disruption

AI has disrupted the educational landscape. This disruption threatens the established sequence of skill development, from simple to mid-range to higher-level skills, by eroding traditional curriculum principles, particularly in the realm of student activities and assessment. As a profession, we face a critical decision: limit AI use or develop an AI-positive pedagogy.

While limiting AI use may seem tempting, it is ultimately unfeasible and fails to prepare students for the AI-infused world they will live in. Attempting to enforce strict limitations on AI use is not only impractical but also fails to acknowledge the potential benefits that AI can bring to education.

The only plausible path forward is to adopt a new pedagogy that harnesses the power of AI for the benefit of our students. This involves a shift towards authentic, discipline-specific assessments that mirror real-world applications of AI within various fields. By focusing on how AI is used in different disciplines, educators can create assessments that evaluate students' ability to effectively utilize AI tools in relevant contexts.

AI-positive pedagogy emphasizes the cultivation of higher-order thinking skills, such as prompt engineering and discerning thinking. Prompt engineering involves crafting effective queries and instructions for AI systems, while discerning thinking emphasizes the critical evaluation of AI-generated information and the ability to make informed decisions by combining AI insights with human judgment. These meta-AI skills are crucial for students to navigate and thrive in an AI-populated world.

AI-positive pedagogy should prepare students to work effectively in environments where human and artificial intelligence coexist and complement each other. By fostering skills in collaborating with AI systems, understanding the strengths of both human and artificial intelligence, and developing strategies for distributed problem-solving, educators can equip students to succeed in the AI-infused workplace.

The shift towards AI-positive pedagogy is well-rooted in past pedagogy and curriculum theory. Educators have long prioritized conceptual and higher-level skills over mechanical and procedural knowledge. The disruption caused by AI may serve as a catalyst for educators to finally achieve what they have been striving for over the past century.

As we embrace AI-positive pedagogy, it is essential to re-evaluate the assumption that all effort leads to learning. Cognitive Load Theory suggests that learning can be optimized by managing the three types of cognitive load: intrinsic (the inherent complexity of the learning material), extraneous (caused by ineffective instructional design), and germane (the effort required to process and construct mental schemas). In the context of AI-positive pedagogy, this involves using AI tools to provide appropriate support and scaffolding as learners progress from lower-level to higher-level skills, while minimizing extraneous load and promoting germane load. Not all loss of effort by students is bad. If we are honest, much of learning work is extraneous, busy, or compliance/submission work anyway. By investigating the limits and structure of leapfrogging - skipping over mid-range skills to move from basic literacies and numeracies to creative, theoretical, and critical thinking - educators can harness the power of AI to accelerate student growth.

To develop a robust AI-positive pedagogy, educators and cognitive psychologists must collaborate to investigate how students interact with and perceive AI tools - alone or under a teacher's guidance. This research should focus on understanding the mental models students develop when engaging with AI, and how these models differ from those associated with other educational tools. By exploring students' cognitive processes, researchers can identify the unique challenges and opportunities presented by AI in the learning environment.

It is also crucial to examine the emotional and motivational factors that influence students' engagement with AI tools. Understanding how students' attitudes, beliefs, and self-efficacy impact their willingness to adopt and effectively use AI in their learning can inform the design of AI-positive pedagogical strategies.

In addition to investigating student cognition and affect, researchers should also explore the social and cultural dimensions of AI use in education. This includes examining how AI tools can be leveraged to promote collaborative learning, foster inclusive learning environments, and bridge educational inequities.

To build a comprehensive AI-positive pedagogy, researchers and educators must also develop and validate practices for integrating AI into various disciplines and educational contexts. This involves creating guidelines for the use of AI in education, as well as establishing professional development programs to support educators in effectively implementing AI-positive pedagogical strategies.

The development of an evidence-based AI-positive pedagogy requires a concerted effort from the educational community. By investing in basic research, collaboration, and innovation, we can harness the potential of AI to transform education and empower students to thrive in an AI-infused world.


Do AI bots deceive?

The paper, Frontier Models are Capable of In-Context Scheming, arrives at a time when fears about AI’s potential for deception are increasi...