AI in Society
The blog is connected to my role as the head of the National Institute on AI in Society at California State University Sacramento. However, opinions and positions expressed therein are mine, and do not represent the university's opinions or positions.
Sunday, September 29, 2024
Advanced AI users develop special cognitive models
When we encounter a stranger, we make swift, often
unconscious judgments about who they are and what they are capable of. A person
who speaks our language with barely a hint of an accent? We assume they are
fluent. Someone who drops a reference to a complex scientific theory? We peg
them as well-educated, likely to be literate, and probably knowledgeable about
a range of topics from current events to social norms.
These snap judgments form the backbone of our social interactions. They are
mental shortcuts, honed over millennia of human evolution, allowing us to
navigate the complexities of social life with remarkable efficiency. Most of
the time, they serve us well. We can usually guess whether someone will
understand a joke, follow a complex argument, or need help using a smartphone. These shortcuts are, in effect, cognitive models of other people.
But when we step into the realm of artificial intelligence, these time-tested
models crumble. Our human-centric predictions fail spectacularly, leaving us
confused and often frustrated. Consider a recent incident with ChatGPT, a
sophisticated language model. When asked to count the number of 'r's in the
word "strawberry," it faltered. Many observers scoffed, concluding
that AI must be fundamentally stupid if it couldn't handle such a simple task.
Yet this reaction reveals more about our flawed expectations than any
shortcoming of AI. Those familiar with AI's inner workings were not surprised.
They understand that a language model, no matter how advanced, is not optimized
for character-level analysis. It is like expecting a master chef to be an
expert accountant simply because both professions involve numbers.
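For readers who want a concrete sense of why character-level questions trip up a language model, here is a minimal Python sketch. The token split shown is purely illustrative (it is not the output of any particular tokenizer), but it captures the contrast: ordinary code sees characters, while a language model sees subword chunks.

```python
# Counting characters is trivial for ordinary code:
word = "strawberry"
print(word.count("r"))  # -> 3

# A language model never receives individual letters. Its input is a
# sequence of subword tokens; the split below is an illustrative
# assumption, not the output of any specific tokenizer.
tokens = ["str", "aw", "berry"]
token_ids = [1042, 675, 19772]  # made-up integer IDs standing in for real ones

# The model reasons over these opaque chunks, so "how many r's are in
# strawberry?" asks about units it never directly sees.
print(tokens, token_ids)
```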
This misalignment between our expectations and AI's actual capabilities stems
from our tendency to anthropomorphize. We instinctively attribute human-like
qualities to these digital entities. We expect them to have consistent
opinions, to learn from our interactions, to understand context and nuance as
we do. But AI, in its current form, does none of these things.
Unlike humans, AI does not carry the baggage of personal experience or emotion.
It does not have good days or bad days. It will not be flattered by praise or
offended by insults. It can switch from discussing quantum physics to writing
poetry without missing a beat, unencumbered by the specialization that defines
human expertise.
But AI's differences extend beyond mere capability. It lacks the fundamental
attributes we associate with consciousness. It has no self-awareness, no goals
or motivations of its own. It does not truly understand the content it
generates, despite how convincing it may seem. It is a reflection of the data
it was trained on, not a sentient being forming its own thoughts and opinions.
To interact effectively with AI, we need to develop new mental models. We must
learn to predict its behavior not based on human analogies, but on an
understanding of its unique nature. This means recognizing that AI might
struggle with tasks we find trivially easy, while effortlessly accomplishing
feats that would challenge even the most brilliant human minds.
It means understanding that every interaction with AI is essentially new.
Unlike humans, who build on past conversations and experiences, most current AI
systems do not retain information from one chat to the next. They do not learn
or evolve through our interactions. Each query is processed afresh, without the
context of what came before.
This new model of understanding also requires us to be more precise in our
interactions with AI. While humans often fill in gaps in conversation with
assumed context, AI interprets our requests literally. It does not
automatically infer our unstated needs or desires. The clarity of our input
directly influences the quality of the AI's output.
As AI becomes an increasingly integral part of our lives, developing these new
mental models is not just about avoiding frustration. It is about unlocking the
full potential of these powerful tools. By understanding AI's strengths and
limitations, we can craft our interactions to leverage its capabilities more
effectively.
The future of human-AI interaction lies not in expecting AI to conform to human
patterns, but in adapting our approach to align with AI's unique
characteristics. It is a future that requires us to be more thoughtful, more
precise, and more open to rethinking our instinctive assumptions. In doing so,
we may not only improve our interactions with AI but also gain new insights
into the nature of intelligence itself.
Monday, September 23, 2024
Cognitive Offloading: Learning more by doing less
Cognitive offloading, as defined by Risko and Gilbert (2016) in their seminal paper "Cognitive Offloading," is "the use of physical action to alter the information processing requirements of a task so as to reduce cognitive demand." In other words, it is about leveraging external tools and resources to ease the mental burden of cognitive tasks.
Some educators mistakenly believe that any cognitive effort is beneficial for growth and development. However, this perspective overlooks the crucial role of cognitive offloading in effective learning. As Risko and Gilbert point out, "Offloading cognition helps us to overcome such capacity limitations, minimize computational effort, and achieve cognitive feats that would not otherwise be possible."
The ability to effectively offload cognitive tasks has always been important for human cognition. Throughout history, we've developed tools and strategies to extend our mental capabilities, from simple note-taking to complex computational devices. However, the advent of AI has made this skill more crucial than ever before.
With AI, we are not just offloading simple calculations or memory tasks; we are potentially shifting complex analytical and creative processes to these powerful tools. This new landscape requires a sophisticated understanding of AI capabilities and limitations. More importantly, it demands the ability to strategically split tasks into elements that can be offloaded to AI and those that require human cognition.
This skill - the ability to effectively partition cognitive tasks between human and AI - is becoming a key challenge for contemporary pedagogy. It is not just about using AI as a tool, but about understanding how to integrate AI into our cognitive processes in a way that enhances rather than replaces human thinking.
As Risko and Gilbert note, "the propensity to offload cognition is influenced by the internal cognitive demands that would otherwise be necessary." In the context of AI, this means learners need to develop a nuanced understanding of when AI can reduce cognitive load in beneficial ways, and when human cognition is irreplaceable.
For educators, this presents both a challenge and an opportunity. The challenge lies in teaching students not just how to use AI tools, but how to think about using them. This involves developing metacognitive skills that allow students to analyze tasks, assess AI capabilities, and make strategic decisions about cognitive offloading.
The opportunity, however, is immense. By embracing cognitive offloading and teaching students how to effectively leverage AI, we can potentially unlock new levels of human cognitive performance. We are not just making learning easier; we are expanding the boundaries of what is learnable.
It is crucial to recognize the value of cognitive offloading and develop sophisticated strategies for its use. The paradox of doing less to learn more is not just a quirk of our technological age; it is a key to unlocking human potential in a world of ever-increasing complexity. The true measure of intelligence in the AI era may well be the ability to know when to think for ourselves, and when to let AI do the thinking for us.
Tuesday, September 17, 2024
Why Parallel Integration Is the Sensible Strategy of AI Adoption in the Workplace
Artificial intelligence promises to revolutionize the way we work, offering efficiency gains and new capabilities. Yet, adopting AI is not without its challenges. One prudent approach is to integrate AI into existing workflows in parallel with human processes. This strategy minimizes risk, builds confidence, and allows organizations to understand where AI excels and where it stumbles before fully committing. I have described the problem of AI output validation before; it is a serious impediment to AI integration. Here is how to solve it.
Consider a professor grading student essays. Traditionally, this is a manual task that relies on the educator's expertise. Introducing AI into this process does not mean handing over the red pen entirely. Instead, the professor continues grading as usual but also runs the essays through an AI system. Comparing results highlights discrepancies and agreements, offering insights into the AI's reliability. Over time, the professor may find that the AI is adept at spotting grammatical errors but less so at evaluating nuanced arguments.
In human resources, screening job applications is a time-consuming task. An HR professional might continue their usual screening while also employing an AI tool to assess the same applications. This dual approach ensures that no suitable candidate is overlooked due to an AI's potential bias or error. It also helps the HR team understand how the AI makes decisions, which is crucial for transparency and fairness.
Accountants auditing receipts can apply the same method. They perform their standard checks while an AI system does the same in the background. Any discrepancies can be investigated, and patterns emerge over time about where the AI is most and least effective.
This strategy aligns with the concept of "double-loop learning" from organizational theory, introduced by Chris Argyris. Double-loop learning involves not just correcting errors but examining and adjusting the underlying processes that lead to those errors. By running human and AI processes in parallel, organizations engage in a form of double-loop learning—continually refining both human and AI methods. Note, it is not only about catching and understanding AI errors; the parallel process will also find human errors through the use of AI. The overall error level will decrease.
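To make the parallel-comparison idea concrete, here is a minimal sketch of the kind of reconciliation step an instructor or analyst might run. All names, scores, and the disagreement threshold are invented for illustration; the point is simply that discrepancies, not individual scores, are what get escalated for human review.

```python
# Hypothetical parallel grading records: the same essays scored
# independently by the instructor and by an AI grader.
human_scores = {"essay_01": 88, "essay_02": 72, "essay_03": 95}
ai_scores = {"essay_01": 85, "essay_02": 58, "essay_03": 94}

THRESHOLD = 10  # flag disagreements larger than this for closer review

for essay, human in human_scores.items():
    ai = ai_scores.get(essay)
    if ai is None:
        continue  # the AI did not grade this essay
    gap = abs(human - ai)
    status = "REVIEW" if gap > THRESHOLD else "agree"
    print(f"{essay}: human={human} ai={ai} gap={gap} -> {status}")
```

Over time, the log of flagged cases shows where the two processes diverge, which is exactly the evidence double-loop learning needs.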
Yes, running parallel processes takes some extra time and resources. However, this investment is modest compared to the potential costs of errors, compliance issues, or damaged reputation from an AI mishap. People need to trust the technology they use, and building such trust takes time.
The medical field offers a pertinent analogy. Doctors do not immediately rely on AI diagnoses without validation. They might consult AI as a second opinion, especially in complex cases. This practice enhances diagnostic accuracy while maintaining professional responsibility. Similarly, in business processes, AI can serve as a valuable second set of eyes.
As confidence in the AI system grows, organizations can adjust the role of human workers. Humans might shift from doing the task to verifying AI results, focusing their expertise where it's most needed. This gradual transition helps maintain quality and trust, both internally and with clients or stakeholders.
In short, parallel integration of AI into work processes is a sensible path that balances innovation with caution. It allows organizations to harness the benefits of AI while managing risks effectively. By building confidence through experience and evidence, businesses can make informed decisions about when and how to rely more heavily on AI.
Saturday, September 14, 2024
Navigating the AI Gold Rush: Skins, Security, and the Real Value Proposition
The economic battle surrounding artificial intelligence is intensifying at an unprecedented pace. Major AI players like OpenAI, Google, Meta, and Anthropic are leading this technological revolution. Tech giants such as Microsoft, Amazon, and Apple, along with thousands of startups, are vying for a stake in this burgeoning market without being able to develop their own competitive models. Amidst this frenzy, a critical question arises: what exactly is being sold?
Two primary value propositions have emerged in this
landscape: skins and security mongers. Skins are interfaces or applications
that overlay major AI models, aiming to simplify user interaction. They cater
to individuals lacking advanced prompting skills, offering a more user-friendly
experience. Security mongers, on the other hand, emphasize heightened privacy
and security, often exaggerating potential risks to entice users.
While both propositions seem valuable on the surface, a
deeper examination reveals significant shortcomings. Skins promise to
streamline interactions with AI models by providing preset prompts or
simplified interfaces. For instance, a startup might offer a chatbot
specialized in drafting business emails, claiming it saves users the hassle of
formulating prompts themselves. However, is this convenience truly worth it?
Major AI models are increasingly user-friendly. ChatGPT, for
example, has an intuitive interface that caters to both novices and experts.
Users often find they can achieve the same or better results without
intermediary platforms. Additionally, skins often come with subscription fees
or hidden costs, meaning users are essentially paying extra for a service the
primary AI model already provides. There is also the issue of limited
functionality; skins may restrict access to the full capabilities of the AI
model, offering a narrow set of functions that might not meet all user needs.
The second proposition taps into growing concerns over data
privacy and security. Vendors claim to offer AI solutions with superior
security measures, assuring users their data is safer compared to using
mainstream models directly. But does this claim hold up under scrutiny?
Most of these intermediaries still rely on API connections
to major AI models like ChatGPT. Your data passes through their servers before
reaching the AI model, effectively adding another point of vulnerability.
Introducing additional servers and transactions inherently increases the risk
of data breaches. More touchpoints mean more opportunities for data to be
intercepted or mishandled. Furthermore, major AI providers invest heavily in
security and compliance, adhering to stringent international standards. Smaller
vendors may lack the resources to match these safeguards.
For example, a startup might advertise an AI-powered
financial advisor with enhanced security features. However, if they are routing
data through their servers to access a model like GPT-4, your sensitive
financial data is exposed to additional risk without any tangible security
benefit. The promise of enhanced security becomes questionable when the
underlying infrastructure depends on the same major models.
AI platforms have not introduced new risks to privacy or
security beyond what exists with other online services like banks or credit
bureaus. They employ advanced encryption and security protocols to protect user
data. While no system is infallible, major AI models are on par with, if not
superior to, other industries in terms of security measures. They use
end-to-end encryption to protect data in transit and at rest, implement strict
authentication measures to prevent unauthorized access, and conduct regular
security assessments to identify and mitigate vulnerabilities. It is easy to
opt out of providing your data to train new models. It is much more difficult to
know what your vendors are going to do with your data.
In a market flooded with AI offerings, it is crucial to
approach vendors' claims with a healthy dose of skepticism. Validate the
functionality by testing whether the convenience offered by skins genuinely
enhances your experience or merely repackages what is already available. Assess
the security measures by inquiring about the specific protocols in place and
how they differ from those used by major AI providers. Transparency is key;
reputable vendors should be open about how your data is used, stored, and protected.
As the AI gold rush continues, distinguishing between
genuine innovation and superficial value propositions becomes essential. Skins
and security mongers may offer appealing pitches, but often they add little to
no value while potentially increasing costs and risks. It is wise to try using
major AI models directly before opting for third-party solutions. Research the
backgrounds of vendors to determine their credibility and reliability. Seek
reviews and testimonials from other users to gauge the actual benefits and
drawbacks.
In the end, the most powerful tool at your disposal is due
diligence. By critically evaluating what is being sold, you can make informed
decisions that truly benefit you in the rapidly evolving world of AI. Beware of
vendors selling either convenience or security without substantial evidence of
their value. At the very least, take the time to validate their claims before
making an investment.
Thursday, September 12, 2024
The Stealth AI Adoption
In modern workplaces, a quiet trend is taking hold: employees are secretly adopting artificial intelligence tools to enhance their work. Whether it is writing, designing, coding, or creating content, many are leveraging AI without informing their bosses. This “stealth AI adoption” is likely more widespread than managers realize.
Consider Alex, a software developer at a bustling tech firm. To streamline his coding process, Alex uses an AI assistant that can generate snippets of code in seconds. This tool not only saves him hours each week but also allows him to tackle more complex projects. However, Alex keeps this AI helper under wraps. Why? He has two choices: use the extra time for personal activities or take on additional work to appear more productive than his peers. There is no real incentive to admit to using AI. In some shops, the cybersecurity people will come after you if you confess.
This hidden use of AI offers clear benefits for employees. Saving a few hours each week is tempting, whether for personal pursuits or to discreetly boost one’s workload. As a result, many organizations might be underestimating how extensively AI is being integrated into daily tasks.
Productivity can be measured in two ways: doing the same work with fewer people or doing more with the same number. The latter is a healthier, more sustainable approach. To achieve true success, organizations should aim to do more with their existing workforce rather than cutting staff. However, the stealth adoption of AI complicates this goal.
When employees use AI tools without disclosure, organizations miss out on opportunities to harness these technologies strategically. Without knowing how AI is being utilized, companies cannot provide proper training or integrate AI into their workflows effectively. This fragmented approach can lead to missed efficiency gains and a lack of cohesive progress.
To foster a productive and innovative environment, companies need to build trust with their employees. Here is how:
Reassure Employees: Let your team know that adopting AI will not lead to layoffs. Emphasize that AI is a tool to help them do their jobs better, not a replacement for their roles. In unionized environments, a conversation with labor leaders would be wise.
Create Incentives for Disclosure: Encourage employees to share the AI tools they are using by offering rewards or recognition. This transparency can help management understand how AI is being integrated and identify best practices.
Do More with the Same People: Focus on expanding the scope of work and fostering innovation rather than cutting positions. This approach not only boosts morale but also drives the organization forward.
By building trust and creating a supportive environment, organizations can turn stealth AI adoption into a strategic advantage. Employees will feel comfortable sharing their AI discoveries, allowing organizations to implement these tools effectively and sustainably.
As we move further into the AI-driven era, organizations must address this hidden trend. Encouraging transparency about AI tools and developing clear strategies for their use can ensure that productivity gains are real and sustainable. Until then, the silent spread of AI will keep reshaping workplaces, one undisclosed tool at a time.
Saturday, September 7, 2024
AI in Education Research: Are We Asking the Right Questions?
A recent study, "Generative AI Can Harm Learning," finds that students who had access to a standard GPT-4 interface (GPT Base) performed significantly better on practice problems, but when that access was removed, they actually performed worse on exams compared to students who never had AI assistance. Interestingly, students who used a specially designed AI tutor with learning safeguards (GPT Tutor) performed similarly to the control group on exams. While these results are intriguing, we need to take a step back and consider the broader implications.
The researchers should be commended for tackling an important topic. As AI becomes more prevalent in education, understanding its effects on learning is crucial. The study's methodology appears sound, with a good sample size and appropriate controls. However, the conclusions drawn from the results may be somewhat misleading.
Consider an analogy: Imagine a study that taught one group of students to use calculators for arithmetic, while another group learned traditional pencil-and-paper methods. If you then tested both groups without calculators, of course the calculator-trained group would likely perform worse. But does this mean calculators "harm learning"? Or does it simply mean we are testing the wrong skills?
The real question we should be asking is: Are we preparing students for a world without AI assistance, or a world where AI is ubiquitous? Just as we do not expect most adults to perform complex calculations without digital aids, we may need to reconsider what math skills are truly essential in an AI-augmented world.
The study's focus on performance in traditional, unassisted exams may be missing the point. What would be far more interesting is an examination of how AI tutoring affects higher-level math reasoning, problem-solving strategies, or conceptual understanding. These skills are likely to remain relevant even in a world where AI can handle routine calculations and problem-solving.
Moreover, the study's title, "Generative AI Can Harm Learning," may be overstating the case. What the study really shows is that reliance on standard AI interfaces without developing underlying skills can lead to poor performance when that AI is unavailable. However, it also demonstrates that carefully designed AI tutoring systems can potentially mitigate these negative effects. This nuanced finding highlights the importance of thoughtful AI integration in educational settings.
While this study provides valuable data and raises important questions, we should be cautious about interpreting its results too broadly. Instead of seeing AI as a potential harm to learning, we might instead ask how we can best integrate AI tools into education to enhance deeper understanding and problem-solving skills. The goal should be to prepare students for a future where AI is a ubiquitous tool, not to protect them from it.
As we continue to explore the intersection of AI and education, studies like this one are crucial. However, we must ensure that our research questions and methodologies evolve along with the technology landscape. Only then can we truly understand how to harness AI's potential to enhance, rather than hinder, learning.
Thursday, August 29, 2024
Why Newsom should veto SB 1047
Moreover, the burdens imposed by this bill will disproportionately affect smaller developers, particularly those on college campuses or within startups, who simply cannot afford the additional costs. This will stifle innovation, further entrenching the dominance of large tech companies and discouraging new entrants from participating in the AI landscape.
Before implementing such heavy-handed regulations, California must first focus on developing clear standards and building the capacity to enforce them. Without this groundwork, the bill will do more harm than good, leading to increased monopolization and a chilling effect on the very innovation it seeks to protect. The Governor should veto this bill and advocate for a more measured, phased approach that prioritizes the development of standards and capacity before regulation.
Friday, August 23, 2024
Filling Voids, Not Replacing Human Experts
Consider healthcare, particularly in underserved areas. It is a truism that a qualified doctor’s advice is better than anything an AI could provide. But what if you live in a rural village where the nearest doctor is hundreds of miles away? Or in a developing country where medical professionals are stretched thin? Suddenly, the prospect of AI-driven medical advice does not seem like a compromise; it feels like a lifeline. While AI lacks the nuanced judgment of an experienced physician, it can provide basic diagnostics, suggest treatments, or alert patients to symptoms that warrant urgent attention. In such scenarios, AI does not replace a doctor—it replaces the silence of inaccessibility with something, however imperfect.
Another case in point is mental health counseling. In many parts of the world, even in affluent countries, mental health services are woefully inadequate. Students at universities often face wait times ranging from weeks to months just to speak with a counselor. During that limbo, the option to interact with an AI, even one with obvious limitations, can be a critical stopgap. It is not about AI outperforming a trained therapist but offering a form of support when no other is available. It can provide coping strategies, lend a sympathetic ear, or guide someone to emergency services. Here, AI does not replace therapy; it provides something valuable in the absence of timely human support.
Education offers another case for AI’s gap-filling potential. Tutoring is an essential resource, but access to quality tutors is often limited, mainly because it is expensive. Universities might offer tutoring services, but they are frequently understaffed or employ peer tutors. Office hours with professors or teaching assistants can be similarly constrained. AI can step into this void. Chatting with an AI about a difficult concept or problem set might not equal the depth of understanding gained from a one-on-one session with a human tutor, but it is unquestionably better than struggling alone. AI does not compete with tutors; it extends their reach into spaces they cannot physically or temporally cover.
The same logic applies to a range of other fields. Legal advice, financial planning, career coaching—all are areas where AI has the potential to add significant value, not by outstripping human expertise but by offering something in environments where professional advice is out of reach. Imagine a low-income individual navigating legal complexities without the means to hire an attorney. An AI could provide at least basic guidance, clarify legal jargon, and suggest possible actions. All of it must be done with proper disclaimers. It is not a substitute for legal representation, but it is a world better than the alternative: no help at all.
In embracing this non-competing stance, we shift the narrative. The role of AI is not to replace human experts but to step in where human services are scarce or nonexistent. The true potential of AI lies in its ability to democratize access to essential services that many people currently go without. When AI is viewed as a bridge rather than a rival, its utility becomes much more evident. AI does not have to be better than a person to be valuable; it just should be better than the void it fills.
Monday, August 19, 2024
The Right to Leapfrog: Redefining Educational Equity in the Age of AI
AI’s potential in education is clear, particularly in how it can assist students who struggle with traditional learning methods. It is broadly accepted that AI can help bridge gaps in cognitive skills, whether due to dyslexia, ADHD, or other neurodiverse conditions. Yet, the utility of AI should not be confined to specific diagnoses. Insights from decades of implementing the Response to Intervention (RTI) framework reveal that regardless of the underlying cause—be it neurodiversity, trauma, or socioeconomic factors—the type of support needed by struggling students remains remarkably consistent. If AI can aid students with reading difficulties, why not extend its benefits to others facing different but equally challenging obstacles? Equity demands that AI’s advantages be made accessible to all who need them, regardless of the origin of their challenges.
This brings us to a deeper issue: the rigid and often unjust link between procedural and conceptual knowledge. Traditionally, lower-level skills like spelling, grammar, and arithmetic have been treated as prerequisites for advancing to higher-order thinking. The prevailing notion is that one must first master these basics before moving on to creativity, critical thinking, or original thought. However, this linear progression is more a product of tradition than necessity. AI now offers us the chance to reconsider this approach. Students should have the right to leapfrog over certain lower-level skills directly into higher-order cognitive functions, bypassing unnecessary barriers.
Predictably, this notion encounters resistance. Rooted in the Protestant work ethic is the belief that one must toil through the basics before earning the right to engage in more sophisticated intellectual activities. This ethic, which equates hard work on mundane tasks with moral worth, is deeply ingrained in our educational systems. However, in an age where AI can handle many of these lower-level tasks, this mindset seems increasingly obsolete. Insisting that all students must follow the same sequence of skills before advancing to higher-order thinking is not just misguided; it is a relic of a bygone era. If AI enables students to engage meaningfully with complex ideas and creative thinking from the start, we should embrace that opportunity rather than constrain it with outdated dogma.
The implications of this shift are significant. If we recognize the right to leapfrog over certain skills, we must also acknowledge that traditional educational hierarchies need to be re-examined. Skills like spelling and grammar, while valuable, should no longer be gatekeepers for students who excel in critical thinking and creativity but struggle with procedural details. AI offers a way to reimagine educational equity, allowing students to focus on their strengths rather than being held back by their weaknesses. Rather than forcing everyone to climb the same cognitive ladder, we can enable each student to leap to the level that aligns with their abilities, creating a more personalized and equitable educational experience.
This rethinking of educational equity challenges deeply rooted assumptions. The belief that hard work on the basics is necessary for higher-level achievement is pervasive, but it is not supported by evidence. In reality, cognitive development is driven more by engagement with complex ideas than by rote mastery of procedural skills. AI provides the tools to focus on these higher-order skills earlier in a student's education, without the traditional prerequisite of mastering lower-order tasks.
Moreover, the concept of “deskilling” is not new. Throughout history, humanity has continually adapted to technological advances, acquiring new skills while allowing others to fade into obscurity. Today, few people can track animals or make shoes from animal skin—skills that were once essential for survival. Even the ability to harness a horse, once a common necessity, is now a rare skill. While some may lament these losses, they are also a reminder that as society evolves, so too must our educational priorities. Just as technological advancements have rendered certain skills obsolete, AI is reshaping the skills that are most relevant today.
As we move forward, educators must rethink how learning experiences are designed. Rather than viewing AI as merely a tool for accommodating deficits, we should see it as a means of expanding possibilities for all students. By enabling learners to bypass certain skills that are no longer essential in an AI-driven world, we can better align education with the demands of the 21st century. This is about acknowledging that the path to learning does not have to be the same for everyone. In a world where AI can democratize access to higher-level cognitive tasks, the right to leapfrog is not just a possibility—it is a necessity for equitable education.
Friday, August 9, 2024
Authorship, Automation, and Answerability
In the ongoing debate about the ethical use of AI, two main concerns stand out—one superficial and one profound. The first concern, often highlighted, is about the authenticity of authorship, with fears that AI-generated content might mislead us about who the true author is. However, this worry is largely misguided. It stems from a historically limited, Western-centric notion of authorship that blurs the line between the origin of ideas and the craft of their representation.
Take the legacy of Steve Jobs. He wasn’t celebrated for personally assembling each iPhone, but for his vision and design that brought the device to life. In our industrial world, the act of making things is not inherently authorial—designing them is. Why should it be any different with text, code, or images? If I designed this text, and used advanced tools to produce it, why am I not still the author? The shock many feel towards AI’s ability to generate content is akin to the upheaval experienced by 19th-century bootmakers during the Industrial Revolution. Automation has simply extended its reach into the realms of writing, coding, and art. The craftsmanship is replaced by automation, but the core principle remains: take pride in the ideas, not in the mechanics of their production. There is no inherent authorship in the latter.
But here’s where Mikhail Bakhtin’s notion of answerability helps our understanding of the true ethical stakes. While responsibility is often about fulfilling obligations or being held accountable after the fact, answerability is about our ongoing, active engagement with the world and the people in it. It is not just about who gets credit for the content; it is about recognizing that every action, every word, and every piece of AI-generated content occurs within a web of relationships. We are answerable to others because our creations—whether authored by human hands or machine algorithms—affect them.
The real concern, then, lies in the issue of answerability. AI-generated content often appears polished, convincing, and ready for immediate consumption. This creates a dangerous temptation to release such content into the world without thorough scrutiny. Here is where the ethical stakes rise significantly. AI may produce work that looks and sounds credible, but this does not guarantee that it is unbiased, meaningful, or truthful. It may be garbage polluting the infosphere at best, or an outright harmful fake at worst. The ease of content creation does not absolve us of the responsibility to ensure its quality and integrity, and more importantly, it doesn't free us from the answerability we have to the world around us.
This is the message we need to instill in our students, professionals, and anyone working with AI: you are still accountable and answerable for what you produce, even if a machine does the heavy lifting. Releasing AI-generated content without critical evaluation is akin to conjuring a spell without understanding its consequences. Like a magician wielding powerful but unpredictable magic, or a novice driver behind the wheel of a truck instead of a bicycle, the stakes have been raised. The tools at our disposal are more potent than ever, and with that power comes a heightened level of answerability.
In essence, the ethical debate surrounding AI should not be about the authorship of the craft but should be about the integrity and impact of the output. The real challenge is ensuring that what we create with these advanced tools is not only innovative but also responsible and answerable. As we continue to integrate AI into more aspects of our lives, we must focus less on who—or what—authored the content and more on the ethical implications of releasing it into the world. This is where the true ethical discourse lies, and it is here that our attention should be firmly fixed.
Thursday, August 8, 2024
The Cognitive Leap Theory
With the arrival of AI, education is experiencing a profound
shift, one that requires a rethinking of how we design and implement learning
activities. This shift is captured in the cognitive leap theory, which posits
that AI is not just an add-on to traditional education but a transformative
force that redefines the learning process itself. The Cognitive Leap theory is
a core part of a larger AI-positive pedagogy framework.
Traditionally, educational activities have been structured
around original or revised Bloom’s Taxonomy, a framework that organizes
cognitive skills from basic recall of facts (Remember) to higher-order skills
like Evaluation and Creation. While Bloom’s pyramid was often interpreted as a
sequential progression, Bloom himself never insisted on a strict hierarchy. In
fact, with the integration of AI into the classroom, the importance of these
skills is being rebalanced. The higher-order skills, particularly those
involving critical evaluation, are gaining prominence in ways that were
previously unimaginable.
In an AI-positive pedagogical approach, the focus shifts
from merely applying and analyzing information—tasks typically associated with
mid-level cognitive engagement—to critically evaluating and improving
AI-generated outputs. This represents a significant cognitive leap. Instead of
simply completing tasks, students are now challenged to scrutinize AI outputs
for accuracy, bias, and effectiveness in communication. This shift not only
fosters deeper cognitive engagement but also prepares students to navigate the
complex landscape of AI-driven information.
A key component of this approach is the development of
meta-AI skills. These skills encompass the ability to formulate effective (rich)
inquiries or prompts for AI, to inject original ideas into these prompts, and,
crucially, to critically assess the AI’s responses. This assessment is not a
one-time task but part of an iterative loop where students evaluate, re-prompt,
and refine until the output meets a high standard of quality. This process not
only sharpens their analytical skills but also enhances their creative
abilities, as they learn to think critically about the inputs and outputs of AI
systems.
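The shape of that iterative loop can be sketched in a few lines of Python. The helpers below are placeholders, not a real model API: generate() stands in for whatever AI system the class is using, and evaluate() stands in for the student's own rubric (accuracy, bias, clarity of communication).

```python
def generate(prompt: str) -> str:
    # Placeholder: in practice this would call the AI model being used.
    return f"Draft produced for: {prompt[:60]}..."

def evaluate(draft: str, passes: int) -> tuple[int, str]:
    # Placeholder rubric: pretend each revision pass improves the draft.
    return 6 + passes, "claims need sources; tone is too generic"

prompt = "Explain the main drivers of urban heat islands for first-year students"
for attempt in range(3):                      # a bounded number of refinement passes
    draft = generate(prompt)
    score, critique = evaluate(draft, attempt)
    print(f"attempt {attempt + 1}: score {score}/10 - {critique}")
    if score >= 8:                            # quality bar met; stop refining
        break
    prompt += f"\nRevise to address: {critique}"  # fold the critique into the next prompt
```

The loop itself is simple; the educational substance lies in what the student puts inside evaluate(): the criteria, the judgment, and the decision about when the output is finally good enough.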
Moreover, the traditional view that learning progresses
linearly through Bloom’s Taxonomy is being upended. In the AI-enhanced
classroom, evaluation and creation are no longer the endpoints of learning but
are increasingly becoming the starting points. Students must begin by
evaluating AI-generated content and then proceed to improve it, a process that
requires a deep understanding of context, an awareness of potential biases, and
the ability to communicate effectively. This reordering of cognitive priorities
is at the heart of the cognitive leap theory, which emphasizes that the future
of education lies in teaching students not just to perform tasks but to engage
in higher-order thinking at every stage of the learning process.
The implications of this shift are serious. Educators must
rethink how they design assignments, moving away from traditional task-based
assessments toward activities that challenge students to evaluate and improve
upon AI-generated outputs. This requires a new kind of pedagogy, one that is
flexible, iterative, and deeply engaged with the possibilities and limitations
of AI.
By reimagining the role of higher-order thinking skills and emphasizing the critical evaluation of AI outputs, we can prepare students for a future where cognitive engagement is more important than ever. This is not just about adapting to new technology; it is about transforming the way we think about learning itself.
Thursday, August 1, 2024
Meet Jinni, a Universal Assistant Bot
Take Dr. Nguyen, for instance. A junior professor with a packed schedule, she had just been invited to present at a conference in Milan but was not sure how to get funding. She turned to Jinni.
"Good afternoon, Professor Nguyen. What do you need today?" Jinni asked.
"I want to attend a conference in Milan. Can I get support?" she inquired.
Jinni pointed her to the university's travel funding policy and the request form. It added, "If you’d rather tell me the details about the conference and upload the invitation letter, I can file the request for you. Or, you can follow the link and do it yourself."
Professor Nguyen appreciated the options and the clarity, and chose to upload her details, letting Jinni handle the rest. Within a minute, Jinni said, "Done, you should hear from the dean's office within a week. I already checked your eligibility and recommended that the Associate Dean approve the request."
Then there was Mr. Thompson, a new staff member who discovered a puddle in the lobby after a rainy night. He pulled out his phone and described the situation to Jinni.
"You need to file an urgent facilities request. Here’s the link. Would you like me to file one for you? If yes, take a picture of the puddle," Jinni offered. "But if it’s really bad, you may want to call them. Do you want me to dial?"
Mr. Thompson opted for the latter, and within moments, Jinni had connected him to the facilities team.
Finally, there was Jose, a student who had missed the course drop deadline because of a bad flu. Anxious and unsure what to do, he asked Jinni for help.
"Sorry to hear you’ve been sick. Jose. Yes, there is a petition you can file with the Registrar," Jinni replied. "I can do it for you, but I need a few more details. Do you have a note from your doctor? If not, you should get it first, then take a picture of it for me. If you used the Campus Health Center, I can contact them for you to request documentation. I will then write and submit the petition on your behalf. I will also need a few details - which class, the instructore's name, when you got sick, etc." Jose was relieved to find a straightforward solution to his problem and began to answer Jinni's questions one by one.
The technology to create a universal agent bot like Jinni is not yet on the open market, but all of its elements already exist as prototypes. More advanced customizable AI models, trained on extensive and diverse datasets, are essential to handle such tasks. More active, agentic AI also exists; it can file and submit forms, not just find them. But even a system that could simply find and interpret policies and procedures, and point users to the right forms, would already be a huge step forward.
Monday, July 29, 2024
AI is an Amateur Savant
Most people who use AI think it is great in general but believe it does not grasp their area of specialization very well. As an applied philosopher, I create intellectual tools to help others think through their problems. I find AI excellent at clarifying and explaining ideas, but it has never generated an original idea worth writing about. I have yet to see reports from others in any discipline that AI has independently produced groundbreaking ideas.
AI can handle large amounts of data and provide coherent, accurate responses across various fields. This ability is comparable to a well-informed amateur who has a broad understanding but lacks deep expertise. AI can recount historical facts, explain scientific principles, and offer legal insights based on data patterns, yet it falls short in deeper, more nuanced analysis.
In my case, AI can assist by summarizing existing theories or offering possible objections or additional arguments. However, it lacks the ability to generate a genuinely novel idea. I use it a lot, and not once has it produced anything like that. This limitation stems from its reliance on pre-existing data and patterns, preventing it from achieving the level of innovation that human professionals bring to their fields. Some believe that this limitation will soon be overcome, but I do not think so. It seems to be an intrinsic limitation, a function of the way AI is trained.
Professionals/experts, whether in philosophy, medicine, or history, possess a depth of understanding developed through extensive education and practical experience. They apply complex methodologies, critical thinking, and ethical considerations that AI cannot replicate. A doctor considers the patient's history and unique implications of treatments, while a professional historian places events within a broader socio-cultural context. AI, despite its capabilities, often misses these subtleties. It is, in some sense, a savant: a fast, amazing, but inexperienced thinker.
The gap between a capable amateur and a professional/expert might seem small, especially from the point of view of the amateur. However, it is huge and is rooted in the depth of expertise, critical thinking, and the ability to judge that professionals possess; it is a function of intellect, experience, and education. This gap is where educators should look to adapt the curriculum.
In education, we should focus on that gap between the amateur and the professional and conceptualize it as the ultimate learning outcome, then build new skill ladders to climb toward it. Students need to understand and conquer the gap between AI and a professional expert. These meta-AI skills are our true North. AI can support this learning process by providing clear explanations and diverse perspectives, but it cannot replace the nuanced understanding and innovation that human professionals offer.
Wednesday, July 24, 2024
What percentage of my text is AI-generated?
Go ahead, ask me the question. However, I would in turn ask you to specify which of the following kinds of assistance from AI you are interested in.
- Distilling information into summaries
- Revamping and recasting content
- Polishing grammar, spelling, and punctuation
- Sparking ideas and crafting titles
- Conjuring additional arguments or perspectives
- Spotting potential counterarguments or objections
- Constructing and organizing content
- Juxtaposing points from multiple sources
- Scrutinizing and refining existing content
- Demystifying complex ideas or jargon
- Architecting outlines and organizational structures
- Fashioning examples or illustrations
- Tailoring content for different audiences or formats
- Forging hooks or attention-grabbing openings
- Sculpting strong conclusions or call-to-actions
- Unearthing relevant quotes or citations
- Decoding concepts in simpler terms
- Fleshing out brief points or ideas
- Trimming verbose text
- Honing clarity and coherence
- Smoothing the flow between paragraphs or sections
- Concocting metaphors or analogies
- Verifying and authenticating information
- Proposing synonyms or alternative phrasing
- Pinpointing and eliminating redundancies
- Diversifying sentence variety and structure
- Maintaining consistency in tone and style
- Aligning content with specific style guides
- Devising keywords for SEO optimization
- Assembling bullet points or numbered lists
- Bridging sections with appropriate transitions
- Flagging areas that need more elaboration
- Accentuating key takeaways or main points
- Formulating questions for further exploration
- Contextualizing with background information
- Envisioning visual elements or data representations
- Detecting potential areas of bias or subjectivity
- Inventing catchy titles or headlines
- Streamlining the logical flow of arguments
- Boosting text engagement and persuasiveness
- Rooting out and rectifying logical fallacies
- Imagining hypothetical scenarios or case studies
- Illuminating alternative perspectives on a topic
- Weaving in storytelling elements
- Uncovering gaps in research or argumentation
- Producing counterexamples or rebuttals
- Bolstering weak arguments
- Harmonizing tense and voice inconsistencies
- Composing topic sentences for paragraphs
- Integrating data or statistics effectively
- Devising analogies to explain complex concepts
- Injecting humor or wit
- Eradicating passive voice usage
- Compiling topic-specific vocabulary lists
- Enhancing paragraph transitions
- Untangling run-on sentences
- Articulating thesis statements or main arguments
- Infusing content with sensory details
- Resolving dangling modifiers
- Conceiving potential research questions
- Incorporating rhetorical devices
- Rectifying pronoun inconsistencies
- Anticipating potential counterarguments
- Embedding anecdotes effectively
- Mending comma splices
- Drafting potential interview questions
- Sprinkling in cultural references
- Correcting subject-verb agreement errors
- Designing potential survey questions
- Adorning text with figurative language
- Repositioning misplaced modifiers
- Brainstorming potential titles for sections or chapters
- Integrating expert opinions
- Paring down wordiness
- Exploring potential subtopics
- Weaving in statistical data
- Eliminating tautologies
- Coining potential taglines or slogans
- Embedding historical context
- Untangling mixed metaphors
- Developing potential FAQs and answers
- Incorporating scientific terminology
- Fixing split infinitives
- Generating potential discussion points
- Blending in technical jargon
- Expunging clichés
- Crafting potential calls-to-action
- Inserting industry-specific terms
- Replacing euphemisms
- Extracting potential pullout quotes
- Interweaving mathematical concepts
- Eliminating redundant phrasing
- Compiling potential glossary terms and definitions
- Introducing philosophical concepts
- Standardizing formatting
- Curating potential appendix content
- Incorporating legal terminology
- Clarifying ambiguous pronouns
- Cataloging potential index terms
- Synthesizing interdisciplinary perspectives
- Writing long list of AI uses for content generation
Saturday, July 20, 2024
The Three Wave Strategy of AI Implementation
Whether it's a university, a tech giant, a manufacturing company, a public utility, or a government agency, the complexity of operations can be overwhelming. To illustrate this point, Claude and I have generated a list of over 1,150 workflows typical for a large university, many of which can be further broken down into smaller, more specific processes.
Given this complexity, the question arises: Where do we start with AI implementation? The answer lies in a strategic, phased approach that considers the unique characteristics of each workflow and the organization's readiness for AI adoption.
The First Wave: Low-Hanging Fruit
The initial phase of AI implementation should focus on what we call the "low-hanging fruit" - workflows that meet three crucial criteria:
- Self-evident quality: The output quality is immediately obvious and doesn't require complex evaluation.
- Single-person control: The workflow is typically managed or executed by one individual.
- Ready-made AI tools: The process can be enhanced using existing AI tools without requiring specialized development, either by using one of the primary LLMs directly or by building a custom bot.
These criteria help identify areas where AI can quickly and effectively augment human efforts, improving efficiency and potentially enhancing the quality of service provided. Based on these criteria, here's a priority list of workflows that could be considered for the first wave of AI implementation. These are just examples:
- Student services
  - Student and prospective student advising of all kinds
- Resume and Cover Letter Review (Career Services)
  - Offering individual resume critiques
  - Assisting with cover letter development
- Academic Policy Development and Enforcement (Academic Affairs)
  - Drafting and revising academic policies
- Health Education and Outreach (Health and Wellness Services)
  - Creating and distributing health education materials
- Sustainability Education and Outreach (Sustainability and Environmental Initiatives)
  - Creating sustainability guides and resources for the campus community
- Digital Marketing and Social Media Management (University Communications and Marketing)
  - Creating and curating content for various platforms
- Grant Proposal Development and Submission (Research and Innovation)
  - Assisting faculty with proposal writing
- Financial Aid Counseling (Financial Aid and Scholarships)
  - Providing one-on-one counseling sessions
  - Offering debt management and financial literacy education
- Alumni Communications (Alumni Relations and Development)
  - Producing alumni magazines and newsletters
- Scholarly Communications (Library Services)
  - Supporting faculty in publishing and copyright issues
  - Providing guidance on research impact metrics
- International Student and Scholar Services (International Programs and Global Engagement)
  - Providing immigration advising and document processing
This first wave serves multiple purposes. It demonstrates the proof of principle, making more stakeholders comfortable with AI integration. It also helps build internal expertise and confidence in working with AI technologies. These early successes can pave the way for more ambitious implementations in the future.
The Second Wave: Tackling Costly Workflows
Once the organization has gained experience and confidence from the first wave, it can move on to more complex and costly workflows. These are typically processes that involve significant labor, occur frequently, and have a broad scope of impact on the organization. However, it is crucial to narrow down this list based on feasibility and readiness for AI implementation.
For instance, while teaching is undoubtedly one of the most labor-intensive and impactful processes in a university, we do not yet have sufficient knowledge on how to make it significantly more efficient through AI. Some processes, like teaching, may never be fully optimized by AI because of their inherently relational nature.
Note that this is also an opportunity to review major workflows; they have often evolved over the years and are far from ideally efficient. AI can help review these workflows and recommend ways to streamline them. And of course, AI can be integrated into actually doing the work.
The Third Wave: Enterprise-Level Solutions
Only after successfully navigating the first two waves should an organization consider enterprise-level AI solutions. These solutions have the potential to radically redefine the organization's core operations, placing AI at the center of its processes. This level of integration requires a deep understanding of AI capabilities, a clear vision of the organization's future, and a robust infrastructure to support AI-driven operations. Most importantly, it requires specialized tools and a high level of security.
The Timeline and Exceptions
This phased approach to AI implementation is not a quick process. For most large, complex organizations, it could take a couple of decades to fully realize the potential of AI across all workflows. However, there are exceptions. Some businesses with simpler and fewer workflows, such as narrowly specialized customer service operations, may be able to leapfrog straight into the third wave, especially if they have prior experience with AI technologies.
But these are the exceptions rather than the rule. For the majority of organizations, the path to comprehensive AI implementation requires a well-thought-out strategy, clear priorities, and a focus on building confidence and expertise over time.
Integrating AI into a complex organization's workflows is a marathon, not a sprint. It demands patience, strategic thinking, and a willingness to learn and adapt. The key is to approach this journey with a clear strategy, well-defined priorities, and a commitment to building internal AI expertise.
Wednesday, July 17, 2024
AI is not going to implement itself, but governments can help
Monday, July 15, 2024
Effort in Learning: The Good, the Bad, and the AI Advantage
Many educators argue that AI makes learning too easy, suggesting that students need to apply effort to truly learn. This perspective, however, confuses the notion of effort with the process of learning itself. The belief that every kind of effort leads to learning overlooks a significant aspect of cognitive psychology: the nature and impact of cognitive load.
Cognitive load theory, developed by John Sweller, offers a crucial framework for understanding how students learn. It posits that the human brain has a limited capacity for processing information. Sweller distinguished between three types of cognitive load: intrinsic, extraneous, and germane. Intrinsic cognitive load is inherent to the task itself. For instance, solving a complex mathematical problem has a high intrinsic load due to the complexity of the content. Germane cognitive load, on the other hand, refers to the mental resources devoted to processing, construction, and automation of schemas, which are structures that help solve problems within a specific domain.
The most problematic, however, is extraneous cognitive load. This type of load is not related to the task itself but to the way information is presented or to the extraneous demands placed on learners. High extraneous cognitive load can distract and stunt learning, making it harder for students to engage meaningfully with the material. For example, a poorly designed textbook that requires constant cross-referencing adds unnecessary cognitive load, detracting from the student's ability to learn. A terrible lecture or a busywork assignment does the same. If you think that every effort by a student is valuable, you are a hazer, not a teacher.
The challenge, therefore, is not to eliminate all effort but to ensure that the effort students exert is directed towards productive ends. In other words, we need to reduce extraneous cognitive load and increase germane cognitive load. The true aim is to leverage AI to enhance germane cognitive load, directly aiding in the acquisition of schemas necessary for solving discipline-specific problems.
Every academic discipline has core problems that students are expected to solve by the end of their programs. The first step is to mercilessly strip the language of learning outcomes of wishy-washy jargon and focus on these fundamental problems. By identifying these top-level problems, educators can better understand the sequences of skills and knowledge students need to acquire.
Once these core problems are identified, it is crucial to examine how professionals in the field solve them. This involves a detailed analysis of the mental schemas that experts use. Schemas are cognitive structures that allow individuals to organize and interpret information. They enable professionals to recognize patterns, make decisions, and solve problems efficiently. For example, a doctor has schemas for diagnosing illnesses based on symptoms and test results, while an engineer has schemas for designing structures that withstand specific stresses. It is also important to understand whether the field itself is changing: professionals may already be solving these problems with AI, or may soon be doing so.
AI can play a pivotal role in helping students develop these schemas. These technologies can identify where a student is struggling and provide targeted support, ensuring that cognitive resources are directed towards germane learning activities rather than being wasted on extraneous tasks.
To achieve this, we need to revisit the basic principles of instructional design. While these principles remain fundamentally the same, they require new thinking in light of AI capabilities. Instructional design should focus on reducing extraneous cognitive load by simplifying the learning environment and minimizing distractions. Simultaneously, it should increase germane cognitive load by providing challenging and meaningful tasks that promote the construction of schemas.
Moreover, educators need to recognize where cognitive load is not useful and should focus exclusively on the germane kind. This might mean redesigning courses to incorporate AI tools that can automate routine tasks, provide instant feedback, and offer complex, real-world problems for students to solve. Such an approach ensures that students are engaged in deep, meaningful learning activities rather than busywork.
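As one hedged illustration of what "instant feedback" might look like in practice (the post does not prescribe any particular tool), here is a minimal sketch assuming the OpenAI Python SDK; the rubric, model name, and sample draft are invented for the example.

```python
# Minimal sketch, assuming the OpenAI Python SDK (openai>=1.0) with an API key
# set in the environment; the rubric and model name are hypothetical examples.
from openai import OpenAI

client = OpenAI()

RUBRIC = (
    "You are giving formative feedback on a short student draft. "
    "Score it 1-5 on each criterion: (1) clear thesis, (2) supporting evidence, "
    "(3) engagement with a counterargument. Give one sentence of feedback per criterion."
)

def instant_feedback(student_draft: str) -> str:
    """Return rubric-based formative feedback for a student draft."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": RUBRIC},
            {"role": "user", "content": student_draft},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(instant_feedback("Homework should be abolished because it takes too much time."))
```

The point of such a loop is not to replace the instructor's judgment but to take the routine, repetitive part of feedback off the extraneous-load pile.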
Ad summam, the integration of AI in education is not about making learning easier in a superficial sense. It is about making learning more effective by ensuring that students' cognitive resources are directed towards activities that genuinely promote understanding and skill acquisition. By focusing on germane cognitive load and leveraging AI to support instructional design, we can create learning environments that foster deep, meaningful learning and prepare students to solve the complex problems of their disciplines. This calls for a rigorous rethinking of educational practices and a commitment to harnessing AI's potential to enhance, rather than hinder, the learning process.
Tuesday, July 9, 2024
AI-Positive Pedagogy: Navigating the Great Disruption
AI has disrupted the educational landscape. This disruption threatens the established sequence of skill development, from simple to mid-range to higher-level skills, by eroding traditional curriculum principles, particularly in the realm of student activities and assessment. As a profession, we face a critical decision: limit AI use or develop an AI-positive pedagogy.
While limiting AI use may seem tempting, it is ultimately unfeasible and fails to prepare students for the AI-infused world they will live in. Attempting to enforce strict limitations on AI use is not only impractical but also fails to acknowledge the potential benefits that AI can bring to education.
The only plausible path forward is to adopt a new pedagogy that harnesses the power of AI for the benefit of our students. This involves a shift towards authentic, discipline-specific assessments that mirror real-world applications of AI within various fields. By focusing on how AI is used in different disciplines, educators can create assessments that evaluate students' ability to effectively utilize AI tools in relevant contexts.
AI-positive pedagogy emphasizes the cultivation of higher-order thinking skills, such as prompt engineering and discerning thinking. Prompt engineering involves crafting effective queries and instructions for AI systems, while discerning thinking emphasizes the critical evaluation of AI-generated information and the ability to make informed decisions by combining AI insights with human judgment. These meta-AI skills are crucial for students to navigate and thrive in an AI-populated world.
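To make these two meta-AI skills a bit more tangible, here is a small illustrative sketch; the prompts and checklist items are hypothetical examples, not a canonical formulation of either skill.

```python
# Illustrative only: the contrast between an unengineered prompt and an
# engineered one, plus a checklist that operationalizes discerning thinking.
vague_prompt = "Write about cognitive load."

engineered_prompt = """You are a teaching assistant in an introductory
educational psychology course.
Task: explain extraneous vs. germane cognitive load to first-year students.
Constraints: under 200 words, one concrete classroom example per type,
plain language, no citations.
Format: two short paragraphs labeled 'Extraneous' and 'Germane'."""

discernment_checklist = [
    "Does the answer satisfy every stated constraint?",
    "Are the examples plausible, and how would I verify them?",
    "What did the model omit that the task required?",
    "Where must human judgment override the model's suggestion?",
]
```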
AI-positive pedagogy should prepare students to work effectively in environments where human and artificial intelligence coexist and complement each other. By fostering skills in collaborating with AI systems, understanding the strengths of both human and artificial intelligence, and developing strategies for distributed problem-solving, educators can equip students to succeed in the AI-infused workplace.
The shift towards AI-positive pedagogy is well-rooted in past pedagogy and curriculum theory. Educators have long prioritized conceptual and higher-level skills over mechanical and procedural knowledge. The disruption caused by AI may serve as a catalyst for educators to finally achieve what they have been striving for over the past century.
As we embrace AI-positive pedagogy, it is essential to re-evaluate the assumption that all effort leads to learning. Cognitive Load Theory suggests that learning can be optimized by managing the three types of cognitive load: intrinsic (the inherent complexity of the learning material), extraneous (caused by ineffective instructional design), and germane (the effort required to process and construct mental schemas). In the context of AI-positive pedagogy, this means using AI tools to provide appropriate support and scaffolding as learners progress from lower-level to higher-level skills, while minimizing extraneous load and promoting germane load. Not all loss of student effort is bad; if we are honest, much of learning work is extraneous, busy, or compliance work anyway.
By investigating the limits and structure of leapfrogging - skipping over mid-range skills to move from basic literacies and numeracies to creative, theoretical, and critical thinking - educators can harness the power of AI to accelerate student growth.
To develop a robust AI-positive pedagogy, educators and cognitive psychologists must collaborate to investigate how students interact with and perceive AI tools, whether on their own or under a teacher's guidance. This research should focus on understanding the mental models students develop when engaging with AI, and how these models differ from those associated with other educational tools. By exploring students' cognitive processes, researchers can identify the unique challenges and opportunities presented by AI in the learning environment.
It is also crucial to examine the emotional and motivational factors that influence students' engagement with AI tools. Understanding how students' attitudes, beliefs, and self-efficacy impact their willingness to adopt and effectively use AI in their learning can inform the design of AI-positive pedagogical strategies.
In addition to investigating student cognition and affect, researchers should also explore the social and cultural dimensions of AI use in education. This includes examining how AI tools can be leveraged to promote collaborative learning, foster inclusive learning environments, and bridge educational inequities.
To build a comprehensive AI-positive pedagogy, researchers and educators must also develop and validate practices for integrating AI into various disciplines and educational contexts. This involves creating guidelines for the use of AI in education, as well as establishing professional development programs to support educators in effectively implementing AI-positive pedagogical strategies.
The development of an evidence-based AI-positive pedagogy requires a concerted effort from the educational community. By investing in basic research, collaboration, and innovation, we can harness the potential of AI to transform education and empower students to thrive in an AI-infused world.
Wednesday, June 26, 2024
On AI Shaming
Here is a new thing: AI shaming. It is a practice where individuals accuse others of using artificial intelligence to generate written content, as if such an act were inherently deceitful or somehow sinful. How fascinating, the very premise of it.
This phenomenon reveals itself to be a fairly common logical fallacy. It is a summative dismissive argument, with a dash of ad hominem (ad machinam?) for flavor. One might wonder why the method of creation should overshadow the substance of the content. After all, we don't dismiss the works of authors who switched from quills to typewriters, do we?
There is a certain Foucauldian quality to this practice, if one may be permitted a touch of theoryhead's indulgence. By attempting to draw a sharp line between 'acceptable' and 'unacceptable' means of content creation, some seem to be engaging in a subtle power play. It is almost as if they are trying to sell their unfamiliarity with newer technologies as a virtue. It is a rather clever way of elevating the game at which one excels and putting down the game at which one fails. While an understandable inclination, it is still a folly.
For those of us who have embraced these new tools, such accusations are about as concerning as a light drizzle on a summer day - which is to say, entirely expected and hardly worth mentioning. If anything, it provides a certain amusement to observe the lengths to which some will go to maintain the status quo and their privileged little spot in it.
However, there is a more sobering concern to consider. While thick-skinned people like me might brush off such criticisms with a raised eyebrow, younger, more impressionable people might internalize this arbitrary stigma. It would be a shame if the next generation felt compelled to hide their technological proficiency out of fear of Luddites' bullying.
As these AI tools inevitably become more sophisticated and ubiquitous, perhaps we might redirect our energy towards more productive ends. Instead of engaging in this curious form of digital finger-pointing, we could focus on the responsible and creative use of these technologies. After all, the ideas expressed within content will always be more intriguing than the means by which they were transcribed.
To those who persist in AI shaming: by all means, knock yourself out. Your dedication to this cause is admirable, if somewhat perplexing. Just don't be too surprised if the rest of us seem a bit distracted - we'll be busy adapting to the future while you're perfecting your finger-wagging techniques.
P.S. This text, according to QuillBot, is 0% AI-generated, however I wrote it with Claude :)
How to bullshit about bullshit
1. Focus on terminology over utility:
The paper spends considerable effort arguing that the outputs of large language models (LLMs) like ChatGPT should be classified as "bullshit" rather than "hallucinations." While this may be an interesting philosophical distinction, it arguably does little to advance our understanding of the practical implications and applications of these technologies. The authors could have devoted more attention to analyzing the actual utility and potential use cases of LLMs, which would likely be more valuable to policymakers and the public.
2. Limited practical insights:
By focusing primarily on categorizing the nature of LLM outputs, the paper misses an opportunity to provide meaningful insights into how these technologies can be effectively and responsibly utilized. A more constructive approach might involve examining specific contexts where LLMs prove useful and where they fall short, rather than broadly labeling their output as "bullshit."
3. Potential for misdirection:
While the authors argue that the term "hallucinations" is misleading, replacing it with "bullshit" may not necessarily lead to a clearer understanding of LLM behavior. Both terms are metaphorical and could potentially misguide readers about the true nature and capabilities of these systems. A more technical and nuanced explanation of how LLMs function and their limitations might be more informative.
4. Overlooking nuance:
The paper seems to take a binary approach: either LLMs are concerned with truth (which they argue against) or they are producing "bullshit." This oversimplification may overlook the nuanced ways in which LLMs can be useful for various tasks, even if they don't have an inherent concern for truth in the way humans do.
5. Lack of actionable recommendations:
While the authors critique the use of the term "hallucinations," they don't offer clear, actionable recommendations for how to better communicate about LLMs to policymakers and the public. A more constructive approach would be to propose specific ways to educate stakeholders about the capabilities and limitations of these technologies.
6. Missing the broader context:
By focusing narrowly on the philosophical categorization of LLM outputs, the paper misses an opportunity to discuss the broader implications of these technologies on society, the economy, and various industries. A more comprehensive analysis of the impact and potential of LLMs would likely be more valuable to readers.