AI in Society
This blog is connected to my role as head of the National Institute on AI in Society at California State University Sacramento. However, the opinions and positions expressed here are mine and do not represent the university's opinions or positions.
Sunday, September 29, 2024
Advanced AI users develop special cognitive models
When we encounter a stranger, we make swift, often
unconscious judgments about who they are and what they are capable of. A person
who speaks our language with barely a hint of an accent? We assume they are
fluent. Someone who drops a reference to a complex scientific theory? We peg
them as well-educated, likely to be literate, and probably knowledgeable about
a range of topics from current events to social norms.
These snap judgments form the backbone of our social interactions. They are
mental shortcuts, honed over millennia of human evolution, allowing us to
navigate the complexities of social life with remarkable efficiency. Most of
the time, they serve us well. We can usually guess whether someone will
understand a joke, follow a complex argument, or need help using a smartphone. These predictions rest on cognitive models: internal pictures of what another mind can and cannot do.
But when we step into the realm of artificial intelligence, these time-tested
models crumble. Our human-centric predictions fail spectacularly, leaving us
confused and often frustrated. Consider a recent incident with ChatGPT, a
sophisticated language model. When asked to count the number of 'r's in the
word "strawberry," it faltered. Many observers scoffed, concluding
that AI must be fundamentally stupid if it couldn't handle such a simple task.
Yet this reaction reveals more about our flawed expectations than any
shortcoming of AI. Those familiar with AI's inner workings were not surprised.
They understand that a language model, no matter how advanced, is not optimized
for character-level analysis. It is like expecting a master chef to be an
expert accountant simply because both professions involve numbers.
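To see why this is so, here is a minimal sketch using the open-source tiktoken tokenizer (my own illustration; any subword tokenizer would make the same point). It shows that a language model receives multi-character token pieces rather than individual letters, which is why counting the 'r's in a word is not a natural operation for it, while it is trivial for ordinary code.

```python
# Rough illustration of why character-level questions are awkward for a
# language model: it operates on tokens, not letters.
# Requires the open-source tokenizer library: pip install tiktoken
import tiktoken

encoding = tiktoken.get_encoding("cl100k_base")  # tokenizer used by several OpenAI models

word = "strawberry"
token_ids = encoding.encode(word)

# Show the chunks the model actually "sees" -- typically multi-character
# pieces rather than single letters.
pieces = [encoding.decode_single_token_bytes(t).decode("utf-8", errors="replace")
          for t in token_ids]
print(pieces)

# Counting characters directly, by contrast, is trivial for ordinary code.
print(word.count("r"))
```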
This misalignment between our expectations and AI's actual capabilities stems
from our tendency to anthropomorphize. We instinctively attribute human-like
qualities to these digital entities. We expect them to have consistent
opinions, to learn from our interactions, to understand context and nuance as
we do. But AI, in its current form, does none of these things.
Unlike humans, AI does not carry the baggage of personal experience or emotion.
It does not have good days or bad days. It will not be flattered by praise or
offended by insults. It can switch from discussing quantum physics to writing
poetry without missing a beat, unencumbered by the specialization that defines
human expertise.
But AI's differences extend beyond mere capability. It lacks the fundamental
attributes we associate with consciousness. It has no self-awareness, no goals
or motivations of its own. It does not truly understand the content it
generates, despite how convincing it may seem. It is a reflection of the data
it was trained on, not a sentient being forming its own thoughts and opinions.
To interact effectively with AI, we need to develop new mental models. We must
learn to predict its behavior not based on human analogies, but on an
understanding of its unique nature. This means recognizing that AI might
struggle with tasks we find trivially easy, while effortlessly accomplishing
feats that would challenge even the most brilliant human minds.
It means understanding that every interaction with AI is essentially new.
Unlike humans, who build on past conversations and experiences, most current AI
systems do not retain information from one chat to the next. They do not learn
or evolve through our interactions. Each query is processed afresh, without the
context of what came before.
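A minimal sketch makes this statelessness concrete. The example below uses the OpenAI Python SDK; the model name is illustrative, and the point is simply that continuity exists only if the application re-sends the earlier turns with every request.

```python
# Sketch: a chat-model API call is stateless. If the earlier exchange is not
# re-sent, the model has no memory of it. Model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

history = [{"role": "user", "content": "My name is Dana. Please remember it."}]
first = client.chat.completions.create(model="gpt-4o-mini", messages=history)
history.append({"role": "assistant", "content": first.choices[0].message.content})

# A fresh call without the history starts from scratch and cannot know the name.
fresh = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "What is my name?"}],
)

# Continuity comes only from sending the prior turns back each time.
followup = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=history + [{"role": "user", "content": "What is my name?"}],
)
print(followup.choices[0].message.content)
```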
This new model of understanding also requires us to be more precise in our
interactions with AI. While humans often fill in gaps in conversation with
assumed context, AI interprets our requests literally. It does not
automatically infer our unstated needs or desires. The clarity of our input
directly influences the quality of the AI's output.
As AI becomes an increasingly integral part of our lives, developing these new
mental models is not just about avoiding frustration. It is about unlocking the
full potential of these powerful tools. By understanding AI's strengths and
limitations, we can craft our interactions to leverage its capabilities more
effectively.
The future of human-AI interaction lies not in expecting AI to conform to human
patterns, but in adapting our approach to align with AI's unique
characteristics. It is a future that requires us to be more thoughtful, more
precise, and more open to rethinking our instinctive assumptions. In doing so,
we may not only improve our interactions with AI but also gain new insights
into the nature of intelligence itself.
Monday, September 23, 2024
Cognitive Offloading: Learning more by doing less
Cognitive offloading, as defined by Risko and Gilbert (2016) in their seminal paper "Cognitive Offloading," is "the use of physical action to alter the information processing requirements of a task so as to reduce cognitive demand." In other words, it is about leveraging external tools and resources to ease the mental burden of cognitive tasks.
Some educators mistakenly believe that any cognitive effort is beneficial for growth and development. However, this perspective overlooks the crucial role of cognitive offloading in effective learning. As Risko and Gilbert point out, "Offloading cognition helps us to overcome such capacity limitations, minimize computational effort, and achieve cognitive feats that would not otherwise be possible."
The ability to effectively offload cognitive tasks has always been important for human cognition. Throughout history, we've developed tools and strategies to extend our mental capabilities, from simple note-taking to complex computational devices. However, the advent of AI has made this skill more crucial than ever before.
With AI, we are not just offloading simple calculations or memory tasks; we are potentially shifting complex analytical and creative processes to these powerful tools. This new landscape requires a sophisticated understanding of AI capabilities and limitations. More importantly, it demands the ability to strategically split tasks into elements that can be offloaded to AI and those that require human cognition.
This skill - the ability to effectively partition cognitive tasks between human and AI - is becoming a key challenge for contemporary pedagogy. It is not just about using AI as a tool, but about understanding how to integrate AI into our cognitive processes in a way that enhances rather than replaces human thinking.
As Risko and Gilbert note, "the propensity to offload cognition is influenced by the internal cognitive demands that would otherwise be necessary." In the context of AI, this means learners need to develop a nuanced understanding of when AI can reduce cognitive load in beneficial ways, and when human cognition is irreplaceable.
For educators, this presents both a challenge and an opportunity. The challenge lies in teaching students not just how to use AI tools, but how to think about using them. This involves developing metacognitive skills that allow students to analyze tasks, assess AI capabilities, and make strategic decisions about cognitive offloading.
The opportunity, however, is immense. By embracing cognitive offloading and teaching students how to effectively leverage AI, we can potentially unlock new levels of human cognitive performance. We are not just making learning easier; we are expanding the boundaries of what is learnable.
It is crucial to recognize the value of cognitive offloading and develop sophisticated strategies for its use. The paradox of doing less to learn more is not just a quirk of our technological age; it is a key to unlocking human potential in a world of ever-increasing complexity. The true measure of intelligence in the AI era may well be the ability to know when to think for ourselves, and when to let AI do the thinking for us.
Tuesday, September 17, 2024
Why Parallel Integration Is the Sensible Strategy of AI Adoption in the Workplace
Artificial intelligence promises to revolutionize the way we work, offering efficiency gains and new capabilities. Yet, adopting AI is not without its challenges. One prudent approach is to integrate AI into existing workflows in parallel with human processes. This strategy minimizes risk, builds confidence, and allows organizations to understand where AI excels and where it stumbles before fully committing. I have described the problem of AI output validation before; it is a serious impediment to AI integration. Here is how to solve it.
Consider a professor grading student essays. Traditionally, this is a manual task that relies on the educator's expertise. Introducing AI into this process does not mean handing over the red pen entirely. Instead, the professor continues grading as usual but also runs the essays through an AI system. Comparing results highlights discrepancies and agreements, offering insights into the AI's reliability. Over time, the professor may find that the AI is adept at spotting grammatical errors but less so at evaluating nuanced arguments.
In human resources, screening job applications is a time-consuming task. An HR professional might continue their usual screening while also employing an AI tool to assess the same applications. This dual approach ensures that no suitable candidate is overlooked due to an AI's potential bias or error. It also helps the HR team understand how the AI makes decisions, which is crucial for transparency and fairness.
Accountants auditing receipts can apply the same method. They perform their standard checks while an AI system does the same in the background. Any discrepancies can be investigated, and patterns emerge over time about where the AI is most and least effective.
This strategy aligns with the concept of "double-loop learning" from organizational theory, introduced by Chris Argyris. Double-loop learning involves not just correcting errors but examining and adjusting the underlying processes that lead to those errors. By running human and AI processes in parallel, organizations engage in a form of double-loop learning—continually refining both human and AI methods. Note, it is not only about catching and understanding AI errors; the parallel process will also find human errors through the use of AI. The overall error level will decrease.
Yes, running parallel processes takes some extra time and resources. However, this investment is modest compared to the potential costs of errors, compliance issues, or reputational damage from an AI mishap. People need to trust the technology they use, and building that trust takes time.
The medical field offers a pertinent analogy. Doctors do not immediately rely on AI diagnoses without validation. They might consult AI as a second opinion, especially in complex cases. This practice enhances diagnostic accuracy while maintaining professional responsibility. Similarly, in business processes, AI can serve as a valuable second set of eyes.
As confidence in the AI system grows, organizations can adjust the role of human workers. Humans might shift from doing the task to verifying AI results, focusing their expertise where it's most needed. This gradual transition helps maintain quality and trust, both internally and with clients or stakeholders.
In short, parallel integration of AI into work processes is a sensible path that balances innovation with caution. It allows organizations to harness the benefits of AI while managing risks effectively. By building confidence through experience and evidence, businesses can make informed decisions about when and how to rely more heavily on AI.
Saturday, September 14, 2024
Navigating the AI Gold Rush: Skins, Security, and the Real Value Proposition
The economic battle surrounding artificial intelligence is intensifying at an unprecedented pace. Major AI players like OpenAI, Google, Meta, and Anthropic are leading this technological revolution. Tech giants such as Microsoft, Amazon, and Apple, along with thousands of startups, are vying for a stake in this burgeoning market without being able to develop their own competitive models. Amidst this frenzy, a critical question arises: what exactly is being sold?
Two primary value propositions have emerged in this
landscape: skins and security mongers. Skins are interfaces or applications
that overlay major AI models, aiming to simplify user interaction. They cater
to individuals lacking advanced prompting skills, offering a more user-friendly
experience. Security mongers, on the other hand, emphasize heightened privacy
and security, often exaggerating potential risks to entice users.
While both propositions seem valuable on the surface, a
deeper examination reveals significant shortcomings. Skins promise to
streamline interactions with AI models by providing preset prompts or
simplified interfaces. For instance, a startup might offer a chatbot
specialized in drafting business emails, claiming it saves users the hassle of
formulating prompts themselves. However, is this convenience truly worth it?
Major AI models are increasingly user-friendly. ChatGPT, for
example, has an intuitive interface that caters to both novices and experts.
Users often find they can achieve the same or better results without
intermediary platforms. Additionally, skins often come with subscription fees
or hidden costs, meaning users are essentially paying extra for a service the
primary AI model already provides. There is also the issue of limited
functionality; skins may restrict access to the full capabilities of the AI
model, offering a narrow set of functions that might not meet all user needs.
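To make the point concrete, here is a minimal sketch (my own illustration, not any vendor's actual code) of what many such products reduce to: a preset prompt wrapped around the same underlying model API that users could call, or simply type into a chat window, themselves. The product concept, prompt, and model name are all hypothetical.

```python
# What a typical "skin" often amounts to: a preset prompt wrapped around the
# same underlying model API. Sketch only; names and prompt are hypothetical.
from openai import OpenAI

client = OpenAI()

def draft_business_email(notes: str) -> str:
    """A hypothetical 'email assistant' product, reduced to its essence."""
    preset_prompt = (
        "You are a professional business-email writer. "
        "Turn the following notes into a concise, polite email:\n"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": preset_prompt + notes}],
    )
    return response.choices[0].message.content

print(draft_business_email("meeting moved to Tuesday; send new agenda by Friday"))
```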
The second proposition taps into growing concerns over data
privacy and security. Vendors claim to offer AI solutions with superior
security measures, assuring users their data is safer compared to using
mainstream models directly. But does this claim hold up under scrutiny?
Most of these intermediaries still rely on API connections
to major AI models like ChatGPT. Your data passes through their servers before
reaching the AI model, effectively adding another point of vulnerability.
Introducing additional servers and transactions inherently increases the risk
of data breaches. More touchpoints mean more opportunities for data to be
intercepted or mishandled. Furthermore, major AI providers invest heavily in
security and compliance, adhering to stringent international standards. Smaller
vendors may lack the resources to match these safeguards.
For example, a startup might advertise an AI-powered
financial advisor with enhanced security features. However, if they are routing
data through their servers to access a model like GPT-4, your sensitive
financial data is exposed to additional risk without any tangible security
benefit. The promise of enhanced security becomes questionable when the
underlying infrastructure depends on the same major models.
AI platforms have not introduced new risks to privacy or
security beyond what exists with other online services like banks or credit
bureaus. They employ advanced encryption and security protocols to protect user
data. While no system is infallible, major AI models are on par with, if not
superior to, other industries in terms of security measures. They use
end-to-end encryption to protect data in transit and at rest, implement strict
authentication measures to prevent unauthorized access, and conduct regular
security assessments to identify and mitigate vulnerabilities. With the major providers, it is easy to
opt out of having your data used to train new models. It is much harder to
know what a third-party vendor is going to do with your data.
In a market flooded with AI offerings, it is crucial to
approach vendors' claims with a healthy dose of skepticism. Validate the
functionality by testing whether the convenience offered by skins genuinely
enhances your experience or merely repackages what is already available. Assess
the security measures by inquiring about the specific protocols in place and
how they differ from those used by major AI providers. Transparency is key;
reputable vendors should be open about how your data is used, stored, and protected.
As the AI gold rush continues, distinguishing between
genuine innovation and superficial value propositions becomes essential. Skins
and security mongers may offer appealing pitches, but often they add little to
no value while potentially increasing costs and risks. It is wise to try using
major AI models directly before opting for third-party solutions. Research the
backgrounds of vendors to determine their credibility and reliability. Seek
reviews and testimonials from other users to gauge the actual benefits and
drawbacks.
In the end, the most powerful tool at your disposal is due
diligence. By critically evaluating what is being sold, you can make informed
decisions that truly benefit you in the rapidly evolving world of AI. Beware of
vendors selling either convenience or security without substantial evidence of
their value. At the very least, take the time to validate their claims before
making an investment.
Thursday, September 12, 2024
The Stealth AI Adoption
In modern workplaces, a quiet trend is taking hold: employees are secretly adopting artificial intelligence tools to enhance their work. Whether it is writing, designing, coding, or creating content, many are leveraging AI without informing their bosses. This “stealth AI adoption” is likely more widespread than managers realize.
Consider Alex, a software developer at a bustling tech firm. To streamline his coding process, Alex uses an AI assistant that can generate snippets of code in seconds. This tool not only saves him hours each week but also allows him to tackle more complex projects. However, Alex keeps this AI helper under wraps. Why? He has two choices: use the extra time for personal activities or take on additional work to appear more productive than his peers. Either way, there is no real incentive to admit to using AI. In some shops, the cybersecurity team will come after you if you confess.
This hidden use of AI offers clear benefits for employees. Saving a few hours each week is tempting, whether for personal pursuits or to discreetly boost one’s workload. As a result, many organizations might be underestimating how extensively AI is being integrated into daily tasks.
Productivity can be measured in two ways: doing the same work with fewer people or doing more with the same number. The latter is a healthier, more sustainable approach. To achieve true success, organizations should aim to do more with their existing workforce rather than cutting staff. However, the stealth adoption of AI complicates this goal.
When employees use AI tools without disclosure, organizations miss out on opportunities to harness these technologies strategically. Without knowing how AI is being utilized, companies cannot provide proper training or integrate AI into their workflows effectively. This fragmented approach can lead to missed efficiency gains and a lack of cohesive progress.
To foster a productive and innovative environment, companies need to build trust with their employees. Here is how:
Reassure Employees: Let your team know that adopting AI will not lead to layoffs. Emphasize that AI is a tool to help them do their jobs better, not a replacement for their roles. In unionized environments, a conversation with labor leaders would be wise.
Create Incentives for Disclosure: Encourage employees to share the AI tools they are using by offering rewards or recognition. This transparency can help management understand how AI is being integrated and identify best practices.
Do More with the Same People: Focus on expanding the scope of work and fostering innovation rather than cutting positions. This approach not only boosts morale but also drives the organization forward.
By building trust and creating a supportive environment, organizations can turn stealth AI adoption into a strategic advantage. Employees will feel comfortable sharing their AI discoveries, allowing organizations to implement these tools effectively and sustainably.
As we move further into the AI-driven era, organizations must address this hidden trend. Encouraging transparency about AI tools and developing clear strategies for their use can ensure that productivity gains are real and sustainable. Until then, the silent spread of AI will keep reshaping workplaces, one undisclosed tool at a time.
Saturday, September 7, 2024
AI in Education Research: Are We Asking the Right Questions?
A recent study, provocatively titled "Generative AI Can Harm Learning," finds that students who had access to a standard GPT-4 interface (GPT Base) performed significantly better on practice problems, but when that access was removed, they actually performed worse on exams than students who never had AI assistance. Interestingly, students who used a specially designed AI tutor with learning safeguards (GPT Tutor) performed similarly to the control group on exams. While these results are intriguing, we need to take a step back and consider the broader implications.
The researchers should be commended for tackling an important topic. As AI becomes more prevalent in education, understanding its effects on learning is crucial. The study's methodology appears sound, with a good sample size and appropriate controls. However, the conclusions drawn from the results may be somewhat misleading.
Consider an analogy: Imagine a study that taught one group of students to use calculators for arithmetic, while another group learned traditional pencil-and-paper methods. If you then tested both groups without calculators, of course the calculator-trained group would likely perform worse. But does this mean calculators "harm learning"? Or does it simply mean we are testing the wrong skills?
The real question we should be asking is: Are we preparing students for a world without AI assistance, or a world where AI is ubiquitous? Just as we do not expect most adults to perform complex calculations without digital aids, we may need to reconsider what math skills are truly essential in an AI-augmented world.
The study's focus on performance in traditional, unassisted exams may be missing the point. What would be far more interesting is an examination of how AI tutoring affects higher-level math reasoning, problem-solving strategies, or conceptual understanding. These skills are likely to remain relevant even in a world where AI can handle routine calculations and problem-solving.
Moreover, the study's title, "Generative AI Can Harm Learning," may be overstating the case. What the study really shows is that reliance on standard AI interfaces without developing underlying skills can lead to poor performance when that AI is unavailable. However, it also demonstrates that carefully designed AI tutoring systems can potentially mitigate these negative effects. This nuanced finding highlights the importance of thoughtful AI integration in educational settings.
While this study provides valuable data and raises important questions, we should be cautious about interpreting its results too broadly. Instead of seeing AI as a potential harm to learning, we might instead ask how we can best integrate AI tools into education to enhance deeper understanding and problem-solving skills. The goal should be to prepare students for a future where AI is a ubiquitous tool, not to protect them from it.
As we continue to explore the intersection of AI and education, studies like this one are crucial. However, we must ensure that our research questions and methodologies evolve along with the technology landscape. Only then can we truly understand how to harness AI's potential to enhance, rather than hinder, learning.