Advanced AI users develop special cognitive models

Sunday, September 29, 2024

When we encounter a stranger, we make swift, often
unconscious judgments about who they are and what they are capable of. A person
who speaks our language with barely a hint of an accent? We assume they are
fluent. Someone who drops a reference to a complex scientific theory? We peg
them as well-educated, likely to be literate, and probably knowledgeable about
a range of topics from current events to social norms.
These snap judgments form the backbone of our social interactions. They are
mental shortcuts, honed over millennia of human evolution, allowing us to
navigate the complexities of social life with remarkable efficiency. Most of
the time, they serve us well. We can usually guess whether someone will
understand a joke, follow a complex argument, or need help using a smartphone. These shortcuts are cognitive models: working theories of how other minds operate.
But when we step into the realm of artificial intelligence, these time-tested
models crumble. Our human-centric predictions fail spectacularly, leaving us
confused and often frustrated. Consider a recent incident with ChatGPT, a
sophisticated language model. When asked to count the number of 'r's in the
word "strawberry," it faltered. Many observers scoffed, concluding
that AI must be fundamentally stupid if it couldn't handle such a simple task.
Yet this reaction reveals more about our flawed expectations than any
shortcoming of AI. Those familiar with AI's inner workings were not surprised.
They understand that a language model, no matter how advanced, is not optimized
for character-level analysis: it processes text as tokens, chunks that often
span several letters, so it never directly sees the individual characters in a
word. It is like expecting a master chef to be an expert accountant simply
because both professions involve numbers.
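To make this concrete, here is a minimal sketch using the open-source tiktoken tokenizer. The choice of encoding and the exact splits shown are illustrative assumptions; different models tokenize differently, but none of them hand the model a stream of letters:

```python
# Sketch: why counting letters is hard for a language model.
# Assumes the tiktoken library (pip install tiktoken); the exact
# token boundaries below are illustrative and vary by model.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

word = "strawberry"
token_ids = enc.encode(word)
pieces = [enc.decode_single_token_bytes(t).decode("utf-8") for t in token_ids]

# The model receives opaque token IDs, not letters, so a question
# like "how many r's?" asks about detail it never observes.
print(token_ids)  # a short list of integer IDs
print(pieces)     # e.g. ['str', 'aw', 'berry'] -- not ['s', 't', 'r', ...]
```

Whatever the exact splits, the question "how many r's?" is posed at a level of granularity the model never observes.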
This misalignment between our expectations and AI's actual capabilities stems
from our tendency to anthropomorphize. We instinctively attribute human-like
qualities to these digital entities. We expect them to have consistent
opinions, to learn from our interactions, to understand context and nuance as
we do. But AI, in its current form, does none of these things.
Unlike humans, AI does not carry the baggage of personal experience or emotion.
It does not have good days or bad days. It will not be flattered by praise or
offended by insults. It can switch from discussing quantum physics to writing
poetry without missing a beat, unencumbered by the specialization that defines
human expertise.
But AI's differences extend beyond mere capability. It lacks the fundamental
attributes we associate with consciousness. It has no self-awareness, no goals
or motivations of its own. It does not truly understand the content it
generates, despite how convincing it may seem. It is a reflection of the data
it was trained on, not a sentient being forming its own thoughts and opinions.
To interact effectively with AI, we need to develop new mental models. We must
learn to predict its behavior not based on human analogies, but on an
understanding of its unique nature. This means recognizing that AI might
struggle with tasks we find trivially easy, while effortlessly accomplishing
feats that would challenge even the most brilliant human minds.
It means understanding that every interaction with AI is essentially new.
Unlike humans, who build on past conversations and experiences, most current AI
systems do not retain information from one chat to the next. They do not learn
or evolve through our interactions. Each query is processed afresh, without the
context of what came before.
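A minimal sketch of what this statelessness means in practice; the function name and message format here are hypothetical stand-ins, not any particular vendor's API, but the pattern of resending history is the real mechanism behind the appearance of memory:

```python
# Sketch: stateless chat. send_chat is a hypothetical stand-in for a
# chat-completion API call; the model keeps no memory between calls,
# so the caller must resend the entire conversation each time.

def send_chat(messages: list[dict]) -> str:
    """Hypothetical wrapper: a real version would POST `messages`
    to a provider and return the model's reply text."""
    return "(model reply)"

history = [{"role": "user", "content": "My name is Dana. Suggest a book."}]
reply = send_chat(history)

# A follow-up sent alone arrives with no context at all:
send_chat([{"role": "user", "content": "What is my name?"}])  # it cannot know

# Only by appending the transcript does the model "remember" --
# the memory lives in our prompt, not in the model:
history += [{"role": "assistant", "content": reply},
            {"role": "user", "content": "What is my name?"}]
send_chat(history)  # now the answer is contained in the input itself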
This new model of understanding also requires us to be more precise in our
interactions with AI. While humans often fill in gaps in conversation with
assumed context, AI interprets our requests literally. It does not
automatically infer our unstated needs or desires. The clarity of our input
directly influences the quality of the AI's output.
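As a rough illustration, with invented prompts rather than any tested example, compare what a vague request leaves unsaid with what a precise one spells out:

```python
# Sketch: the same request, vague vs. explicit. The prompts are
# invented; the contrast, not the wording, is the point.

vague = "Summarize this report."

precise = (
    "Summarize the attached quarterly sales report in three bullet "
    "points for a non-technical executive audience. Focus on revenue "
    "trends, flag any figures that declined year-over-year, and keep "
    "the whole summary under 100 words."
)

# A human colleague would infer audience, length, and emphasis from
# shared context; the model only has what appears in the prompt.
```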
As AI becomes an increasingly integral part of our lives, developing these new
mental models is not just about avoiding frustration. It is about unlocking the
full potential of these powerful tools. By understanding AI's strengths and
limitations, we can craft our interactions to leverage its capabilities more
effectively.
The future of human-AI interaction lies not in expecting AI to conform to human
patterns, but in adapting our approach to align with AI's unique
characteristics. It is a future that requires us to be more thoughtful, more
precise, and more open to rethinking our instinctive assumptions. In doing so,
we may not only improve our interactions with AI but also gain new insights
into the nature of intelligence itself.