AI in Society
This blog is connected to my role as head of the National Institute on AI in Society at California State University, Sacramento. However, the opinions and positions expressed here are mine and do not represent the university's opinions or positions.
Wednesday, December 4, 2024
Why We Undervalue Ideas and Overvalue Writing
In academia, we overvalue polished writing and undervalue the ideas beneath it. This bias carries serious equity implications. Students from disadvantaged backgrounds, including first-generation college students, English language learners, and those from under-resourced schools, often arrive with rich intellectual perspectives but struggle with academic writing conventions. Their ideas - shaped by unique life experiences and cultural viewpoints - get buried under red ink marking grammatical errors and awkward transitions. We systematically undervalue their intellectual contributions simply because they do not arrive in standard academic packaging.
Polished academic prose makes judgment easy. Evaluators find comfort in assessing grammatical correctness, citation formats, and paragraph transitions. Ideas, by contrast, bring discomfort - they defy easy measurement and often challenge established thinking. When ideas come wrapped in awkward prose, they face near-automatic devaluation.
AI writing tools expose this bias with new clarity. These tools excel at producing acceptable academic prose - the mechanical aspect we overvalue. Yet in generating truly original ideas, AI remains remarkably limited. AI can refine expression but cannot match the depth of human insight, creativity, and lived experience. This technological limitation actually highlights where human creativity becomes most valuable.
This bias shapes student behavior in troubling ways. Rather than exploring new intellectual territory, students learn to package conventional thoughts in pristine prose. The real work of scholarship - generating and testing ideas - takes second place to mastering academic style guides. We have created a system that rewards intellectual safety over creative risk, while systematically disadvantaging students whose mastery of academic conventions does not match their intellectual capacity.
Changing this pattern requires uncomfortable shifts in how we teach and evaluate. What if we graded papers first without looking at the writing quality? What if we asked students to submit rough drafts full of half-formed ideas before cleaning up their prose? What if we saw AI tools as writing assistants that free humans to focus on what they do best - generating original insights and making unexpected connections?
The rise of AI makes this shift urgent. When machines can generate polished prose on demand, continuing to favor writing craft over ideation becomes indefensible. We must learn to value and develop what remains uniquely human - the ability to think in truly original ways, to see patterns others miss, to imagine what has never existed. The future belongs not to the best writers but to the most creative thinkers, and our educational practices must evolve to reflect this reality while ensuring all students can fully contribute their intellectual gifts.
Thursday, November 7, 2024
NotebookLM: A Quintessential Google Move
NotebookLM represents something that Google has always done well: making advanced technology accessible. In a crowded landscape where hundreds of startups have launched custom bots, Google has not just entered the competition but has redefined it. Many of these emerging tools come with a bewildering array of features, promising endless configurability but often requiring a steep learning curve. Microsoft Azure is a prime example: powerful, but not for regular folks. Google has approached this differently, prioritizing user experience over raw output quality. NotebookLM may not be revolutionary, but it offers an intuitive interface that anyone can engage with easily.
Perhaps more cleverly, Google has managed to capture attention with an unexpected viral twist. NotebookLM features the ability to generate a podcast in which two AI voices engage in a dialogue about the content of source files. The feature is, admittedly, not all that practical; the voices cannot be changed, and listeners will soon tire of them. Yet from a marketing standpoint, it is brilliant. It creates a shareable moment, a curiosity that makes people talk. The move does not just showcase technical capability; it also signals a playful spirit that reminds users of Google's early days, when the company was known for surprising innovations.
Still, whether this resurgence will lead to long-term success is uncertain. Skeptics point out that Google has a history of launching exciting products only to abandon them later (recall Google Wave). Flashy features alone will not sustain momentum. What matters is how NotebookLM performs as a knowledge synthesizer and learning tool. If it falls short in these core areas, the buzz may prove to be little more than a temporary distraction.
Yet, for now, Google's reentry into the AI conversation is worth appreciating. In a tech landscape increasingly dominated by dense, intricate systems, Google's emphasis on usability stands out. Even if NotebookLM does not single-handedly redefine the custom bot race, it serves as a reminder of what once made Google a technological giant: the ability to turn complexity into something approachable and joyful.
Whether Google will truly reclaim its place as an AI leader is anyone’s guess, but at the very least, the company has made the race more interesting. For an industry that often takes itself far too seriously, this burst of creativity feels like a breath of fresh air. In a field defined by hard-nosed competition, seeing Google take risks and create a bit of buzz is a win, even if it is only a moral one.
Tuesday, October 22, 2024
Is AI Better Than Nothing? In Mental Health, Probably Yes
In medical trials, "termination for benefit" allows a trial to be stopped early when the evidence of a drug’s effectiveness is so strong that it becomes unethical to continue withholding the treatment. Although this is rare—only 1.7% of trials are stopped for this reason—it ensures that life-saving treatments reach patients as quickly as possible.
This concept can be applied to the use of AI in addressing the shortage of counselors and therapists for the nation's student population, which is facing a mental health crisis. Some are quick to reject the idea of AI-based therapy, upset by the notion of students talking to a machine instead of a human counselor. However, this reaction often lacks a careful weighing of the benefits. AI assistance, while not perfect, could provide much-needed support where human resources are stretched too thin.
Yes, there have been concerns, such as the story of Tessa, a bot that reportedly gave inappropriate advice to a user with an eating disorder. But focusing on isolated cases does not take into account the larger picture. Human therapists also make mistakes, and we do not ban the profession for it. AI, which is available around the clock and costs next to nothing, should not be held to a higher standard than human counselors. The real comparison is not between AI and human therapists, but between AI and the complete lack of human support that many students currently face. Let's also not forget that in some cultures, going to a mental health professional is still a taboo. Going to an AI is a private matter.
I have personally tested ChatGPT several times, simulating various student issues, and found it consistently careful, thoughtful, and sensible in its responses. Instead of panicking over astronomically rare errors, I encourage more people to conduct their own tests and share any issues they discover publicly. This would provide a more balanced understanding of the strengths and weaknesses of AI therapy, helping us improve it over time. There is no equivalent of a true clinical trial, so some citizen testing would have to be done.
The situation is urgent, and waiting for AI to be perfect before deploying it is not much of an option. Like early termination in medical trials, deploying AI therapy now could be the ethical response to a growing crisis. While not a replacement for human counselors, AI can serve as a valuable resource in filling the gaps that the current mental health system leaves wide open.
Saturday, October 19, 2024
Where is the work? AI and Creativity
The rise of AI forces us to untangle our notions of work and creativity, and to reconsider what it means to produce "human" work. Take a recent story I heard from the audience of one of my talks: a person described how he fed an AI every detail about a retiring colleague, and the AI generated a speech so moving that it brought the retiree to tears. But the retiree, upon learning the speech's origin, was dumbfounded.
What is interesting is not the retiree’s reaction, but the storyteller's own oversight. He failed to see his own critical role in the process. By gathering the details, curating moments that best captured the retiree’s essence, he performed the most human part of the creative act. He mistook the act of turning those ideas into words for the creative work, but that is not the case.
AI, ironically, is pushing us to be more human, not more like machines. It is forcing us to recognize that our true contribution lies in the ability to think, to create, and to feel. As AI takes over the mechanical aspects of tasks we once considered integral to creativity—whether that is writing, painting, or coding—we are left with the more uniquely human roles: original thinking and emotional depth.
This shift reshapes our understanding of creativity and work. It shows that human value does not lie in production—the technical aspect of turning an idea into a product—but in the deeper conceptual and emotional layers that AI still cannot reach.
As we move forward, we are compelled to rethink productivity itself. The future will not belong to those who can outdo AI in execution, but to those who can combine AI’s strengths with our unique capacities for innovation, empathy, and insight.
The challenge we face is not to resist AI, but to fully embrace our humanity—to cultivate the traits that machines cannot replicate. With AI taking over the drudgery, we are freed to focus on higher-order thinking and those creative leaps that define human ingenuity.
Ironically, the more we develop artificial intelligence, the more we learn about what human intelligence really is. And in that discovery lies our future—a future where AI does not replace creativity, but elevates it to new possibilities.
Thursday, October 10, 2024
Is the college essay dead?
The college essay, once a revered academic exercise, is now facing an existential crisis. It used to be a good tool—a structured way for students to demonstrate their understanding, showcase their critical thinking, and express ideas with clarity. The college essay was not merely about content; it was a skill-building process, teaching students to organize thoughts, develop arguments, and refine language. Yet today, AI has made the traditional essay feel outdated, as it can generate polished, formulaic essays effortlessly. Policing AI use in these assignments is nearly impossible, and the conventional essay’s value is rapidly diminishing.
Not all essays are created equal, however, and the future of the college essay might depend on the type of skills we emphasize. The expository essay, designed to see if students understand material or can apply concepts, is on its last legs. When AI can churn out a satisfactory response in seconds, it is a clear sign that this form of assessment is no longer viable. The AI does not just pass these assignments; it excels at them, raising an uncomfortable question—if a machine can do it, why are we still teaching it? For these kinds of essays, the challenge is that they often assess recall rather than thinking. They were already on shaky ground; AI is just the final push.
The essays that may survive, though, are those that demand novelty, creativity, and genuine problem-solving. AI may help in drafting, structuring, or even generating ideas, but it does not replace the kind of original thinking needed to solve real-world problems. It cannot fully simulate human intuition, lived experience, or deep critical evaluation. AI's writing is wooden and often devoid of true beauty. Essays that require students to synthesize information in new ways, explore original ideas, exhibit artistic talent, or reflect deeply on personal experiences still have value. These essays are not about whether you know a theory; they are about what you can do with it. This is where the human element—the messy, unpredictable spark of creativity—remains irreplaceable.
The deeper issue is not AI itself but the way we have been teaching and valuing writing. For decades, the emphasis has been on producing “correct” essays—structured, grammatically precise, and obedient to the format. We have been training students to write well enough to meet requirements, not to push the boundaries of their creativity. It is like teaching students to be proficient typists when what we really need are novelists or inventors. We have confused competency with originality, thinking that writing formulaic content is a necessary step before producing meaningful work. This is a misunderstanding of how creativity works; mastery does not come from repetition of the mundane but from risk-taking and exploration, even if that means stumbling along the way.
The real future of the essay should start with this recognition. Imagine if instead of book reports or basic expository pieces, students were challenged to write for real audiences—to draft scientific papers for journals, craft poems for literary contests, or propose solutions to pressing social issues. Sure, many students would not reach the publication stage, but the act of aiming higher would teach them infinitely more about the writing process, and more importantly, about thinking itself. This would not just be about mastering the mechanics of writing but developing a mindset of curiosity and originality. AI could still play a role in these processes, helping with the technicalities, leaving the student free to focus on developing and articulating novel ideas.
The problem with the book report or the “explain Theory A” essay is not just that they are boring; it is that they are irrelevant. Nobody in the professional world is paid to summarize books or explain theories in isolation. These are stepping stones that lead nowhere. Excelling at a pointless, terrible genre does not prepare students to succeed at an authentic one. Instead of teaching students to write these antiquated forms, we should ask them to write pieces that demand something more—something they cannot copy-paste or generate easily with a prompt. Authentic, context-rich, and creative assignments are the ones that will endure. If there is no expectation of novelty or problem-solving, the essay format becomes an exercise in futility.
AI’s rise does not have to spell the end of the essay. It might, in fact, be the nudge needed to reinvent it. We have the chance to move beyond teaching “correct” writing toward cultivating insightful, original work that challenges the boundaries of what students can do. AI’s presence forces us to ask hard questions about what we want students to learn. If writing is no longer about mechanics or regurgitating content but about generating ideas and engaging critically, then AI becomes a collaborator, not a competitor. It can help with the structure, but the essence—the thinking—must come from the student.
In the end, the college essay is not dead; it is just in need of reinvention. The conventional model of essays as rote demonstrations of knowledge is no longer viable. But the essay that challenges students to think, create, and solve problems—those essays will survive. They might even thrive, as the focus shifts from the mechanics of writing to the art of thinking. The key is to evolve our teaching methods and expectations, making room for a new kind of writing that leverages AI without losing the human touch. Raising expectations is the main strategy in dealing with AI in education.
Wednesday, October 2, 2024
Four Myths About AI
AI is often vilified, with myths shaping public perception more than facts. Let us dispel four common myths about AI and present a more balanced view of its potential and limitations.
1. AI Is Environmentally Costly
One of the most persistent claims about AI is that its use requires massive amounts of energy and water, making it unsustainable in the long run. While it is true that training large AI models can be energy-intensive, this perspective needs context. Consider the environmental cost of daily activities such as driving a car, taking a shower, or watching hours of television. AI, on a per-minute basis, is significantly less taxing than these routine activities.
More importantly, AI is becoming a key driver in creating energy-efficient solutions. From optimizing power grids to improving logistics for reduced fuel consumption, AI has a role in mitigating the very problems it is accused of exacerbating. Furthermore, advancements in hardware and algorithms continually reduce the energy demands of AI systems, making them more sustainable over time.
In the end, it is a question of balance. The environmental cost of AI exists, but the benefits—whether in terms of solving climate challenges or driving efficiencies across industries—often outweigh the negatives.
2. AI Presents High Risks to Cybersecurity and Privacy
Another major concern is that AI poses a unique threat to cybersecurity and privacy. Yet there is little evidence to suggest that AI introduces any new vulnerabilities that were not already present in our existing digital infrastructure. To date, there has not been a single instance of data theft directly linked to AI models like ChatGPT or other large language models (LLMs).
In fact, AI can enhance security. It helps in detecting anomalies and intrusions faster than traditional software, potentially catching cyberattacks in their earliest stages. Privacy risks do exist, but they are no different from the risks inherent in any technology that handles large amounts of data. Regulations and ethical guidelines are catching up, ensuring AI applications remain as secure as other systems we rely on.
It is time to focus on the tangible benefits AI provides—such as faster detection of fraud or the ability to sift through vast amounts of data to prevent attacks—rather than the hypothetical risks. The fear of AI compromising our security is largely unfounded.
3. Using AI to Create Content Is Dishonest
The argument that AI use, especially in education, is a form of cheating reflects a misunderstanding of technology’s role as a tool. It is no more dishonest than using a calculator for math or employing a spell-checker for writing. AI enhances human capacity by offering assistance, but it does not replace critical thinking, creativity, or understanding.
History is full of examples of backlash against new technologies. Consider the cultural resistance to firearms in Europe during the late Middle Ages. Guns were viewed as dishonorable because they undermined traditional concepts of warfare and chivalry, allowing common soldiers to defeat skilled knights. This resistance did not last long, however, as societies learned to adapt to the new tools, and guns ultimately became an accepted part of warfare.
Similarly, AI is viewed with suspicion today, but as we better integrate it into education, the conversation will shift. The knights of intellectual labor are being defeated by peasants with better weapons. AI can help students better understand complex topics, offer personalized feedback, and enhance learning. The key is to see AI as a supplement to education, not a replacement for it.
4. AI Is Inaccurate and Unreliable
Critics often argue that AI models, including tools like ChatGPT, are highly inaccurate and unreliable. However, empirical evidence paints a different picture. While no AI is perfect, the accuracy of models like ChatGPT or Claude when tested on general undergraduate knowledge is remarkably high—often in the range of 85-90%. For comparison, the average human memory recall rate is far lower, and experts across fields frequently rely on tools and references to supplement their knowledge.
AI continues to improve as models are fine-tuned with more data and better training techniques. While early versions may have struggled with certain tasks, the current generation of AI models is much more robust. As with any tool, the key lies in how it is used. AI works best when integrated with human oversight, where its ability to process vast amounts of information complements our capacity for judgment. AI’s reliability is not perfect, but it is far from the "uncontrollable chaos" some claim it to be.
***
AI, like any revolutionary technology, invites both excitement and fear. Many of the concerns people have, however, are rooted in myth rather than fact. When we consider the evidence, it becomes clear that the benefits of AI—whether in energy efficiency, cybersecurity, education, or knowledge accuracy—far outweigh its potential downsides. The challenge now is not to vilify AI but to understand its limitations and maximize its strengths.
Sunday, September 29, 2024
Advanced AI users develop special cognitive models
When we encounter a stranger, we make swift, often unconscious judgments about who they are and what they are capable of. A person who speaks our language with barely a hint of an accent? We assume they are fluent. Someone who drops a reference to a complex scientific theory? We peg them as well-educated, likely to be literate, and probably knowledgeable about a range of topics from current events to social norms.
These snap judgments form the backbone of our social interactions. They are mental shortcuts, honed over millennia of human evolution, allowing us to navigate the complexities of social life with remarkable efficiency. Most of the time, they serve us well. We can usually guess whether someone will understand a joke, follow a complex argument, or need help using a smartphone. These shortcuts are our cognitive models of other minds.
But when we step into the realm of artificial intelligence, these time-tested models crumble. Our human-centric predictions fail spectacularly, leaving us confused and often frustrated. Consider a recent incident with ChatGPT, a sophisticated language model. When asked to count the number of 'r's in the word "strawberry," it faltered. Many observers scoffed, concluding that AI must be fundamentally stupid if it could not handle such a simple task. Yet this reaction reveals more about our flawed expectations than any shortcoming of AI. Those familiar with AI's inner workings were not surprised. They understand that a language model, no matter how advanced, is not optimized for character-level analysis. It is like expecting a master chef to be an expert accountant simply because both professions involve numbers.
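To make the mismatch concrete, here is a minimal sketch, assuming the tiktoken library and the cl100k_base encoding used by several OpenAI models; the exact token split shown in the comments is illustrative, not guaranteed. A language model receives integer token IDs, not letters, so counting the 'r's in "strawberry" asks it about units it never directly observes.

import tiktoken

# Tokenize the word the way a chat model would "see" it
# (assumes the cl100k_base encoding; splits vary by encoding).
enc = tiktoken.get_encoding("cl100k_base")
tokens = enc.encode("strawberry")

print(tokens)                             # integer token IDs, not characters
print([enc.decode([t]) for t in tokens])  # chunks such as 'str', 'aw', 'berry'

# The model operates on these multi-character chunks; it never receives
# the word letter by letter, which makes character counting an indirect,
# awkward task for it.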
This misalignment between our expectations and AI's actual capabilities stems from our tendency to anthropomorphize. We instinctively attribute human-like qualities to these digital entities. We expect them to have consistent opinions, to learn from our interactions, to understand context and nuance as we do. But AI, in its current form, does none of these things.
Unlike humans, AI does not carry the baggage of personal experience or emotion. It does not have good days or bad days. It will not be flattered by praise or offended by insults. It can switch from discussing quantum physics to writing poetry without missing a beat, unencumbered by the specialization that defines human expertise.
But AI's differences extend beyond mere capability. It lacks the fundamental attributes we associate with consciousness. It has no self-awareness, no goals or motivations of its own. It does not truly understand the content it generates, despite how convincing it may seem. It is a reflection of the data it was trained on, not a sentient being forming its own thoughts and opinions.
To interact effectively with AI, we need to develop new mental models. We must learn to predict its behavior not based on human analogies, but on an understanding of its unique nature. This means recognizing that AI might struggle with tasks we find trivially easy, while effortlessly accomplishing feats that would challenge even the most brilliant human minds.
It means understanding that every interaction with AI is essentially new. Unlike humans, who build on past conversations and experiences, most current AI systems do not retain information from one chat to the next. They do not learn or evolve through our interactions. Each query is processed afresh, without the context of what came before.
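A minimal sketch of what this statelessness looks like at the API level, using the OpenAI Python client; the model name and messages are illustrative assumptions, not a prescription. Each request must carry the entire conversation, because nothing persists between calls.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Turn one: introduce a fact.
history = [{"role": "user", "content": "My name is Ada."}]
reply = client.chat.completions.create(model="gpt-4o-mini", messages=history)
history.append({"role": "assistant", "content": reply.choices[0].message.content})

# A follow-up sent WITHOUT the history starts from a blank slate;
# the model has no memory of being told the name.
fresh = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "What is my name?"}],
)

# Continuity is an illusion the caller maintains: append every turn and
# resend the whole list so the model can "remember."
history.append({"role": "user", "content": "What is my name?"})
followup = client.chat.completions.create(model="gpt-4o-mini", messages=history)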
This new model of understanding also requires us to be more precise in our interactions with AI. While humans often fill in gaps in conversation with assumed context, AI interprets our requests literally. It does not automatically infer our unstated needs or desires. The clarity of our input directly influences the quality of the AI's output.
As AI becomes an increasingly integral part of our lives, developing these new mental models is not just about avoiding frustration. It is about unlocking the full potential of these powerful tools. By understanding AI's strengths and limitations, we can craft our interactions to leverage its capabilities more effectively.
The future of human-AI interaction lies not in expecting AI to conform to human patterns, but in adapting our approach to align with AI's unique characteristics. It is a future that requires us to be more thoughtful, more precise, and more open to rethinking our instinctive assumptions. In doing so, we may not only improve our interactions with AI but also gain new insights into the nature of intelligence itself.