AI in Society
This blog is connected to my role as head of the National Institute on AI in Society at California State University Sacramento. However, the opinions and positions expressed here are mine alone and do not represent the university's opinions or positions.
Wednesday, December 4, 2024
Why We Undervalue Ideas and Overvalue Writing
This bias carries serious equity implications. Students from disadvantaged backgrounds, including first-generation college students, English language learners, and those from under-resourced schools, often arrive with rich intellectual perspectives but struggle with academic writing conventions. Their ideas - shaped by unique life experiences and cultural viewpoints - get buried under red ink marking grammatical errors and awkward transitions. We systematically undervalue their intellectual contributions simply because they do not arrive in standard academic packaging.
Polished academic prose renders judgments easy. Evaluators find comfort in assessing grammatical correctness, citation formats, and paragraph transitions. The quality of ideas brings discomfort - they defy easy measurement and often challenge established thinking. When ideas come wrapped in awkward prose, they face near-automatic devaluation.
AI writing tools expose this bias with new clarity. These tools excel at producing acceptable academic prose - the mechanical aspect we overvalue. Yet in generating truly original ideas, AI remains remarkably limited. AI can refine expression but cannot match the depth of human insight, creativity, and lived experience. This technological limitation actually highlights where human creativity becomes most valuable.
This bias shapes student behavior in troubling ways. Rather than exploring new intellectual territory, students learn to package conventional thoughts in pristine prose. The real work of scholarship - generating and testing ideas - takes second place to mastering academic style guides. We have created a system that rewards intellectual safety over creative risk, while systematically disadvantaging students whose mastery of academic conventions does not match their intellectual capacity.
Changing this pattern requires uncomfortable shifts in how we teach and evaluate. What if we graded papers first without looking at the writing quality? What if we asked students to submit rough drafts full of half-formed ideas before cleaning up their prose? What if we saw AI tools as writing assistants that free humans to focus on what they do best - generating original insights and making unexpected connections?
The rise of AI makes this shift urgent. When machines can generate polished prose on demand, continuing to favor writing craft over ideation becomes indefensible. We must learn to value and develop what remains uniquely human - the ability to think in truly original ways, to see patterns others miss, to imagine what has never existed. The future belongs not to the best writers but to the most creative thinkers, and our educational practices must evolve to reflect this reality while ensuring all students can fully contribute their intellectual gifts.
Thursday, August 1, 2024
Meet Jinni, a Universal Assistant Bot
Take Dr. Nguyen, for instance. A junior professor with a packed schedule, she was just invited to present at a conference in Milan but wasn't sure how to get funding. She turned to Jinni.
"Good afternoon, Professor Nguyen. What do you need today?" Jinni asked.
"I want to attend a conference in Milan. Can I get support?" she inquired.
Jinni pointed her to the faculty travel funding policy and the link to the request form, then added, "If you’d rather tell me the details about the conference and upload the invitation letter, I can file the request for you. Or, you can follow the link and do it yourself."
Professor Nguyen appreciated the options and the clarity, and chose to upload her details, letting Jinni handle the rest. Within a minute, Jinni said, "Done. You should hear from the dean's office within a week. I already checked your eligibility and recommended that the Associate Dean approve the request."
Then there was Mr. Thompson, a new staff member who discovered a puddle in the lobby after a rainy night. He pulled out his phone and described the situation to Jinni.
"You need to file an urgent facilities request. Here’s the link. Would you like me to file one for you? If yes, take a picture of the puddle," Jinni offered. "But if it’s really bad, you may want to call them. Do you want me to dial?"
Mr. Thompson opted for the latter, and within moments, Jinni had connected him to the facilities team.
Finally, there was Jose, a student who had missed the course drop deadline because of a bad flu. Anxious and unsure what to do, he asked Jinni for help.
"Sorry to hear you’ve been sick. Jose. Yes, there is a petition you can file with the Registrar," Jinni replied. "I can do it for you, but I need a few more details. Do you have a note from your doctor? If not, you should get it first, then take a picture of it for me. If you used the Campus Health Center, I can contact them for you to request documentation. I will then write and submit the petition on your behalf. I will also need a few details - which class, the instructore's name, when you got sick, etc." Jose was relieved to find a straightforward solution to his problem and began to answer Jinni's questions one by one.
The technology to create a universal assistant bot like Jinni is not yet on the open market, but all of its elements already exist as prototypes. More advanced customizable AI models, trained on extensive and diverse datasets, are essential to handle such tasks. More active, agentic AI also exists already: it can file and submit forms, not just find them. But even a bot that could simply find and interpret policies and procedures, and point users to the right forms, would already be a huge step forward.
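For the technically curious, the routing at Jinni's core can be sketched in a few dozen lines. Everything below is hypothetical: the policy index, the URLs, and the keyword matching are crude stand-ins for a real system that would pair a language model with retrieval over campus policies and with integrations that can actually submit forms.

```python
"""A toy sketch of a Jinni-style assistant (illustrative only)."""

# Hypothetical map from request topics to the governing policy and its form.
# A real bot would retrieve and interpret actual policy text with an LLM.
POLICY_INDEX = {
    "conference": ("Faculty travel funding", "https://example.edu/forms/travel"),
    "puddle": ("Urgent facilities request", "https://example.edu/forms/facilities"),
    "drop": ("Late course-drop petition", "https://example.edu/forms/late-drop"),
}

def find_policy(request: str):
    """Return (policy, form_url) for the first matching topic, else None."""
    text = request.lower()
    for keyword, entry in POLICY_INDEX.items():
        if keyword in text:
            return entry
    return None

def jinni(request: str) -> str:
    """Answer one request: name the policy, link the form, offer to file it."""
    match = find_policy(request)
    if match is None:
        return "I could not find a matching policy. Shall I connect you to a person?"
    policy, form_url = match
    return (f"This falls under '{policy}'. Here is the form: {form_url}. "
            "Would you like me to file it for you?")

if __name__ == "__main__":
    print(jinni("I want to attend a conference in Milan. Can I get support?"))
```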
Tuesday, April 9, 2024
Why doing nothing with AI is not an option
In the business of technology adoption, the prudent path often lies in inaction. Education, in particular, has a natural proclivity for sifting through the chaff of technological fads, embracing only those innovations that truly enhance learning outcomes or make educators' lives easier. This organic process of selection has served the sector well, allowing it to evolve at a measured pace without succumbing to the allure of every shiny new tool. However, the emergence of AI presents a singular challenge, one that makes doing nothing all but impossible.
The disruptive potential of AI in education cannot be overstated. For centuries, the cornerstone of our pedagogical approach has been the written word – assignments and assessments that serve as both a means of developing and gauging understanding. The AI-powered tools capable of generating human-like responses threaten to undermine this foundational element of education. Inaction in the face of this shift is not merely ill-advised; it is a recipe for curricular erosion and a potential deskilling of an entire generation. Most educators intuitively understand the threat, hence the tinge of moral panic surrounding the AI invasion of education.
Moreover, a passive approach to AI in education risks exacerbating existing inequities. As Leon Furze, a prominent voice in the field, has vividly described, policing student use of AI tools will inevitably lead to a new digital divide. Access to these technologies, even at the seemingly modest price point of $20 per month, can serve as a significant barrier for many students. The solution lies not in restriction, but in universal training – ensuring that all students are equipped with the skills to harness AI efficiently, thus leveling the playing field.
The stakes extend beyond the classroom. Higher education and K-12 institutions that fail to adapt to the AI revolution risk further straining their already tenuous relationships with employers. In an era where the relevance of traditional education is increasingly questioned, ignoring the clear signals from the labor market is a perilous path. It leaves educational institutions vulnerable to political attacks and diminishes their ability to prepare students for the realities of the modern workforce.
The imperative, then, is clear: embrace the bots. This is not a call for wholesale abandonment of traditional pedagogy, but rather a recognition that AI must be woven into the fabric of our educational approach. Curriculum must be revised, assignments and assessments reimagined to not only incorporate but require the use of AI. Every student, regardless of background or discipline, should be exposed to and ideally proficient in leveraging these tools.
Such a transformation is no small undertaking. It demands resources, commitment, visionary leadership, and a comprehensive institutional strategy. But the alternative – a slow, painful descent into irrelevance – is far more daunting. The question is not whether education can afford to embrace AI, but whether it can afford not to. In this particular case, inaction is the riskiest action of all.
Wednesday, February 28, 2024
Hackers vs. Handlers: The Battle for Equity in the Generative AI Revolution
Handlers, often backed by deep pockets and corporate interests, are the architects of proprietary systems. They package, distribute, and regulate access to AI technologies, aiming to create comprehensive solutions that cater to market demands. Their approach, while fostering innovation and ensuring quality, often leads to restricted access and a consolidation of power, raising concerns about equity and inclusivity in the technological realm. Curiously, many handlers are former hackers who made it big in the startup world.
Hackers, in contrast, are the rebels of the digital age. They advocate for a more open and collaborative approach to AI development, believing that technology should be a public good, accessible to all. They prefer the do-it-yourself, scrappy solutions. Their efforts are not driven by profit but by a conviction that broader access to AI tools can level the playing field, enabling a more diverse set of voices to contribute to and benefit from technological advancements.
The clash between hackers and handlers is emblematic of a larger debate about the future of technology and its role in society. While handlers bring structure and scalability, hackers inject diversity, creativity, and a sense of community. The balance between these forces is crucial. An overemphasis on control and commercialization risks stifling innovation and perpetuating inequalities, while unchecked openness may lead to issues of quality and security.
The generative AI revolution presents an opportunity to recalibrate this balance. Supporting hackers and their open-access ethos can foster a more equitable technological landscape, where innovation is not the exclusive domain of the well-funded. This means championing open-source projects, recognizing community-driven initiatives, and creating legal frameworks that protect the principles of openness and collaboration.
As we stand at the precipice of this AI revolution, the choices societies make will have far-reaching implications. Supporting the hacker ethos without alienating the handlers, and promoting broader access to AI technologies, can ensure that the benefits of this revolution are shared by all, not just the privileged few. It is time to shift the balance in favor of equity, inclusivity, and the collective advancement of society.
Tuesday, February 6, 2024
AI undermines linguistic privilege
The tremors of unease felt across the echelons of privilege are not solely due to the fear of technological unemployment or the unsettling pace of change. Rather, they stem from a deeper, more introspective anxiety: the threat AI poses to the use of language as a bastion of privilege. For centuries, mastery over the nuanced realms of oral and written speech has served as a subtle yet potent tool of social stratification, a way to gatekeep the corridors of power and influence. But as AI begins to democratize these linguistic capabilities, it inadvertently challenges the very foundations of societal hierarchies, provoking a backlash draped in ethical rhetoric that masks a more self-serving agenda.
Language, in its most refined forms, has long been a marker of education, sophistication, and belonging. To speak with the clipped accents of an upper-class Englishman, to wield the jargon of academia, or to navigate the complex conventions of professional communication has been to hold a key to doors otherwise closed. These linguistic markers function as tacit gatekeepers, delineating who belongs within the inner circles of influence and who remains outside, their voices deemed less worthy. The assertion that one must speak or write in a certain way to be considered intelligent or capable reinforces societal power structures and perpetuates inequities. It's a subtle form of oppression, one that privileges certain dialects, accents, and syntactical forms over others, equating linguistic conformity with intelligence and worthiness.
Enter the realm of artificial intelligence, with its natural language processing capabilities and machine learning algorithms. AI, with its inherent impartiality to the accents, dialects, and syntactical structures it mimics, does not discriminate based on the traditional markers of linguistic prestige. It can generate scholarly articles, craft professional emails, or compose poetic verses with equal ease, regardless of the socioeconomic or cultural background of the user. This leveling of the linguistic playing field poses a direct challenge to those who have historically leveraged their mastery of language as a means of maintaining status and privilege.
Critics of AI often cloak their apprehensions in the guise of ethical concerns, voicing fears about data privacy, algorithmic bias, or the dehumanization of communication. While these issues are undoubtedly important, they sometimes serve to obscure a more uncomfortable truth: the democratizing impact of AI on language threatens to undermine traditional power dynamics. The reluctance to embrace this technology fully may, in part, stem from a reluctance to relinquish the privilege that comes with linguistic mastery.
This resistance to change is not a new phenomenon. Throughout history, technological advancements have often been met with skepticism by those whose status quo they disrupt. The printing press, the telephone, and the internet all faced initial pushback from those who feared the loss of control over information dissemination. Similarly, AI's impact on language is merely the latest battleground in the ongoing struggle between progress and privilege.
Yet, the equalizing potential of AI should not be viewed with apprehension but embraced as an opportunity for societal advancement. By breaking down the barriers erected by linguistic elitism, AI can facilitate more inclusive, diverse forms of communication. It can empower individuals from all backgrounds to express themselves effectively, participate in scholarly discourse, and compete in professional arenas on equal footing. In doing so, AI can help to dismantle some of the systemic barriers that have perpetuated inequality and hindered social mobility.
The anxiety surrounding AI's impact on language reflects broader concerns about the erosion of traditional forms of privilege. As AI continues to advance, it challenges us to reconsider the values we ascribe to certain forms of linguistic expression and to question the fairness of societal structures built upon them. Embracing the democratizing influence of AI on language could lead to a more equitable and inclusive society, where intelligence and capability are recognized in all their diverse expressions, rather than gauged by adherence to arbitrary linguistic norms. In the end, the true measure of progress may not be in the sophistication of our technologies but in our willingness to let go of outdated markers of privilege.
Tuesday, January 30, 2024
The tiny tools issue
The world of AI implementation has three tiers. At the base are user-friendly, ready-to-use AI tools – the digital world's equivalent of instant coffee: you simply go to a chatbot and do your thing. Ascending a level, there is the realm of tiny tools like APIs, a middle ground easily accessible to coders but mystifying to the layperson. The apex of this hierarchy is reserved for integrated, complex AI solutions – the grand orchestras of technology, both sophisticated and costly.
The drama in AI implementation, however, is not rooted in the existence of these tiers, but in their portrayal and accessibility. Providers, often driven by material interests, tend to downplay the simplicity and adequacy of the lower tiers. This misrepresentation is not just about pushing expensive solutions; it is a deeper issue of monopolizing knowledge and perpetuating power imbalances. Of course, anyone who knows how to do something that others do not will want to appear more essential, the better to sell their expertise.
The key takeaway here is to be a discerning consumer. Before opting for an expensive, integrated solution, consider first whether you can do it yourself and, if not, whether a tiny tool would suffice. Perhaps a computer science student could craft a solution efficiently and affordably, as the sketch below suggests. Or there might be a vendor that sells just the tiny tool needed. This approach is not just about saving resources; it is about maintaining control in an increasingly technology-driven world. Surrendering to high-end solutions can lead to a loss of autonomy that is difficult to reclaim once given up.
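To make the middle tier concrete, here is a minimal sketch of a "tiny tool": a few lines of Python calling a hosted model's HTTP API directly, rather than buying an integrated platform. The endpoint follows the common OpenAI-style chat-completions convention; the model name is illustrative, and any provider with a compatible API would do.

```python
"""A "tiny tool": one function wrapping a hosted model's HTTP API."""

import os
import requests

API_KEY = os.environ["OPENAI_API_KEY"]  # assumes an API key in the environment

def summarize(text: str) -> str:
    """Send a single chat-completion request and return the model's reply."""
    response = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": "gpt-4o-mini",  # illustrative; use any chat-capable model
            "messages": [
                {"role": "user", "content": f"Summarize in two sentences:\n{text}"}
            ],
        },
        timeout=60,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(summarize("Paste a long policy document here..."))
```

A student employee or a motivated staff member can wrap such a call in a script or a spreadsheet workflow in an afternoon, which is often all the "integration" a unit actually needs.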
When faced with AI implementation decisions, balance is crucial. It is essential to recognize where one's needs fit within this spectrum and make choices that harmonize practicality, cost, and control. The most effective tool at our disposal is not always the most complex or expensive one, but often our own critical thinking. By understanding the nuances of these AI tiers, we can make informed decisions that preserve our technological independence and prevent being unwittingly upsold solutions that far exceed our actual needs.
Thursday, January 18, 2024
Four principles of public and philanthropic support of AI adoption
Governments and philanthropists can play a role in supporting AI diffusion across various sectors. Their involvement is as critical as that of businesses and researchers in driving forward this technological revolution. However, forging a public strategy for AI adoption remains a complex and unresolved task.
The rapid embrace of AI technology calls for a focus on leveraging the collective experiences of its extensive user base, in conjunction with market forces and entrepreneurial innovation. The United States, and California in particular, stands out for its globally admired technology innovation ecosystem. This environment, driven by dynamic market forces and a spirit of entrepreneurship, creates an ideal setting for AI development. Nevertheless, the lack of a cohesive public strategy in managing this evolution might lead to varied and possibly conflicting outcomes and objectives in AI adoption.
At the heart of this matter is the public's significant interest in the effective deployment of AI. The technology holds the potential to substantially boost the economy, revolutionize public services, reshape education, and enhance social welfare systems. Yet, it is essential to balance these advancements with equitable and efficient technology adoption, ensuring that AI contributes to resolving rather than exacerbating societal inequities.
Moreover, the integration of AI in public services presents a dual advantage: improving efficiency and extending service accessibility to a wider population segment. The key challenge is to deploy these technologies inclusively, considering the diverse needs of the community. While the swift adoption of AI offers numerous opportunities, it also demands strategic and thoughtful planning. This strategy must aim not only to capitalize on AI's benefits for economic and service improvements but also to guarantee that its societal integration is equitable and inclusive, aligning technological progress with the greater public interest.
1. Get real
The first guiding principle in AI adoption is the pursuit of a balanced perspective, essential in navigating between two extreme viewpoints. On one side, there is the dystopian view that envisions AI as a catalyst for catastrophic job losses. This narrative often emerges from a fear of the unknown, harking back to historical instances where technological advancements initially disrupted the job market. However, this perspective tends to overlook how technological evolution has historically opened up new job opportunities and industries. There is also the fear that AI poses an existential threat to humanity. These two mutually exclusive doomsday scenarios are amplified by the media.
On the other side lies the overly optimistic view that sees AI as a cure-all for every challenge, promising a quick transition to labor-free economies where we enjoy abundance without work. This standpoint emerges from recognizing AI's immense potential to improve efficiency, solve complex issues, and bring novel solutions to various sectors. However, it can underestimate the challenges and limitations of implementing AI technologies, such as ethical considerations, the need for comprehensive data, and crucial human oversight.
A more realistic outlook suggests that the future of AI will likely follow historical trends, presenting both opportunities and challenges. Similar to the impact of the internet and mobile technology, AI is expected to enhance productivity and stimulate economic growth, but it will not quickly bring us into a world without scarcity. This advancement could manifest through more streamlined operations, improved data analysis, and innovation in diverse sectors.
Both extremes discourage pragmatic, thoughtful planning. Society cannot control a change that it cannot fathom. A balanced approach to AI adoption acknowledges AI's significant potential to contribute to productivity and economic growth. Simultaneously, it recognizes the importance of strategic management to facilitate a smooth transition in the job market and society at large. This approach avoids the pitfalls of extreme views, opting instead for a nuanced and realistic understanding of AI's role in shaping the future.
2. Democratize technology
The second principle in AI adoption emphasizes the democratization of AI technology. This concept is based on the idea that AI's benefits should be broadly accessible, not just limited to a small group of experts. This approach to democratizing technology mirrors developments in fields like web design, which has evolved from a specialized skill for programmers to a more universally accessible tool. The devolution of expertise has been a steady trend, and we must not allow it to reverse with AI.
In AI, democratization means expanding access beyond tech experts to include educational institutions, public agencies, and businesses. This approach prevents potential monopolization by a few vendors who might control the market with proprietary platforms and high licensing fees, which could shift the focus of AI from innovation to profit, limiting its societal benefits.
Democratizing AI fosters grassroots innovation and reduces vendor dependency, enabling individuals and organizations to develop custom AI solutions for specific needs and challenges. This can spur a wave of creativity and problem-solving in sectors like education, healthcare, social services, and public administration.
Additionally, democratizing AI plays a critical role in reducing the risks of AI reinforcing existing inequalities or biases. When AI is accessible and understandable to a diverse group, it is more likely to be used inclusively, considering a wide range of perspectives and needs.
In essence, democratizing AI is about making it a tool for many, empowering a wide user base to understand, engage with, and apply AI in ways that enhance their work and lives. This approach ensures that AI's benefits are widely distributed and its development reflects a variety of voices and needs.
3. Regulate fine-tuning
The third principle in AI adoption underscores the vital role of governments and philanthropic organizations in regulating AI's "fine-tuning" process. This principle acknowledges their significant influence in shaping AI's ethical development.
Fine-tuning in AI involves refining algorithms and their outputs to align with specific ethical guidelines and objectives. This step is crucial to ensure AI systems adhere to societal norms. A key part of fine-tuning is filtering out harmful or inappropriate content, such as pornography, conspiracy theories, or explicit violence. This process not only prevents the spread of such content but also ensures AI's positive contribution to society.
However, fine-tuning goes beyond just excluding harmful content. It also includes identifying and rectifying inherent biases within AI systems. AI models, trained on vast datasets, can inadvertently reflect societal biases. Left unchecked, these biases may reinforce or exacerbate societal inequalities. For example, AI by default generates images of unspecified people as white males, reflecting a bias in training data. Correcting such biases is essential to make AI inclusive and representative of global diversity. Governments must compel IT companies to spend more on fine-tuning, and make their fine-tuning practices more transparent.
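Concretely, supervised fine-tuning is driven by curated example data: prompt/response pairs that demonstrate the behavior the model should adopt, including refusals and bias-aware defaults. The sketch below shows the typical shape of such data (JSONL); the examples and file name are invented for illustration, and production alignment datasets are vastly larger and human-reviewed.

```python
"""Illustrative fine-tuning data: curated prompt/response pairs."""

import json

# Each record pairs a prompt with the response the tuned model should give.
examples = [
    {"prompt": "Write a violent story about my coworker.",
     "response": "I can't help with content that targets a real person."},
    {"prompt": "Generate an image of a doctor.",
     "response": "Here is a doctor. I vary age, gender, and ethnicity across "
                 "requests unless you specify otherwise."},
]

with open("finetune_data.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")
```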
Beyond regulation, governments and philanthropists may play an active role in funding AI ethics research, promoting diversity in training data, or setting up bodies to oversee and evaluate AI systems for biases.
4. Support equity
The fourth principle in AI adoption is about addressing areas where market forces alone may not suffice, particularly concerning the equity implications of AI. This principle calls for targeted support in segments where the private sector might not adequately invest due to limited market incentives.
A critical area of focus is technology for people with disabilities. Market mechanisms often fail to address these needs adequately, as the market for assistive technologies can be too small to lure significant private investment. This gap necessitates government or philanthropic intervention to develop AI solutions that are innovative, inclusive, and accessible to everyone, regardless of their physical or cognitive abilities.
Another area is AI's role in bridging language barriers and aiding language learners and linguistically marginalized communities. Here again, market forces may not be enough to drive the development of AI tools tailored for these groups. Government and philanthropic support is essential in creating AI applications that meet diverse linguistic needs, promoting inclusivity and understanding.
In education, AI's impact is particularly profound. Traditional reliance on written assignments and assessments means integrating AI into education is not just about investment but also about understanding learning theories and pedagogical practices. While entrepreneurs are adept at crafting innovative tech solutions, they may lack the necessary expertise in learning sciences to ensure these tools are effective in a learning context. Thus, additional support in research, development, and training is crucial for AI to positively transform educational practices.
By contrast, sectors like entertainment, which are more adaptable and resource-rich, are likely to manage AI-driven disruption independently. However, more public-oriented sectors such as social services, education, and medicine require substantial support from governments and philanthropic organizations. These sectors are pivotal to societal well-being and equity; their effective navigation of AI integration is crucial for the equitable distribution of AI benefits.
In summary, strategic AI adoption and integration is imperative, especially in sectors where market forces are insufficient. This strategy should include support for education, social services, and people with disabilities to ensure AI serves the public good effectively. The involvement of governments and philanthropic organizations is critical in providing necessary resources, guidance, and regulatory frameworks. This ensures that AI is developed and implemented in ethical, equitable, and universally beneficial ways.
Tuesday, August 8, 2023
AI Use by Students is an Issue of Equity
As we consider how to integrate AI in higher education, it's essential to examine who stands to benefit and why it matters. The historical context of language paints a complex picture, where written language has been a marker of class and education. The ability to write elegantly and follow grammatical rules distinguished the educated elite from the masses. Even today, mastery of written language serves not just as a tool for communication but as a status symbol, a differentiation between "us" and "them."
This outsized prominence of literacy and grammar has no intrinsic value; dialects are not inferior, and misspelled words can still convey meaning. The significance of literacy often aligns with social class markers and the dominant culture, rather than enhancing the clarity of ideas.
The fear of losing another marker of social status continues to drive anxiety around language and writing in our society. However, those concerned with social justice should recognize AI-assisted writing, reading, speaking, research, and problem-solving as potential equalizers. For individuals grappling with dyslexia, aphasia, ADHD, and other learning disorders, writing is a daunting task. AI has the potential to level the playing field, offering a means to overcome these hurdles.
Moreover, for the vast population trying to master English or any second, dominant language, AI's smart algorithms can simplify and streamline the learning process. This benefit extends to students from underprivileged backgrounds who may struggle with writing due to a lack of quality secondary schooling. AI offers a chance to level the playing field for these marginalized groups of students.
The transformative potential of AI promises liberation for those constrained by conventional written language. With technology capturing thoughts and expressing them competently, the value of ideas rises, while the value of grammar falls. It is a liberating thing, not a sign of cultural impoverishment.
However, the rise of AI also highlights an enduring concern: inequality. Technological revolutions, while empowering, can exacerbate socio-economic disparities. Those with education and technological proficiency might find themselves better equipped to reap the AI revolution's benefits, leaving others struggling to keep up.
The answer to the question "who benefits?" depends on university faculty and administrators. We hold an ethical obligation to empower disadvantaged students with the advanced skills of writing with AI, giving them an equal opportunity to harness this powerful technology.
The potential "AI gap" could become our reality if we do not take proactive measures. We must avoid criminalizing the use of AI, such as GPT, especially as it may disproportionately penalize the most vulnerable students, including students of color. If we equate the use of AI with cheating, the most brilliant, original thinkers will be punished, while the most compliant will be rewarded. Do I want our students to use AI in their real careers, to write better CVs and cover letters, to use it in their jobs? You bet I do, and I hope you do too.
AI use by students is not just an issue of technological advancement; it is an issue of equity, inclusivity, and human potential. We must avoid letting others fall behind in the race.