AI in Society
This blog is connected to my role as head of the National Institute on AI in Society
at California State University Sacramento. However, opinions and positions expressed therein are mine, and do not represent the university's opinions or positions.
Wednesday, June 26, 2024
How to bullshit about bullshit
Monday, June 24, 2024
Can observational learning work with AI?
Can humans learn new problem-solving strategies simply by observing AI at work? Following Bandura's theory of observational learning, the idea may not be as far-fetched as it first appears.
When humans watch AI systems solve problems or generate text, they naturally construct mental models of the underlying processes. These models, while factually wrong (AI "thinking" is very different from ours), can nonetheless prove useful. It is imagining yourself performing the task that can be beneficial.
Consider a person observing an AI system summarise a lengthy academic paper. The human observer cannot directly perceive the AI's internal computations. Instead, the observer likely imagines themselves performing the task, focusing on key sentences, identifying main themes, and connecting key ideas.
This mental model, though inaccurate in representing the AI's actual mechanisms, may still enhance the observer's own summarisation skills. They might, for instance, learn to pay closer attention to introductory and concluding paragraphs, or to look for repeated phrases that signal important concepts.
Observing AI failures can be particularly instructive. When an AI system produces an erroneous or nonsensical output, it often reveals the limitations of its approach. A human observer, reflecting on these errors, might develop a more nuanced understanding of the problem at hand and devise novel strategies to overcome the AI's shortcomings.
For example, watching an AI struggle with a complex logical reasoning task might prompt a human to break the problem down into smaller, more manageable steps. This approach, inspired by the AI's limitations, could prove valuable even in contexts where AI is not involved.
To test this hypothesis rigorously, consider an experiment:
1. Select a diverse set of problem-solving tasks, ranging from creative writing to mathematical reasoning.
2. Divide participants into three groups:
- a) An observation group that watches AI systems attempt these tasks, including both successes and failures.
- b) A practice group that attempts the tasks themselves without AI involvement.
- c) A control group that engages in unrelated activities.
3. After the observation or practice period, test all participants on a new set of similar problems.
4. Compare the performance of the three groups, paying particular attention to novel problem-solving strategies employed by the observation group.
5. Conduct follow-up interviews to explore participants' thought processes and any conscious attempts to apply AI-inspired techniques.
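If someone actually ran this experiment, the group comparison in step 4 could be analyzed with something as simple as a permutation test on post-test scores. The sketch below is purely illustrative: the scores are invented, and a real study would need proper power analysis and corrections for multiple comparisons.

```python
import random
import statistics

def permutation_test(group_a, group_b, n_iter=10_000, seed=0):
    """Estimate a two-sided p-value for the difference of group means
    by randomly reshuffling participants between the two groups."""
    rng = random.Random(seed)
    observed = statistics.mean(group_a) - statistics.mean(group_b)
    pooled = list(group_a) + list(group_b)
    count = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)
        perm_a = pooled[:len(group_a)]
        perm_b = pooled[len(group_a):]
        diff = statistics.mean(perm_a) - statistics.mean(perm_b)
        if abs(diff) >= abs(observed):
            count += 1
    return count / n_iter

# Hypothetical post-test scores (0-100) for the three groups:
observation = [72, 68, 75, 70, 74, 69, 73, 71]  # watched AI attempts
practice    = [70, 66, 72, 69, 71, 67, 70, 68]  # practiced unaided
control     = [61, 58, 64, 60, 62, 59, 63, 57]  # unrelated activities

p_obs_ctrl = permutation_test(observation, control)
p_obs_prac = permutation_test(observation, practice)
```

With these made-up numbers the observation-versus-control difference would come out clearly significant, while observation-versus-practice would not; the interesting real-world question is precisely which of those two patterns the data would show.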
Friday, June 21, 2024
Can We Learn Without Struggle in the Age of AI?
I've been pondering a question: What if our traditional understanding of cognitive growth is too narrow? We've long held onto the idea that real learning comes from struggle, from pushing against our limits, from grappling with challenges just beyond our current abilities. But what if that's not the whole story?
I'm starting to wonder if growth - real, meaningful cognitive development - might not always need the strong challenges we've assumed were necessary. And this thought has become particularly relevant as we enter the new world of AI-assisted learning.
Many of our theories about learning and development are rooted in the idea of conflict or tension. Vygotsky's Zone of Proximal Development, for instance, emphasizes the space between what a learner can do without help and what they can do with guidance. Piaget talked about cognitive dissonance as a driver of development. These Hegelian/Marxist theories have shaped how we think about education for decades.
But here's what I'm pondering: What if growth can happen more... gently? What if it can occur through exposure, through interaction, through a kind of cognitive osmosis that doesn't necessarily involve struggle or challenge? And importantly, what if this gentler form of learning is still deeply social and relational?
There's a lot of hand-wringing in educational circles about AI tools like ChatGPT. The worry is that by providing ready answers, these tools will short-circuit the learning process. Students won't have to struggle, so they won't really learn. I have definitely been expressing these concerns in my previous blogs.
But I'm not so sure anymore. Let me float a hypothesis: What if AI-assisted learning doesn't dampen growth, but instead provides a different kind of cognitive experience that can still lead to meaningful development? And what if this experience, rather than being isolating, actually opens up new avenues for social learning and collaboration?
Here's an analogy that's been helpful for me in thinking about this. Remember when GPS first became widely available? There were concerns that people would never learn to navigate cities anymore, that we'd lose our sense of spatial awareness. And yet, most of us who use GPS regularly still develop a pretty good understanding of the cities we live in and visit. We might learn differently - perhaps more slowly, or with less detail - but we do learn, without all the frustrations of trying to read the map while driving, or memorizing multiple turns (left, second right, soft left again...). City driving is probably safer, and we did not get any dumber.
The GPS doesn't prevent us from learning; it provides a different context for learning. We're not struggling with paper maps, but we're still processing spatial information, making connections, building mental models of our environment.
Could AI-assisted learning work in a similar way? Sure, students might get quick answers or produce a quick text with an AI without much effort, which feels somehow wrong (the nature of these feelings is a subject for separate consideration). But that doesn't mean they stop thinking or interacting. They will start wondering how to get better answers and produce better outputs. They will begin to notice patterns in the AI's responses. They will start to question or critique what the AI produces. That's what the human brain has evolved to do.
Moreover, this process doesn't happen in isolation. Students will discuss their AI interactions with peers, compare outputs, collaboratively explore how to improve results. It becomes a form of social play - experimenting, sharing discoveries, building on each other's ideas. The AI becomes a tool for social learning, not a replacement for it.
In other words, the presence of AI doesn't eliminate cognitive work or social interaction - it might just shift their nature. And who's to say that this new form of cognitive and social engagement is any less valuable than the old?
Now, I'm not saying we should throw out everything we know about learning and development. Challenge and struggle certainly have their place. But I am suggesting that maybe we need to broaden our understanding of how growth happens, recognizing that it can occur through gentler, more playful forms of social interaction and exploration.
Perhaps there's a whole spectrum of cognitive experiences that can lead to growth, ranging from intense challenge to gentle, collaborative exposure. Maybe AI-assisted learning falls somewhere on this spectrum - not replacing traditional forms of learning, but adding new dimensions to how we can develop cognitively and socially.
This is all hypothetical, of course. We're in new territory with AI, and it will take time and research to understand its full impact on learning and development. But I think it's worth considering that our assumptions about how growth happens might need updating, particularly in how we view the social aspects of learning.
What do you think? Is it possible that we've overemphasized challenge in our understanding of learning? Could AI-assisted learning open up new pathways for cognitive development and social learning that we haven't fully appreciated yet? I am just back from a conference, where we had many early childhood educators. In that tribe, there is a lot more talk about play than about forcing growth. Maybe that's why I am having these second thoughts about growth.
I don't have definitive answers, but I think these are questions worth exploring as we navigate this new landscape of learning. After all, if there's one thing we know for sure about human cognition, it's that it's endlessly adaptable and intrinsically social. Perhaps it's time for our theories about learning to adapt as well, embracing a more collaborative, playful vision of growth in the age of AI.
Monday, June 10, 2024
Testing AI once does not make you an expert
I heard of a professor who asked ChatGPT to write a profile of himself, only to discover inaccuracies and decide that AI is unsuitable for education. Instead of reflecting on why he is not sufficiently famous, the professor blamed the AI. This reaction is like boycotting all cars after driving an old Soviet-made Lada. Dismissing AI entirely based on a couple of lazy interactions is a classic example of the overgeneralization fallacy.
Before hastily testing and dismissing, one would be well served to read about the known limitations of AI, particularly when it comes to generating content about individuals who are not well-known. AI can "hallucinate" details and citations, creating a misleading picture of reality.
The key is to approach AI with a spirit of curiosity and creativity, exploring its strengths and weaknesses through multiple tests and scenarios. By focusing on what works rather than fixating on what does not, we can begin to appreciate AI for what it is—a tool with potential that takes some skill and experience to unlock.
Also, think about the risk to your reputation. If you are saying, "I tried, and it is crap," you are also dismissing all those other people who found it valuable as gullible fools. The failure to see that the joke is on you is a test of your hubris, and that kind of test works on just one try.
Thursday, June 6, 2024
Using AI without losing control
As AI becomes increasingly capable of performing tasks that were once the exclusive domain of humans, it no longer makes sense to spend a lot of time teaching skills that AI can do better and faster. Just as we largely replaced the teaching of manual drafting with computer-aided design (CAD), we must now reassess the curriculum and remove skills that have been rendered obsolete by AI.
The way we define curriculum reflects our understanding of what it means to be human, a definition that has evolved significantly over the centuries. As machines have become more advanced, skills previously valued and developed by humans have gradually been entrusted to machines.
However, the logic behind this shift is also changing. It is no longer just a matter of what machines can do better, as they seem to be excelling at almost everything. Instead, it is about what we, as humans, choose to retain, enjoy, and feel compelled to do. It is less about competition with machines and more about the arrangement of power and authority. To maintain our human authority, we must continue to perform certain tasks.
One of the most important of these tasks is the ability to set goals and make value judgments about what should or should not be done. This is a complex skill that requires a comprehensive understanding of the world, both in its physical and social aspects, as well as the wisdom to make decisions when faced with competing values and the inherent uncertainty of the future. As AI increasingly mediates our interactions, the ability to determine the goals and direction of these interactions becomes even more crucial.
Equally important is the ability to work with AI to achieve our objectives. This process, sometimes referred to as "prompt engineering," involves iterative interaction and refinement to obtain the most accurate, appropriate, and helpful outputs from AI. Beyond technical skills, this requires robust critical thinking to assess the truthfulness and value of AI-generated content. In essence, it is the skill of editing, not merely for style but for substance, constantly asking: Is this useful? Does it improve upon what already exists? Does it positively contribute to the overall human knowledge and well-being?
These core competencies should shape the curriculum of the future: goal-setting, value judgment, critical discernment, and the ability to collaborate with AI to achieve human objectives. The specifics will evolve as AI capabilities advance, and there will be variations across disciplines.
However, the fundamental challenge remains the same: in a world increasingly shaped by artificial intelligence, we must rededicate ourselves to cultivating human intelligence in its fullest and most distinctively human expressions. Only then can we ensure that the tremendous power of AI serves to elevate humanity rather than diminish it.
Tuesday, May 21, 2024
"First try with AI"; On the advantages of organic learning
Some people advocate for structured training programs and dedicated time for AI learning, but a more organic approach is more effective and efficient.
The secret to successfully incorporating AI into your work is to simply start using it for your next task. Rather than setting aside special time for AI learning, dive right in and explore how AI can assist you in your current projects. Need to do something? Write a memo, a long email, a letter, a grant proposal? "First Try with AI."
What do you have to lose? The worst-case scenario is that you waste a little time if AI proves unhelpful for that particular task. However, in most cases, you will discover its usefulness and potential to save you some time, even if it doesn't complete the task entirely.
It's important to recognize that AI never does everything for you. Only the most mindless, bureaucratic, compliance-related content may be primarily handled by AI. However, for the majority of tasks, you will intuitively learn the right mix of human and AI ingredients to create the best soup. This organic learning process allows you to understand the strengths and limitations of AI within the context of your specific work.
There is nothing wrong with taking courses to learn about AI, but it is worth noting that assignments in such courses often lack authenticity. Those are "pretend tasks." Even after completing a course, you would still need to learn how to transfer your new skills into real-world contexts. In contrast, an organic approach to AI learning allows you to immediately apply your knowledge within the context of your work, resulting in a more motivated, deeper, and faster learning experience.
As you gradually incorporate AI into your daily tasks, you will naturally develop a better understanding of when and how to leverage its capabilities, and where to mitigate its shortcomings. This hands-on, contextual learning approach will not only help you become more proficient in using AI but also enable you to identify new opportunities for its application within your organization.
In educational contexts, we know there is a strong correlation between instructors' personal use of AI and their willingness to let students use it in class. We don't trust things we do not understand, which explains the unreasonably strong worries about cheating. There will be no classroom use without personal use by instructors first. Once teachers start using AI for their own purposes, their anxiety levels go down, and their creativity frees up to invent classroom uses.
Monday, May 13, 2024
Turnitin Is Selling us Snake Oil, or Why AI Detection Cannot Work
The notion of measuring "AI-generated text" as a fixed percentage of an academic submission is fundamentally flawed. This metric implies a homogeneous substance, akin to measuring the alcohol content in a beverage. However, my recent survey suggests that views on the academic integrity of AI use are far from homogeneous. The survey asked educators to evaluate the ethical implications of using AI for twelve different tasks in writing an academic paper, ranging from researching to brainstorming to editing to actually writing full sections.
The findings revealed significant variance in responses. While many respondents were comfortable with AI aiding in brainstorming ideas, they expressed reservations or outright disapproval of AI writing entire paragraphs or papers. This disparity underscores a critical issue: there is no consensus in the academic profession on what constitutes acceptable AI assistance in learning. More strikingly, within each individual's responses, there was considerable variation in how different AI uses were assessed.
Consider the implications of a tool like Turnitin reporting "50% AI-generated" content. What does this figure actually represent? It lacks context about how the AI-generated content was incorporated. For instance, a paper could be largely original, with only minor edits made by AI at the end, potentially showing a high percentage of AI contribution. Conversely, a student might contribute minimally to an essentially AI-written paper, making slight modifications to reduce the AI-detected percentage. Both scenarios could yield vastly different percentages, yet the ethical implications are markedly divergent.
The pursuit of better detection technology misses the point. The issue is not with the detection capabilities but with the construct itself. The very idea of "AI-generated text" as a unified concept is problematic. Just as a depression inventory measures various symptoms that converge on the underlying construct of depression, our methods for evaluating AI in academic work must recognize the diverse and context-dependent nature of its use. The current approach, which treats all AI contributions as equivalent, is akin to judging a book's genre by counting its words. I wish Turnitin and other commercial "AI detectors" would show just a little more integrity and stop selling us snake oil. They must know that their claims are bogus, because AI-generated text is not a valid construct to be measured.
Instead of focusing obsessively on detecting AI-generated content, we need to shift our perspective. We should expect and require students to use AI as part of their learning process. The challenge then becomes developing assignments that not only measure the content knowledge but also the meta-AI skills and competencies necessary to navigate and leverage these tools effectively. This approach acknowledges the complexity of AI's applications and ensures it is used responsibly, promoting a learning environment that respects both the potential and the limitations of artificial intelligence.