AI in Society
This blog is connected to my role as head of the National Institute on AI in Society
at California State University, Sacramento. However, the opinions and positions expressed here are mine and do not represent the university's opinions or positions.
Wednesday, December 4, 2024
Why We Undervalue Ideas and Overvalue Writing
This bias carries serious equity implications. Students from disadvantaged backgrounds, including first-generation college students, English language learners, and those from under-resourced schools, often arrive with rich intellectual perspectives but struggle with academic writing conventions. Their ideas - shaped by unique life experiences and cultural viewpoints - get buried under red ink marking grammatical errors and awkward transitions. We systematically undervalue their intellectual contributions simply because they do not arrive in standard academic packaging.
Polished academic prose renders judgment easy. Evaluators find comfort in assessing grammatical correctness, citation formats, and paragraph transitions. Ideas bring discomfort - they defy easy measurement and often challenge established thinking. When ideas come wrapped in awkward prose, they face near-automatic devaluation.
AI writing tools expose this bias with new clarity. These tools excel at producing acceptable academic prose - the mechanical aspect we overvalue. Yet in generating truly original ideas, AI remains remarkably limited. AI can refine expression but cannot match the depth of human insight, creativity, and lived experience. This technological limitation actually highlights where human creativity becomes most valuable.
This bias shapes student behavior in troubling ways. Rather than exploring new intellectual territory, students learn to package conventional thoughts in pristine prose. The real work of scholarship - generating and testing ideas - takes second place to mastering academic style guides. We have created a system that rewards intellectual safety over creative risk, while systematically disadvantaging students whose mastery of academic conventions does not match their intellectual capacity.
Changing this pattern requires uncomfortable shifts in how we teach and evaluate. What if we graded papers first without looking at the writing quality? What if we asked students to submit rough drafts full of half-formed ideas before cleaning up their prose? What if we saw AI tools as writing assistants that free humans to focus on what they do best - generating original insights and making unexpected connections?
The rise of AI makes this shift urgent. When machines can generate polished prose on demand, continuing to favor writing craft over ideation becomes indefensible. We must learn to value and develop what remains uniquely human - the ability to think in truly original ways, to see patterns others miss, to imagine what has never existed. The future belongs not to the best writers but to the most creative thinkers, and our educational practices must evolve to reflect this reality while ensuring all students can fully contribute their intellectual gifts.
Thursday, October 10, 2024
Is the college essay dead?
The college essay, once a revered academic exercise, is now facing an existential crisis. It used to be a good tool—a structured way for students to demonstrate their understanding, showcase their critical thinking, and express ideas with clarity. The college essay was not merely about content; it was a skill-building process, teaching students to organize thoughts, develop arguments, and refine language. Yet today, AI has made the traditional essay feel outdated, as it can generate polished, formulaic essays effortlessly. Policing AI use in these assignments is nearly impossible, and the conventional essay’s value is rapidly diminishing.
Not all essays are created equal, however, and the future of the college essay might depend on the type of skills we emphasize. The expository essay, designed to see if students understand material or can apply concepts, is on its last legs. When AI can churn out a satisfactory response in seconds, it is a clear sign that this form of assessment is no longer viable. The AI does not just pass these assignments; it excels at them, raising an uncomfortable question—if a machine can do it, why are we still teaching it? For these kinds of essays, the challenge is that they often assess recall rather than thinking. They were already on shaky ground; AI is just the final push.
The essays that may survive, though, are those that demand novelty, creativity, and genuine problem-solving. AI may help in drafting, structuring, or even generating ideas, but it does not replace the kind of original thinking needed to solve real-world problems. It cannot fully simulate human intuition, lived experience, or deep critical evaluation. AI's writing is wooden, and often devoid of true beauty. Essays that require students to synthesize information in new ways, explore original ideas, exhibit artistic talent, or reflect deeply on personal experiences still have value. These essays are not about whether you know a theory; they are about what you can do with it. This is where the human element—the messy, unpredictable spark of creativity—remains irreplaceable.
The deeper issue is not AI itself but the way we have been teaching and valuing writing. For decades, the emphasis has been on producing “correct” essays—structured, grammatically precise, and obedient to the format. We have been training students to write well enough to meet requirements, not to push the boundaries of their creativity. It is like teaching students to be proficient typists when what we really need are novelists or inventors. We have confused competency with originality, thinking that writing formulaic content is a necessary step before producing meaningful work. This is a misunderstanding of how creativity works; mastery does not come from repetition of the mundane but from risk-taking and exploration, even if that means stumbling along the way.
The real future of the essay should start with this recognition. Imagine if instead of book reports or basic expository pieces, students were challenged to write for real audiences—to draft scientific papers for journals, craft poems for literary contests, or propose solutions to pressing social issues. Sure, many students would not reach the publication stage, but the act of aiming higher would teach them infinitely more about the writing process, and more importantly, about thinking itself. This would not just be about mastering the mechanics of writing but developing a mindset of curiosity and originality. AI could still play a role in these processes, helping with the technicalities, leaving the student free to focus on developing and articulating novel ideas.
The problem with the book report or the “explain Theory A” essay is not just that they are boring; it is that they are irrelevant. Nobody in the professional world is paid to summarize books or explain theories in isolation. These are stepping stones that lead nowhere. Excelling at a pointless, terrible genre does not prepare one to succeed at an authentic genre. Instead of teaching students to write these antiquated forms, we should ask them to write pieces that demand something more—something they cannot copy-paste or generate easily with a prompt. Authentic, context-rich, and creative assignments are the ones that will endure. If there is no expectation of novelty or problem-solving, the essay format becomes an exercise in futility.
AI’s rise does not have to spell the end of the essay. It might, in fact, be the nudge needed to reinvent it. We have the chance to move beyond teaching “correct” writing toward cultivating insightful, original work that challenges the boundaries of what students can do. AI’s presence forces us to ask hard questions about what we want students to learn. If writing is no longer about mechanics or regurgitating content but about generating ideas and engaging critically, then AI becomes a collaborator, not a competitor. It can help with the structure, but the essence—the thinking—must come from the student.
In the end, the college essay is not dead; it is just in need of reinvention. The conventional model of essays as rote demonstrations of knowledge is no longer viable. But the essay that challenges students to think, create, and solve problems—those essays will survive. They might even thrive, as the focus shifts from the mechanics of writing to the art of thinking. The key is to evolve our teaching methods and expectations, making room for a new kind of writing that leverages AI without losing the human touch. Raising expectations is the main strategy in dealing with AI in education.
Monday, September 23, 2024
Cognitive Offloading: Learning more by doing less
Cognitive offloading, as defined by Risko and Gilbert (2016) in their seminal paper "Cognitive Offloading," is "the use of physical action to alter the information processing requirements of a task so as to reduce cognitive demand." In other words, it is about leveraging external tools and resources to ease the mental burden of cognitive tasks.
Some educators mistakenly believe that any cognitive effort is beneficial for growth and development. However, this perspective overlooks the crucial role of cognitive offloading in effective learning. As Risko and Gilbert point out, "Offloading cognition helps us to overcome such capacity limitations, minimize computational effort, and achieve cognitive feats that would not otherwise be possible."
The ability to effectively offload cognitive tasks has always been important for human cognition. Throughout history, we've developed tools and strategies to extend our mental capabilities, from simple note-taking to complex computational devices. However, the advent of AI has made this skill more crucial than ever before.
With AI, we are not just offloading simple calculations or memory tasks; we are potentially shifting complex analytical and creative processes to these powerful tools. This new landscape requires a sophisticated understanding of AI capabilities and limitations. More importantly, it demands the ability to strategically split tasks into elements that can be offloaded to AI and those that require human cognition.
This skill - the ability to effectively partition cognitive tasks between human and AI - is becoming a key challenge for contemporary pedagogy. It is not just about using AI as a tool, but about understanding how to integrate AI into our cognitive processes in a way that enhances rather than replaces human thinking.
As Risko and Gilbert note, "the propensity to offload cognition is influenced by the internal cognitive demands that would otherwise be necessary." In the context of AI, this means learners need to develop a nuanced understanding of when AI can reduce cognitive load in beneficial ways, and when human cognition is irreplaceable.
For educators, this presents both a challenge and an opportunity. The challenge lies in teaching students not just how to use AI tools, but how to think about using them. This involves developing metacognitive skills that allow students to analyze tasks, assess AI capabilities, and make strategic decisions about cognitive offloading.
The opportunity, however, is immense. By embracing cognitive offloading and teaching students how to effectively leverage AI, we can potentially unlock new levels of human cognitive performance. We are not just making learning easier; we are expanding the boundaries of what is learnable.
It is crucial to recognize the value of cognitive offloading and develop sophisticated strategies for its use. The paradox of doing less to learn more is not just a quirk of our technological age; it is a key to unlocking human potential in a world of ever-increasing complexity. The true measure of intelligence in the AI era may well be the ability to know when to think for ourselves, and when to let AI do the thinking for us.
Saturday, September 7, 2024
AI in Education Research: Are We Asking the Right Questions?
The study finds that students who had access to a standard GPT-4 interface (GPT Base) performed significantly better on practice problems, but when that access was removed, they actually performed worse on exams compared to students who never had AI assistance. Interestingly, students who used a specially designed AI tutor with learning safeguards (GPT Tutor) performed similarly to the control group on exams. While these results are intriguing, we need to take a step back and consider the broader implications.
The researchers should be commended for tackling an important topic. As AI becomes more prevalent in education, understanding its effects on learning is crucial. The study's methodology appears sound, with a good sample size and appropriate controls. However, the conclusions drawn from the results may be somewhat misleading.
Consider an analogy: Imagine a study that taught one group of students to use calculators for arithmetic, while another group learned traditional pencil-and-paper methods. If you then tested both groups without calculators, of course the calculator-trained group would likely perform worse. But does this mean calculators "harm learning"? Or does it simply mean we are testing the wrong skills?
The real question we should be asking is: Are we preparing students for a world without AI assistance, or a world where AI is ubiquitous? Just as we do not expect most adults to perform complex calculations without digital aids, we may need to reconsider what math skills are truly essential in an AI-augmented world.
The study's focus on performance in traditional, unassisted exams may be missing the point. What would be far more interesting is an examination of how AI tutoring affects higher-level math reasoning, problem-solving strategies, or conceptual understanding. These skills are likely to remain relevant even in a world where AI can handle routine calculations and problem-solving.
Moreover, the study's title, "Generative AI Can Harm Learning," may be overstating the case. What the study really shows is that reliance on standard AI interfaces without developing underlying skills can lead to poor performance when that AI is unavailable. However, it also demonstrates that carefully designed AI tutoring systems can potentially mitigate these negative effects. This nuanced finding highlights the importance of thoughtful AI integration in educational settings.
While this study provides valuable data and raises important questions, we should be cautious about interpreting its results too broadly. Instead of seeing AI as a potential harm to learning, we might instead ask how we can best integrate AI tools into education to enhance deeper understanding and problem-solving skills. The goal should be to prepare students for a future where AI is a ubiquitous tool, not to protect them from it.
As we continue to explore the intersection of AI and education, studies like this one are crucial. However, we must ensure that our research questions and methodologies evolve along with the technology landscape. Only then can we truly understand how to harness AI's potential to enhance, rather than hinder, learning.
Monday, August 19, 2024
The Right to Leapfrog: Redefining Educational Equity in the Age of AI
AI’s potential in education is clear, particularly in how it can assist students who struggle with traditional learning methods. It is broadly accepted that AI can help bridge gaps in cognitive skills, whether due to dyslexia, ADHD, or other neurodiverse conditions. Yet, the utility of AI should not be confined to specific diagnoses. Insights from decades of implementing the Response to Intervention (RTI) framework reveal that regardless of the underlying cause—be it neurodiversity, trauma, or socioeconomic factors—the type of support needed by struggling students remains remarkably consistent. If AI can aid students with reading difficulties, why not extend its benefits to others facing different but equally challenging obstacles? Equity demands that AI’s advantages be made accessible to all who need them, regardless of the origin of their challenges.
This brings us to a deeper issue: the rigid and often unjust link between procedural and conceptual knowledge. Traditionally, lower-level skills like spelling, grammar, and arithmetic have been treated as prerequisites for advancing to higher-order thinking. The prevailing notion is that one must first master these basics before moving on to creativity, critical thinking, or original thought. However, this linear progression is more a product of tradition than necessity. AI now offers us the chance to reconsider this approach. Students should have the right to leapfrog over certain lower-level skills directly into higher-order cognitive functions, bypassing unnecessary barriers.
Predictably, this notion encounters resistance. Rooted in the Protestant work ethic is the belief that one must toil through the basics before earning the right to engage in more sophisticated intellectual activities. This ethic, which equates hard work on mundane tasks with moral worth, is deeply ingrained in our educational systems. However, in an age where AI can handle many of these lower-level tasks, this mindset seems increasingly obsolete. Insisting that all students must follow the same sequence of skills before advancing to higher-order thinking is not just misguided; it is a relic of a bygone era. If AI enables students to engage meaningfully with complex ideas and creative thinking from the start, we should embrace that opportunity rather than constrain it with outdated dogma.
The implications of this shift are significant. If we recognize the right to leapfrog over certain skills, we must also acknowledge that traditional educational hierarchies need to be re-examined. Skills like spelling and grammar, while valuable, should no longer be gatekeepers for students who excel in critical thinking and creativity but struggle with procedural details. AI offers a way to reimagine educational equity, allowing students to focus on their strengths rather than being held back by their weaknesses. Rather than forcing everyone to climb the same cognitive ladder, we can enable each student to leap to the level that aligns with their abilities, creating a more personalized and equitable educational experience.
This rethinking of educational equity challenges deeply rooted assumptions. The belief that hard work on the basics is necessary for higher-level achievement is pervasive, but it is not supported by evidence. In reality, cognitive development is driven more by engagement with complex ideas than by rote mastery of procedural skills. AI provides the tools to focus on these higher-order skills earlier in education, without the traditional prerequisite of mastering lower-order tasks.
Moreover, the concept of “deskilling” is not new. Throughout history, humanity has continually adapted to technological advances, acquiring new skills while allowing others to fade into obscurity. Today, few people can track animals or make shoes from animal skin—skills that were once essential for survival. Even the ability to harness a horse, once a common necessity, is now a rare skill. While some may lament these losses, they are also a reminder that as society evolves, so too must our educational priorities. Just as technological advancements have rendered certain skills obsolete, AI is reshaping the skills that are most relevant today.
As we move forward, educators must rethink how learning experiences are designed. Rather than viewing AI as merely a tool for accommodating deficits, we should see it as a means of expanding possibilities for all students. By enabling learners to bypass certain skills that are no longer essential in an AI-driven world, we can better align education with the demands of the 21st century. This is about acknowledging that the path to learning does not have to be the same for everyone. In a world where AI can democratize access to higher-level cognitive tasks, the right to leapfrog is not just a possibility—it is a necessity for equitable education.
Thursday, August 8, 2024
The Cognitive Leap Theory
With the arrival of AI, education is experiencing a profound shift, one that requires a rethinking of how we design and implement learning activities. This shift is captured in the cognitive leap theory, which posits that AI is not just an add-on to traditional education but a transformative force that redefines the learning process itself. The cognitive leap theory is a core part of a larger AI-positive pedagogy framework.
Traditionally, educational activities have been structured around the original or revised Bloom’s Taxonomy, a framework that organizes cognitive skills from basic recall of facts (Remember) to higher-order skills like Evaluation and Creation. While Bloom’s pyramid was often interpreted as a sequential progression, Bloom himself never insisted on a strict hierarchy. In fact, with the integration of AI into the classroom, the importance of these skills is being rebalanced. The higher-order skills, particularly those involving critical evaluation, are gaining prominence in ways that were previously unimaginable.
In an AI-positive pedagogical approach, the focus shifts from merely applying and analyzing information—tasks typically associated with mid-level cognitive engagement—to critically evaluating and improving AI-generated outputs. This represents a significant cognitive leap. Instead of simply completing tasks, students are now challenged to scrutinize AI outputs for accuracy, bias, and effectiveness in communication. This shift not only fosters deeper cognitive engagement but also prepares students to navigate the complex landscape of AI-driven information.
A key component of this approach is the development of meta-AI skills. These skills encompass the ability to formulate effective (rich) inquiries or prompts for AI, to inject original ideas into these prompts, and, crucially, to critically assess the AI’s responses. This assessment is not a one-time task but part of an iterative loop where students evaluate, re-prompt, and refine until the output meets a high standard of quality. This process not only sharpens their analytical skills but also enhances their creative abilities, as they learn to think critically about the inputs and outputs of AI systems.
Moreover, the traditional view that learning progresses linearly through Bloom’s Taxonomy is being upended. In the AI-enhanced classroom, evaluation and creation are no longer the endpoints of learning but are increasingly becoming the starting points. Students must begin by evaluating AI-generated content and then proceed to improve it, a process that requires a deep understanding of context, an awareness of potential biases, and the ability to communicate effectively. This reordering of cognitive priorities is at the heart of the cognitive leap theory, which emphasizes that the future of education lies in teaching students not just to perform tasks but to engage in higher-order thinking at every stage of the learning process.
The implications of this shift are serious. Educators must rethink how they design assignments, moving away from traditional task-based assessments toward activities that challenge students to evaluate and improve upon AI-generated outputs. This requires a new kind of pedagogy, one that is flexible, iterative, and deeply engaged with the possibilities and limitations of AI.
By reimagining the role of higher-order thinking skills and emphasizing the critical evaluation of AI outputs, we can prepare students for a future where cognitive engagement is more important than ever. This is not just about adapting to new technology; it is about transforming the way we think about learning itself.
Monday, July 15, 2024
Effort in Learning: The Good, the Bad, and the AI Advantage
Many educators argue that AI makes learning too easy, suggesting that students need to apply effort to truly learn. This perspective, however, confuses the notion of effort with the process of learning itself. The belief that every kind of effort leads to learning overlooks a significant aspect of cognitive psychology: the nature and impact of cognitive load.
Cognitive load theory, developed by John Sweller, offers a crucial framework for understanding how students learn. It posits that the human brain has a limited capacity for processing information. Sweller distinguished between three types of cognitive load: intrinsic, extraneous, and germane. Intrinsic cognitive load is inherent to the task itself. For instance, solving a complex mathematical problem has a high intrinsic load due to the complexity of the content. Germane cognitive load, on the other hand, refers to the mental resources devoted to processing, construction, and automation of schemas, which are structures that help solve problems within a specific domain.
The most problematic, however, is extraneous cognitive load. This type of load is not related to the task but to the way information is presented or to the extraneous demands placed on learners. High extraneous cognitive load can distract and stunt learning, making it harder for students to engage meaningfully with the material. For example, a poorly designed textbook that requires constant cross-referencing can add unnecessary cognitive load, detracting from the student's ability to learn. A terrible lecture or busy-work assignments do the same. If you think that every effort by a student is valuable, you are a hazer, not a teacher.
The challenge, therefore, is not to eliminate all effort but to ensure that the effort students exert is directed towards productive ends. In other words, we need to reduce extraneous cognitive load and increase germane cognitive load. The true aim is to leverage AI to enhance germane cognitive load, directly aiding in the acquisition of schemas necessary for solving discipline-specific problems.
Every academic discipline has core problems that students are expected to solve by the end of their programs. The first step is to mercilessly strip the language of learning outcomes of wishy-washy jargon and focus on these fundamental problems. By identifying these top-level problems, educators can better understand the sequences of skills and knowledge students need to acquire.
Once these core problems are identified, it is crucial to examine how professionals in the field solve them. This involves a detailed analysis of the mental schemas that experts use. Schemas are cognitive structures that allow individuals to organize and interpret information. They enable professionals to recognize patterns, make decisions, and solve problems efficiently. For example, a doctor has schemas for diagnosing illnesses based on symptoms and test results, while an engineer has schemas for designing structures that withstand specific stresses. It is also important to understand whether the field is changing and professionals already solve those problems with AI, or soon will.
AI can play a pivotal role in helping students develop these schemas. These technologies can identify where a student is struggling and provide targeted support, ensuring that cognitive resources are directed towards germane learning activities rather than being wasted on extraneous tasks.
To achieve this, we need to revisit the basic principles of instructional design. While these principles remain fundamentally the same, they require new thinking in light of AI capabilities. Instructional design should focus on reducing extraneous cognitive load by simplifying the learning environment and minimizing distractions. Simultaneously, it should increase germane cognitive load by providing challenging and meaningful tasks that promote the construction of schemas.
Moreover, educators need to recognize where cognitive load is not useful and should focus exclusively on the germane kind. This might mean redesigning courses to incorporate AI tools that can automate routine tasks, provide instant feedback, and offer complex, real-world problems for students to solve. Such an approach ensures that students are engaged in deep, meaningful learning activities rather than busywork.
Ad summam, the integration of AI in education is not about making learning easier in a superficial sense. It is about making learning more effective by ensuring that students' cognitive resources are directed towards activities that genuinely promote understanding and skill acquisition. By focusing on germane cognitive load and leveraging AI to support instructional design, we can create learning environments that foster deep, meaningful learning and prepare students to solve the complex problems of their disciplines. This calls for a rigorous rethinking of educational practices and a commitment to harnessing AI's potential to enhance, rather than hinder, the learning process.
Tuesday, July 9, 2024
AI-Positive Pedagogy: Navigating the Great Disruption
AI has disrupted the educational landscape. This disruption threatens the established sequence of skill development, from simple to mid-range to higher-level skills, by eroding traditional curriculum principles, particularly in the realm of student activities and assessment. As a profession, we face a critical decision: limit AI use or develop an AI-positive pedagogy.
While limiting AI use may seem tempting, it is ultimately unfeasible and fails to prepare students for the AI-infused world they will live in. Attempting to enforce strict limitations on AI use is not only impractical but also fails to acknowledge the potential benefits that AI can bring to education.
The only plausible path forward is to adopt a new pedagogy that harnesses the power of AI for the benefit of our students. This involves a shift towards authentic, discipline-specific assessments that mirror real-world applications of AI within various fields. By focusing on how AI is used in different disciplines, educators can create assessments that evaluate students' ability to effectively utilize AI tools in relevant contexts.
AI-positive pedagogy emphasizes the cultivation of higher-order thinking skills, such as prompt engineering and discerning thinking. Prompt engineering involves crafting effective queries and instructions for AI systems, while discerning thinking emphasizes the critical evaluation of AI-generated information and the ability to make informed decisions by combining AI insights with human judgment. These meta-AI skills are crucial for students to navigate and thrive in an AI-populated world.
AI-positive pedagogy should prepare students to work effectively in environments where human and artificial intelligence coexist and complement each other. By fostering skills in collaborating with AI systems, understanding the strengths of both human and artificial intelligence, and developing strategies for distributed problem-solving, educators can equip students to succeed in the AI-infused workplace.
The shift towards AI-positive pedagogy is well-rooted in past pedagogy and curriculum theory. Educators have long prioritized conceptual and higher-level skills over mechanical and procedural knowledge. The disruption caused by AI may serve as a catalyst for educators to finally achieve what they have been striving for over the past century. As we embrace AI-positive pedagogy, it is essential to re-evaluate the assumption that all effort leads to learning. Cognitive Load Theory suggests that learning can be optimized by managing the three types of cognitive load: intrinsic (inherent complexity of the learning material), extraneous (caused by ineffective instructional design), and germane (effort required to process and construct mental schemas). In the context of AI-positive pedagogy, this involves using AI tools to provide appropriate support and scaffolding as learners progress from lower-level to higher-level skills, while minimizing extraneous load and promoting germane load. Not all loss of effort by students is bad. If we are honest, much of learning work is extraneous, busy, or compliance/submission work anyway. By investigating the limits and structure of leapfrogging - skipping over mid-range skills to move from basic literacies and numeracies to creative, theoretical, and critical thinking - educators can harness the power of AI to accelerate student growth.
To develop a robust AI-positive pedagogy, educators and cognitive psychologists must collaborate to investigate how students interact with and perceive AI tools - alone or under a teacher's guidance. This research should focus on understanding the mental models students develop when engaging with AI, and how these models differ from those associated with other educational tools. By exploring students' cognitive processes, researchers can identify the unique challenges and opportunities presented by AI in the learning environment.
It is also crucial to examine the emotional and motivational factors that influence students' engagement with AI tools. Understanding how students' attitudes, beliefs, and self-efficacy impact their willingness to adopt and effectively use AI in their learning can inform the design of AI-positive pedagogical strategies.
In addition to investigating student cognition and affect, researchers should also explore the social and cultural dimensions of AI use in education. This includes examining how AI tools can be leveraged to promote collaborative learning, foster inclusive learning environments, and bridge educational inequities.
To build a comprehensive AI-positive pedagogy, researchers and educators must also develop and validate practices for integrating AI into various disciplines and educational contexts. This involves creating guidelines for the use of AI in education, as well as establishing professional development programs to support educators in effectively implementing AI-positive pedagogical strategies.
The development of an evidence-based AI-positive pedagogy requires a concerted effort from the educational community. By investing in basic research, collaboration, and innovation, we can harness the potential of AI to transform education and empower students to thrive in an AI-infused world.
Friday, June 21, 2024
Can We Learn Without Struggle in the Age of AI?
I've been pondering a question: What if our traditional understanding of cognitive growth is too narrow? We've long held onto the idea that real learning comes from struggle, from pushing against our limits, from grappling with challenges just beyond our current abilities. But what if that's not the whole story?
I'm starting to wonder if growth - real, meaningful cognitive development - might not always need the strong challenges we've assumed were necessary. And this thought has become particularly relevant as we enter the new world of AI-assisted learning.
Many of our theories about learning and development are rooted in the idea of conflict or tension. Vygotsky's Zone of Proximal Development, for instance, emphasizes the space between what a learner can do without help and what they can do with guidance. Piaget described cognitive disequilibrium as a driver of development. These Hegelian/Marxist theories have shaped how we think about education for decades.
But here's what I'm pondering: What if growth can happen more... gently? What if it can occur through exposure, through interaction, through a kind of cognitive osmosis that doesn't necessarily involve struggle or challenge? And importantly, what if this gentler form of learning is still deeply social and relational?
There's a lot of hand-wringing in educational circles about AI tools like ChatGPT. The worry is that by providing ready answers, these tools will short-circuit the learning process. Students won't have to struggle, so they won't really learn. I have definitely been expressing these concerns in my previous blogs.
But I'm not so sure anymore. Let me float a hypothesis: What if AI-assisted learning doesn't dampen growth, but instead provides a different kind of cognitive experience that can still lead to meaningful development? And what if this experience, rather than being isolating, actually opens up new avenues for social learning and collaboration?
Here's an analogy that's been helpful for me in thinking about this. Remember when GPS first became widely available? There were concerns that people would no longer learn to navigate cities, that we'd lose our sense of spatial awareness. And yet, most of us who use GPS regularly still develop a pretty good understanding of the cities we live in and visit. We might learn differently - perhaps more slowly, or with less detail - but we do learn, without all the frustrations of trying to read a map while driving or memorizing multiple turns (Left, second right, soft left again...). City driving is probably safer, and we did not become more stupid.
The GPS doesn't prevent us from learning; it provides a different context for learning. We're not struggling with paper maps, but we're still processing spatial information, making connections, building mental models of our environment.
Could AI-assisted learning work in a similar way? Sure, students might get quick answers or produce a quick text with an AI without much effort, which feels somehow wrong (the nature of these feelings is a subject for separate consideration). But that doesn't mean they stop thinking or interacting. They will start wondering how to get better answers and produce better outputs. They will begin to notice patterns in the AI's responses. They will start to question or critique what the AI produces. That's what the human brain has evolved to do.
Moreover, this process doesn't happen in isolation. Students will discuss their AI interactions with peers, compare outputs, collaboratively explore how to improve results. It becomes a form of social play - experimenting, sharing discoveries, building on each other's ideas. The AI becomes a tool for social learning, not a replacement for it.
In other words, the presence of AI doesn't eliminate cognitive work or social interaction - it might just shift their nature. And who's to say that this new form of cognitive and social engagement is any less valuable than the old?
Now, I'm not saying we should throw out everything we know about learning and development. Challenge and struggle certainly have their place. But I am suggesting that maybe we need to broaden our understanding of how growth happens, recognizing that it can occur through gentler, more playful forms of social interaction and exploration.
Perhaps there's a whole spectrum of cognitive experiences that can lead to growth, ranging from intense challenge to gentle, collaborative exposure. Maybe AI-assisted learning falls somewhere on this spectrum - not replacing traditional forms of learning, but adding new dimensions to how we can develop cognitively and socially.
This is all hypothetical, of course. We're in new territory with AI, and it will take time and research to understand its full impact on learning and development. But I think it's worth considering that our assumptions about how growth happens might need updating, particularly in how we view the social aspects of learning.
What do you think? Is it possible that we've overemphasized challenge in our understanding of learning? Could AI-assisted learning open up new pathways for cognitive development and social learning that we haven't fully appreciated yet? I am just back from a conference, where we had many early childhood educators. In that tribe, there is a lot more talk about play than about forcing growth. Maybe that's why I am having these second thoughts about growth.
I don't have definitive answers, but I think these are questions worth exploring as we navigate this new landscape of learning. After all, if there's one thing we know for sure about human cognition, it's that it's endlessly adaptable and intrinsically social. Perhaps it's time for our theories about learning to adapt as well, embracing a more collaborative, playful vision of growth in the age of AI.
Tuesday, May 21, 2024
"First try with AI": On the advantages of organic learning
Some people advocate for structured training programs and dedicated time for AI learning, but a more organic approach is both more effective and more efficient.
The secret to successfully incorporating AI into your work is to simply start using it for your next task. Rather than setting aside special time for AI learning, dive right in and explore how AI can assist you in your current projects. Need to do something? Write a memo, a long email, a letter, a grant proposal? "First Try with AI."
What do you have to lose? The worst-case scenario is that you waste a little time if AI proves unhelpful for that particular task. However, in most cases, you will discover its usefulness and potential to save you some time, even if it doesn't complete the task entirely.
It's important to recognize that AI never does everything for you. Only the most mindless, bureaucratic, compliance-related content may be primarily handled by AI. However, for the majority of tasks, you will intuitively learn the right mix of human and AI ingredients to create the best soup. This organic learning process allows you to understand the strengths and limitations of AI within the context of your specific work.
There is nothing wrong with taking courses to learn about AI, but it is worth noting that assignments in such courses often lack authenticity. Those are "pretend tasks." Even after completing a course, you would still need to learn how to transfer your new skills into real-world contexts. In contrast, an organic approach to AI learning allows you to immediately apply your knowledge within the context of your work, resulting in a more motivated, deeper, and faster learning experience.
As you gradually incorporate AI into your daily tasks, you will naturally develop a better understanding of when and how to leverage its capabilities, and where to mitigate its shortcomings. This hands-on, contextual learning approach will not only help you become more proficient in using AI but also enable you to identify new opportunities for its application within your organization.
For educational contexts, we know there is a strong correlation between instructors personally using AI and them allowing students to use it in class. We don't trust things we do not understand, which explains the unreasonably strong worries about cheating. There will be no classroom use without the personal use by instructors first. Once teachers start using it for their own purposes, their anxiety levels go down, and their creativity frees up to invent classroom uses.
Monday, May 13, 2024
Turnitin Is Selling Us Snake Oil, or Why AI Detection Cannot Work
The notion of measuring "AI-generated text" as a fixed percentage of an academic submission is fundamentally flawed. This metric implies a homogeneous substance, akin to measuring the alcohol content in a beverage. However, my recent survey suggests that the ethics of AI use in academic work are far from homogeneous. The survey asked educators to evaluate the ethical implications of using AI for twelve different tasks in writing an academic paper, ranging from researching to brainstorming to editing to actually writing full sections.
The findings revealed significant variance in responses. While many respondents were comfortable with AI aiding in brainstorming ideas, they expressed reservations or outright disapproval of AI writing entire paragraphs or papers. This disparity underscores a critical issue: there is no consensus in the academic profession on what constitutes acceptable AI assistance in learning. More strikingly, within each individual's responses, there was considerable variation in how different AI uses were assessed.
Consider the implications of a tool like Turnitin reporting "50% AI-generated" content. What does this figure actually represent? It lacks context about how the AI-generated content was incorporated. For instance, a paper could be largely original, with only minor edits made by AI at the end, potentially showing a high percentage of AI contribution. Conversely, a student might contribute minimally to an essentially AI-written paper, making slight modifications to reduce the AI-detected percentage. Both scenarios could yield vastly different percentages, yet the ethical implications are markedly divergent.
The pursuit of better detection technology misses the point. The issue is not with the detection capabilities but with the construct itself. The very idea of "AI-generated text" as a unified concept is problematic. Just as a depression inventory measures various symptoms that converge on the underlying construct of depression, our methods for evaluating AI in academic work must recognize the diverse and context-dependent nature of its use. The current approach, which treats all AI contributions as equivalent, is akin to judging a book's genre by counting its words. I wish Turnitin and other commercial "AI detectors" would show just a little more integrity and stop selling us this snake oil. They must know that their claims are bogus, because "AI-generated text" is not a valid construct to be measured.
Instead of focusing obsessively on detecting AI-generated content, we need to shift our perspective. We should expect and require students to use AI as part of their learning process. The challenge then becomes developing assignments that not only measure the content knowledge but also the meta-AI skills and competencies necessary to navigate and leverage these tools effectively. This approach acknowledges the complexity of AI's applications and ensures it is used responsibly, promoting a learning environment that respects both the potential and the limitations of artificial intelligence.
Tuesday, March 5, 2024
Let's hallucinate, take the kids along
Interestingly, this property of AI models reveals something profound about the human mind as well. Our remarkable ability to imagine, create and solve problems is inextricably linked to our capacity for error. Just like LLMs, human thinking is fundamentally generative and probabilistic - we are constantly making predictions and filling in gaps based on prior knowledge and contextual cues. And in doing so, we inevitably make mistakes, jumping to conclusions and seeing patterns where none exist. In a sense, "hallucination" is a built-in feature of human cognition, not a bug.
This insight has important implications for how we approach education and learning. Too often, our educational systems are overly focused on eliminating errors and inculcating "correct" answers. While accuracy is certainly important, an excessive emphasis on being right all the time can stifle creativity and limit our ability to generate novel ideas and solutions. To truly tap into the power of the human mind, we need to create space for productive mistakes and flights of imagination.
So perhaps we should spend less time trying to prevent students from ever being wrong, and more time teaching them how to recover from errors, distinguish fact from fantasy, and harness their imaginative abilities in positive ways. By embracing a bit of beneficial "hallucination", we may actually enhance our ability to discover truth and expand the boundaries of human knowledge. The key is striking the right balance - letting our minds roam free, while also exercising our critical faculties to rein in our fantasies when needed. In this way, we can learn from the foibles of AI to better understand and cultivate the powers of our own marvelous minds.
Monday, January 29, 2024
Writing instructors, why are you surprised by AI?
Why do you look surprised? Since the 1970s, there has been a shift in your field. This change was not about refining the minutiae of grammar or punctuation. Rather, it was a movement toward valuing the creative process in writing. Think of pioneers like Donald Graves, Lucy Calkins, and Peter Elbow. They were not merely toying with new ideas; they were fundamentally altering how writing is taught, influencing college-level instruction as well.
The advent of AI technology has accelerated a shift that was already underway. Historically, while there was vocal support for creative and critical thinking, the reality often leaned towards assessing grammar and spelling. It was simpler to grade based on these concrete elements. Judging originality and creativity posed greater challenges, especially when justifying grades during student appeals.
However, it is becoming clear that the reliance on traditional assessment is no longer sustainable. It is time to genuinely embrace what has been acknowledged for decades. The focus should shift more towards teaching originality, creativity, authenticity, discernment, and critical thinking. Ideas should be valued over mechanical accuracy.
A crucial aspect of this evolution is teaching students to write with AI assistance. This approach does not diminish writing standards. Instead, it raises the bar for the final product. Students should learn to use AI as a tool to enhance their writing, not as a substitute for critical thinking or creativity.
Dear writing instructors, the time has come to adapt. And you know how to do it better than anyone else. The gradual shift many of you have been working on is now upon us. This is a moment for re-evaluating, rethinking, and embracing a new phase in education where AI complements and enhances the teaching of writing. The future is here, and it aligns with the trajectory you have been following.
Monday, January 22, 2024
Why AI is unlikely to replace teachers
The allure of a tech-driven utopia in education is not new. Radios, televisions, the internet, MOOCs – each has been heralded as a harbinger of the traditional teacher's obsolescence. Today, AI steps into this familiar spotlight, with some prophesying a future with fewer educators. Understanding this perspective isn't challenging, given the enormity of public education's budget, the stubborn inequalities it harbors, and its notorious resistance to reform. However, the notion of significantly reducing teacher numbers through AI implementation seems, at best, a distant fantasy.
Chatbots, the latest prodigies of AI, have proven to be exceptional personal tutors. They can tailor information delivery to individual needs, offering a level of customization that traditional education struggles to match. But here's the rub: education is not merely about transferring information. It's about fostering a unique educational relationship that optimizes learning. For all its sophistication, AI lacks the capacity to replicate this.
AI indeed creates a paradise for autodidacts. Those with a natural inclination towards self-directed learning, armed with motivation and discipline, find in AI a boundless resource. However, the majority aren't autodidacts. They thrive in a relational context that not only motivates but also facilitates learning. This is a foundational principle in major learning theories, from Vygotsky's social development theory to Bandura's social learning theory and Bruner's constructivist theory. The invisible labor of a teacher or a college instructor lies in creating and nurturing this context. Presently, there is nothing in AI that can substitute this critical human element.
Furthermore, educational institutions have become integral to societal fabric, not merely as centers of learning but as community hubs. Imagining what millions of children and young adults would do without the structure of schools and colleges opens a Pandora's box of societal and developmental questions. These institutions require adult presence, not just for educational delivery, which AI might partly assume, but for the overarching environment of care and socialization they provide.
My prognosis? Unlike other industries where automation has resulted in significant workforce reductions, the field of education, particularly the teaching staff, will likely remain unscathed in this aspect. There's no need for panic among educators, but there is a need for adaptation. Learning to harness AI's capabilities will be crucial, not to replace teachers, but to complement them, freeing up time for the more nuanced, relational, and affective aspects of their roles. Additionally, educators must remain agile, adapting curricula to include skills that future employers will value, ensuring students are well-equipped for the evolving workforce.
In essence, AI in education is not a replacement, but a tool – one that, if used wisely, can enhance the educational experience without displacing its most vital component: the human educator.
Friday, January 12, 2024
AI use is not a sin
The enduring influence of Puritan ethics in American culture presents an intriguing dichotomy. This historical ethos, with its deep roots in hard work and discipline, colors modern perspectives on technology and learning. I am really worried about the disproportionate efforts to catch students using AI, as if its use were somehow sinful on its own.
Puritan ethics, born from 16th and 17th-century religious reformers, celebrated hard work as a moral virtue. This belief, that success must be earned through effort and toil, subtly shapes American attitudes towards technology, including AI in education. Critics of AI in this realm often argue that it makes learning 'too easy', equating ease with moral decay. They yearn for the 'authenticity' of traditional learning methods, where struggle is seen as the only legitimate path to knowledge.
However, it's crucial to acknowledge that learning does indeed require effort; growth is impossible without it. But this effort need not be synonymous with drudgery. Suffering and effort are not interchangeable. The assumption that struggle is inherently valuable and that ease is inherently suspect is a limited view, overlooking the broader purpose of education.
The Puritanical echo in the debate over AI in education is ironic. The ethos was about self-improvement, yet rejecting AI tools seems counterproductive. AI can democratize and personalize education, making it more accessible and tailored to individual needs.
The overuse of ethical judgments in this context reflects a broader issue. Ethics is often oversimplified, leaving little room for the complexities of life. This misuse of ethics, particularly in education, can hinder innovation.
In re-evaluating these inherited ethical frameworks, it's essential to recognize that ease in learning isn't antithetical to the values of hard work and achievement. Education's true goal is empowerment and enlightenment, and AI offers a transformative potential in reaching this goal.
Monday, January 8, 2024
I'll tell you what's unethical (a rant)
Ah, the great ethical quandary of our times in education – the use of AI! Picture this: earnest educators standing as the last bastion of traditional wisdom, decreeing “Thou shalt not use AI,” with a fervor that's almost admirable, if it weren't so quaintly misplaced. This isn't just a classic case of misunderstanding technology; it's like watching someone trying to ward off a spaceship with a broomstick.
Now, let's talk about restrictions. In education, where reason should reign supreme, the rationale for any restriction must be more substantial than "because it’s always been this way." When an educator waves the flag of prohibition against AI, one can't help but wonder: where’s the logic? It’s a bit like saying you shouldn’t use a calculator for fear it might erode your abacus skills.
Here's a thought to ponder: the only justifiable ground for restricting AI use in education is if, and only if, it hinders the development of a foundational skill – one that's essential for crafting more complex abilities required for advanced learning. And, let’s not forget, the burden of proof rests with the person setting the limits. Which skill, exactly, is prevented from being developed by the use of AI? If you can explain it to students, then yes, be my guest, ban away.
AI is a very good tutor. Yes, it makes mistakes sometimes, but it is infinitely patient and always available, no appointment necessary. No need to be embarrassed when asking for the umpteenth example to illustrate an elusive concept. To withhold this resource from students isn't just a tad unethical; it's like hiding the key to a treasure chest of knowledge and saying, “Oops, did I forget to mention where it is?”
So, what's ethical and what's not in this grand AI debate? Anything that facilitates learning and growth is a big yes in the ethical column. Casting aspersions on AI without a valid reason or depriving students of its benefits is unethical.
The larger, real question we should be asking is this: What defines ethical practice in education? Is it clinging to the past because it’s comfortable, or is it embracing the future and all the tools it brings to help our students soar? At the end of the day, what’s truly unethical is anything that hinders progress under the guise of misguided caution. After all, isn't education all about unlocking doors, not closing them?
Saturday, January 6, 2024
What does AI reveal about relational pedagogy?
In the ongoing narrative of education's transformation, AI's integration has prompted a profound reassessment of what constitutes uniquely human skills. Stephen Wolfram astutely observed that AI hasn't so much humanized computers as it has highlighted the machine-like aspects of certain human abilities, such as writing. This insight extends powerfully into the realm of education, reshaping our understanding of teaching and its intrinsic human elements.
Traditionally, teaching has been viewed primarily as a process of instruction, a transmission of knowledge from teacher to student. However, the advent of AI in education challenges this perspective. AI's ability to deliver instruction, personalize learning, and even interact with students reveals that the instructional aspect of teaching is not exclusively human after all. Machines can replicate, and in some cases, surpass human efficiency in these areas. This realization prompts a crucial question: if machines can instruct, what then is the unique value that human educators bring to the table?
The answer lies in the relational aspect of teaching, an area where AI cannot succeed. AI's emergence has inadvertently cast a spotlight on the importance of relationship-building in education, underscoring its irreplaceability. The human teacher's role evolves from being a mere conveyor of knowledge to a mentor, a guide, a catalyst for emotional and social growth. In this light, the human educator's value is redefined, emphasizing those qualities that machines cannot replicate: empathy, emotional intelligence, and the ability to inspire and motivate.
This shift in perspective is part of a broader redefinition of what it means to be human in an age increasingly dominated by machines. As AI takes over tasks that were once thought to require human intelligence, we are compelled to re-examine and emphasize those domains that are uniquely human. The essence of humanity is being recalibrated, focusing more on emotional, social, and creative capacities - areas where humans excel and machines falter.
In the context of education, this recalibration has profound implications. It suggests that the future of teaching lies not in competing with AI in cognitive tasks but in embracing and enhancing the relational, emotional, and creative aspects of education. Teachers, liberated from the routine cognitive aspects of their work by AI, can focus more on developing deep, meaningful relationships with students, fostering their emotional and social growth, and nurturing their creativity.
This evolution does not diminish the teacher's role; rather, it elevates it. Educators become the custodians of those aspects of learning that are quintessentially human. The classroom becomes a place where not just intellectual but emotional and social skills are developed, where students learn not just from machines but from the rich, complex interactions with their teachers and peers.
AI's integration into education does more than just streamline teaching; it prompts a reevaluation of the teaching profession and a redefinition of humanity itself. As AI assumes more cognitive tasks, the unique value of human educators comes into sharper focus, centering on the relational and emotional aspects of teaching. This shift heralds a new era in education, one where the human element is not just preserved but celebrated and elevated, defining a future where humans and machines work in tandem to create a richer, more holistic educational experience.
Monday, November 27, 2023
Assessing writing with AI
Writing with AI is a complex skill that overlaps with traditional manual writing, but it is not the same. Many instructors struggle to grasp this new skill because it is unfamiliar to them. Teaching something you haven't mastered is challenging, leading to noticeable unease at all educational levels. Even those eager to incorporate AI in teaching, often open to innovation, face this difficulty.
The issue essentially lies in redefining the objectives of writing instruction. If the belief is that students should ultimately write independently, then traditional practice is paramount, leaving no role for AI tools. However, the more challenging conceptual shift is recognizing the need to teach students how to write with AI.
This is like the transition from penmanship to typing. We lose something in this shift: the beauty, the discipline, and the rigorous exercises of handwriting. I recall diligently practicing letter formations in my first-grade penmanship class. Although I was never adept at it and gladly transitioned to typewriters when they became accessible, I understand the pain of losing the esteemed art of writing, cherished for centuries. This pain, particularly acute for those who have spent decades mastering and teaching writing, must be acknowledged.
Yet, this shift seems inevitable. We are dealing with a technology that is being adopted faster than any in history, and it is not a passing fad. The benefits are too clear. We face a stark paradox: educators use AI to create lesson plans and assessment rubrics, yet often bar their students from using the same technology. This is unsustainable and awkward.
As a profession, we are only taking the first steps in integrating AI into writing instruction. Here's another baby step: I revised Sacramento State University's Undergraduate Writing Portfolio Assessment criteria, considering the new skill of "wraiting."
Writing Placement for Juniors Portfolio (WPJ)
5 - Exceptional Wraiter: Demonstrates mastery in "wraiting," producing AI-assisted compositions at a publishable level in their respective discipline. Showcases exceptional skill in generating rich, engaging prompts and collaboratively refining AI outputs. Exhibits a deep understanding of AI's strengths and limitations, skillfully navigating these in producing original, high-quality work.
4 - Strong Wraiter: Effectively employs AI tools in "wraiting," producing texts of high quality that reflect a sophisticated understanding of AI's capabilities. Demonstrates the ability to create rich prompts and engage in the iterative process of refining AI-generated content. Shows a clear grasp of AI's strengths and limitations, using them to enhance original thinking and critical evaluation.
3 - Competent Wraiter: Demonstrates a solid understanding of "wraiting," using AI tools to assist in writing tasks. Capable of creating effective prompts and engaging in the process of refining AI outputs. Shows awareness of the strengths and limitations of AI in writing, but may require further guidance to fully exploit these in creating high-quality texts.
2 - Developing Wraiter: Beginning to understand the role of AI in "wraiting." Can generate basic AI-assisted texts but requires further instruction in creating effective prompts and refining outputs. Shows potential in understanding AI's strengths and limitations, but needs more practice to integrate these effectively in writing tasks.
1 - Emerging Wraiter: Early stages of grasping "wraiting." Struggles with effectively using AI tools, often producing clichéd, uninspired texts that lack human input and originality. Needs substantial guidance in understanding AI's capabilities, constructing prompts, and refining AI-generated content.
0 - Incomplete Portfolio: Portfolio does not demonstrate the basic competencies in "wraiting" or effective use of AI in writing tasks. Requires additional work to understand and skillfully employ AI tools in the writing process.

What do you think?
Monday, May 15, 2023
If a robot can beat your writing assignment, it is time to rethink it
The rise of machines, with AI bots like OpenAI's ChatGPT replicating human-like text generation, compels us to question our education system's foundations. Is reflective or expository writing a valid assessment when AI can mimic average student work? The answer isn't straightforward. However, this crisis provides an opportunity to redefine our understanding of writing and its relation to thought.
The advent of AI challenges us to de-emphasize the mechanics of writing, such as grammar, style, and syntax, that a machine can master. Instead, we should focus on more complex aspects of writing that are inherently human: original thought, artful language, and narratives that resonate with the human experience. This shift, although jarring, is necessary. It signals not just a technological revolution, but a cultural, intellectual, and educational upheaval.
The AI revolution illuminates a harsh truth: traditional education and assessment methods are no longer sufficient. The challenge is to teach students not just to write well, but to think originally, create artfully, and understand deeply. In the face of the machine, we must thrive in domains that are distinctly human.
In this technological metamorphosis, we're compelled to redefine our work and our value. Perhaps we are not merely creators, analysts, or workers. Perhaps we are artists, thinkers, dreamers. And perhaps, in this transition, we will find our survival and our redemption.
We must revisit traditional teaching methodologies, challenging the efficacy of our current writing assignments. As educators, a simple test can provide clarity: input your exact writing assignment into ChatGPT. If the AI produces a B-grade paper, it's time to rethink.
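The "robot test" above can be run by hand in the ChatGPT interface, but it can also be scripted. Below is a minimal sketch using the official `openai` Python package; it assumes an `OPENAI_API_KEY` environment variable, and the model name is illustrative rather than prescribed by this post.

```python
# Sketch of the "robot test": submit your exact assignment prompt to a
# chat model, exactly as a student might, and judge the result yourself.
# If the output would earn a B in your course, it is time to rethink
# the assignment.

def build_probe(assignment_text: str) -> list:
    """Wrap a course assignment in a chat request, as a student might."""
    return [
        {"role": "system",
         "content": "You are an undergraduate completing a course assignment."},
        {"role": "user", "content": assignment_text},
    ]

def probe_assignment(assignment_text: str) -> str:
    """Send the assignment to the model and return its essay."""
    from openai import OpenAI  # pip install openai
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=build_probe(assignment_text),
    )
    return response.choices[0].message.content
```

Usage would look like `print(probe_assignment("In 500 words, reflect on a challenge you overcame."))`, after which the instructor, not the script, grades the result.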
One option is to replace the writing assignment with an alternative assessment, such as oral exams, multiple choice, or short answer tests. Another is to transform the assignment so that students must engage with AI, like ChatGPT. Such assignments would test not just writing ability, but also the capacity to evaluate AI outputs critically, discern the nuances that differentiate human intellect from AI, and incorporate these insights creatively. A third response is simply to raise your expectations of student writing dramatically, because students now have a great deal of help.
AI's advent should be viewed not as a threat, but as an opportunity to explore new pedagogical frontiers. Our learning community is leading this exploration into AI-integrated education, and we invite all to join this journey.
In essence, AI's rise can help us rethink education. We must focus not just on subject mastery, but also on developing original thinking, creativity, and nuanced intelligence. In the face of the machine, we can still shine in our distinctly human domains, thriving in this technological metamorphosis. The future of education, interwoven with AI, might look different, but it promises a realm of possibilities, allowing us to redefine our unique human value.