AI in Society
This blog is connected to my role as head of the National Institute on AI in Society at California State University Sacramento. However, the opinions and positions expressed here are mine and do not represent the university's opinions or positions.
Monday, March 24, 2025
Two Reactions to AI
For Rodrigo, it is almost liberating. If his students can now write coherent college-level essays with the help of machines, then he is free to teach them something more ambitious. Argument structure, epistemology, stylistic nuance—areas where automation falters. He is not naïve; he knows AI is here to stay. But rather than fight it, he welcomes the detour. Less time marking the same intro-to-critical-writing dreck, more time pushing the intellectual envelope. Lucky him.
Jane, however, reads the situation as academic apocalypse. Her course was the product of years of iteration, finely tuned to teach writing through careful scoping, scaffolding, and feedback. Skip the process, she believes, and you skip the learning. The AI is not a tool in her eyes; it is a cheat code, one that threatens to render her teaching obsolete. She starts researching detection tools, imagining a future of surveillance, suspicion, and pedagogical collapse.
These are not just personality quirks or different thresholds for academic dishonesty. What really separates them is how they understand curriculum. For Rodrigo, curriculum is plastic—something owned, revised, improved. He feels empowered to tinker. If a foundational skill can be outsourced, then the baseline has shifted, and he can raise the stakes. A change in student capability is an opportunity, not a crisis.
Jane sees curriculum differently. For her, it is an infrastructure. Complex, interdependent, and not easily re-routed. Learning outcomes, general education requirements, accreditation standards—these are not suggestions, they are fixtures. If a key skill like essay-writing becomes an unreliable indicator of mastery, the whole sequence threatens to unravel. You cannot simply skip a floor in the building and hope the roof holds.
There is a quiet tragedy here. Not because Jane is wrong—her concerns are deeply valid—but because she feels disempowered by a system she herself has worked so hard to uphold. The larger structures of academia—its bureaucracies, its resistance to rapid change—amplify the sense of threat. It is not just that students are using ChatGPT; it is that there is no institutional plan, no curricular pivot, no workflow update to guide faculty through this transition. So each professor is left to improvise, bringing their own philosophies and tolerances to bear.
And that is where the real tension lies. Technology does not just disrupt skills—it exposes fault lines in our educational ideologies. Are we guardians of a process or facilitators of progress? Should we protect the sequence, or adjust the map when shortcuts appear?
Rodrigo shrugs and walks forward. Jane looks for the brakes. But maybe it is not about who is right. Maybe the more urgent task is to build a system where professors do not have to choose between clinging to the past and embracing a future they did not ask for. Because either way, the syllabus is no longer entirely ours to write.
UPD: Thanks to my colleague Hogan Hays for his thoughtful critique of this blog.
Friday, February 28, 2025
The Invasive Innovation, a Case of AI in Education
Disruptive innovations, as defined by Clayton Christensen, typically emerge at the margins before gradually overtaking established players. In contrast, artificial intelligence in education is not a slow-moving force of competitive displacement. It is an invasive innovation—spreading far beyond the predictions of Everett Rogers’ diffusion model, propelled less by institutions than by students themselves. This is not a technology waiting to be adopted. It is a reality forcing adaptation. Treating it as an optional innovation misses the point entirely.
Educational leaders, policymakers, and institutions may wish to evaluate AI’s impact, consider best practices, and implement careful frameworks. But they are not in control of the timeline. The adoption is not up for debate. It is already happening, driven by students who see AI not as a tool of the future but as an everyday utility, as natural as search engines or calculators. The question is not when to integrate AI into education, but how to manage an environment where AI is already integrated—often in ways institutions neither designed nor anticipated.
This is why traditional approaches to educational innovations are inadequate. Normally, innovations are assessed, piloted, and then scaled. But AI in education is unfolding more like a crisis than an initiative. It requires a response more akin to a public health emergency than a strategic rollout. When COVID-19 disrupted education, schools did not spend years analyzing whether remote learning was effective. They pivoted overnight, creating imperfect but necessary solutions. The AI invasion demands a similar level of urgency.
Institutional response to invasive AI must be comprehensive rather than defensive. How do we accommodate this invasive innovation? Putting AI tools directly in the hands of faculty represents a critical first step, ensuring they engage with the same technologies their students already access freely. This must be paired with robust professional development—not merely technical training but genuine pedagogical exploration that helps educators reimagine their disciplinary approaches in an AI-augmented world.
Beyond tools and training, institutions must invest in creating dedicated time and space for disciplinary teams to collaboratively revise their assignments. The goal is not to create "cheat-proof" assessments but to develop authentic tasks that meaningfully incorporate AI as a partner in learning. This demands protected time away from everyday teaching pressures, where faculty can experiment, fail, refine, and document new approaches that acknowledge AI as a permanent fixture in their educational landscape rather than an external threat to be neutralized.
Saturday, February 1, 2025
Task Decomposition, a core AI skill
The effective use of artificial intelligence depends on our ability to structure problems in ways that align with both human and machine capabilities. While AI demonstrates remarkable computational abilities, its effectiveness relies on carefully structured input and systematic oversight. This suggests that our focus should shift toward understanding how to break down complex tasks into components that leverage the respective strengths of humans and machines.
Task decomposition - the practice of breaking larger problems into manageable parts - predates AI but takes on new significance in this context. Research in expertise studies shows that experienced problem-solvers often approach complex challenges by identifying distinct components and their relationships. This natural human tendency provides a framework for thinking about AI collaboration: we need to recognize which aspects of a task benefit from computational processing and which require human judgment.
The interaction between human users and AI systems appears to follow certain patterns. Those who use AI effectively tend to approach it as a collaborative tool rather than a complete solution. They typically work through multiple iterations: breaking down the problem, testing AI responses, evaluating results, and adjusting their approach. This mirrors established practices in other domains where experts regularly refine their solutions through systematic trial and error.
Consider the task of writing a research paper. Rather than requesting a complete document from AI, a more effective approach involves breaking down the process: developing an outline, gathering relevant sources, analyzing specific arguments, and integrating various perspectives. Similarly, in data analysis, success often comes from methodically defining questions, selecting appropriate datasets, using AI for initial pattern recognition, and applying human expertise to interpret the findings.
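As a sketch, the research-paper example can be expressed in code. The `ask_ai` function below is a hypothetical stand-in for any LLM call, and the subtask phrasings are illustrative, not a prescribed workflow:

```python
# A minimal sketch of task decomposition for AI-assisted work.
# `ask_ai` is a hypothetical placeholder for a real LLM API call.

def ask_ai(subtask):
    # Placeholder: a real implementation would query an LLM here.
    return f"draft output for: {subtask}"

def decompose_research_paper(topic):
    # Break the large task into parts suited to AI assistance,
    # each of which will be reviewed by a human.
    return [
        f"outline the main sections of a paper on {topic}",
        f"suggest relevant sources on {topic}",
        f"summarize the key arguments about {topic}",
        f"draft transitions integrating perspectives on {topic}",
    ]

def run_with_oversight(topic):
    results = []
    for subtask in decompose_research_paper(topic):
        draft = ask_ai(subtask)
        # Human judgment stays in the loop: each draft is paired with
        # its subtask so a person can evaluate it before moving on.
        results.append((subtask, draft))
    return results

steps = run_with_oversight("task decomposition")
print(len(steps))  # one reviewed draft per subtask
```

The point of the sketch is the structure, not the stub: the iteration happens over small, checkable pieces rather than one monolithic request.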
This collaborative approach serves two purposes. First, it helps manage complexity by distributing cognitive effort across human and machine resources. Second, it maintains human oversight of the process while benefiting from AI's computational capabilities. The goal is not to automate thinking but to enhance it through structured collaboration.
Current educational practices have not yet fully adapted to this reality. While many institutions offer technical training in AI or discuss its ethical implications, fewer focus on teaching systematic approaches to human-AI collaboration. Students need explicit instruction in how to break down complex tasks and document their decision-making processes when working with AI tools.
To address this gap, educational programs could incorporate several key elements:
- Practice in systematic task analysis and decomposition
- Training in structured approaches to AI interaction
- Documentation of decision-making processes in AI-assisted work
- Critical evaluation of AI outputs and limitations
- Integration of human expertise with AI capabilities
The emergence of AI tools prompts us to examine our own cognitive processes more explicitly. As we learn to structure problems for AI collaboration, we also develop a clearer understanding of our own problem-solving approaches. This suggests that learning to work effectively with AI involves not just technical skills but also enhanced metacognition - thinking about our own thinking.
The future of human-AI collaboration likely depends less on technological advancement and more on our ability to develop systematic approaches to task decomposition. By focusing on this fundamental skill, we can work toward more effective integration of human and machine capabilities while maintaining the critical role of human judgment and oversight.
These observations and suggestions should be treated as starting points for further investigation rather than definitive conclusions. As we gather more evidence about effective human-AI collaboration, our understanding of task decomposition and its role in this process will likely evolve. The key is to maintain a balanced approach that recognizes both the potential and limitations of AI while developing structured methods for its effective use.
Tuesday, September 17, 2024
Why Parallel Integration Is the Sensible Strategy of AI Adoption in the Workplace
Artificial intelligence promises to revolutionize the way we work, offering efficiency gains and new capabilities. Yet, adopting AI is not without its challenges. One prudent approach is to integrate AI into existing workflows in parallel with human processes. This strategy minimizes risk, builds confidence, and allows organizations to understand where AI excels and where it stumbles before fully committing. I have described the problem of AI output validation before; it is a serious impediment to AI integration. Here is how to solve it.
Consider a professor grading student essays. Traditionally, this is a manual task that relies on the educator's expertise. Introducing AI into this process does not mean handing over the red pen entirely. Instead, the professor continues grading as usual but also runs the essays through an AI system. Comparing results highlights discrepancies and agreements, offering insights into the AI's reliability. Over time, the professor may find that the AI is adept at spotting grammatical errors but less so at evaluating nuanced arguments.
In human resources, screening job applications is a time-consuming task. An HR professional might continue their usual screening while also employing an AI tool to assess the same applications. This dual approach ensures that no suitable candidate is overlooked due to an AI's potential bias or error. It also helps the HR team understand how the AI makes decisions, which is crucial for transparency and fairness.
Accountants auditing receipts can apply the same method. They perform their standard checks while an AI system does the same in the background. Any discrepancies can be investigated, and patterns emerge over time about where the AI is most and least effective.
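In all three examples the core mechanic is the same: run both processes independently, then compare. A minimal sketch, with made-up essay grades standing in for real human and AI outputs:

```python
# A minimal sketch of parallel integration: humans and AI score the
# same items independently, and large disagreements are flagged
# for human investigation.

def flag_discrepancies(human_scores, ai_scores, tolerance=5):
    """Return items where human and AI judgments diverge beyond tolerance."""
    flagged = []
    for item, h in human_scores.items():
        a = ai_scores.get(item)
        # Flag items the AI missed entirely or scored very differently.
        if a is None or abs(h - a) > tolerance:
            flagged.append(item)
    return flagged

# Illustrative data: essay grades on a 100-point scale.
human = {"essay_1": 88, "essay_2": 72, "essay_3": 95}
ai = {"essay_1": 90, "essay_2": 55, "essay_3": 93}

print(flag_discrepancies(human, ai))  # ['essay_2']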
This strategy aligns with the concept of "double-loop learning" from organizational theory, introduced by Chris Argyris. Double-loop learning involves not just correcting errors but examining and adjusting the underlying processes that lead to those errors. By running human and AI processes in parallel, organizations engage in a form of double-loop learning—continually refining both human and AI methods. Note that it is not only about catching and understanding AI errors; the parallel process will also surface human errors through the use of AI. The overall error rate will decrease.
Yes, running parallel processes takes some extra time and resources. However, this investment is modest compared to the potential costs of errors, compliance issues, or reputational damage from an AI mishap. People need to trust the technology they use, and building such trust takes time.
The medical field offers a pertinent analogy. Doctors do not immediately rely on AI diagnoses without validation. They might consult AI as a second opinion, especially in complex cases. This practice enhances diagnostic accuracy while maintaining professional responsibility. Similarly, in business processes, AI can serve as a valuable second set of eyes.
As confidence in the AI system grows, organizations can adjust the role of human workers. Humans might shift from doing the task to verifying AI results, focusing their expertise where it's most needed. This gradual transition helps maintain quality and trust, both internally and with clients or stakeholders.
In short, parallel integration of AI into work processes is a sensible path that balances innovation with caution. It allows organizations to harness the benefits of AI while managing risks effectively. By building confidence through experience and evidence, businesses can make informed decisions about when and how to rely more heavily on AI.
Thursday, August 29, 2024
Why Newsom should veto SB 1047
Moreover, the burdens imposed by this bill will disproportionately affect smaller developers, particularly those on college campuses or within startups, who simply cannot afford the additional costs. This will stifle innovation, further entrenching the dominance of large tech companies and discouraging new entrants from participating in the AI landscape.
Before implementing such heavy-handed regulations, California must first focus on developing clear standards and building the capacity to enforce them. Without this groundwork, the bill will do more harm than good, leading to increased monopolization and a chilling effect on the very innovation it seeks to protect. The Governor should veto this bill and advocate for a more measured, phased approach that prioritizes the development of standards and capacity before regulation.
Friday, August 23, 2024
Filling Voids, Not Replacing Human Experts
Consider healthcare, particularly in underserved areas. It is a truism that a qualified doctor’s advice is better than anything an AI could provide. But what if you live in a rural village where the nearest doctor is hundreds of miles away? Or in a developing country where medical professionals are stretched thin? Suddenly, the prospect of AI-driven medical advice does not seem like a compromise; it feels like a lifeline. While AI lacks the nuanced judgment of an experienced physician, it can provide basic diagnostics, suggest treatments, or alert patients to symptoms that warrant urgent attention. In such scenarios, AI does not replace a doctor—it replaces the silence of inaccessibility with something, however imperfect.
Another case in point is mental health counseling. In many parts of the world, even in affluent countries, mental health services are woefully inadequate. Students at universities often face wait times ranging from weeks to months just to speak with a counselor. During that limbo, the option to interact with an AI, even one with obvious limitations, can be a critical stopgap. It is not about AI outperforming a trained therapist but offering a form of support when no other is available. It can provide coping strategies, lend a sympathetic ear, or guide someone to emergency services. Here, AI does not replace therapy; it provides something valuable in the absence of timely human support.
Education offers another case for AI’s gap-filling potential. Tutoring is an essential resource, but access to quality tutors is often limited, mainly because it is expensive. Universities might offer tutoring services, but they are frequently understaffed or employ peer tutors. Office hours with professors or teaching assistants can be similarly constrained. AI can step into this void. Chatting with an AI about a difficult concept or problem set might not equal the depth of understanding gained from a one-on-one session with a human tutor, but it is unquestionably better than struggling alone. AI does not compete with tutors; it extends their reach into spaces they cannot physically or temporally cover.
The same logic applies to a range of other fields. Legal advice, financial planning, career coaching—all are areas where AI has the potential to add significant value, not by outstripping human expertise but by offering something in environments where professional advice is out of reach. Imagine a low-income individual navigating legal complexities without the means to hire an attorney. An AI could provide at least basic guidance, clarify legal jargon, and suggest possible actions. All of it must be done with proper disclaimers. It is not a substitute for legal representation, but it is a world better than the alternative: no help at all.
In embracing this non-competing stance, we shift the narrative. The role of AI is not to replace human experts but to step in where human services are scarce or nonexistent. The true potential of AI lies in its ability to democratize access to essential services that many people currently go without. When AI is viewed as a bridge rather than a rival, its utility becomes much more evident. AI does not have to be better than a person to be valuable; it just needs to be better than the void it fills.
Saturday, July 20, 2024
The Three Wave Strategy of AI Implementation
Whether it's a university, a tech giant, a manufacturing company, a public utility, or a government agency, the complexity of operations can be overwhelming. To illustrate this point, Claude and I have generated a list of over 1,150 workflows typical for a large university, many of which can be further broken down into smaller, more specific processes.
Given this complexity, the question arises: Where do we start with AI implementation? The answer lies in a strategic, phased approach that considers the unique characteristics of each workflow and the organization's readiness for AI adoption.
The First Wave: Low-Hanging Fruit
The initial phase of AI implementation should focus on what we call the "low-hanging fruit" - workflows that meet three crucial criteria:
- Self-evident quality: The output quality is immediately obvious and doesn't require complex evaluation.
- Single-person control: The workflow is typically managed or executed by one individual.
- Ready-made AI tools: The process can be enhanced using existing AI tools without requiring specialized development, either by using one of the primary LLMs directly or by building a custom bot.
These criteria help identify areas where AI can quickly and effectively augment human efforts, improving efficiency and potentially enhancing the quality of service provided. Based on these criteria, here's a priority list of workflows that could be considered for the first wave of AI implementation. These are just examples:
- Student services
- Student and prospective student advising of all kinds
- Resume and Cover Letter Review (Career Services)
- Offering individual resume critiques
- Assisting with cover letter development
- Academic Policy Development and Enforcement (Academic Affairs)
- Drafting and revising academic policies
- Health Education and Outreach (Health and Wellness Services)
- Creating and distributing health education materials
- Sustainability Education and Outreach (Sustainability and Environmental Initiatives)
- Creating sustainability guides and resources for campus community
- Digital Marketing and Social Media Management (University Communications and Marketing)
- Creating and curating content for various platforms
- Grant Proposal Development and Submission (Research and Innovation)
- Assisting faculty with proposal writing
- Financial Aid Counseling (Financial Aid and Scholarships)
- Providing one-on-one counseling sessions
- Offering debt management and financial literacy education
- Alumni Communications (Alumni Relations and Development)
- Producing alumni magazines and newsletters
- Scholarly Communications (Library Services)
- Supporting faculty in publishing and copyright issues
- Providing guidance on research impact metrics
- International Student and Scholar Services (International Programs and Global Engagement)
- Providing immigration advising and document processing
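The three criteria above can be expressed as a simple triage filter. The workflow records and field names below are illustrative, not a real inventory schema:

```python
# A minimal sketch of first-wave triage: keep only workflows that meet
# all three criteria. Field names are illustrative.

def first_wave(workflows):
    return [
        w["name"]
        for w in workflows
        if w["self_evident_quality"] and w["single_person"] and w["ready_made_tools"]
    ]

# Hypothetical entries from a workflow inventory.
candidates = [
    {"name": "resume review", "self_evident_quality": True,
     "single_person": True, "ready_made_tools": True},
    {"name": "degree audit overhaul", "self_evident_quality": False,
     "single_person": False, "ready_made_tools": False},
]

print(first_wave(candidates))  # ['resume review']
```

The value of even a toy filter like this is that it forces each criterion to be judged explicitly for every workflow rather than argued impressionistically.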
This first wave serves multiple purposes. It demonstrates the proof of principle, making more stakeholders comfortable with AI integration. It also helps build internal expertise and confidence in working with AI technologies. These early successes can pave the way for more ambitious implementations in the future.
The Second Wave: Tackling Costly Workflows
Once the organization has gained experience and confidence from the first wave, it can move on to more complex and costly workflows. These are typically processes that involve significant labor, occur frequently, and have a broad scope of impact on the organization. However, it is crucial to narrow down this list based on feasibility and readiness for AI implementation.
For instance, while teaching is undoubtedly one of the most labor-intensive and impactful processes in a university, we do not yet have sufficient knowledge of how to make it significantly more efficient through AI. Some processes, like teaching, may never be fully optimized by AI because of their inherently relational nature.
Note that this is also an opportunity to review major workflows: they have often evolved over the years and are far from ideal efficiency. AI can help review these workflows and recommend streamlining. And of course, AI can be integrated into actually doing the work.
The Third Wave: Enterprise-Level Solutions
Only after successfully navigating the first two waves should an organization consider enterprise-level AI solutions. These solutions have the potential to radically redefine the organization's core operations, placing AI at the center of its processes. This level of integration requires a deep understanding of AI capabilities, a clear vision of the organization's future, and a robust infrastructure to support AI-driven operations. Most importantly, it requires specialized tools and a high level of security.
The Timeline and Exceptions
This phased approach to AI implementation is not a quick process. For most large, complex organizations, it could take a couple of decades to fully realize the potential of AI across all workflows. However, there are exceptions. Some businesses with simpler and fewer workflows, such as narrowly specialized customer service operations, may be able to leapfrog straight into the third wave, especially if they have prior experience with AI technologies.
But these are the exceptions rather than the rule. For the majority of organizations, the path to comprehensive AI implementation requires a well-thought-out strategy, clear priorities, and a focus on building confidence and expertise over time.
Integrating AI into a complex organization's workflows is a marathon, not a sprint. It asks for patience, strategic thinking, and a willingness to learn and adapt. The key is to approach this journey with a clear strategy, well-defined priorities, and a commitment to building internal AI expertise.
Wednesday, July 17, 2024
AI is not going to implement itself, but governments can help
Tuesday, July 9, 2024
AI-Positive Pedagogy: Navigating the Great Disruption
AI has disrupted the educational landscape. This disruption threatens the established sequence of skill development, from simple to mid-range to higher-level skills, by eroding traditional curriculum principles, particularly in the realm of student activities and assessment. As a profession, we face a critical decision: limit AI use or develop an AI-positive pedagogy.
While limiting AI use may seem tempting, it is ultimately unfeasible and fails to prepare students for the AI-infused world they will live in. Attempting to enforce strict limitations on AI use is not only impractical but also fails to acknowledge the potential benefits that AI can bring to education.
The only plausible path forward is to adopt a new pedagogy that harnesses the power of AI for the benefit of our students. This involves a shift towards authentic, discipline-specific assessments that mirror real-world applications of AI within various fields. By focusing on how AI is used in different disciplines, educators can create assessments that evaluate students' ability to effectively utilize AI tools in relevant contexts.
AI-positive pedagogy emphasizes the cultivation of higher-order thinking skills, such as prompt engineering and discerning thinking. Prompt engineering involves crafting effective queries and instructions for AI systems, while discerning thinking emphasizes the critical evaluation of AI-generated information and the ability to make informed decisions by combining AI insights with human judgment. These meta-AI skills are crucial for students to navigate and thrive in an AI-populated world.
AI-positive pedagogy should prepare students to work effectively in environments where human and artificial intelligence coexist and complement each other. By fostering skills in collaborating with AI systems, understanding the strengths of both human and artificial intelligence, and developing strategies for distributed problem-solving, educators can equip students to succeed in the AI-infused workplace.
The shift towards AI-positive pedagogy is well-rooted in past pedagogy and curriculum theory. Educators have long prioritized conceptual and higher-level skills over mechanical and procedural knowledge. The disruption caused by AI may serve as a catalyst for educators to finally achieve what they have been striving for over the past century.

As we embrace AI-positive pedagogy, it is essential to re-evaluate the assumption that all effort leads to learning. Cognitive Load Theory suggests that learning can be optimized by managing the three types of cognitive load: intrinsic (the inherent complexity of the learning material), extraneous (caused by ineffective instructional design), and germane (the effort required to process and construct mental schemas). In the context of AI-positive pedagogy, this involves using AI tools to provide appropriate support and scaffolding as learners progress from lower-level to higher-level skills, while minimizing extraneous load and promoting germane load. Not all loss of effort by students is bad. If we are honest, much of learning work is extraneous, busy, or compliance/submission work anyway.

By investigating the limits and structure of leapfrogging - skipping over mid-range skills to move from basic literacies and numeracies to creative, theoretical, and critical thinking - educators can harness the power of AI to accelerate student growth.
To develop a robust AI-positive pedagogy, educators and cognitive psychologists must collaborate to investigate how students interact with and perceive AI tools - alone or under teacher's guidance. This research should focus on understanding the mental models students develop when engaging with AI, and how these models differ from those associated with other educational tools. By exploring students' cognitive processes, researchers can identify the unique challenges and opportunities presented by AI in the learning environment.
It is also crucial to examine the emotional and motivational factors that influence students' engagement with AI tools. Understanding how students' attitudes, beliefs, and self-efficacy impact their willingness to adopt and effectively use AI in their learning can inform the design of AI-positive pedagogical strategies.
In addition to investigating student cognition and affect, researchers should also explore the social and cultural dimensions of AI use in education. This includes examining how AI tools can be leveraged to promote collaborative learning, foster inclusive learning environments, and bridge educational inequities.
To build a comprehensive AI-positive pedagogy, researchers and educators must also develop and validate practices for integrating AI into various disciplines and educational contexts. This involves creating guidelines for the use of AI in education, as well as establishing professional development programs to support educators in effectively implementing AI-positive pedagogical strategies.
The development of an evidence-based AI-positive pedagogy requires a concerted effort from the educational community. By investing in basic research, collaboration, and innovation, we can harness the potential of AI to transform education and empower students to thrive in an AI-infused world.
Thursday, June 6, 2024
Using AI without losing control
As AI becomes increasingly capable of performing tasks that were once the exclusive domain of humans, it no longer makes sense to spend a lot of time teaching skills that AI can do better and faster. Just as we largely replaced the teaching of manual drafting with computer-aided design (CAD), we must now reassess the curriculum and remove skills that have been rendered obsolete by AI.
The way we define curriculum reflects our understanding of what it means to be human, a definition that has evolved significantly over the centuries. As machines have become more advanced, skills previously valued and developed by humans have gradually been entrusted to machines.
However, the logic behind this shift is also changing. It is no longer just a matter of what machines can do better, as they seem to be excelling at almost everything. Instead, it is about what we, as humans, choose to retain, enjoy, and feel compelled to do. It is less about competition with machines and more about the arrangement of power and authority. To maintain our human authority, we must continue to perform certain tasks.
One of the most important of these tasks is the ability to set goals and make value judgments about what should or should not be done. This is a complex skill that requires a comprehensive understanding of the world, both in its physical and social aspects, as well as the wisdom to make decisions when faced with competing values and the inherent uncertainty of the future. As AI increasingly mediates our interactions, the ability to determine the goals and direction of these interactions becomes even more crucial.
Equally important is the ability to work with AI to achieve our objectives. This process, sometimes referred to as "prompt engineering," involves iterative interaction and refinement to obtain the most accurate, appropriate, and helpful outputs from AI. Beyond technical skills, this requires robust critical thinking to assess the truthfulness and value of AI-generated content. In essence, it is the skill of editing, not merely for style but for substance, constantly asking: Is this useful? Does it improve upon what already exists? Does it positively contribute to the overall human knowledge and well-being?
These core competencies should shape the curriculum of the future: goal-setting, value judgment, critical discernment, and the ability to collaborate with AI to achieve human objectives. The specifics will evolve as AI capabilities advance, and there will be variations across disciplines.
However, the fundamental challenge remains the same: in a world increasingly shaped by artificial intelligence, we must rededicate ourselves to cultivating human intelligence in its fullest and most distinctively human expressions. Only then can we ensure that the tremendous power of AI serves to elevate humanity rather than diminish it.
Monday, May 6, 2024
In Education, AI is an emergency
On one hand, AI presents an exhilarating leap forward, a kind of magic wand that promises to transform how we learn and teach. On the other hand, beneath this glam surface lies a grittier reality—one where the very essence of learning could be at risk.
In education, the core value lies in the process itself. The act of wrestling with ideas, constructing arguments, and stumbling over complex problems is where true learning happens. If a student turns to AI to write an essay, they might technically meet the assignment's requirements, but they've bypassed the intellectual struggle critical to internalizing knowledge. This worry has only deepened in the wake of the pandemic, which already strained educational norms and exposed glaring disparities. Introducing AI into this mix feels like throwing a wrench into an already delicate machine, risking the dilution of the educational experience and fostering a generation more adept at using tools than thinking independently.
Addressing this is no minor feat. It is not about rejecting AI's benefits outright, but rather about steering its use with a careful hand. Educators must become architects of a new curriculum that anticipates AI's influence and actively incorporates it in enriching ways. Perhaps this means designing projects where AI is expected to be used, but the projects are still challenging and still generate growth.
However, such a transformative approach to curriculum development is a colossal task, varied across academic disciplines and levels of education. Educators need robust support systems, time to experiment and innovate, and backing from policies that understand and address these unique challenges. Governments and educational leaders must be partners in crafting policies that nurture educationally effective and responsible AI use.
As I reflect on this development, I am struck by the immensity of the challenge before us. It is not just about adapting to a new tool; it is about redefining the very foundations of how we teach and learn. It is about finding a way to harness the power of AI without sacrificing the soul of education. This is a journey that will require bold experimentation, deep collaboration, and a willingness to embrace the unknown. But it is a journey we must undertake, for the stakes are too high to ignore. The future of education hangs in the balance, and it is up to us to shape it with wisdom, courage, and a steadfast commitment to the human experience of learning.
Tuesday, April 23, 2024
AI revolution minus massive unemployment
The conversation on AI often revolves around efficiency and cost reduction, typically translating into fewer jobs. However, a pivotal shift in perspective—from cutting workforce to enhancing and expanding workforce capabilities—can redefine the role of AI in the corporate world. This approach not only preserves jobs but also adds significant value to customer experiences and broadens the spectrum of services and products a company can offer.
The traditional method of dealing with technological disruption—laying off workers and hiring new ones with the necessary skills—is not only a waste of human capital but also disregards the cultural knowledge embedded within an organization's existing workforce. Retraining keeps people within the organization, allowing them to shift roles while retaining and applying their invaluable understanding of the company's ethos and operations in new ways.
The first step in a proactive workforce transformation strategy is to map out the anticipated skills and roles that will be in demand. This is not just about foreseeing the obsolescence of certain skills but identifying emerging opportunities where AI can augment human capabilities. For instance, with the rise of AI-driven analytics, there is a growing need for professionals who can interpret and leverage these insights into strategic decisions, enhancing business intelligence far beyond current levels.
Once future needs are mapped, the next step is to develop a compelling incentive structure for retraining. Traditional models of employee development often rely on mandatory training sessions that might not align with personal or immediate business goals. Instead, companies should offer tailored learning pathways that align with career progression and personal growth, supported by incentives such as bonuses, career advancement opportunities, and recognition programs. This approach not only motivates employees to embrace retraining but also aligns their development with the strategic goals of the organization.
With AI's capacity to handle repetitive and mundane tasks, employees can redirect their efforts towards more complex, creative, and meaningful work. This shift enables businesses to expand their service offerings or enhance their product features, adding significant value to what customers receive. For example, financial advisors, freed from the tedium of data analysis by AI tools, can focus on crafting bespoke investment strategies that cater to the intricate preferences and needs of their clients. Similarly, customer service representatives can use insights generated by AI to provide personalized service experiences, thereby increasing customer satisfaction and loyalty.
AI not only optimizes existing processes but also opens new avenues for innovation. For instance, in the healthcare sector, AI can manage diagnostic data with high efficiency, which allows healthcare providers to extend their services into preventive health management and personalized medicine, areas that were previously limited by resource constraints. In the retail sector, AI-enhanced data analysis can lead to the creation of highly personalized shopping experiences, with recommendations and services tailored to the individual preferences of each customer, transforming standard shopping into curated personal shopping experiences.
For successful implementation, organizations must foster a culture that views AI as a tool for empowerment rather than a threat to employment. Leadership should communicate clearly about the ways AI will be used to enhance job roles and the benefits it will bring to both employees and the company. Regular feedback loops should be established to adjust training programs based on both employee input and evolving industry demands, ensuring that retraining remains relevant and aligned with market realities.
By focusing on retraining the workforce to harness AI effectively, businesses can transform potential disruptions into opportunities for growth and innovation. This approach not only preserves jobs but also enhances them, adding unprecedented value to the company and its customers, and paving the way for a future where human ingenuity and artificial intelligence work hand in hand to achieve more than was ever possible before.
Monday, April 22, 2024
The Disruptive Potential of AI: Lessons from Clayton Christensen's Theory
As AI continues to make inroads into various industries, it is easy to dismiss its current shortcomings and remain complacent. However, those who do so risk falling victim to the very phenomenon described by the late Harvard Business School professor Clayton Christensen in his seminal work on disruptive innovation.
Christensen's theory posits that disruptive technologies often start at the bottom of the market, offering inferior performance compared to incumbent solutions. However, these technologies are typically cheaper and more accessible, allowing them to gain a foothold among less demanding customers. Over time, as the technology improves, it begins to meet the needs of more sophisticated users, eventually displacing the incumbent players entirely.
The parallels with AI are striking. Today, we may scoff at awkward AI-generated movies featuring characters with anatomical oddities or primitive music engines churning out cliched tunes. However, it would be foolish to assume that these technologies will not improve. Just as the early smartphones were no match for desktop computers, the AI of today is merely a stepping stone to more advanced systems that will rival and surpass human capabilities in various domains.
The rapid pace of investment in AI only serves to underscore this point. With billions of dollars pouring into research and development, the march of progress is inexorable. While the exact timeline remains uncertain, it is clear that AI will continue to evolve at a brisk pace, transforming industries and reshaping the nature of work itself.
In light of this reality, policymakers and leaders in government and philanthropy would be wise to start planning for a future in which the skills demanded by the job market are in a constant state of flux. Rather than clinging to the status quo, we must embrace the disruptive potential of AI and invest in education and training programs that will equip workers with the adaptability and resilience needed to thrive in an era of rapid technological change.
To ignore the lessons of Clayton Christensen's theory would be to court disaster. The question is not whether AI will disrupt our world, but rather how we will rise to meet the challenges and opportunities it presents. By proactively preparing for this future, we can ensure that the benefits of AI are widely shared and that no one is left behind in the great transformations to come.
Saturday, April 13, 2024
The Broken Ladder, Or A Clarion Call for a New Learning Theory in the Age of AI
As AI invades education, it is becoming increasingly clear that our current educational paradigms and learning theories are no longer sufficient to explain how people now learn, and how to adjust education accordingly.
Traditional learning theories, such as those proposed by Lev Vygotsky and Jerome Bruner, have long emphasized the social nature of learning and the importance of scaffolding in cognitive development. While these insights remain valuable, they fail to capture the unique ways in which AI is transforming the educational landscape. Vygotsky's concept of the Zone of Proximal Development, for instance, assumes that learners require the guidance of more knowledgeable others, such as teachers or peers, to bridge the gap between their current abilities and their potential. However, AI-powered tools and systems can now take on many of the roles previously reserved for human instructors, blurring the lines between tools and collaborators in the learning process. Learning theorists assumed that the instructor has a choice over which tools to bring into instruction and which to leave out. AI, however, imposes itself on instruction whether we want it or not.
Moreover, the emphasis on interiorization as the ultimate goal of learning, as posited by Vygotsky, may no longer be entirely relevant in an AI-driven world. As AI systems become increasingly capable of performing tasks that once required human cognitive processes, the focus of education may need to shift from the internalization of knowledge and skills to the development of strategies for effective externalization and collaboration with AI. In other words, the aim of education shifts from an individual learner to a symbiosis of a human and a machine.
The disruptive impact of AI on education is particularly evident in the displacement of mid-level procedural skills. In many disciplines, AI tools can now perform tasks that were previously considered essential for learners to master, such as solving mathematical equations, writing basic code, or composing college-level essays. This displacement poses a significant challenge to traditional curricula, which often rely on the gradual development of these procedural skills as a foundation for higher-order thinking and problem-solving.
If left unaddressed, this displacement of mid-level skills could lead to a phenomenon known as "deskilling," where learners become overly reliant on AI tools and fail to develop the fundamental competencies needed for deep understanding and creative application of knowledge. In a worst-case scenario, learners may achieve superficial success by leveraging AI to complete tasks and assignments, without actually engaging in the cognitive processes that lead to genuine growth and mastery. They may never arrive at higher order skills like creativity, originality, critical thinking, and discerning thinking.
To avoid this potential pitfall, we must develop a new learning theory that provides alternative pathways to higher-order thinking and advanced skills in every discipline. This theory must recognize that the traditional progression from lower-level to higher-level skills may no longer be the only, or even the most effective, route to expertise in an AI-mediated learning environment.
Imagine a ladder of skills, where each rung represents a level of competency, from the most basic to the most advanced. Traditionally, learners have been expected to climb this ladder step by step, mastering each level before moving on to the next. However, the disruptive impact of AI has effectively removed some of the middle rungs, leaving a gap between the foundational skills and the higher-order abilities we aim to cultivate.
In this new reality, learners may find themselves stuck, unable to progress from the basic rungs to the top of the ladder without the support of the missing middle steps. Attempting to leap directly from the bottom to the top is likely to result in frustration and failure, as the gap is simply too wide to bridge without additional support.
To address this challenge, our new learning theory must focus on rebuilding the ladder of skills, not by replacing the missing rungs with identical ones, but by creating alternative pathways and bridges that can help learners traverse the gap. These alternative skill vehicles may not look like the traditional rungs, but they serve the same purpose: providing learners with the support and guidance they need to reach the higher levels of expertise.
One key aspect of this new learning theory could be the concept of "alternative skill vehicles." Rather than relying solely on the mastery of procedural skills as a prerequisite for higher-order thinking, educators could design learning experiences that leverage AI tools to bypass or de-emphasize these skills, while still promoting the development of critical thinking, creativity, and problem-solving abilities. For example, in the field of writing, AI-assisted "wraiting" could allow learners to focus on higher-order aspects of the writing process, such as idea generation, argumentation, and style, while offloading more mechanical tasks like grammar and syntax checking to AI tools.
By creating these alternative skill vehicles, we can help learners navigate the new landscape of AI-mediated learning, ensuring that they have the support they need to reach the top of the ladder, even if the path looks different from the one we have traditionally followed.
Another crucial component of a new learning theory for the age of AI would be the cultivation of "blended intelligence." This concept recognizes that the future of learning and work will involve the seamless integration of human and machine capabilities, and that learners must develop the skills and strategies needed to effectively collaborate with AI systems. Rather than viewing AI as a threat to human intelligence, a blended intelligence approach seeks to harness the complementary strengths of humans and machines, creating a symbiotic relationship that enhances the potential of both.
Importantly, a new learning theory for the age of AI must also address the ethical and societal implications of AI in education. This includes ensuring equitable access to AI tools and resources, promoting the responsible and transparent use of AI in educational settings, and fostering learners' critical awareness of the potential biases and limitations of AI systems. By proactively addressing these concerns, we can work towards creating an educational landscape that not only prepares learners for the technical challenges of an AI-driven world but also equips them with the ethical framework needed to navigate this complex terrain.
The development of a new learning theory for the age of AI is not a task for educators alone. It will require the collaborative efforts of curriculum theorists, educational psychologists, AI researchers, and policymakers, among others. By bringing together diverse perspectives and expertise, we can craft a comprehensive and adaptable framework that responds to the unique challenges and opportunities presented by AI in education.
The imperative for this new learning theory is clear. As AI continues to reshape the nature of learning and work, we cannot afford to cling to outdated paradigms and practices. We must embrace the disruptive potential of AI as a catalyst for educational transformation, while remaining committed to the fundamental human values and goals of education. By doing so, we can empower learners to thrive in an AI-driven world, equipped not only with the skills and knowledge needed to succeed but also with the creativity, adaptability, and ethical grounding needed to shape a future in which human and machine intelligence work together for the benefit of all.
Tuesday, April 9, 2024
Why doing nothing with AI is not an option
In the business of technology adoption, the prudent path often lies in inaction. Education, in particular, has a natural proclivity for sifting through the chaff of technological fads, embracing only those innovations that truly enhance learning outcomes or make educators' lives easier. This organic process of selection has served the sector well, allowing it to evolve at a measured pace without succumbing to the allure of every shiny new tool. However, the emergence of AI presents a singular challenge, one that makes doing nothing all but impossible.
The disruptive potential of AI in education cannot be overstated. For centuries, the cornerstone of our pedagogical approach has been the written word – assignments and assessments that serve as both a means of developing and gauging understanding. The AI-powered tools capable of generating human-like responses threaten to undermine this foundational element of education. Inaction in the face of this shift is not merely ill-advised; it is a recipe for curricular erosion and a potential deskilling of an entire generation. Most educators intuitively understand the threat, hence the tinge of moral panic surrounding the AI invasion of education.
Moreover, a passive approach to AI in education risks exacerbating existing inequities. As Leon Furze, a prominent voice in the field, has vividly described, policing student use of AI tools will inevitably lead to a new digital divide. Access to these technologies, even at the seemingly modest price point of $20 per month, can serve as a significant barrier for many students. The solution lies not in restriction, but in universal training – ensuring that all students are equipped with the skills to harness AI efficiently, thus leveling the playing field.
The stakes extend beyond the classroom. Higher education and K-12 institutions that fail to adapt to the AI revolution risk further straining their already tenuous relationships with employers. In an era where the relevance of traditional education is increasingly questioned, ignoring the clear signals from the labor market is a perilous path. It leaves educational institutions vulnerable to political attacks and diminishes their ability to prepare students for the realities of the modern workforce.
The imperative, then, is clear: embrace the bots. This is not a call for wholesale abandonment of traditional pedagogy, but rather a recognition that AI must be woven into the fabric of our educational approach. Curriculum must be revised, assignments and assessments reimagined to not only incorporate but require the use of AI. Every student, regardless of background or discipline, should be exposed to and ideally proficient in leveraging these tools.
Such a transformation is no small undertaking. It demands resources, commitment, visionary leadership, and a comprehensive institutional strategy. But the alternative – a slow, painful descent into irrelevance – is far more daunting. The question is not whether education can afford to embrace AI, but whether it can afford not to. In this particular case, inaction is the riskiest action of all.
Wednesday, February 28, 2024
Hackers vs. Handlers: The Battle for Equity in the Generative AI Revolution
Handlers, often backed by deep pockets and corporate interests, are the architects of proprietary systems. They package, distribute, and regulate access to AI technologies, aiming to create comprehensive solutions that cater to market demands. Their approach, while fostering innovation and ensuring quality, often leads to restricted access and a consolidation of power, raising concerns about equity and inclusivity in the technological realm. The curious fact is that many handlers are former hackers who made it in the startup world.
Hackers, in contrast, are the rebels of the digital age. They advocate for a more open and collaborative approach to AI development, believing that technology should be a public good, accessible to all. They prefer the do-it-yourself, scrappy solutions. Their efforts are not driven by profit but by a conviction that broader access to AI tools can level the playing field, enabling a more diverse set of voices to contribute to and benefit from technological advancements.
The clash between hackers and handlers is emblematic of a larger debate about the future of technology and its role in society. While handlers bring structure and scalability, hackers inject diversity, creativity, and a sense of community. The balance between these forces is crucial. An overemphasis on control and commercialization risks stifling innovation and perpetuating inequalities, while unchecked openness may lead to issues of quality and security.
The generative AI revolution presents an opportunity to recalibrate this balance. Supporting hackers and their open-access ethos can foster a more equitable technological landscape, where innovation is not the exclusive domain of the well-funded. This means championing open-source projects, recognizing community-driven initiatives, and creating legal frameworks that protect the principles of openness and collaboration.
As we stand at the precipice of this AI revolution, the choices societies make will have far-reaching implications. Supporting the hacker ethos without alienating the handlers, and promoting broader access to AI technologies, can ensure that the benefits of this revolution are shared by all, not just the privileged few. It is time to shift the balance in favor of equity, inclusivity, and the collective advancement of society.
Saturday, February 17, 2024
Curb your enthusiasm
Should we learn how to use the current versions of AI, or wait for them to get much better very soon? The excitement around AI's exponential growth mirrors a pattern we've seen with other technologies: a burst of initial progress followed by the hard reality of limitations. History offers lessons from nuclear fusion to space exploration, where initial optimism ran into practical and technological barriers.
Nuclear fusion, which began its journey as a promising energy solution in the 1950s, has yet to deliver on its promise of endless clean energy. The technical and financial challenges have proven to be more complex and enduring than anticipated. Similarly, space exploration, once thought to usher in an era of human settlement in outer space, has been tempered by the harsh realities of cost, distance, and survival in a hostile environment.
As AI technologies, particularly generative AI like ChatGPT, race ahead, they too may face significant hurdles. The rapid development and deployment of these technologies have revealed challenges, notably the increasing demand for computing power. This situation is exacerbated by the competitive push from tech giants like Google and Meta, highlighting the difficulty of sustaining rapid advancement.
One potential game-changer on the horizon is quantum computing. This emerging field promises to revolutionize computing power, potentially overcoming current limitations in a way we can barely imagine. The impact of quantum computing on AI could be profound, offering solutions to problems that are currently intractable and opening new avenues for advancement.
Yet, even with quantum computing, it's wise to temper our expectations, at least until practical and cheap quantum computers become a reality. Each technological leap brings its own set of challenges and unknowns. Rather than waiting for miraculous breakthroughs, a more pragmatic approach is to focus on optimizing current AI technologies. Understanding and working within their limitations can lead to significant improvements and applications that are both practical and impactful now.
This approach doesn't mean halting innovation but rather balancing the pursuit of new technologies with the efficient exploitation of existing ones. By learning from the past and being mindful of the inherent challenges in technological progress, we can navigate the complexities of innovation more effectively. Quantum computing may indeed provide the next significant leap, but until then, making the most of current AI capabilities is both a wise and necessary strategy.
Friday, February 9, 2024
The Advising Bot Dilemma
In educational organizations, the integration of AI, particularly through automated advising tools like chatbots, embodies a strategic advancement yet introduces a complex dilemma. These digital advisors, designed to navigate queries ranging from academic programs to student services, highlight a pivotal choice between precision and broad utility.
At one pole, AI bots can be meticulously engineered to handle vaguely formulated inquiries while providing only correct answers manually curated by humans. This approach, while ensuring a high level of fidelity, is marked by a slow and expensive development process. For entities with vast knowledge bases or intricate operations, the manual input required could significantly dilute the efficiency gains such tools promise to deliver.
Conversely, AI advisors programmed for wider application operate by not only interpreting queries, but also sourcing answers from a pre-existing repository of documents and websites. This method, though expedient, compromises on accuracy, a drawback that becomes more pronounced within the context of large and diverse information repositories.
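As a rough illustration of how such a lower-fidelity advisor sources answers, here is a minimal keyword-overlap retriever. The documents, the threshold, and the scoring rule are all invented for the sketch; real systems use far more sophisticated ranking, which is precisely why their accuracy degrades on large, diverse repositories:

```python
def score(query, document):
    """Crude relevance: number of query words that also appear in the document."""
    return len(set(query.lower().split()) & set(document.lower().split()))

def retrieve_answer(query, documents, threshold=2):
    """Return the best-matching snippet, or a referral when confidence is low."""
    best = max(documents, key=lambda doc: score(query, doc))
    if score(query, best) < threshold:
        return "I'm not sure; please contact an advisor."
    return best

docs = [
    "The add/drop deadline for fall courses is September 9.",
    "Financial aid disbursements begin the first week of classes.",
    "Transcript requests are handled by the Registrar's Office.",
]
answer = retrieve_answer("When is the deadline to drop a fall course?", docs)
```

Even this toy version shows the trade-off: it answers instantly from existing documents, but nothing guarantees the best match is actually correct, which is why disclaimers and referral fallbacks matter.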
A balanced strategy proposes the coexistence of both high and low-fidelity bots within the educational sphere. Low-fidelity bots offer an expedient first layer of support, adept at managing basic inquiries through triage advising. Tailoring these bots to specific domains and incorporating clear disclaimers could mitigate the risk of misinformation, directing students towards accurate resources while alleviating the administrative burden on staff.
For situations where accuracy is paramount, a semi-automatic model emerges as a superior alternative, at least for now. This model envisions a symbiotic relationship between AI systems and human advisors, with AI proposing potential responses and the advisor ensuring their validity. Such a configuration enhances efficiency without compromising the integrity of the advice provided.
AI imperfections may sometimes be tolerated. AI adoption requires a pragmatic cost-benefit analysis. The evaluation hinges on whether the operational efficiencies gained through deploying lower-fidelity systems justify the associated risks. We must compare them not to a very expensive and very reliable alternative, but to students not getting any advice at all, or receiving it from roommates and random sources. The decision on whether to limit these systems to straightforward queries or to implement them within defined subject areas requires careful consideration.
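The cost-benefit comparison can be made concrete with a toy expected-value calculation. All of the numbers below are hypothetical, chosen only to show the shape of the reasoning, not to estimate any real institution's costs:

```python
def net_value(n_queries, accuracy, value_correct, cost_wrong):
    """Expected net value of answering n_queries with a bot of given accuracy."""
    correct = n_queries * accuracy
    wrong = n_queries * (1 - accuracy)
    return correct * value_correct - wrong * cost_wrong

# Illustrative figures: 10,000 advising queries a term, each correct answer
# worth $5 of staff time saved, each wrong answer costing $20 to repair.
bot_value = net_value(10_000, 0.90, 5, 20)
baseline = 0  # the realistic comparison point: no advice at all
```

Under these made-up numbers a 90%-accurate bot comes out well ahead of the no-advice baseline, while the same bot compared against a perfectly reliable human advisor would look like a liability. The choice of baseline drives the decision.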
Addressing these trade-offs is crucial for harnessing AI's potential in educational settings. This nuanced approach, advocating for a judicious blend of high and low-fidelity advising tools, underscores the importance of strategic planning in AI deployment. It offers a pathway to leverage technological advancements, ensuring they complement rather than complicate the educational mission.
Tuesday, January 30, 2024
The tiny tools issue
The world of AI implementation has three tiers. At the base are user-friendly, ready-to-use AI tools – the digital world's equivalent of instant coffee: one can simply go to a chatbot and get things done. Ascending a level, there is the realm of tiny tools like APIs, a middle ground easily accessible to coders but mystifying to the layperson. The apex of this hierarchy is reserved for integrated, complex AI solutions – the grand orchestras of technology, both sophisticated and costly.
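To make the middle tier concrete, here is roughly how little code a "tiny tool" involves. The endpoint URL and JSON field names below are illustrative stand-ins, not any particular vendor's API; the point is the modest scale, not the specifics:

```python
import json

def build_chat_request(question, model="example-model"):
    """Assemble the JSON body a typical chat-completion endpoint expects.

    The field layout here is generic and illustrative; a real vendor's API
    will differ in detail but rarely in complexity.
    """
    return json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": question}],
    })

# Sending it is a few more lines with the standard library -- no platform,
# no vendor dashboard, no integrated suite required:
#
#   import urllib.request
#   req = urllib.request.Request(
#       "https://api.example.com/v1/chat",  # hypothetical endpoint
#       data=build_chat_request("Summarize our enrollment FAQ.").encode(),
#       headers={"Content-Type": "application/json"},
#   )
#   reply = urllib.request.urlopen(req).read()
```

A computer science student could wrap this in an afternoon, which is exactly the kind of option that gets obscured when vendors pitch only the top tier.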
The drama in AI implementation, however, is not rooted in the existence of these tiers, but in their portrayal and accessibility. Providers, often driven by material interests, tend to downplay the simplicity and adequacy of the lower tiers. This misrepresentation is not just about pushing expensive solutions; it is a deeper issue of monopolizing knowledge and perpetuating power imbalances. Of course, if one knows how to do something that others do not, they want to make themselves look more essential, so they can sell their expertise.
The key takeaway here is to be a discerning consumer. Before opting for an expensive, integrated solution, consider first if one can do it themselves, and if not, if a tiny tool would suffice. Perhaps a computer science student could craft a solution efficiently and affordably. Or there might be a vendor that sells just the tiny tool needed. This approach is not just about saving resources; it is about maintaining control in an increasingly technology-driven world. Surrendering to high-end solutions can lead to a loss of autonomy, difficult to reclaim once given up.
When faced with AI implementation decisions, balance is crucial. It is essential to recognize where one's needs fit within this spectrum and make choices that harmonize practicality, cost, and control. The most effective tool at our disposal is not always the most complex or expensive one, but often our own critical thinking. By understanding the nuances of these AI tiers, we can make informed decisions that preserve our technological independence and prevent being unwittingly upsold solutions that far exceed our actual needs.
Monday, January 22, 2024
Why AI is unlikely to replace teachers
The allure of a tech-driven utopia in education is not new. Radios, televisions, the internet, MOOCs – each has been heralded as a harbinger of the traditional teacher's obsolescence. Today, AI steps into this familiar spotlight, with some prophesying a future with fewer educators. Understanding this perspective isn't challenging, given the enormity of public education's budget, the stubborn inequalities it harbors, and its notorious resistance to reform. However, the notion of significantly reducing teacher numbers through AI implementation seems, at best, a distant fantasy.
Chatbots, the latest prodigies of AI, have proven to be exceptional personal tutors. They can tailor information delivery to individual needs, offering a level of customization that traditional education struggles to match. But here's the rub: education is not merely about transferring information. It's about fostering a unique educational relationship that optimizes learning. For all its sophistication, AI lacks the capacity to replicate this.
AI indeed creates a paradise for autodidacts. Those with a natural inclination towards self-directed learning, armed with motivation and discipline, find in AI a boundless resource. However, the majority aren't autodidacts. They thrive in a relational context that not only motivates but also facilitates learning. This is a foundational principle in major learning theories, from Vygotsky's social development theory to Bandura's social learning theory and Bruner's constructivist theory. The invisible labor of a teacher or a college instructor lies in creating and nurturing this context. Presently, there is nothing in AI that can substitute this critical human element.
Furthermore, educational institutions have become integral to societal fabric, not merely as centers of learning but as community hubs. Imagining what millions of children and young adults would do without the structure of schools and colleges opens a Pandora's box of societal and developmental questions. These institutions require adult presence, not just for educational delivery, which AI might partly assume, but for the overarching environment of care and socialization they provide.
My prognosis? Unlike other industries where automation has resulted in significant workforce reductions, education – and teaching staff in particular – will likely remain unscathed in this respect. There is no need for panic among educators, but there is a need for adaptation. Learning to harness AI's capabilities will be crucial, not to replace teachers, but to complement them, freeing up time for the more nuanced, relational, and affective aspects of their roles. Additionally, educators must remain agile, adapting curricula to include skills that future employers will value, ensuring students are well-equipped for the evolving workforce.
In essence, AI in education is not a replacement, but a tool – one that, if used wisely, can enhance the educational experience without displacing its most vital component: the human educator.