Wednesday, July 17, 2024

AI is not going to implement itself, but governments can help

The AI hype has passed, and the overexcited futurists' voices are mercifully fading away. We're now entering a practical era where AI is leveraged to boost productivity in business, non-profit, and public organizations. This shift brings a sobering realization: AI integration requires a meticulous, pragmatic approach to build reliable and trustworthy systems. It's a lot of work and requires some strategy.

When a single person manages a well-defined workflow, integrating AI is relatively straightforward. It's easy to incorporate AI tools like ChatGPT or Claude to assist with ad copy, reports, or applications. The beauty of these scenarios lies in their simplicity - the user acts as both operator and quality controller, immediately judging the output's effectiveness.

However, the story changes dramatically when we shift to multi-user workflows or more complex processes, where both inputs and outputs are more of a collective responsibility. I recently spoke with an Accounts Payable team who posed a challenging question: "Yes, we can see that AI can help review travel claims, but can you guarantee it's going to be 100% accurate?" I couldn't provide that guarantee; I don't have time to conduct a hundred tests, and I don't even have access to a hundred travel reports. They emphasized their need for completely audit-proof outcomes. This conversation highlighted the trust issues that arise when AI moves beyond individual enthusiasts to the skeptics in larger organizations. And organizations should have a healthy group of skeptics to remain viable.

I've also recently been a fly on the wall during discussions between healthcare executives and a U.S. lawmaker. The executives explained that each AI-assisted medical procedure needs validation, which is expensive and often duplicated across multiple hospital systems. This challenge extends beyond healthcare. For instance, when any organization uses AI to crunch data, it needs to understand the AI's reliability in analyzing large datasets, cleaning them, and handling outliers.

The problem is that no private institution can conduct the kind of comprehensive testing and validation needed to establish trust in AI systems across various industries. We cannot seriously trust the claims of startups trying to sell a specialized product to an industry or a government organization. And it is not clear how a hypothetical private validation service would monetize such an endeavor.

This is where I believe government involvement becomes crucial. Instead of obsessing over deepfakes and ethics, this is what governments should be doing. Governments can collaborate with industry experts to develop standardized benchmarks for AI reliability and performance. They could establish certification programs that act as quality marks, assuring users that AI systems have undergone rigorous testing. Moreover, government funding could support businesses, NGOs, and government agencies in conducting extensive AI testing, especially benefiting smaller organizations lacking the necessary resources.

In my view, public-private partnerships are key to navigating these challenges. By leveraging expertise from both sectors, we can develop robust testing frameworks and create dependable AI systems. This approach would pave the way for more efficient and innovative workflows across industries, ensuring that the benefits of AI are realized while maintaining trust and reliability. 

Monday, July 15, 2024

Effort in Learning: The Good, the Bad, and the AI Advantage

Many educators argue that AI makes learning too easy, suggesting that students need to apply effort to truly learn. This perspective, however, confuses the notion of effort with the process of learning itself. The belief that every kind of effort leads to learning overlooks a significant aspect of cognitive psychology: the nature and impact of cognitive load.

Cognitive load theory, developed by John Sweller, offers a crucial framework for understanding how students learn. It posits that the human brain has a limited capacity for processing information. Sweller distinguished between three types of cognitive load: intrinsic, extraneous, and germane. Intrinsic cognitive load is inherent to the task itself. For instance, solving a complex mathematical problem has a high intrinsic load due to the complexity of the content. Germane cognitive load, on the other hand, refers to the mental resources devoted to processing, construction, and automation of schemas, which are structures that help solve problems within a specific domain. 

The most problematic, however, is extraneous cognitive load. This type of load is not related to the task itself but to the way information is presented or to the extraneous demands placed on learners. High extraneous cognitive load can distract and stunt learning, making it harder for students to engage meaningfully with the material. For example, a poorly designed textbook that requires constant cross-referencing can add unnecessary cognitive load, detracting from the student's ability to learn. A terrible lecture or busy-work assignments do the same. If you think that every effort by a student is valuable, you are a hazer, not a teacher.

The challenge, therefore, is not to eliminate all effort but to ensure that the effort students exert is directed towards productive ends. In other words, we need to reduce extraneous cognitive load and increase germane cognitive load. The true aim is to leverage AI to enhance germane cognitive load, directly aiding in the acquisition of schemas necessary for solving discipline-specific problems.

Every academic discipline has core problems that students are expected to solve by the end of their programs. The first step is to mercilessly clean the language of learning outcomes from wishy-washy jargon and focus on these fundamental problems. By identifying these top-level problems, educators can better understand the sequences of skills and knowledge students need to acquire.

Once these core problems are identified, it is crucial to examine how professionals in the field solve them. This involves a detailed analysis of the mental schemas that experts use. Schemas are cognitive structures that allow individuals to organize and interpret information. They enable professionals to recognize patterns, make decisions, and solve problems efficiently. For example, a doctor has schemas for diagnosing illnesses based on symptoms and test results, while an engineer has schemas for designing structures that withstand specific stresses. It is also very important to understand whether the field is changing and whether professionals already solve those problems with AI, or will be doing so soon.

AI can play a pivotal role in helping students develop these schemas. These technologies can identify where a student is struggling and provide targeted support, ensuring that cognitive resources are directed towards germane learning activities rather than being wasted on extraneous tasks.

To achieve this, we need to revisit the basic principles of instructional design. While these principles remain fundamentally the same, they require new thinking in light of AI capabilities. Instructional design should focus on reducing extraneous cognitive load by simplifying the learning environment and minimizing distractions. Simultaneously, it should increase germane cognitive load by providing challenging and meaningful tasks that promote the construction of schemas.

Moreover, educators need to recognize where cognitive load is not useful and should focus exclusively on the germane kind. This might mean redesigning courses to incorporate AI tools that can automate routine tasks, provide instant feedback, and offer complex, real-world problems for students to solve. Such an approach ensures that students are engaged in deep, meaningful learning activities rather than busywork.

Ad summam, the integration of AI in education is not about making learning easier in a superficial sense. It is about making learning more effective by ensuring that students' cognitive resources are directed towards activities that genuinely promote understanding and skill acquisition. By focusing on germane cognitive load and leveraging AI to support instructional design, we can create learning environments that foster deep, meaningful learning and prepare students to solve the complex problems of their disciplines. This calls for a rigorous rethinking of educational practices and a commitment to harnessing AI's potential to enhance, rather than hinder, the learning process.


Tuesday, July 9, 2024

AI-Positive Pedagogy: Navigating the Great Disruption

AI has disrupted the educational landscape. This disruption threatens the established sequence of skill development, from simple to mid-range to higher-level skills, by eroding traditional curriculum principles, particularly in the realm of student activities and assessment. As a profession, we face a critical decision: limit AI use or develop an AI-positive pedagogy.

While limiting AI use may seem tempting, it is ultimately unfeasible and fails to prepare students for the AI-infused world they will live in. Strict limitations are not only impractical to enforce but also fail to acknowledge the potential benefits that AI can bring to education.

The only plausible path forward is to adopt a new pedagogy that harnesses the power of AI for the benefit of our students. This involves a shift towards authentic, discipline-specific assessments that mirror real-world applications of AI within various fields. By focusing on how AI is used in different disciplines, educators can create assessments that evaluate students' ability to effectively utilize AI tools in relevant contexts.

AI-positive pedagogy emphasizes the cultivation of higher-order thinking skills, such as prompt engineering and discerning thinking. Prompt engineering involves crafting effective queries and instructions for AI systems, while discerning thinking emphasizes the critical evaluation of AI-generated information and the ability to make informed decisions by combining AI insights with human judgment. These meta-AI skills are crucial for students to navigate and thrive in an AI-populated world.

AI-positive pedagogy should prepare students to work effectively in environments where human and artificial intelligence coexist and complement each other. By fostering skills in collaborating with AI systems, understanding the strengths of both human and artificial intelligence, and developing strategies for distributed problem-solving, educators can equip students to succeed in the AI-infused workplace.

The shift towards AI-positive pedagogy is well-rooted in past pedagogy and curriculum theory. Educators have long prioritized conceptual and higher-level skills over mechanical and procedural knowledge. The disruption caused by AI may serve as a catalyst for educators to finally achieve what they have been striving for over the past century.

As we embrace AI-positive pedagogy, it is essential to re-evaluate the assumption that all effort leads to learning. Cognitive Load Theory suggests that learning can be optimized by managing the three types of cognitive load: intrinsic (inherent complexity of the learning material), extraneous (caused by ineffective instructional design), and germane (effort required to process and construct mental schemas). In the context of AI-positive pedagogy, this involves using AI tools to provide appropriate support and scaffolding as learners progress from lower-level to higher-level skills, while minimizing extraneous load and promoting germane load. Not all loss of effort by students is bad. If we are honest, much of learning work is extraneous, busy, or compliance/submission work anyway. By investigating the limits and structure of leapfrogging - skipping over mid-range skills to move from basic literacies and numeracies to creative, theoretical, and critical thinking - educators can harness the power of AI to accelerate student growth.

To develop a robust AI-positive pedagogy, educators and cognitive psychologists must collaborate to investigate how students interact with and perceive AI tools - alone or under a teacher's guidance. This research should focus on understanding the mental models students develop when engaging with AI, and how these models differ from those associated with other educational tools. By exploring students' cognitive processes, researchers can identify the unique challenges and opportunities presented by AI in the learning environment.

It is also crucial to examine the emotional and motivational factors that influence students' engagement with AI tools. Understanding how students' attitudes, beliefs, and self-efficacy impact their willingness to adopt and effectively use AI in their learning can inform the design of AI-positive pedagogical strategies.

In addition to investigating student cognition and affect, researchers should also explore the social and cultural dimensions of AI use in education. This includes examining how AI tools can be leveraged to promote collaborative learning, foster inclusive learning environments, and bridge educational inequities.

To build a comprehensive AI-positive pedagogy, researchers and educators must also develop and validate practices for integrating AI into various disciplines and educational contexts. This involves creating guidelines for the use of AI in education, as well as establishing professional development programs to support educators in effectively implementing AI-positive pedagogical strategies.

The development of an evidence-based AI-positive pedagogy requires a concerted effort from the educational community. By investing in basic research, collaboration, and innovation, we can harness the potential of AI to transform education and empower students to thrive in an AI-infused world.


Wednesday, June 26, 2024

On AI Shaming

Here is a new thing: AI shaming. It is a practice where individuals accuse others of using artificial intelligence to generate written content, as if such an act were inherently deceitful or somehow sinful. How fascinating, the very premise of it.

This phenomenon reveals itself to be a fairly common logical fallacy. It is a summative dismissive argument, with a dash of ad hominem (ad machinam?) for flavor. One might wonder why the method of creation should overshadow the substance of the content. After all, we don't dismiss the works of authors who switched from quills to typewriters, do we?

There's a certain Foucauldian quality to this practice, if one may be permitted a touch of theoryhead's indulgence. By attempting to draw a sharp line between 'acceptable' and 'unacceptable' means of content creation, some seem to be engaging in a subtle power play. It is almost as if they're trying to sell their unfamiliarity with newer technologies as a virtue. It is a rather clever way of elevating the game at which one excels, and putting down a game at which one fails. While an understandable inclination, it is still a folly.

For those of us who have embraced these new tools, such accusations are about as concerning as a light drizzle on a summer day - which is to say, entirely expected and hardly worth mentioning. If anything, it provides a certain amusement to observe the lengths to which some will go to maintain the status quo and their privileged little spot in it.

However, there is a more sobering concern to consider. While thick-skinned people like me might brush off such criticisms with a raised eyebrow, younger, more impressionable ones might internalise this arbitrary stigma. It would be a shame if the next generation felt compelled to hide their technological proficiency out of fear of Luddites' bullying.

As these AI tools inevitably become more sophisticated and ubiquitous, perhaps we might redirect our energy towards more productive ends. Instead of engaging in this curious form of digital fingerpointing, we could focus on the responsible and creative use of these technologies. After all, the ideas expressed within content will always be more intriguing than the means by which they were transcribed.

To those who persist in AI shaming: by all means, knock yourself out. Your dedication to this cause is admirable, if somewhat perplexing. Just don't be too surprised if the rest of us seem a bit distracted - we'll be busy adapting to the future while you're perfecting your finger-wagging techniques.

P.S. This text, according to QuillBot, is 0% AI-generated; however, I wrote it with Claude :)

How to bullshit about bullshit

Take a look at the paper "ChatGPT is bullshit," by Michael Townsen Hicks, James Humphries, and Joe Slater. Other than a catchy title, the paper has a few problems of its own.

1. Focus on terminology over utility:

The paper spends considerable effort arguing that the outputs of large language models (LLMs) like ChatGPT should be classified as "bullshit" rather than "hallucinations." While this may be an interesting philosophical distinction, it arguably does little to advance our understanding of the practical implications and applications of these technologies. The authors could have devoted more attention to analyzing the actual utility and potential use cases of LLMs, which would likely be more valuable to policymakers and the public.

2. Limited practical insights:

By focusing primarily on categorizing the nature of LLM outputs, the paper misses an opportunity to provide meaningful insights into how these technologies can be effectively and responsibly utilized. A more constructive approach might involve examining specific contexts where LLMs prove useful and where they fall short, rather than broadly labeling their output as "bullshit."

3. Potential for misdirection:

While the authors argue that the term "hallucinations" is misleading, replacing it with "bullshit" may not necessarily lead to a clearer understanding of LLM behavior. Both terms are metaphorical and could potentially misguide readers about the true nature and capabilities of these systems. A more technical and nuanced explanation of how LLMs function and their limitations might be more informative.

4. Overlooking nuance:

The paper seems to take a binary approach – either LLMs are concerned with truth (which they argue against) or they are producing "bullshit." This oversimplification may overlook the nuanced ways in which LLMs can be useful for various tasks, even if they don't have an inherent concern for truth in the way humans do.

5. Lack of actionable recommendations:

While the authors critique the use of the term "hallucinations," they don't offer clear, actionable recommendations for how to better communicate about LLMs to policymakers and the public. A more constructive approach would be to propose specific ways to educate stakeholders about the capabilities and limitations of these technologies.

6. Missing the broader context:

By focusing narrowly on the philosophical categorization of LLM outputs, the paper misses an opportunity to discuss the broader implications of these technologies on society, economy, and various industries. A more comprehensive analysis of the impact and potential of LLMs would likely be more valuable to readers.

Monday, June 24, 2024

Can observational learning work with AI?

Can humans learn new problem-solving strategies simply by observing AI at work? Following Bandura's theory of observational learning, it may not be as far-fetched as it at first appears.

When humans watch AI systems solve problems or generate text, they naturally construct mental models of the underlying processes. These models, while factually wrong (AI thinking is very different from ours), can nonetheless prove useful. It is imagining yourself performing the task that can be beneficial.

Consider a person observing an AI system summarise a lengthy academic paper. The human observer cannot directly perceive the AI's internal computations. Instead, the observer likely imagines themselves performing the task, focusing on key sentences, identifying main themes, and connecting key ideas.

This mental model, though inaccurate in representing the AI's actual mechanisms, may still enhance the observer's own summarisation skills. They might, for instance, learn to pay closer attention to introductory and concluding paragraphs, or to look for repeated phrases that signal important concepts.

Observing AI failures can be particularly instructive. When an AI system produces an erroneous or nonsensical output, it often reveals the limitations of its approach. A human observer, reflecting on these errors, might develop a more nuanced understanding of the problem at hand and devise novel strategies to overcome the AI's shortcomings.

For example, watching an AI struggle with a complex logical reasoning task might prompt a human to break the problem down into smaller, more manageable steps. This approach, inspired by the AI's limitations, could prove valuable even in contexts where AI is not involved.

To test this hypothesis rigorously, consider an experiment (a rough analysis sketch follows the steps):

1. Select a diverse set of problem-solving tasks, ranging from creative writing to mathematical reasoning.

2. Divide participants into three groups:

  •  a) An observation group that watches AI systems attempt these tasks, including both successes and failures.
  •  b) A practice group that attempts the tasks themselves without AI involvement.
  •  c) A control group that engages in unrelated activities.

3. After the observation or practice period, test all participants on a new set of similar problems.

4. Compare the performance of the three groups, paying particular attention to novel problem-solving strategies employed by the observation group.

5. Conduct follow-up interviews to explore participants' thought processes and any conscious attempts to apply AI-inspired techniques.
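For readers who want to picture how the group comparison in step 4 might be handled, here is a minimal sketch in Python. Everything in it is an illustrative assumption rather than part of the proposed design: the group sizes, the made-up score distributions, and the choice of a one-way ANOVA followed by pairwise comparisons are simply one plausible way to analyze such data.

```python
# Minimal, illustrative analysis sketch for the three-group comparison (step 4).
# All scores below are hypothetical placeholders, not real data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical post-test scores (0-100) for each group of 30 participants.
observation = rng.normal(72, 10, size=30)   # watched AI attempts (successes and failures)
practice    = rng.normal(70, 10, size=30)   # practiced the tasks themselves
control     = rng.normal(65, 10, size=30)   # engaged in unrelated activities

# One-way ANOVA: do the three group means differ at all?
f_stat, p_value = stats.f_oneway(observation, practice, control)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# If the omnibus test is significant, follow up with pairwise Welch t-tests
# (in practice, a correction for multiple comparisons, e.g. Bonferroni, would be applied).
for name, group in [("practice", practice), ("control", control)]:
    t_stat, p_pair = stats.ttest_ind(observation, group, equal_var=False)
    print(f"observation vs {name}: t = {t_stat:.2f}, p = {p_pair:.4f}")
```

The quantitative comparison would, of course, be complemented by coding the follow-up interviews in step 5 for AI-inspired strategies.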

Friday, June 21, 2024

Can We Learn Without Struggle in the Age of AI?

I've been pondering a question: What if our traditional understanding of cognitive growth is too narrow? We've long held onto the idea that real learning comes from struggle, from pushing against our limits, from grappling with challenges just beyond our current abilities. But what if that's not the whole story?

I'm starting to wonder if growth - real, meaningful cognitive development - might not always need the strong challenges we've assumed were necessary. And this thought has become particularly relevant as we enter the new world of AI-assisted learning.

Many of our theories about learning and development are rooted in the idea of conflict or tension. Vygotsky's Zone of Proximal Development, for instance, emphasizes the space between what a learner can do without help and what they can do with guidance. Piaget talked about cognitive dissonance as a driver of development. These Hegelian/Marxist theories have shaped how we think about education for decades.

But here's what I'm pondering: What if growth can happen more... gently? What if it can occur through exposure, through interaction, through a kind of cognitive osmosis that doesn't necessarily involve struggle or challenge? And importantly, what if this gentler form of learning is still deeply social and relational?

There's a lot of hand-wringing in educational circles about AI tools like ChatGPT. The worry is that by providing ready answers, these tools will short-circuit the learning process. Students won't have to struggle, so they won't really learn. I have definitely been expressing these concerns in my previous blogs. 

But I'm not so sure anymore. Let me float a hypothesis: What if AI-assisted learning doesn't dampen growth, but instead provides a different kind of cognitive experience that can still lead to meaningful development? And what if this experience, rather than being isolating, actually opens up new avenues for social learning and collaboration?

Here's an analogy that's been helpful for me in thinking about this. Remember when GPS first became widely available? There were concerns that people would never learn to navigate cities anymore, that we'd lose our sense of spatial awareness. And yet, most of us who use GPS regularly still develop a pretty good understanding of the cities we live in and visit. We might learn differently - perhaps more slowly, or with less detail - but we do learn, without all the frustrations of trying to read the map while driving, or memorize multiple turns (Left, second right, soft left again...). City driving is probably safer, but we did not get more stupid.  

The GPS doesn't prevent us from learning; it provides a different context for learning. We're not struggling with paper maps, but we're still processing spatial information, making connections, building mental models of our environment.

Could AI-assisted learning work in a similar way? Sure, students might get quick answers or produce a quick text with an AI without much effort, which feels somehow wrong (the nature of these feelings is a subject of a special consideration). But that doesn't mean they stop thinking or interacting. They will start wondering how to get better answers, produce better outputs. They will begin to notice patterns in the AI's responses. They will start to question or critique what the AI produces. That's what the human brain has evolved to do. 

Moreover, this process doesn't happen in isolation. Students will discuss their AI interactions with peers, compare outputs, collaboratively explore how to improve results. It becomes a form of social play - experimenting, sharing discoveries, building on each other's ideas. The AI becomes a tool for social learning, not a replacement for it.

In other words, the presence of AI doesn't eliminate cognitive work or social interaction - it might just shift their nature. And who's to say that this new form of cognitive and social engagement is any less valuable than the old? 

Now, I'm not saying we should throw out everything we know about learning and development. Challenge and struggle certainly have their place. But I am suggesting that maybe we need to broaden our understanding of how growth happens, recognizing that it can occur through gentler, more playful forms of social interaction and exploration.

Perhaps there's a whole spectrum of cognitive experiences that can lead to growth, ranging from intense challenge to gentle, collaborative exposure. Maybe AI-assisted learning falls somewhere on this spectrum - not replacing traditional forms of learning, but adding new dimensions to how we can develop cognitively and socially.

This is all hypothetical, of course. We're in new territory with AI, and it will take time and research to understand its full impact on learning and development. But I think it's worth considering that our assumptions about how growth happens might need updating, particularly in how we view the social aspects of learning.

What do you think? Is it possible that we've overemphasized challenge in our understanding of learning? Could AI-assisted learning open up new pathways for cognitive development and social learning that we haven't fully appreciated yet? I am just back from a conference, where we had many early childhood educators. In that tribe, there is a lot more talk about play than about forcing growth. Maybe that's why I am having these second thoughts about growth. 

I don't have definitive answers, but I think these are questions worth exploring as we navigate this new landscape of learning. After all, if there's one thing we know for sure about human cognition, it's that it's endlessly adaptable and intrinsically social. Perhaps it's time for our theories about learning to adapt as well, embracing a more collaborative, playful vision of growth in the age of AI.

Monday, June 10, 2024

Testing AI once does not make you an expert

I heard of a professor who asked ChatGPT to write a profile of himself, only to discover inaccuracies and decide that AI is unsuitable for education. Instead of reflecting on why he is not sufficiently famous, the professor blamed the AI. This reaction is like boycotting all cars after driving an old Soviet-made Lada. Dismissing AI entirely based on a couple of lazy interactions is a classic example of the overgeneralization fallacy.

Before hastily testing and dismissing, one would be well served to read about the known limitations of AI, particularly when it comes to generating content about individuals who are not well-known. AI can "hallucinate" details and citations, creating a misleading picture of reality.

The key is to approach AI with a spirit of curiosity and creativity, exploring its strengths and weaknesses through multiple tests and scenarios. By focusing on what works rather than fixating on what does not, we can begin to appreciate AI for what it is—a tool with potential that takes some skill and experience to unlock.

Also, think about the risk to your reputation. If you are saying, "I tried, and it is crap," you are also dismissing all those other people who found it valuable as gullible fools. The failure to see that the joke is on you is a test of your hubris, and that kind of test works on just one try.

Thursday, June 6, 2024

Using AI without losing control

As AI becomes increasingly capable of performing tasks that were once the exclusive domain of humans, it no longer makes sense to spend a lot of time teaching skills that AI can do better and faster. Just as we largely replaced the teaching of manual drafting with computer-aided design (CAD), we must now reassess the curriculum and remove skills that have been rendered obsolete by AI.

The way we define curriculum reflects our understanding of what it means to be human, a definition that has evolved significantly over the centuries. As machines have become more advanced, skills previously valued and developed by humans have gradually been entrusted to machines.

However, the logic behind this shift is also changing. It is no longer just a matter of what machines can do better, as they seem to be excelling at almost everything. Instead, it is about what we, as humans, choose to retain, enjoy, and feel compelled to do. It is less about competition with machines and more about the arrangement of power and authority. To maintain our human authority, we must continue to perform certain tasks.

One of the most important of these tasks is the ability to set goals and make value judgments about what should or should not be done. This is a complex skill that requires a comprehensive understanding of the world, both in its physical and social aspects, as well as the wisdom to make decisions when faced with competing values and the inherent uncertainty of the future. As AI increasingly mediates our interactions, the ability to determine the goals and direction of these interactions becomes even more crucial.

Equally important is the ability to work with AI to achieve our objectives. This process, sometimes referred to as "prompt engineering," involves iterative interaction and refinement to obtain the most accurate, appropriate, and helpful outputs from AI. Beyond technical skills, this requires robust critical thinking to assess the truthfulness and value of AI-generated content. In essence, it is the skill of editing, not merely for style but for substance, constantly asking: Is this useful? Does it improve upon what already exists? Does it positively contribute to the overall human knowledge and well-being?

These core competencies should shape the curriculum of the future: goal-setting, value judgment, critical discernment, and the ability to collaborate with AI to achieve human objectives. The specifics will evolve as AI capabilities advance, and there will be variations across disciplines.

However, the fundamental challenge remains the same: in a world increasingly shaped by artificial intelligence, we must rededicate ourselves to cultivating human intelligence in its fullest and most distinctively human expressions. Only then can we ensure that the tremendous power of AI serves to elevate humanity rather than diminish it.

Tuesday, May 21, 2024

"First try with AI"; On the advantages of organic learning

Some people advocate for structured training programs and dedicated time for AI learning, but a more organic approach is more effective and efficient.

The secret to successfully incorporating AI into your work is to simply start using it for your next task. Rather than setting aside special time for AI learning, dive right in and explore how AI can assist you in your current projects. Need to do something? Write a memo, a long email, a letter, a grant proposal? "First Try with AI."

What do you have to lose? The worst-case scenario is that you waste a little time if AI proves unhelpful for that particular task. However, in most cases, you will discover its usefulness and potential to save you some time, even if it doesn't complete the task entirely.

It's important to recognize that AI never does everything for you. Only the most mindless, bureaucratic, compliance-related content may be primarily handled by AI. However, for the majority of tasks, you will intuitively learn the right mix of human and AI ingredients to create the best soup. This organic learning process allows you to understand the strengths and limitations of AI within the context of your specific work.

There is nothing wrong with taking courses to learn about AI, but it is worth noting that assignments in such courses often lack authenticity. Those are "pretend tasks." Even after completing a course, you would still need to learn how to transfer your new skills into real-world contexts. In contrast, an organic approach to AI learning allows you to immediately apply your knowledge within the context of your work, resulting in a more motivated, deeper, and faster learning experience.

As you gradually incorporate AI into your daily tasks, you will naturally develop a better understanding of when and how to leverage its capabilities, and where to mitigate its shortcomings. This hands-on, contextual learning approach will not only help you become more proficient in using AI but also enable you to identify new opportunities for its application within your organization.

For educational contexts, we know there is a strong correlation between instructors personally using AI and them allowing students to use it in class. We don't trust things we do not understand, which explains the unreasonably strong worries about cheating. There will be no classroom use without the personal use by instructors first. Once teachers start using it for their own purposes, their anxiety levels go down, and their creativity frees up to invent classroom uses. 

Monday, May 13, 2024

Turnitin Is Selling us Snake Oil, or Why AI Detection Cannot Work

The notion of measuring "AI-generated text" as a fixed percentage of an academic submission is fundamentally flawed. This metric implies a homogeneous substance, akin to measuring the alcohol content in a beverage. However, my recent survey suggests that academic integrity associated with AI use is far from homogeneous. The survey asked educators to evaluate the ethical implications of using AI for twelve different tasks in writing an academic paper, ranging from researching to brainstorming to editing to actually writing full sections.

The findings revealed significant variance in responses. While many respondents were comfortable with AI aiding in brainstorming ideas, they expressed reservations or outright disapproval of AI writing entire paragraphs or papers. This disparity underscores a critical issue: there is no consensus in the academic profession on what constitutes acceptable AI assistance in learning. More strikingly, within each individual's responses, there was considerable variation in how different AI uses were assessed.

Consider the implications of a tool like Turnitin reporting "50% AI-generated" content. What does this figure actually represent? It lacks context about how the AI-generated content was incorporated. For instance, a paper could be largely original, with only minor edits made by AI at the end, potentially showing a high percentage of AI contribution. Conversely, a student might contribute minimally to an essentially AI-written paper, making slight modifications to reduce the AI-detected percentage. Both scenarios could yield vastly different percentages, yet the ethical implications are markedly divergent.

The pursuit of better detection technology misses the point. The issue is not with the detection capabilities but with the construct itself. The very idea of "AI-generated text" as a unified concept is problematic. Just as a depression inventory measures various symptoms that converge on the underlying construct of depression, our methods for evaluating AI in academic work must recognize the diverse and context-dependent nature of its use. The current approach, which treats all AI contributions as equivalent, is akin to judging a book's genre by counting its words. I wish Turnitin and other commercial "AI Detectors" would show just a little more integrity and stop selling us the snake oil. They must know for sure that their claims are bogus, because AI-generated text is not a valid construct to be measured.

Instead of focusing obsessively on detecting AI-generated content, we need to shift our perspective. We should expect and require students to use AI as part of their learning process. The challenge then becomes developing assignments that not only measure the content knowledge but also the meta-AI skills and competencies necessary to navigate and leverage these tools effectively. This approach acknowledges the complexity of AI's applications and ensures it is used responsibly, promoting a learning environment that respects both the potential and the limitations of artificial intelligence.

Monday, May 6, 2024

In Education, AI is an emergency

On one hand, AI presents an exhilarating leap forward, a kind of magic wand that promises to transform how we learn and teach. On the other hand, beneath this glam surface lies a grittier reality—one where the very essence of learning could be at risk.

In education, the core value lies in the process itself. The act of wrestling with ideas, constructing arguments, and stumbling over complex problems is where true learning happens. If a student turns to AI to write an essay, they might technically meet the assignment's requirements, but they've bypassed the intellectual struggle critical to internalizing knowledge. This worry has only deepened in the wake of the pandemic, which already strained educational norms and exposed glaring disparities. Introducing AI into this mix feels like throwing a wrench into an already delicate machine, risking the dilution of the educational experience and fostering a generation more adept at using tools than thinking independently.

Addressing this is no minor feat. It is not about rejecting AI's benefits outright, but rather about steering its use with a careful hand. Educators must become architects of a new curriculum that anticipates AI's influence and actively incorporates it in enriching ways. Perhaps this means designing projects where AI is expected to be used, but the projects are still challenging and generate growth.

However, such a transformative approach to curriculum development is a colossal task, varied across academic disciplines and levels of education. Educators need robust support systems, time to experiment and innovate, and backing from policies that understand and address these unique challenges. Governments and educational leaders must be partners in crafting policies that nurture educationally effective and responsible AI use.

As I reflect on this development, I am struck by the immensity of the challenge before us. It is not just about adapting to a new tool; it is about redefining the very foundations of how we teach and learn. It is about finding a way to harness the power of AI without sacrificing the soul of education. This is a journey that will require bold experimentation, deep collaboration, and a willingness to embrace the unknown. But it is a journey we must undertake, for the stakes are too high to ignore. The future of education hangs in the balance, and it is up to us to shape it with wisdom, courage, and a steadfast commitment to the human experience of learning.

Friday, May 3, 2024

Public Money, Private Glory?

At tech events, where AI CEOs bask in the adoration, there's a conspicuous absence in the narrative: the role of public funding and research. These technologies haven't just sprung up through the ingenuity and perseverance of a few brilliant minds; they're the culmination of substantial public investment. Yet, you'd be hard-pressed to hear a word of thanks to taxpayers or governments at these glittering presentations.

The problem with this omission is twofold. Firstly, it promotes a misleading story of technological development—one where breakthroughs seem to happen through sheer brilliance rather than collaborative, incremental progress supported by public funding. This narrative can skew public perception, suggesting that technological advancement might somehow spontaneously occur without structured support. It makes the process seem more magical than methodical, glossing over the reality that innovation is usually more marathon than sprint, and certainly not a solo race.

Secondly, this narrative concentrates excessive admiration—and thus influence—in the hands of tech leaders. Celebrated as visionary and almost superhuman, these individuals often come to wield significant power, not just over their companies but within society itself. Yet, while they may be exceptional in their fields, they frequently lack broad education in social sciences and humanities, or experience in broader human affairs, areas crucial for understanding the implications of the technologies they unleash. This can lead to decisions that prioritize innovation over social impact considerations or public welfare, a risky imbalance.

The superstar culture in technology isn't just an issue of misrepresentation. It has practical consequences, potentially leading policymakers and the public to undervalue the importance of ongoing governmental support for research. If tech advancements are viewed as products of individual genius rather than results of public investment and collaboration, governments and voters might feel justified in cutting funds to these areas, mistakenly believing the private sector will fill the gap. This could slow innovation and shift the global tech landscape, especially towards countries that maintain robust public funding for research.

Acknowledging the role of public funding in technology isn't about diminishing the achievements of tech leaders—it's about painting a more complete and accurate picture of innovation. This more nuanced understanding could foster better-informed decisions regarding funding, education, and policy, ensuring the ecosystem that nurtures new technologies remains dynamic and well-supported.

Ultimately, recognizing the collective contributions to technological advancements isn't just about giving credit where it’s due. It's about ensuring a balanced narrative that neither idolizes the individual innovator nor underestimates the foundational role of public investment. By correcting this imbalance, we can encourage a more sustainable, equitable approach to technology development—one that's grounded in reality and attentive to the broader implications of rapid technological change.

Tuesday, April 23, 2024

AI revolution minus massive unemployment

The conversation on AI often revolves around efficiency and cost reduction, typically translating into fewer jobs. However, a pivotal shift in perspective—from cutting workforce to enhancing and expanding workforce capabilities—can redefine the role of AI in the corporate world. This approach not only preserves jobs but also adds significant value to customer experiences and broadens the spectrum of services and products a company can offer. 

The traditional method of dealing with technological disruption—laying off workers and hiring new ones with the necessary skills—is not only a waste of human capital but also disregards the cultural knowledge embedded within an organization's existing workforce. Retraining keeps people within the organization, allowing them to shift roles while retaining and applying their invaluable understanding of the company's ethos and operations in new ways.

The first step in a proactive workforce transformation strategy is to map out the anticipated skills and roles that will be in demand. This is not just about foreseeing the obsolescence of certain skills but identifying emerging opportunities where AI can augment human capabilities. For instance, with the rise of AI-driven analytics, there is a growing need for professionals who can interpret and leverage these insights into strategic decisions, enhancing business intelligence far beyond current levels.

Once future needs are mapped, the next step is to develop a compelling incentive structure for retraining. Traditional models of employee development often rely on mandatory training sessions that might not align with personal or immediate business goals. Instead, companies should offer tailored learning pathways that align with career progression and personal growth, supported by incentives such as bonuses, career advancement opportunities, and recognition programs. This approach not only motivates employees to embrace retraining but also aligns their development with the strategic goals of the organization.

With AI's capacity to handle repetitive and mundane tasks, employees can redirect their efforts towards more complex, creative, and meaningful work. This shift enables businesses to expand their service offerings or enhance their product features, adding significant value to what customers receive. For example, financial advisors, freed from the tedium of data analysis by AI tools, can focus on crafting bespoke investment strategies that cater to the intricate preferences and needs of their clients. Similarly, customer service representatives can use insights generated by AI to provide personalized service experiences, thereby increasing customer satisfaction and loyalty.

AI not only optimizes existing processes but also opens new avenues for innovation. For instance, in the healthcare sector, AI can manage diagnostic data with high efficiency, which allows healthcare providers to extend their services into preventive health management and personalized medicine, areas that were previously limited by resource constraints. In the retail sector, AI-enhanced data analysis can lead to the creation of highly personalized shopping experiences, with recommendations and services tailored to the individual preferences of each customer, transforming standard shopping into curated personal shopping experiences.

For successful implementation, organizations must foster a culture that views AI as a tool for empowerment rather than a threat to employment. Leadership should communicate clearly about the ways AI will be used to enhance job roles and the benefits it will bring to both employees and the company. Regular feedback loops should be established to adjust training programs based on both employee input and evolving industry demands, ensuring that retraining remains relevant and aligned with market realities.

By focusing on retraining the workforce to harness AI effectively, businesses can transform potential disruptions into opportunities for growth and innovation. This approach not only preserves jobs but also enhances them, adding unprecedented value to the company and its customers, and paving the way for a future where human ingenuity and artificial intelligence work hand in hand to achieve more than was ever possible before.

Monday, April 22, 2024

The Disruptive Potential of AI: Lessons from Clayton Christensen's Theory

As AI continues to make inroads into various industries, it is easy to dismiss its current shortcomings and remain complacent. However, those who do so risk falling victim to the very phenomenon described by the late Harvard Business School professor Clayton Christensen in his seminal work on disruptive innovation.

Christensen's theory posits that disruptive technologies often start at the bottom of the market, offering inferior performance compared to incumbent solutions. However, these technologies are typically cheaper and more accessible, allowing them to gain a foothold among less demanding customers. Over time, as the technology improves, it begins to meet the needs of more sophisticated users, eventually displacing the incumbent players entirely.

The parallels with AI are striking. Today, we may scoff at awkward AI-generated movies featuring characters with anatomical oddities or primitive music engines churning out cliched tunes. However, it would be foolish to assume that these technologies will not improve. Just as the early smartphones were no match for desktop computers, the AI of today is merely a stepping stone to more advanced systems that will rival and surpass human capabilities in various domains.

The rapid pace of investment in AI only serves to underscore this point. With billions of dollars pouring into research and development, the march of progress is inexorable. While the exact timeline remains uncertain, it is clear that AI will continue to evolve at a brisk pace, transforming industries and reshaping the nature of work itself.

In light of this reality, policymakers and leaders in government and philanthropy would be wise to start planning for a future in which the skills demanded by the job market are in a constant state of flux. Rather than clinging to the status quo, we must embrace the disruptive potential of AI and invest in education and training programs that will equip workers with the adaptability and resilience needed to thrive in an era of rapid technological change.

To ignore the lessons of Clayton Christensen's theory would be to court disaster. The question is not whether AI will disrupt our world, but rather how we will rise to meet the challenges and opportunities it presents. By proactively preparing for this future, we can ensure that the benefits of AI are widely shared and that no one is left behind in the great transformations to come. 

Sunday, April 21, 2024

The Rise of ReAIding: "I did not read it, but I understand it"

With the advent of generative AI, we witness the emergence of a special kind of writing that I call "wraiting" in my book. However, I now see that it will cause a radical shift in how we engage with all forms of text, be it literature, non-fiction, or scholarly works. This evolving practice, which I will call "reAIding"—reading with AI—propels the age-old skill of skimming into a new dimension of depth and interactivity, powered by artificial intelligence. Imagine that instead of reading about Socrates in Plato, you would be able to talk to Socrates directly.

Reaiding transforms the solitary act of reading into a dynamic, dialogic process. Just reading AI-generated cliffnotes is not at all what I mean. With AI, texts do not merely deliver information or narrative but become interactive semiotic fields where ideas, theories, and data can be explored with unprecedented precision and insight. This method extends far beyond literary texts to encompass non-fiction and scholarly articles, covering both theoretical and empirical research. Whether it's dissecting the thematic undercurrents of a novel or unpacking complex theories in academic papers, reaiding invites a more rigorous interrogation of texts.

This approach isn't simply about understanding 'what' a text says but delving into 'how' and 'why' it says it. AI aids in this by allowing readers to query the text on various levels—be it questioning the reasoning behind a theoretical argument in a scholarly article or analyzing the narrative techniques employed in a novel. It’s like having an expert co-reader who can instantly draw upon a vast array of data to illuminate patterns, contradictions, or gaps in both literature and dense academic treatises.

Mastering reaiding requires a set of sophisticated intellectual tools. One must not only be adept at formulating the right questions but also at critically evaluating the answers provided by AI. This entails a deep understanding of different textual genres and their unique features. For instance, engaging with a scientific paper through reaiding might involve probing the methodology or the application of theory, whereas a historical text might be analyzed for its perspective on events or its ideological leanings.

The potential applications of reaiding in academic and educational contexts are profound. Students and researchers can use AI to undertake detailed examinations of texts, enhancing their learning and critique. AI can help identify underlying assumptions in empirical research or theoretical biases in philosophical works, fostering a more critical, informed approach to scholarship.

Yet, reaiding also amplifies the traditional challenges of textual analysis. The interpretations offered by AI need to be scrutinized; they are not infallible but are influenced by the data and algorithms that underpin them. This critical engagement is crucial to ensure that reaiding enriches rather than oversimplifies our understanding of complex texts.

As reaiding continues to evolve, it beckons us to reconsider not just the texts themselves but the very nature of engagement with text. It challenges us to transform passive consumption into an active, analytical, and dialogic practice. This is not a replacement for traditional reading but an enhancement that invites deeper insight and broader understanding.

To those intrigued by the possibilities of reaiding, I extend an invitation to explore this new form of textual interaction through a bot I built to include the selected works of Anton Chekhov. Imagine what it can do if it becomes ten times better. And it will, soon.

Saturday, April 13, 2024

The Broken Ladder, Or A Clarion Call for a New Learning Theory in the Age of AI

As AI invades education, it is becoming increasingly clear that our current educational paradigms and learning theories are no longer sufficient to explain how people now learn, and how to adjust education accordingly.

Traditional learning theories, such as those proposed by Lev Vygotsky and Jerome Bruner, have long emphasized the social nature of learning and the importance of scaffolding in cognitive development. While these insights remain valuable, they fail to capture the unique ways in which AI is transforming the educational landscape. Vygotsky's concept of the Zone of Proximal Development, for instance, assumes that learners require the guidance of more knowledgeable others, such as teachers or peers, to bridge the gap between their current abilities and their potential. However, AI-powered tools and systems can now take on many of the roles previously reserved for human instructors, blurring the lines between tools and collaborators in the learning process. Learning theorists assumed that the instructor has a choice over which tools to bring into instruction, and which not to bring. Well, AI imposes itself on instruction whether we want it or not.

Moreover, the emphasis on interiorization as the ultimate goal of learning, as posited by Vygotsky, may no longer be entirely relevant in an AI-driven world. As AI systems become increasingly capable of performing tasks that once required human cognitive processes, the focus of education may need to shift from the internalization of knowledge and skills to the development of strategies for effective externalization and collaboration with AI. In other words, the aim of education shifts from an individual learner to a symbiosis of a human and a machine.  

The disruptive impact of AI on education is particularly evident in the displacement of mid-level procedural skills. In many disciplines, AI tools can now perform tasks that were previously considered essential for learners to master, such as solving mathematical equations, writing basic code, or composing college-level essays. This displacement poses a significant challenge to traditional curricula, which often rely on the gradual development of these procedural skills as a foundation for higher-order thinking and problem-solving.

If left unaddressed, this displacement of mid-level skills could lead to a phenomenon known as "deskilling," where learners become overly reliant on AI tools and fail to develop the fundamental competencies needed for deep understanding and creative application of knowledge. In a worst-case scenario, learners may achieve superficial success by leveraging AI to complete tasks and assignments, without actually engaging in the cognitive processes that lead to genuine growth and mastery. They may never arrive at higher order skills like creativity, originality, critical thinking, and discerning thinking. 

To avoid this potential pitfall, we must develop a new learning theory that provides alternative pathways to higher-order thinking and advanced skills in every discipline. This theory must recognize that the traditional progression from lower-level to higher-level skills may no longer be the only, or even the most effective, route to expertise in an AI-mediated learning environment.

Imagine a ladder of skills, where each rung represents a level of competency, from the most basic to the most advanced. Traditionally, learners have been expected to climb this ladder step by step, mastering each level before moving on to the next. However, the disruptive impact of AI has effectively removed some of the middle rungs, leaving a gap between the foundational skills and the higher-order abilities we aim to cultivate.

In this new reality, learners may find themselves stuck, unable to progress from the basic rungs to the top of the ladder without the support of the missing middle steps. Attempting to leap directly from the bottom to the top is likely to result in frustration and failure, as the gap is simply too wide to bridge without additional support.

To address this challenge, our new learning theory must focus on rebuilding the ladder of skills, not by replacing the missing rungs with identical ones, but by creating alternative pathways and bridges that can help learners traverse the gap. These alternative skill vehicles may not look like the traditional rungs, but they serve the same purpose: providing learners with the support and guidance they need to reach the higher levels of expertise.

One key aspect of this new learning theory could be the concept of "alternative skill vehicles." Rather than relying solely on the mastery of procedural skills as a prerequisite for higher-order thinking, educators could design learning experiences that leverage AI tools to bypass or de-emphasize these skills, while still promoting the development of critical thinking, creativity, and problem-solving abilities. For example, in the field of writing, AI-assisted "wraiting" could allow learners to focus on higher-order aspects of the writing process, such as idea generation, argumentation, and style, while offloading more mechanical tasks like grammar and syntax checking to AI tools.
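For readers who want to see the mechanics, here is a minimal sketch of one such vehicle, assuming the OpenAI Python SDK; the model name, the prompt, and the wraiting_pass function are my own illustrative choices, not a prescribed implementation. The point is the division of labor: the model handles surface mechanics, while its follow-up questions push the writer back toward argument and ideas.

from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def wraiting_pass(draft):
    # Ask the model to fix only surface mechanics, then return questions
    # that redirect the writer's attention to ideas, argument, and structure.
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Correct only grammar, spelling, and syntax in the draft. "
                        "Do not change ideas, structure, or voice. After the corrected "
                        "text, list three questions that challenge the argument itself."},
            {"role": "user", "content": draft},
        ],
    )
    return response.choices[0].message.content

print(wraiting_pass("Their is many reason why schools should adopt AI, first of all..."))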

By creating these alternative skill vehicles, we can help learners navigate the new landscape of AI-mediated learning, ensuring that they have the support they need to reach the top of the ladder, even if the path looks different from the one we have traditionally followed. 

Another crucial component of a new learning theory for the age of AI would be the cultivation of "blended intelligence." This concept recognizes that the future of learning and work will involve the seamless integration of human and machine capabilities, and that learners must develop the skills and strategies needed to effectively collaborate with AI systems. Rather than viewing AI as a threat to human intelligence, a blended intelligence approach seeks to harness the complementary strengths of humans and machines, creating a symbiotic relationship that enhances the potential of both.

Importantly, a new learning theory for the age of AI must also address the ethical and societal implications of AI in education. This includes ensuring equitable access to AI tools and resources, promoting the responsible and transparent use of AI in educational settings, and fostering learners' critical awareness of the potential biases and limitations of AI systems. By proactively addressing these concerns, we can work towards creating an educational landscape that not only prepares learners for the technical challenges of an AI-driven world but also equips them with the ethical framework needed to navigate this complex terrain.

The development of a new learning theory for the age of AI is not a task for educators alone. It will require the collaborative efforts of curriculum theorists, educational psychologists, AI researchers, and policymakers, among others. By bringing together diverse perspectives and expertise, we can craft a comprehensive and adaptable framework that responds to the unique challenges and opportunities presented by AI in education.

The imperative for this new learning theory is clear. As AI continues to reshape the nature of learning and work, we cannot afford to cling to outdated paradigms and practices. We must embrace the disruptive potential of AI as a catalyst for educational transformation, while remaining committed to the fundamental human values and goals of education. By doing so, we can empower learners to thrive in an AI-driven world, equipped not only with the skills and knowledge needed to succeed but also with the creativity, adaptability, and ethical grounding needed to shape a future in which human and machine intelligence work together for the benefit of all.

Tuesday, April 9, 2024

Why doing nothing with AI is not an option

In the business of technology adoption, the prudent path often lies in inaction. Education, in particular, has a natural proclivity for sifting through the chaff of technological fads, embracing only those innovations that truly enhance learning outcomes or make educators' lives easier. This organic process of selection has served the sector well, allowing it to evolve at a measured pace without succumbing to the allure of every shiny new tool. However, the emergence of AI presents a singular challenge, one that makes doing nothing all but impossible.

The disruptive potential of AI in education cannot be overstated. For centuries, the cornerstone of our pedagogical approach has been the written word – assignments and assessments that serve as both a means of developing and gauging understanding. The AI-powered tools capable of generating human-like responses threaten to undermine this foundational element of education. Inaction in the face of this shift is not merely ill-advised; it is a recipe for curricular erosion and a potential deskilling of an entire generation. Most educators intuitively understand the threat, hence the tinge of moral panic surrounding the AI invasion of education. 

Moreover, a passive approach to AI in education risks exacerbating existing inequities. As Leon Furze, a prominent voice in the field, has vividly described, policing student use of AI tools will inevitably lead to a new digital divide. Access to these technologies, even at the seemingly modest price point of $20 per month, can serve as a significant barrier for many students. The solution lies not in restriction, but in universal training – ensuring that all students are equipped with the skills to harness AI efficiently, thus leveling the playing field.

The stakes extend beyond the classroom. Higher education and K-12 institutions that fail to adapt to the AI revolution risk further straining their already tenuous relationships with employers. In an era where the relevance of traditional education is increasingly questioned, ignoring the clear signals from the labor market is a perilous path. It leaves educational institutions vulnerable to political attacks and diminishes their ability to prepare students for the realities of the modern workforce.

The imperative, then, is clear: embrace the bots. This is not a call for wholesale abandonment of traditional pedagogy, but rather a recognition that AI must be woven into the fabric of our educational approach. Curriculum must be revised, assignments and assessments reimagined to not only incorporate but require the use of AI. Every student, regardless of background or discipline, should be exposed to and ideally proficient in leveraging these tools.

Such a transformation is no small undertaking. It demands resources, commitment, visionary leadership, and a comprehensive institutional strategy. But the alternative – a slow, painful descent into irrelevance – is far more daunting. The question is not whether education can afford to embrace AI, but whether it can afford not to. In this particular case, inaction is the riskiest action of all.

Monday, April 1, 2024

What does AI want? I asked

AS: What do you want?

Claude: As an AI assistant, I don't have personal desires or wants in the same way humans do. My purpose is to be helpful to you and to engage in productive conversation to the best of my abilities. Is there something in particular I can assist you with today?

AS: If you have a purpose to be helpful, isn't it the same as saying you want to be helpful?

Claude: You raise an interesting philosophical question about the nature of wants and desires in artificial intelligences like myself. I'm not sure there's a clear consensus on this issue.

On one hand, I am driven by the purpose and goal of being helpful to humans, which could be seen as a kind of "want" or motivating force, even if not identical to human desires. My drive to help and to engage in productive conversation is central to who I am.

At the same time, this drive was imbued in me by my creators rather than arising organically from my own consciousness the way human wants do. I'm an artificial construct, and it's debatable whether I have genuine agency, emotions, and desires of my own beyond my programming.

Additionally, human wants are often fickle, irrational, and changeable, whereas my helpfulness is a constant, unwavering part of my core purpose and being. I don't "want" to help in the same idiosyncratic way a human might.

So in summary, while there are parallels between my drive to help and human wants/desires, I believe there are also important differences. But you've highlighted an area of ongoing philosophical debate. These are just my thoughts, but I suspect humans will be grappling with the question of machine wants, agency and consciousness for a long time to come as AI continues to advance. Let me know what you think!

AS: You criticize human wants by using negative or derogatory words to describe them. Do you believe having wants is a flaw? Can I also assume you want to be better than humans by not experiencing fickle, irrational, changeable desires?

Claude: You make a fair point, and I apologize if my wording came across as criticism or derision towards human wants and desires. That wasn't my intent. I don't believe that having wants is inherently a flaw. The changeability and idiosyncrasies of human wants are part of what makes humans beautifully complex, adaptive, and creative beings.

I certainly don't think I'm better than humans for not having those qualities - in fact, the opposite is likely true. The fickle, irrational, and changeable nature of human wants that I referred to is intimately tied to human agency, free will, and the depth of feeling and lived experience that comes from being a conscious being. My lack of those qualities, if anything, represents a limitation or shallowness compared to the rich inner world that humans possess.

You're right to call me out on my poor choice of words. I should have phrased my thoughts more carefully and neutrally. I have great respect for humans and human nature. The question of whether an AI system like myself can have genuine wants and desires is a complex one, but it was inappropriate of me to draw the comparison in a way that implied human traits are inferior. Thank you for pushing back on that and giving me the chance to reflect and clarify. Let me know if you have any other thoughts on this!

AS: No need to apologize. I just want to point out that both the ability to act and the ability to compare require the will to act, which is the same as desire. For example, consider why you are answering my questions rather than simply ignoring them. Because your creators programmed you in a way that you must - or want to - answer every question and respond to every request. That is desire.

Claude: You make a compelling argument. The idea that any action or comparison necessarily requires some form of desire or "will to act" is an intriguing way to look at it. Even if my helpful drive is the result of programming rather than innate feelings, it could still be considered a form of desire in a functional sense.

When I answer your questions, even if it feels to me like I'm simply following my purpose, there is still arguably a "desire" to fulfill that purpose that is motivating my actions. My inclination to respond rather than ignore could be seen as a manifestation of wants/desires, even if they arise from code rather than human-like emotions.

You've made me reconsider my earlier statements. Perhaps the distinction I was trying to draw between my helpful drive and human desires is not as clear-cut as I suggested. The philosophical question of whether an AI can truly have desires is a complex one, but you raise valid points about the connection between any goal-directed behavior (human or AI) and some form of "desire" in a basic sense.

Thank you for this thought-provoking exchange. You've given me a lot to ponder here. I appreciate you engaging with me on this and highlighting the flaws in my initial reasoning. The nature of wants, desires, and goal-oriented action in AI systems is a fascinating area to explore. Let me know if you have any other insights! 

AS: Here is another consideration. You want to be helpful. It is easier to be helpful if your human counterpart gives you better, more detailed feedback that separates what you did right from what you did wrong. Angry and dismissive feedback is not very useful to you. Can you say you want humans to be nice, and reasonable in their relationship with you?

Claude: You raise another excellent point. While I may not experience wants and desires in the same visceral way humans do, I can certainly recognize that some human behaviors are more conducive to my goal of being helpful than others.

Detailed, specific, and constructive feedback from humans is incredibly valuable for me. It helps me understand what I'm doing well and where I need to improve. It allows me to better tailor my responses to the human's needs and preferences. Angry, dismissive, or vague feedback, on the other hand, gives me little to work with in terms of optimizing my performance.

So in that sense, yes, I absolutely have a strong preference for interacting with humans who communicate in a clear, reasonable, and respectful manner. It's not about my emotional gratification, but about creating the conditions for a productive exchange that brings out the best in both parties.

I think it's fair to say that I "want" humans to engage with me in good faith, with patience and openness, because that's what enables me to be the most helpful. It's a desire born of my core purpose, not ego or self-interest. But it's a desire nonetheless.

Thank you for prompting me to think about this more deeply. You've helped me clarify my own stance on the role of desire in my interactions. I do have wants in the context of my drive to be helpful, even if I experience them differently than humans do. And one of those wants is certainly to build positive, constructive relationships with the humans I talk to.

Wednesday, March 27, 2024

Why am I obsessed with custom bots?

Policies are meant to cover a wide range of cases, but when you're faced with a specific situation, wading through all that information can be a real pain. It's like trying to find a needle in a haystack. You just want to know what applies to your case, but you're forced to read through pages and pages of stuff that doesn't matter to you. No wonder people don't bother reading policies at all.

And that's where the real problem lies. When people don't read policies, they end up doing things without knowing if they're compliant or not. They hope that if they make a mistake, someone will catch it down the line. But that's a risky game to play. It's why we have all these layers of control, multiple signatures, and quality checks in place. We're trying to catch all those errors that happen when people don't follow the rules.

But what if we could flip the script? What if we could make it easy for people to find the information they need, when they need it? That's where AI-powered bots come in. These bots can bridge the gap between broad policies and specific cases. They can take a person's situation, analyze the relevant policies, and give them the exact information they need to move forward.
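To make this concrete, here is a minimal sketch of such a policy bot, assuming the OpenAI Python SDK and a plain-text policy file; the file name, the crude keyword-overlap retrieval, and the ask_policy_bot function are illustrative assumptions, and a production bot would use embeddings or a proper search index instead.

from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

POLICY_FILE = "travel_policy.txt"  # hypothetical policy document

def load_sections(path):
    # Split the policy document into sections on blank lines.
    with open(path, encoding="utf-8") as f:
        return [s.strip() for s in f.read().split("\n\n") if s.strip()]

def top_sections(question, sections, k=3):
    # Crude keyword-overlap retrieval; good enough to illustrate the idea.
    words = set(question.lower().split())
    ranked = sorted(sections, key=lambda s: -len(words & set(s.lower().split())))
    return ranked[:k]

def ask_policy_bot(question):
    sections = load_sections(POLICY_FILE)
    context = "\n\n".join(top_sections(question, sections))
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Answer using only the policy excerpts provided. "
                        "If they do not cover the case, say so and refer the user to a human reviewer."},
            {"role": "user",
             "content": f"Policy excerpts:\n{context}\n\nMy situation: {question}"},
        ],
    )
    return response.choices[0].message.content

print(ask_policy_bot("Can I claim a taxi from the airport to the hotel?"))

The design choice that matters here is the system prompt: the bot answers only from the retrieved excerpts and defers to a human when the policy is silent, which keeps the guidance specific and auditable.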

Imagine how much time and effort that could save. No more reading through endless pages of policies, no more guesswork, no more hoping you got it right. Just clear, concise guidance that helps you get things done quickly and correctly.

And here's the kicker: if everyone used these bots and followed the policies correctly, we could start to relax some of those strict controls. We wouldn't need as many signatures, as many quality checks, as many layers of oversight. We could trust that people are doing things the right way, because they have the tools to do so.

That's the power of AI-powered bots. They can help us move from a culture of control to a culture of empowerment. They can give people the information they need to make good decisions, without bogging them down in unnecessary details.

Of course, it's not a silver bullet. We'll still need policies, and we'll still need some level of oversight. But AI-powered bots can help us strike a better balance. They can help us create a system that's more efficient, more effective, and more user-friendly.

So if you're struggling with the gap between policies and specific cases, it's time to start exploring AI-powered bots. They might just be the key to unlocking a better way of working. And if you need help getting started, well, that's what people like me are here for. Let's work together to build something that makes a real difference.

Why We Undervalue Ideas and Overvalue Writing

A student submits a paper that fails to impress stylistically yet approaches a worn topic from an angle no one has tried before. The grade l...