Wednesday, July 24, 2024

What percentage of my text is AI-generated?

Go ahead, ask me the question. But in turn, I would ask you to specify which of the following kinds of AI assistance you are interested in:

  1. Distilling information into summaries
  2. Revamping and recasting content
  3. Polishing grammar, spelling, and punctuation
  4. Sparking ideas and crafting titles
  5. Conjuring additional arguments or perspectives
  6. Spotting potential counterarguments or objections
  7. Constructing and organizing content
  8. Juxtaposing points from multiple sources
  9. Scrutinizing and refining existing content
  10. Demystifying complex ideas or jargon
  11. Architecting outlines and organizational structures
  12. Fashioning examples or illustrations
  13. Tailoring content for different audiences or formats
  14. Forging hooks or attention-grabbing openings
  15. Sculpting strong conclusions or calls-to-action
  16. Unearthing relevant quotes or citations
  17. Decoding concepts in simpler terms
  18. Fleshing out brief points or ideas
  19. Trimming verbose text
  20. Honing clarity and coherence
  21. Smoothing the flow between paragraphs or sections
  22. Concocting metaphors or analogies
  23. Verifying and authenticating information
  24. Proposing synonyms or alternative phrasing
  25. Pinpointing and eliminating redundancies
  26. Diversifying sentence variety and structure
  27. Maintaining consistency in tone and style
  28. Aligning content with specific style guides
  29. Devising keywords for SEO optimization
  30. Assembling bullet points or numbered lists
  31. Bridging sections with appropriate transitions
  32. Flagging areas that need more elaboration
  33. Accentuating key takeaways or main points
  34. Formulating questions for further exploration
  35. Contextualizing with background information
  36. Envisioning visual elements or data representations
  37. Detecting potential areas of bias or subjectivity
  38. Inventing catchy titles or headlines
  39. Streamlining the logical flow of arguments
  40. Boosting text engagement and persuasiveness
  41. Rooting out and rectifying logical fallacies
  42. Imagining hypothetical scenarios or case studies
  43. Illuminating alternative perspectives on a topic
  44. Weaving in storytelling elements
  45. Uncovering gaps in research or argumentation
  46. Producing counterexamples or rebuttals
  47. Bolstering weak arguments
  48. Harmonizing tense and voice inconsistencies
  49. Composing topic sentences for paragraphs
  50. Integrating data or statistics effectively
  51. Devising analogies to explain complex concepts
  52. Injecting humor or wit
  53. Eradicating passive voice usage
  54. Compiling topic-specific vocabulary lists
  55. Enhancing paragraph transitions
  56. Untangling run-on sentences
  57. Articulating thesis statements or main arguments
  58. Infusing content with sensory details
  59. Resolving dangling modifiers
  60. Conceiving potential research questions
  61. Incorporating rhetorical devices
  62. Rectifying pronoun inconsistencies
  63. Anticipating potential counterarguments
  64. Embedding anecdotes effectively
  65. Mending comma splices
  66. Drafting potential interview questions
  67. Sprinkling in cultural references
  68. Correcting subject-verb agreement errors
  69. Designing potential survey questions
  70. Adorning text with figurative language
  71. Repositioning misplaced modifiers
  72. Brainstorming potential titles for sections or chapters
  73. Integrating expert opinions
  74. Paring down wordiness
  75. Exploring potential subtopics
  76. Weaving in statistical data
  77. Eliminating tautologies
  78. Coining potential taglines or slogans
  79. Embedding historical context
  80. Untangling mixed metaphors
  81. Developing potential FAQs and answers
  82. Incorporating scientific terminology
  83. Fixing split infinitives
  84. Generating potential discussion points
  85. Blending in technical jargon
  86. Expunging clichés
  87. Crafting potential calls-to-action
  88. Inserting industry-specific terms
  89. Replacing euphemisms
  90. Extracting potential pullout quotes
  91. Interweaving mathematical concepts
  92. Eliminating redundant phrasing
  93. Compiling potential glossary terms and definitions
  94. Introducing philosophical concepts
  95. Standardizing formatting
  96. Curating potential appendix content
  97. Incorporating legal terminology
  98. Clarifying ambiguous pronouns
  99. Cataloging potential index terms
  100. Synthesizing interdisciplinary perspectives
  101. Writing long list of AI uses for content generation



Saturday, July 20, 2024

The Three Wave Strategy of AI Implementation

Whether it's a university, a tech giant, a manufacturing company, a public utility, or a government agency, the complexity of operations can be overwhelming. To illustrate this point, Claude and I have generated a list of over 1,150 workflows typical for a large university, many of which can be further broken down into smaller, more specific processes.

Given this complexity, the question arises: Where do we start with AI implementation? The answer lies in a strategic, phased approach that considers the unique characteristics of each workflow and the organization's readiness for AI adoption.

The First Wave: Low-Hanging Fruit

The initial phase of AI implementation should focus on what we call the "low-hanging fruit" - workflows that meet three crucial criteria:

  1. Self-evident quality: The output quality is immediately obvious and doesn't require complex evaluation.
  2. Single-person control: The workflow is typically managed or executed by one individual.
  3. Ready-made AI tools: The process can be enhanced using existing AI tools without requiring specialized development - either by using one of the major LLMs directly or by building a custom bot on top of one.

These criteria help identify areas where AI can quickly and effectively augment human efforts, improving efficiency and potentially enhancing the quality of service provided. Based on these criteria, here's a priority list of workflows that could be considered for the first wave of AI implementation. These are just examples:

  1. Student services
    • Student and prospective student advising of all kinds
  2. Resume and Cover Letter Review (Career Services)
    • Offering individual resume critiques
    • Assisting with cover letter development
  3. Academic Policy Development and Enforcement (Academic Affairs)
    • Drafting and revising academic policies
  4. Health Education and Outreach (Health and Wellness Services)
    • Creating and distributing health education materials
  5. Sustainability Education and Outreach (Sustainability and Environmental Initiatives)
    • Creating sustainability guides and resources for campus community
  6. Digital Marketing and Social Media Management (University Communications and Marketing)
    • Creating and curating content for various platforms
  7. Grant Proposal Development and Submission (Research and Innovation)
    • Assisting faculty with proposal writing
  8. Financial Aid Counseling (Financial Aid and Scholarships)
    • Providing one-on-one counseling sessions
    • Offering debt management and financial literacy education
  9. Alumni Communications (Alumni Relations and Development)
    • Producing alumni magazines and newsletters
  10. Scholarly Communications (Library Services)
    • Supporting faculty in publishing and copyright issues
    • Providing guidance on research impact metrics
  11. International Student and Scholar Services (International Programs and Global Engagement)
    • Providing immigration advising and document processing

This first wave serves multiple purposes. It demonstrates the proof of principle, making more stakeholders comfortable with AI integration. It also helps build internal expertise and confidence in working with AI technologies. These early successes can pave the way for more ambitious implementations in the future.

The Second Wave: Tackling Costly Workflows

Once the organization has gained experience and confidence from the first wave, it can move on to more complex and costly workflows. These are typically processes that involve significant labor, occur frequently, and have a broad scope of impact on the organization. However, it is crucial to narrow down this list based on feasibility and readiness for AI implementation.

For instance, while teaching is undoubtedly one of the most labor-intensive and impactful processes in a university, we do not yet know how to make it significantly more efficient through AI. Some processes, like teaching, may never be fully optimized by AI because of their inherently relational nature. 

Note that this is also an opportunity to review major workflows: they often evolved haphazardly over the years and are far from efficient. AI can help audit these workflows and recommend ways to streamline them. And of course, AI can be integrated into actually doing the work. 

The Third Wave: Enterprise-Level Solutions

Only after successfully navigating the first two waves should an organization consider enterprise-level AI solutions. These solutions have the potential to radically redefine the organization's core operations, placing AI at the center of its processes. This level of integration requires a deep understanding of AI capabilities, a clear vision of the organization's future, and a robust infrastructure to support AI-driven operations. Most importantly, it requires specialized tools and a high level of security. 

The Timeline and Exceptions

This phased approach to AI implementation is not a quick process. For most large, complex organizations, it could take a couple of decades to fully realize the potential of AI across all workflows. However, there are exceptions. Some businesses with simpler and fewer workflows, such as narrowly specialized customer service operations, may be able to leapfrog straight into the third wave, especially if they have prior experience with AI technologies.

But these are the exceptions rather than the rule. For the majority of organizations, the path to comprehensive AI implementation requires a well-thought-out strategy, clear priorities, and a focus on building confidence and expertise over time.

Integrating AI into a complex organization's workflows is a marathon, not a sprint. It asks for patience, strategic thinking, and a willingness to learn and adapt. The key is to approach this journey with a clear strategy, well-defined priorities, and a commitment to building internal AI expertise. 


Wednesday, July 17, 2024

AI is not going to implement itself, but governments can help

The AI hype has passed, and the overexcited futurists' voices are mercifully fading away. We're now entering a practical era where AI is leveraged to boost productivity in businesses, non-profits, and public organizations. This shift brings a sobering realization: AI integration requires a meticulous, pragmatic approach to build reliable and trustworthy systems. It's a lot of work and requires some strategy.

When a single person manages a well-defined workflow, integrating AI is relatively straightforward. It's easy to incorporate AI tools like ChatGPT or Claude to assist with ad copy, reports, or applications. The beauty of these scenarios lies in their simplicity - the user acts as both operator and quality controller, immediately judging the output's effectiveness.

However, the story changes dramatically when we shift to multi-user workflows or more complex processes, where both inputs and outputs are more of a collective responsibility. I recently spoke with an Accounts Payable team who posed a challenging question: "Yes, we can see that AI can help review travel claims, but can you guarantee it's going to be 100% accurate?" I couldn't provide that guarantee; I don't have time to conduct a hundred tests, and I don't even have access to a hundred travel reports. They emphasized their need for completely audit-proof outcomes. This conversation highlighted the trust issues that arise when moving from AI enthusiasts to skeptics in larger organizations. And organizations should have a healthy group of skeptics to remain viable.
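The hundred-tests problem the Accounts Payable team raised is, at bottom, a simple measurement task: run the AI on a batch of claims that humans have already reviewed and count the agreements. Here is a minimal sketch of what such a validation harness could look like; all claim data and labels below are hypothetical illustrations, not real audit results.

```python
# A minimal sketch of validating an AI reviewer against human decisions.
# All labels here are hypothetical; a real audit would use hundreds of
# human-reviewed claims, not ten.

def accuracy(ai_labels, human_labels):
    """Fraction of claims where the AI's decision matches the human reviewer's."""
    assert len(ai_labels) == len(human_labels)
    matches = sum(a == h for a, h in zip(ai_labels, human_labels))
    return matches / len(ai_labels)

# Hypothetical decisions on 10 travel claims ("approve" / "flag"):
human = ["approve", "approve", "flag", "approve", "flag",
         "approve", "approve", "flag", "approve", "approve"]
ai    = ["approve", "approve", "flag", "flag",    "flag",
         "approve", "approve", "flag", "approve", "approve"]

print(f"Agreement: {accuracy(ai, human):.0%}")  # 9 of 10 match -> 90%
```

The point is not that the code is hard; it is that assembling a trustworthy labeled sample - a hundred real travel reports, reviewed by real auditors - is exactly the expensive, duplicated work that no single team wants to fund on its own.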

I've also recently been a fly on the wall during discussions between healthcare executives and a U.S. lawmaker. The executives explained that each AI-assisted medical procedure needs validation, which is expensive and often duplicated across multiple hospital systems. This challenge extends beyond healthcare. For instance, when using AI to crunch data in all organizations, we need to understand its reliability in analyzing large datasets, cleaning them, and handling outliers.

The problem is that no private institution can conduct the kind of comprehensive testing and validation needed to establish trust in AI systems across various industries. We cannot seriously trust the claims of startups trying to sell a specialized product to an industry or a government organization. And it is not clear how a hypothetical private validation service would monetize such an endeavor.

This is where I believe government involvement becomes crucial. Instead of obsessing over deepfakes and ethics, this is what governments should be doing. Governments can collaborate with industry experts to develop standardized benchmarks for AI reliability and performance. They could establish certification programs that act as quality marks, assuring users that AI systems have undergone rigorous testing. Moreover, government funding could support businesses, NGOs, and government agencies in conducting extensive AI testing, especially benefiting smaller organizations lacking the necessary resources.

In my view, public-private partnerships are key to navigating these challenges. By leveraging expertise from both sectors, we can develop robust testing frameworks and create dependable AI systems. This approach would pave the way for more efficient and innovative workflows across industries, ensuring that the benefits of AI are realized while maintaining trust and reliability. 

Monday, July 15, 2024

Effort in Learning: The Good, the Bad, and the AI Advantage

Many educators argue that AI makes learning too easy, suggesting that students need to apply effort to truly learn. This perspective, however, confuses the notion of effort with the process of learning itself. The belief that every kind of effort leads to learning overlooks a significant aspect of cognitive psychology: the nature and impact of cognitive load.

Cognitive load theory, developed by John Sweller, offers a crucial framework for understanding how students learn. It posits that the human brain has a limited capacity for processing information. Sweller distinguished between three types of cognitive load: intrinsic, extraneous, and germane. Intrinsic cognitive load is inherent to the task itself. For instance, solving a complex mathematical problem has a high intrinsic load due to the complexity of the content. Germane cognitive load, on the other hand, refers to the mental resources devoted to processing, construction, and automation of schemas, which are structures that help solve problems within a specific domain. 

The most problematic, however, is extraneous cognitive load. This type of load is not related to the task but to the way information is presented or to the extraneous demands placed on learners. High extraneous cognitive load can distract and stunt learning, making it harder for students to engage meaningfully with the material. For example, a poorly designed textbook that requires constant cross-referencing can add unnecessary cognitive load, detracting from the student's ability to learn. A terrible lecture or busy-work assignments do the same. If you think that every effort by a student is valuable, you are a hazer, not a teacher.

The challenge, therefore, is not to eliminate all effort but to ensure that the effort students exert is directed towards productive ends. In other words, we need to reduce extraneous cognitive load and increase germane cognitive load. The true aim is to leverage AI to enhance germane cognitive load, directly aiding in the acquisition of schemas necessary for solving discipline-specific problems.

Every academic discipline has core problems that students are expected to solve by the end of their programs. The first step is to mercilessly clean the language of learning outcomes from wishy-washy jargon and focus on these fundamental problems. By identifying these top-level problems, educators can better understand the sequences of skills and knowledge students need to acquire.

Once these core problems are identified, it is crucial to examine how professionals in the field solve them. This involves a detailed analysis of the mental schemas that experts use. Schemas are cognitive structures that allow individuals to organize and interpret information. They enable professionals to recognize patterns, make decisions, and solve problems efficiently. For example, a doctor has schemas for diagnosing illnesses based on symptoms and test results, while an engineer has schemas for designing structures that withstand specific stresses. It is also very important to understand whether the field itself is changing: do professionals already solve these problems with AI, or will they soon? 

AI can play a pivotal role in helping students develop these schemas. These technologies can identify where a student is struggling and provide targeted support, ensuring that cognitive resources are directed towards germane learning activities rather than being wasted on extraneous tasks.

To achieve this, we need to revisit the basic principles of instructional design. While these principles remain fundamentally the same, they require new thinking in light of AI capabilities. Instructional design should focus on reducing extraneous cognitive load by simplifying the learning environment and minimizing distractions. Simultaneously, it should increase germane cognitive load by providing challenging and meaningful tasks that promote the construction of schemas.

Moreover, educators need to recognize where cognitive load is not useful and should focus exclusively on the germane kind. This might mean redesigning courses to incorporate AI tools that can automate routine tasks, provide instant feedback, and offer complex, real-world problems for students to solve. Such an approach ensures that students are engaged in deep, meaningful learning activities rather than busywork.

Ad summam, the integration of AI in education is not about making learning easier in a superficial sense. It is about making learning more effective by ensuring that students' cognitive resources are directed towards activities that genuinely promote understanding and skill acquisition. By focusing on germane cognitive load and leveraging AI to support instructional design, we can create learning environments that foster deep, meaningful learning and prepare students to solve the complex problems of their disciplines. This calls for a rigorous rethinking of educational practices and a commitment to harnessing AI's potential to enhance, rather than hinder, the learning process.


Tuesday, July 9, 2024

AI-Positive Pedagogy: Navigating the Great Disruption

AI has disrupted the educational landscape. This disruption threatens the established sequence of skill development, from simple to mid-range to higher-level skills, by eroding traditional curriculum principles, particularly in the realm of student activities and assessment. As a profession, we face a critical decision: limit AI use or develop an AI-positive pedagogy.

While limiting AI use may seem tempting, it is ultimately unfeasible and fails to prepare students for the AI-infused world they will live in. Attempting to enforce strict limitations on AI use is not only impractical but also fails to acknowledge the potential benefits that AI can bring to education.

The only plausible path forward is to develop a new pedagogy that harnesses the power of AI for the benefit of our students. This involves a shift towards authentic, discipline-specific assessments that mirror real-world applications of AI within various fields. By focusing on how AI is used in different disciplines, educators can create assessments that evaluate students' ability to effectively utilize AI tools in relevant contexts.

AI-positive pedagogy emphasizes the cultivation of higher-order thinking skills, such as prompt engineering and discerning thinking. Prompt engineering involves crafting effective queries and instructions for AI systems, while discerning thinking emphasizes the critical evaluation of AI-generated information and the ability to make informed decisions by combining AI insights with human judgment. These meta-AI skills are crucial for students to navigate and thrive in an AI-populated world.

AI-positive pedagogy should prepare students to work effectively in environments where human and artificial intelligence coexist and complement each other. By fostering skills in collaborating with AI systems, understanding the strengths of both human and artificial intelligence, and developing strategies for distributed problem-solving, educators can equip students to succeed in the AI-infused workplace.

The shift towards AI-positive pedagogy is well-rooted in past pedagogy and curriculum theory. Educators have long prioritized conceptual and higher-level skills over mechanical and procedural knowledge. The disruption caused by AI may serve as a catalyst for educators to finally achieve what they have been striving for over the past century.

As we embrace AI-positive pedagogy, it is essential to re-evaluate the assumption that all effort leads to learning. Cognitive Load Theory suggests that learning can be optimized by managing the three types of cognitive load: intrinsic (inherent complexity of the learning material), extraneous (caused by ineffective instructional design), and germane (effort required to process and construct mental schemas). In the context of AI-positive pedagogy, this involves using AI tools to provide appropriate support and scaffolding as learners progress from lower-level to higher-level skills, while minimizing extraneous load and promoting germane load. Not all loss of effort by students is bad. If we are honest, much of learning work is extraneous, busy, or compliance/submission work anyway. By investigating the limits and structure of leapfrogging - skipping over mid-range skills to move from basic literacies and numeracies to creative, theoretical, and critical thinking - educators can harness the power of AI to accelerate student growth.

To develop a robust AI-positive pedagogy, educators and cognitive psychologists must collaborate to investigate how students interact with and perceive AI tools - alone or under a teacher's guidance. This research should focus on understanding the mental models students develop when engaging with AI, and how these models differ from those associated with other educational tools. By exploring students' cognitive processes, researchers can identify the unique challenges and opportunities presented by AI in the learning environment.

It is also crucial to examine the emotional and motivational factors that influence students' engagement with AI tools. Understanding how students' attitudes, beliefs, and self-efficacy impact their willingness to adopt and effectively use AI in their learning can inform the design of AI-positive pedagogical strategies.

In addition to investigating student cognition and affect, researchers should also explore the social and cultural dimensions of AI use in education. This includes examining how AI tools can be leveraged to promote collaborative learning, foster inclusive learning environments, and bridge educational inequities.

To build a comprehensive AI-positive pedagogy, researchers and educators must also develop and validate practices for integrating AI into various disciplines and educational contexts. This involves creating guidelines for the use of AI in education, as well as establishing professional development programs to support educators in effectively implementing AI-positive pedagogical strategies.

The development of an evidence-based AI-positive pedagogy requires a concerted effort from the educational community. By investing in basic research, collaboration, and innovation, we can harness the potential of AI to transform education and empower students to thrive in an AI-infused world.


Wednesday, June 26, 2024

On AI Shaming

Here is a new thing: AI shaming. It is a practice where individuals accuse others of using artificial intelligence to generate written content, as if such an act were inherently deceitful or somehow sinful. How fascinating, the very premise of it.

This phenomenon reveals itself to be a fairly common logical fallacy: a summative dismissive argument, with a dash of ad hominem (ad machinam?) for flavor. One might wonder why the method of creation should overshadow the substance of the content. After all, we don't dismiss the works of authors who switched from quills to typewriters, do we? 

There's a certain Foucauldian quality to this practice, if one may be permitted a touch of theoryhead's indulgence. By attempting to draw a sharp line between 'acceptable' and 'unacceptable' means of content creation, some seem to be engaging in a subtle power play. It is almost as if they're trying to sell their unfamiliarity with newer technologies as a virtue. It is a rather clever way of elevating the game at which one excels, and putting down the game at which one fails. While an understandable inclination, it is still a folly. 

For those of us who have embraced these new tools, such accusations are about as concerning as a light drizzle on a summer day - which is to say, entirely expected and hardly worth mentioning. If anything, it provides a certain amusement to observe the lengths to which some will go to maintain the status quo and their privileged little spot in it. 

However, there is a more sobering concern to consider. While thick-skinned people like me might brush off such criticisms with a raised eyebrow, younger, more impressionable ones might internalize this arbitrary stigma. It would be a shame if the next generation felt compelled to hide their technological proficiency out of fear of Luddites' bullying.

As these AI tools inevitably become more sophisticated and ubiquitous, perhaps we might redirect our energy towards more productive ends. Instead of engaging in this curious form of digital fingerpointing, we could focus on the responsible and creative use of these technologies. After all, the ideas expressed within content will always be more intriguing than the means by which they were transcribed.

To those who persist in AI shaming: by all means, knock yourselves out. Your dedication to this cause is admirable, if somewhat perplexing. Just don't be too surprised if the rest of us seem a bit distracted - we'll be busy adapting to the future while you're perfecting your fingerwagging techniques.

P.S. This text, according to QuillBot, is 0% AI-generated, however I wrote it with Claude :)

How to bullshit about bullshit

Take a look at the paper "ChatGPT is bullshit," by Michael Townsen Hicks, James Humphries, and Joe Slater. Catchy title aside, the paper has a few problems of its own.

1. Focus on terminology over utility:

The paper spends considerable effort arguing that the outputs of large language models (LLMs) like ChatGPT should be classified as "bullshit" rather than "hallucinations." While this may be an interesting philosophical distinction, it arguably does little to advance our understanding of the practical implications and applications of these technologies. The authors could have devoted more attention to analyzing the actual utility and potential use cases of LLMs, which would likely be more valuable to policymakers and the public.

2. Limited practical insights:

By focusing primarily on categorizing the nature of LLM outputs, the paper misses an opportunity to provide meaningful insights into how these technologies can be effectively and responsibly utilized. A more constructive approach might involve examining specific contexts where LLMs prove useful and where they fall short, rather than broadly labeling their output as "bullshit."

3. Potential for misdirection:

While the authors argue that the term "hallucinations" is misleading, replacing it with "bullshit" may not necessarily lead to a clearer understanding of LLM behavior. Both terms are metaphorical and could potentially misguide readers about the true nature and capabilities of these systems. A more technical and nuanced explanation of how LLMs function and their limitations might be more informative.

4. Overlooking nuance:

The paper seems to take a binary approach – either LLMs are concerned with truth (which they argue against) or they are producing "bullshit." This oversimplification may overlook the nuanced ways in which LLMs can be useful for various tasks, even if they don't have an inherent concern for truth in the way humans do.

5. Lack of actionable recommendations:

While the authors critique the use of the term "hallucinations," they don't offer clear, actionable recommendations for how to better communicate about LLMs to policymakers and the public. A more constructive approach would be to propose specific ways to educate stakeholders about the capabilities and limitations of these technologies.

6. Missing the broader context:

By focusing narrowly on the philosophical categorization of LLM outputs, the paper misses an opportunity to discuss the broader implications of these technologies on society, economy, and various industries. A more comprehensive analysis of the impact and potential of LLMs would likely be more valuable to readers.

Do AI bots deceive?

The paper, Frontier Models are Capable of In-Context Scheming, arrives at a time when fears about AI’s potential for deception are increasi...