Monday, August 19, 2024

The Right to Leapfrog: Redefining Educational Equity in the Age of AI

AI’s potential in education is clear, particularly in how it can assist students who struggle with traditional learning methods. It is broadly accepted that AI can help bridge gaps in cognitive skills, whether due to dyslexia, ADHD, or other neurodiverse conditions. Yet, the utility of AI should not be confined to specific diagnoses. Insights from decades of implementing the Response to Intervention (RTI) framework reveal that regardless of the underlying cause—be it neurodiversity, trauma, or socioeconomic factors—the type of support needed by struggling students remains remarkably consistent. If AI can aid students with reading difficulties, why not extend its benefits to others facing different but equally challenging obstacles? Equity demands that AI’s advantages be made accessible to all who need them, regardless of the origin of their challenges.

This brings us to a deeper issue: the rigid and often unjust link between procedural and conceptual knowledge. Traditionally, lower-level skills like spelling, grammar, and arithmetic have been treated as prerequisites for advancing to higher-order thinking. The prevailing notion is that one must first master these basics before moving on to creativity, critical thinking, or original thought. However, this linear progression is more a product of tradition than necessity. AI now offers us the chance to reconsider this approach. Students should have the right to leapfrog over certain lower-level skills directly into higher-order cognitive functions, bypassing unnecessary barriers.

Predictably, this notion encounters resistance. Rooted in the Protestant work ethic is the belief that one must toil through the basics before earning the right to engage in more sophisticated intellectual activities. This ethic, which equates hard work on mundane tasks with moral worth, is deeply ingrained in our educational systems. However, in an age where AI can handle many of these lower-level tasks, this mindset seems increasingly obsolete. Insisting that all students must follow the same sequence of skills before advancing to higher-order thinking is not just misguided; it is a relic of a bygone era. If AI enables students to engage meaningfully with complex ideas and creative thinking from the start, we should embrace that opportunity rather than constrain it with outdated dogma.

The implications of this shift are significant. If we recognize the right to leapfrog over certain skills, we must also acknowledge that traditional educational hierarchies need to be re-examined. Skills like spelling and grammar, while valuable, should no longer be gatekeepers for students who excel in critical thinking and creativity but struggle with procedural details. AI offers a way to reimagine educational equity, allowing students to focus on their strengths rather than being held back by their weaknesses. Rather than forcing everyone to climb the same cognitive ladder, we can enable each student to leap to the level that aligns with their abilities, creating a more personalized and equitable educational experience.

This rethinking of educational equity challenges deeply rooted assumptions. The belief that hard work on the basics is necessary for higher-level achievement is pervasive, but it is not supported by evidence. In reality, cognitive development is driven more by engagement with complex ideas than by rote mastery of procedural skills. AI provides the tools to focus on these higher-order skills earlier in a student's education, without the traditional prerequisite of mastering lower-order tasks.

Moreover, the concept of “deskilling” is not new. Throughout history, humanity has continually adapted to technological advances, acquiring new skills while allowing others to fade into obscurity. Today, few people can track animals or make shoes from animal skin—skills that were once essential for survival. Even the ability to harness a horse, once a common necessity, is now a rare skill. While some may lament these losses, they are also a reminder that as society evolves, so too must our educational priorities. Just as technological advancements have rendered certain skills obsolete, AI is reshaping the skills that are most relevant today.

As we move forward, educators must rethink how learning experiences are designed. Rather than viewing AI as merely a tool for accommodating deficits, we should see it as a means of expanding possibilities for all students. By enabling learners to bypass certain skills that are no longer essential in an AI-driven world, we can better align education with the demands of the 21st century. This is about acknowledging that the path to learning does not have to be the same for everyone. In a world where AI can democratize access to higher-level cognitive tasks, the right to leapfrog is not just a possibility—it is a necessity for equitable education. 


Friday, August 9, 2024

Authorship, Automation, and Answerability

In the ongoing debate about the ethical use of AI, two main concerns stand out—one superficial and one profound. The first concern, often highlighted, is about the authenticity of authorship, with fears that AI-generated content might mislead us about who the true author is. However, this worry is largely misguided. It stems from a historically limited, Western-centric notion of authorship that blurs the line between the origin of ideas and the craft of their representation.

Take the legacy of Steve Jobs. He wasn’t celebrated for personally assembling each iPhone, but for his vision and design that brought the device to life. In our industrial world, the act of making things is not inherently authorial—designing them is. Why should it be any different with text, code, or images? If I designed this text and used advanced tools to produce it, why am I not still the author? The shock many feel toward AI’s ability to generate content is akin to the upheaval experienced by 19th-century bootmakers during the Industrial Revolution. Automation has simply extended its reach into the realms of writing, coding, and art. Craftsmanship gives way to automation, but the core principle remains: take pride in the ideas, not in the mechanics of their production. There is no inherent authorship in the latter.

But here is where Mikhail Bakhtin’s notion of answerability sharpens our understanding of the true ethical stakes. While responsibility is often about fulfilling obligations or being held accountable after the fact, answerability is about our ongoing, active engagement with the world and the people in it. It is not just about who gets credit for the content; it is about recognizing that every action, every word, and every piece of AI-generated content occurs within a web of relationships. We are answerable to others because our creations—whether authored by human hands or machine algorithms—affect them.

The real concern, then, lies in the issue of answerability. AI-generated content often appears polished, convincing, and ready for immediate consumption. This creates a dangerous temptation to release such content into the world without thorough scrutiny. Here is where the ethical stakes rise significantly. AI may produce work that looks and sounds credible, but this does not guarantee that it is unbiased, meaningful, or truthful. It may be garbage polluting the infosphere at best, or an outright harmful fake at worst. The ease of content creation does not absolve us of the responsibility to ensure its quality and integrity, and more importantly, it doesn’t free us from the answerability we have to the world around us.

This is the message we need to instill in our students, professionals, and anyone working with AI: you are still accountable and answerable for what you produce, even if a machine does the heavy lifting. Releasing AI-generated content without critical evaluation is akin to conjuring a spell without understanding its consequences. Like a magician wielding powerful but unpredictable magic, or a novice driver behind the wheel of a truck instead of a bicycle, the stakes have been raised. The tools at our disposal are more potent than ever, and with that power comes a heightened level of answerability.

In essence, the ethical debate surrounding AI should not be about the authorship of the craft but about the integrity and impact of the output. The real challenge is ensuring that what we create with these advanced tools is not only innovative but also responsible and answerable. As we continue to integrate AI into more aspects of our lives, we must focus less on who—or what—authored the content and more on the ethical implications of releasing it into the world. This is where the true ethical discourse lies, and it is here that our attention should be firmly fixed.


Thursday, August 8, 2024

The Cognitive Leap Theory

With the arrival of AI, education is experiencing a profound shift, one that requires a rethinking of how we design and implement learning activities. This shift is captured in the cognitive leap theory, which posits that AI is not just an add-on to traditional education but a transformative force that redefines the learning process itself. The cognitive leap theory is a core part of a larger AI-positive pedagogy framework.

Traditionally, educational activities have been structured around the original or revised Bloom’s Taxonomy, a framework that organizes cognitive skills from basic recall of facts (Remember) to higher-order skills like Evaluation and Creation. While Bloom’s pyramid was often interpreted as a sequential progression, Bloom himself never insisted on a strict hierarchy. In fact, with the integration of AI into the classroom, the importance of these skills is being rebalanced. The higher-order skills, particularly those involving critical evaluation, are gaining prominence in ways that were previously unimaginable.

In an AI-positive pedagogical approach, the focus shifts from merely applying and analyzing information—tasks typically associated with mid-level cognitive engagement—to critically evaluating and improving AI-generated outputs. This represents a significant cognitive leap. Instead of simply completing tasks, students are now challenged to scrutinize AI outputs for accuracy, bias, and effectiveness in communication. This shift not only fosters deeper cognitive engagement but also prepares students to navigate the complex landscape of AI-driven information.

A key component of this approach is the development of meta-AI skills. These skills encompass the ability to formulate effective (rich) inquiries or prompts for AI, to inject original ideas into these prompts, and, crucially, to critically assess the AI’s responses. This assessment is not a one-time task but part of an iterative loop where students evaluate, re-prompt, and refine until the output meets a high standard of quality. This process not only sharpens their analytical skills but also enhances their creative abilities, as they learn to think critically about the inputs and outputs of AI systems.
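
To make that loop concrete, here is a minimal sketch in Python of the evaluate, re-prompt, and refine cycle. It is an illustration only, not a prescribed tool: ask_model and score_output are hypothetical placeholders standing in for whatever LLM interface is used and for the student's own rubric-guided judgment.

    def ask_model(prompt: str) -> str:
        """Hypothetical wrapper around an LLM call; returns a draft response."""
        raise NotImplementedError("connect this to the model of your choice")

    def score_output(draft: str) -> tuple[float, str]:
        """Hypothetical stand-in for the student's evaluation: a quality score
        between 0 and 1 plus a short critique (accuracy, bias, clarity)."""
        raise NotImplementedError("this judgment is the meta-AI skill itself")

    def refine(task_prompt: str, threshold: float = 0.8, max_rounds: int = 5) -> str:
        draft = ask_model(task_prompt)
        for _ in range(max_rounds):
            score, critique = score_output(draft)
            if score >= threshold:  # good enough: stop iterating
                break
            # Fold the critique back into a new prompt and try again.
            draft = ask_model(f"{task_prompt}\n\nRevise your previous answer. Critique: {critique}")
        return draft

The point is not the code but the shape of the activity: evaluation sits inside the loop, not at the end of it.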

Moreover, the traditional view that learning progresses linearly through Bloom’s Taxonomy is being upended. In the AI-enhanced classroom, evaluation and creation are no longer the endpoints of learning but are increasingly becoming the starting points. Students must begin by evaluating AI-generated content and then proceed to improve it, a process that requires a deep understanding of context, an awareness of potential biases, and the ability to communicate effectively. This reordering of cognitive priorities is at the heart of the cognitive leap theory, which emphasizes that the future of education lies in teaching students not just to perform tasks but to engage in higher-order thinking at every stage of the learning process.

The implications of this shift are serious. Educators must rethink how they design assignments, moving away from traditional task-based assessments toward activities that challenge students to evaluate and improve upon AI-generated outputs. This requires a new kind of pedagogy, one that is flexible, iterative, and deeply engaged with the possibilities and limitations of AI.

By reimagining the role of higher-order thinking skills and emphasizing the critical evaluation of AI outputs, we can prepare students for a future where cognitive engagement is more important than ever. This is not just about adapting to new technology; it is about transforming the way we think about learning itself. 


Thursday, August 1, 2024

Meet Jinni, a Universal Assistant Bot

On a busy campus with 30,000 students and hundreds of faculty and staff, managing everyday tasks and emergencies can be tricky. Imagine a universal bot, named Jinni, designed to assist everyone with whatever they need to get done. Here’s a glimpse into how this could transform daily life on campus.

Take Dr. Nguyen, for instance. A junior professor with a packed schedule, she was just invited to present at a conference in Milan but wasn't sure how to get funding. She turned to Jinni.
"Good afternoon, Professor Nguyen. What do you need today?" Jinni asked.
"I want to attend a conference in Milan. Can I get support?" she inquired.

Jinni quickly scanned the institutional website and the financial data warehouse and responded, "In your College, it takes a request from your Associate Dean. There is still some travel budget left, but you need to hurry. However, if it’s not a peer-reviewed conference and you’re not presenting, I wouldn't bother—the College's policy does not allow for this."

It added, "If you’d rather tell me the details about the conference and upload the invitation letter, I can file the request for you. Or, you can follow the link and do it yourself."

Professor Nguyen appreciated the options and the clarity, and chose to upload her details, letting Jinni handle the rest. Within a minute, Jinni said, "Done. You should hear from the dean's office within a week. I already checked your eligibility and recommended that the Associate Dean approve the request."

Then there was Mr. Thompson, a new staff member who discovered a puddle in the lobby after a rainy night. He pulled out his phone and described the situation to Jinni.

"You need to file an urgent facilities request. Here’s the link. Would you like me to file one for you? If yes, take a picture of the puddle," Jinni offered. "But if it’s really bad, you may want to call them. Do you want me to dial?"

Mr. Thompson opted for the latter, and within moments, Jinni had connected him to the facilities team.

Finally, there was Jose, a student who had missed the course drop deadline because of a bad flu. Anxious and unsure what to do, he asked Jinni for help. 

"Sorry to hear you’ve been sick. Jose. Yes, there is a petition you can file with the Registrar," Jinni replied. "I can do it for you, but I need a few more details. Do you have a note from your doctor? If not, you should get it first, then take a picture of it for me. If you used the Campus Health Center, I can contact them for you to request documentation. I will then write and submit the petition on your behalf. I will also need a few details - which class, the instructore's name, when you got sick, etc." Jose was relieved to find a straightforward solution to his problem and began to answer Jinni's questions one by one. 

The technology to create a universal agent bot like Jinni is not yet on the open market, but all the elements already exist as prototypes. More advanced customizable AI models, trained on extensive and diverse datasets, are essential to handle such tasks. More active, agentic AI also exists: it can file and submit forms, not just find them. But even a tool that could simply find and interpret policies and procedures, and point users to the right forms, would already be a huge step forward.
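
As a thought experiment, a stripped-down version of that "find and interpret the policy, then point to the right form" step might look like the Python sketch below. Everything in it is assumed for illustration: the Policy records, the example URLs, and the crude keyword matcher, which a real assistant would replace with retrieval over institutional documents plus an LLM to interpret them.

    from dataclasses import dataclass

    @dataclass
    class Policy:
        title: str
        summary: str
        form_url: str

    # A tiny hand-made index standing in for a campus-wide document store.
    POLICY_INDEX = [
        Policy("Faculty conference travel funding",
               "Requests go through your Associate Dean while travel budget remains.",
               "https://example.edu/forms/travel-request"),
        Policy("Late course drop petition",
               "Requires documentation of illness and the instructor's name.",
               "https://example.edu/forms/drop-petition"),
        Policy("Facilities work order",
               "Urgent hazards such as leaks should also be phoned in.",
               "https://example.edu/forms/facilities"),
    ]

    def search_policies(query: str) -> Policy | None:
        """Naive keyword overlap; a real assistant would use embeddings and an LLM."""
        query_words = {w for w in query.lower().split() if len(w) > 3}
        for policy in POLICY_INDEX:
            if query_words & set(policy.title.lower().split()):
                return policy
        return None

    def answer(query: str) -> str:
        policy = search_policies(query)
        if policy is None:
            return "I could not find a matching procedure; let me route you to a person."
        return f"{policy.summary} You can start here: {policy.form_url}"

    print(answer("How do I get funding to travel to a conference?"))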

Simplifying and streamlining the hundreds of procedures that any complex organization develops is definitely possible, but we know of few examples of successful transformations like that. The next best thing is to use AI to help people navigate those procedures. This will lower barriers for all and reduce transactional costs.


Monday, July 29, 2024

AI is an Amateur Savant

Most people who use AI think it is great in general but believe it does not grasp their area of specialization very well. As an applied philosopher, I create intellectual tools to help others think through their problems. I find AI excellent at clarifying and explaining ideas, but it has never generated an original idea worth writing about. I have yet to see reports from others in any discipline that AI has independently produced groundbreaking ideas.

AI can handle large amounts of data and provide coherent, accurate responses across various fields. This ability is comparable to a well-informed amateur who has a broad understanding but lacks deep expertise. AI can recount historical facts, explain scientific principles, and offer legal insights based on data patterns, yet it falls short in deeper, more nuanced analysis.

In my case, AI can assist by summarizing existing theories or offering possible objections or additional arguments. However, it lacks the ability to generate a genuinely novel idea. I use it a lot, and not once has it produced anything like that. This limitation stems from its reliance on pre-existing data and patterns, preventing it from achieving the level of innovation that human professionals bring to their fields. Some believe that this limitation will soon be overcome, but I do not think so. It seems to be an intrinsic limitation, a function of the way AI is trained.

Professionals/experts, whether in philosophy, medicine, or history, possess a depth of understanding developed through extensive education and practical experience. They apply complex methodologies, critical thinking, and ethical considerations that AI cannot replicate. A doctor considers the patient's history and unique implications of treatments, while a professional historian places events within a broader socio-cultural context. AI, despite its capabilities, often misses these subtleties. It is, in some sense, a savant: a fast, amazing, but inexperienced thinker.

The gap between a capable amateur and a professional/expert might seem small, especially from the point of view of the amateur. However, it is huge and is rooted in the depth of expertise, critical thinking, and the ability to judge that professionals possess; it is a function of intellect, experience, and education. This gap is where educators should look to adapt the curriculum.

In education, we should focus on that gap between the amateur and the professional, conceptualize it as the ultimate learning outcome, and then build new skill ladders to climb there. Students need to understand and conquer the gap between AI and a professional expert. These meta-AI skills are our true north. AI can support this learning process by providing clear explanations and diverse perspectives, but it cannot replace the nuanced understanding and innovation that human professionals offer.


Wednesday, July 24, 2024

What percentage of my text is AI-generated?

Go ahead, ask me the question. However, I would in turn ask you to specify which of the following kinds of assistance from AI you are interested in.  

  1. Distilling information into summaries
  2. Revamping and recasting content
  3. Polishing grammar, spelling, and punctuation
  4. Sparking ideas and crafting titles
  5. Conjuring additional arguments or perspectives
  6. Spotting potential counterarguments or objections
  7. Constructing and organizing content
  8. Juxtaposing points from multiple sources
  9. Scrutinizing and refining existing content
  10. Demystifying complex ideas or jargon
  11. Architecting outlines and organizational structures
  12. Fashioning examples or illustrations
  13. Tailoring content for different audiences or formats
  14. Forging hooks or attention-grabbing openings
  15. Sculpting strong conclusions or call-to-actions
  16. Unearthing relevant quotes or citations
  17. Decoding concepts in simpler terms
  18. Fleshing out brief points or ideas
  19. Trimming verbose text
  20. Honing clarity and coherence
  21. Smoothing the flow between paragraphs or sections
  22. Concocting metaphors or analogies
  23. Verifying and authenticating information
  24. Proposing synonyms or alternative phrasing
  25. Pinpointing and eliminating redundancies
  26. Diversifying sentence variety and structure
  27. Maintaining consistency in tone and style
  28. Aligning content with specific style guides
  29. Devising keywords for SEO optimization
  30. Assembling bullet points or numbered lists
  31. Bridging sections with appropriate transitions
  32. Flagging areas that need more elaboration
  33. Accentuating key takeaways or main points
  34. Formulating questions for further exploration
  35. Contextualizing with background information
  36. Envisioning visual elements or data representations
  37. Detecting potential areas of bias or subjectivity
  38. Inventing catchy titles or headlines
  39. Streamlining the logical flow of arguments
  40. Boosting text engagement and persuasiveness
  41. Rooting out and rectifying logical fallacies
  42. Imagining hypothetical scenarios or case studies
  43. Illuminating alternative perspectives on a topic
  44. Weaving in storytelling elements
  45. Uncovering gaps in research or argumentation
  46. Producing counterexamples or rebuttals
  47. Bolstering weak arguments
  48. Harmonizing tense and voice inconsistencies
  49. Composing topic sentences for paragraphs
  50. Integrating data or statistics effectively
  51. Devising analogies to explain complex concepts
  52. Injecting humor or wit
  53. Eradicating passive voice usage
  54. Compiling topic-specific vocabulary lists
  55. Enhancing paragraph transitions
  56. Untangling run-on sentences
  57. Articulating thesis statements or main arguments
  58. Infusing content with sensory details
  59. Resolving dangling modifiers
  60. Conceiving potential research questions
  61. Incorporating rhetorical devices
  62. Rectifying pronoun inconsistencies
  63. Anticipating potential counterarguments
  64. Embedding anecdotes effectively
  65. Mending comma splices
  66. Drafting potential interview questions
  67. Sprinkling in cultural references
  68. Correcting subject-verb agreement errors
  69. Designing potential survey questions
  70. Adorning text with figurative language
  71. Repositioning misplaced modifiers
  72. Brainstorming potential titles for sections or chapters
  73. Integrating expert opinions
  74. Paring down wordiness
  75. Exploring potential subtopics
  76. Weaving in statistical data
  77. Eliminating tautologies
  78. Coining potential taglines or slogans
  79. Embedding historical context
  80. Untangling mixed metaphors
  81. Developing potential FAQs and answers
  82. Incorporating scientific terminology
  83. Fixing split infinitives
  84. Generating potential discussion points
  85. Blending in technical jargon
  86. Expunging clichés
  87. Crafting potential calls-to-action
  88. Inserting industry-specific terms
  89. Replacing euphemisms
  90. Extracting potential pullout quotes
  91. Interweaving mathematical concepts
  92. Eliminating redundant phrasing
  93. Compiling potential glossary terms and definitions
  94. Introducing philosophical concepts
  95. Standardizing formatting
  96. Curating potential appendix content
  97. Incorporating legal terminology
  98. Clarifying ambiguous pronouns
  99. Cataloging potential index terms
  100. Synthesizing interdisciplinary perspectives
  101. Writing a long list of AI uses for content generation



Saturday, July 20, 2024

The Three Wave Strategy of AI Implementation

Whether it's a university, a tech giant, a manufacturing company, a public utility, or a government agency, the complexity of operations can be overwhelming. To illustrate this point, Claude and I have generated a list of over 1,150 workflows typical for a large university, many of which can be further broken down into smaller, more specific processes.

Given this complexity, the question arises: Where do we start with AI implementation? The answer lies in a strategic, phased approach that considers the unique characteristics of each workflow and the organization's readiness for AI adoption.

The First Wave: Low-Hanging Fruit

The initial phase of AI implementation should focus on what we call the "low-hanging fruit" - workflows that meet three crucial criteria:

  1. Self-evident quality: The output quality is immediately obvious and doesn't require complex evaluation.
  2. Single-person control: The workflow is typically managed or executed by one individual.
  3. Ready-made AI tools: The process can be enhanced using existing AI tools without requiring specialized development, either by using one of the primary LLMs or by building a custom bot.

These criteria help identify areas where AI can quickly and effectively augment human efforts, improving efficiency and potentially enhancing the quality of service provided. Based on these criteria, here's a priority list of workflows that could be considered for the first wave of AI implementation. These are just examples (a rough triage sketch follows the list):

  1. Student services
    • Student and prospective student advising of all kinds
  2. Resume and Cover Letter Review (Career Services)
    • Offering individual resume critiques
    • Assisting with cover letter development
  3. Academic Policy Development and Enforcement (Academic Affairs)
    • Drafting and revising academic policies
  4. Health Education and Outreach (Health and Wellness Services)
    • Creating and distributing health education materials
  5. Sustainability Education and Outreach (Sustainability and Environmental Initiatives)
    • Creating sustainability guides and resources for campus community
  6. Digital Marketing and Social Media Management (University Communications and Marketing)
    • Creating and curating content for various platforms
  7. Grant Proposal Development and Submission (Research and Innovation)
    • Assisting faculty with proposal writing
  8. Financial Aid Counseling (Financial Aid and Scholarships)
    • Providing one-on-one counseling sessions
    • Offering debt management and financial literacy education
  9. Alumni Communications (Alumni Relations and Development)
    • Producing alumni magazines and newsletters
  10. Scholarly Communications (Library Services)
    • Supporting faculty in publishing and copyright issues
    • Providing guidance on research impact metrics
  11. International Student and Scholar Services (International Programs and Global Engagement)
    • Providing immigration advising and document processing
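
For readers who want the triage made explicit, here is a rough Python sketch of filtering a workflow inventory against the three first-wave criteria. The workflow names and the yes/no judgments are illustrative assumptions; in practice, each judgment would come from the people who own the workflow.

    from dataclasses import dataclass

    @dataclass
    class Workflow:
        name: str
        self_evident_quality: bool   # output quality is obvious without complex evaluation
        single_person_control: bool  # managed or executed by one individual
        ready_made_tools: bool       # an existing LLM or a custom bot can already help

    def first_wave(inventory: list[Workflow]) -> list[Workflow]:
        """Keep only workflows that meet all three criteria."""
        return [w for w in inventory
                if w.self_evident_quality and w.single_person_control and w.ready_made_tools]

    inventory = [
        Workflow("Resume and cover letter review", True, True, True),
        Workflow("Classroom teaching", False, False, False),
        Workflow("Grant proposal drafting support", True, True, True),
    ]

    for workflow in first_wave(inventory):
        print(workflow.name)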

This first wave serves multiple purposes. It demonstrates the proof of principle, making more stakeholders comfortable with AI integration. It also helps build internal expertise and confidence in working with AI technologies. These early successes can pave the way for more ambitious implementations in the future.

The Second Wave: Tackling Costly Workflows

Once the organization has gained experience and confidence from the first wave, it can move on to more complex and costly workflows. These are typically processes that involve significant labor, occur frequently, and have a broad scope of impact on the organization. However, it is crucial to narrow down this list based on feasibility and readiness for AI implementation.

For instance, while teaching is undoubtedly one of the most labor-intensive and impactful processes in a university, we do not yet have sufficient knowledge on how to make it significantly more efficient through AI. Some processes, like teaching, may never be fully optimized by AI because of their inherently relational nature.

Note that this is also an opportunity to review major workflows; they have often evolved over the years and are far from efficient. AI can help review these workflows and recommend ways to streamline them. And of course, AI can be integrated into actually doing the work.

The Third Wave: Enterprise-Level Solutions

Only after successfully navigating the first two waves should an organization consider enterprise-level AI solutions. These solutions have the potential to radically redefine the organization's core operations, placing AI at the center of its processes. This level of integration requires a deep understanding of AI capabilities, a clear vision of the organization's future, and a robust infrastructure to support AI-driven operations. Most importantly, it requires specialized tools and a high level of security.

The Timeline and Exceptions

This phased approach to AI implementation is not a quick process. For most large, complex organizations, it could take a couple of decades to fully realize the potential of AI across all workflows. However, there are exceptions. Some businesses with simpler and fewer workflows, such as narrowly specialized customer service operations, may be able to leapfrog straight into the third wave, especially if they have prior experience with AI technologies.

But these are the exceptions rather than the rule. For the majority of organizations, the path to comprehensive AI implementation requires a well-thought-out strategy, clear priorities, and a focus on building confidence and expertise over time.

Integrating AI into a complex organization's workflows is a marathon, not a sprint. It demands patience, strategic thinking, and a willingness to learn and adapt. The key is to approach this journey with a clear strategy, well-defined priorities, and a commitment to building internal AI expertise.
