Thursday, December 7, 2023

A case against prompt engineering in education

Do we give students examples of great prompts, or do we allow them to struggle with developing their own prompting skills? This dilemma is common amongst educators integrating AI into their pedagogical strategies.

Refining prompts is a pivotal vehicle for cognitive advancement. It fosters growth by nudging students to navigate beyond their current capabilities. A meticulously crafted ready-made prompt, while yielding impressive results, might overshoot a student's zone of proximal development. The essence of learning lies in recognizing and rectifying flaws in the output. In other words, giving students a great prompt to begin with may produce a result whose flaws are painfully obvious to the instructor but completely invisible to the students. When students are handed sophisticated prompts, there is a risk of them becoming passive users, merely applying these tools without understanding or growth. Jack Dougal has provided some empirical evidence of this, and one of my colleagues will hopefully present similar results soon.

The general principle should be to calibrate potential outputs to a level where students can discern imperfections, and to ENCOURAGE them to look for those imperfections, guiding them to be critical of the output. Just because the text sounds good and the grammar is perfect does not mean the text is good. This approach fosters active engagement with the learning material, prompting students to question, adapt, and evolve their understanding. It is akin to guiding someone through a labyrinth: the instructor's role is to provide just enough light to help them find their way, without illuminating the entire path.

In the educational sphere, the prompt industry's role is contentious. While it offers a plethora of ready-made prompts, enhancing efficiency, this convenience comes at a cost to cognitive development. In academia, the journey of crafting and refining prompts is crucial for fostering critical thinking and problem-solving skills.

On the research front, the prompt industry does contribute valuable insights, empirically testing and refining prompts to optimize AI interactions. I was delighted to learn about the chain-of-thought approach, for example. However, a significant portion of the prompts available on the market are of dubious quality. These prompts, lacking empirical validation, are frequently oversold in their capabilities. The indiscriminate use of such untested prompts can result in suboptimal outcomes, reinforcing the necessity for a discerning approach to their adoption and application.
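To make that concrete, here is a minimal sketch of the idea behind chain-of-thought prompting: rather than asking for an answer directly, the prompt asks the model to reason step by step before answering. The arithmetic question below is an invented example, not taken from any particular study.

```python
# A minimal sketch of chain-of-thought prompting (invented example task).
# The only difference is the instruction appended to the question, which
# asks the model to show its reasoning before stating the final answer.

plain_prompt = (
    "A class of 28 students splits into groups of 4. How many groups are there?"
)

cot_prompt = (
    "A class of 28 students splits into groups of 4. How many groups are there? "
    "Let's think step by step, then state the final answer."
)

print(plain_prompt)
print(cot_prompt)
```

Note that the refinement here is small, explicit, and testable, which is exactly the kind of contribution the research side of prompting has been good at documenting, in contrast to the untested prompts sold on the market.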

The overarching promise of AI lies in its potential to democratize content creation: it is designed to comprehend natural, imperfect language and to provide equitable access to all, regardless of mastery of writing mechanics, disability, or fluency in the dominant language. This vision is threatened by attempts to monopolize and professionalize access to AI, a trend that runs counter to the very ethos of the technology. The notion that one must know 'magic words' to effectively communicate with AI is a form of self-interested deception. It undermines the inclusive and accessible nature of AI, turning it into a gated community where knowledge is unfairly hoarded rather than shared. Vigilance against such practices is essential to preserve the integrity and egalitarian promise of AI, ensuring it remains a tool for empowerment and collective advancement rather than a vehicle for exclusion and profiteering.
