Monday, December 18, 2023

The prevailing mood of the discussion about AI in Education

This is the continuation of the analysis of the data set described in the previous blog, just with a different question. Obviously, I do not endorse the bot's opinions or conclusions.

Can you assess the prevailing mood based on the speech patterns?

GPT

The prevailing mood in the digital ethnography of educators' discussions about AI in the classroom can be assessed through an analysis of the speech patterns and thematic content:

  1. Curiosity and Experimentation: There is a strong sense of curiosity and willingness to experiment with AI tools in the educational context. Educators are actively seeking information, sharing experiences, and exploring novel applications of AI in teaching various subjects, including special needs education. This exploratory tone suggests a generally positive and open attitude towards AI in education.
  2. Cautious Optimism: While there's an evident enthusiasm for the potential of AI, it's tempered with caution. Educators express concerns about privacy, data security, and the ethical implications of using AI tools, indicating a balanced view where optimism coexists with awareness of potential risks.
  3. Collaborative Engagement: The interactions reflect a collaborative mood among educators. They are keen on sharing insights, tips, and experiences, indicating a supportive community environment. This collaborative tone is indicative of a positive and community-oriented approach towards problem-solving and knowledge sharing.
  4. Practical Focus: There's a pragmatic focus on how AI can be utilized effectively in teaching and administrative tasks. Educators discuss practical applications such as grading, writing recommendation letters, and creating lesson plans, demonstrating a goal-oriented and practical mood towards AI integration.
  5. Concern and Vigilance: Amidst the enthusiasm, there's a discernible mood of concern, particularly regarding data privacy and the ethical use of AI. Educators are vigilant about the implications of AI use, suggesting a mood of responsibility and conscientiousness in adopting new technologies.
  6. Desire for Professional Growth: The discussions reveal a desire for professional development and a deeper understanding of AI tools. This indicates a mood of ambition and eagerness for growth and learning within the educator community.

In summary, the prevailing mood in the conversation is one of cautious optimism, characterized by a blend of enthusiasm and curiosity about AI's possibilities in education, tempered by a responsible and vigilant approach to its challenges and implications. This mood is underpinned by a collaborative spirit and a practical focus on leveraging AI for educational enhancement.

Can you give a list of the 10 most original ideas?

GPT

Based on the digital ethnography of the educators' discussion, the following can be considered the ten most original ideas for AI application in education:

  1. AI-Powered Grading Assistance: Using AI to assist in grading student work, with a focus on providing detailed feedback and reducing the time educators spend on this task.
  2. Historical Chatbots for Interactive Learning: Employing AI chatbots to simulate historical figures or situations, enhancing engagement and understanding in subjects like history and social studies.
  3. AI for Writing Recommendation Letters: Utilizing AI to draft initial versions of recommendation letters, thereby streamlining the process while maintaining personalization and authenticity.
  4. AI-Generated Teaching Materials and Lesson Plans: Leveraging AI to create customized teaching materials and lesson plans, particularly for language teaching and online education.
  5. Chatbots for Special Needs Education: Exploring the use of AI chatbots in special needs education to provide tailored learning experiences and support.
  6. AI in Early Childhood Education: Investigating how AI tools can be integrated into teaching fundamental subjects like English and math to young learners in grade 1.
  7. AI for Enhancing In-Character Writing: Using AI to provide detailed historical context and background information to students, enabling them to write more authentically from the perspective of historical or fictional characters.
  8. AI-Driven Professional Development for Teachers: Offering professional development programs focused on AI tools, addressing ethical concerns, cheating prevention, and detection of AI-generated work in class.
  9. AI for Creating Educational Visuals: Using AI to generate visual aids like logos or images for educational purposes, such as team shirts or classroom decorations.
  10. AI in Digital Tech Teaching: Incorporating AI teaching in the curriculum for digital technology classes, introducing K-6 students to the basics of AI and its applications.

These ideas reflect a broad spectrum of innovative approaches to integrating AI into various aspects of education, demonstrating the potential of AI to transform and enhance the learning and teaching experience.


AI perceptions among educators

The Method:

We copied the last 30 threads from the ten largest Facebook groups on AI and education. The document, over 1,000 pages long (142,558 words), was uploaded to ChatGPT with a request to perform sentiment analysis. The groups are:

  • ChatGPT for Teachers & Educators
  • AI For Teachers
  • ChatGPT for Educators
  • AI for Teachers
  • The AI Classroom
  • ChatGPT for Teachers
  • Higher Ed discussions of AI writing
  • AI Teacher Resources with ChatGPT, Canva, & Midjourney
  • Higher Ed Learning Collective
  • AI and ChatGPT in Education

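The corpus size quoted above can be reproduced with a short script. This is a minimal sketch, assuming the scraped threads were saved to a single plain-text file; the filename and the words-per-page figure are illustrative assumptions, not part of the original method:

```python
# Count words and estimate page length for a scraped-thread corpus.
# "threads.txt" is a hypothetical filename for the combined document.

def corpus_stats(text: str, words_per_page: int = 140) -> dict:
    """Return the word count and an estimated page count for a text corpus."""
    words = text.split()  # simple whitespace tokenization, as word processors roughly do
    return {
        "words": len(words),
        "pages_estimate": round(len(words) / words_per_page, 1),
    }

if __name__ == "__main__":
    with open("threads.txt", encoding="utf-8") as f:
        print(corpus_stats(f.read()))
```

At roughly 140 words per page, a 142,558-word document comes out to just over 1,000 pages, consistent with the figure above.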
The results are below:

“Here’s a summary of the key sentiments expressed:

  1. Privacy and Data Security Concerns: Educators express concerns about privacy issues related to grading student work using AI tools like ChatGPT, citing issues such as data anonymization and the potential for data breaches. Some educators acknowledge the measures taken to anonymize data, while others suggest looking into AI companies that handle security well.
  2. Regulatory Compliance: Awareness of regional differences in data privacy regulations, such as GDPR in Europe, influences educator attitudes towards AI in education. Some educators express understanding of why companies are concerned about data privacy compliance, while others are less worried personally.
  3. Potential Risks and Misuse: There is a discussion about the potential risks associated with AI, including the re-surfacing of data and the reliability of AI outputs. Concerns are raised about AI’s potential to propagate misinformation and its impact on the educational process.
  4. Practical Applications and Benefits: Educators discuss practical uses of AI, such as assisting in grading and providing feedback, and generating teaching materials and lesson plans. AI is seen as a tool that can save time and enhance the learning experience.
  5. AI for Personalized Education: AI is considered useful for tasks like writing recommendation letters and creating personalized reports, highlighting its potential in catering to individual needs.
  6. Enhancing Creativity and Engagement: Some educators are exploring innovative ways to use AI for creative projects and enhancing student engagement, such as creating chatbots for educational purposes and using AI for art creation.
  7. Concerns About Ethical Implications and Academic Integrity: There are discussions about ethical considerations, such as the impact of AI on cheating and academic integrity. Educators ponder how AI might affect the nature of assessment and student work.
  8. Need for Professional Development and Policy Frameworks: The importance of professional development for educators to understand and effectively use AI is emphasized. There are requests for AI use policy templates and guidance on integrating AI into educational practices.
  9. Diverse Perspectives and Experiences: The sentiment among educators varies widely, with some expressing excitement and optimism about AI’s potential in education, while others show caution and skepticism. This diversity highlights the need for continued dialogue and exploration of AI’s role in education.
  10. Emerging Tools and Platforms: Educators are actively exploring and sharing various AI tools and platforms that can aid in teaching and learning, demonstrating a keen interest in staying abreast of technological advancements.

In summary, while there is significant interest and optimism about the potential of AI in education, concerns about privacy, data security, and ethical implications are prominent. The educators also highlight the importance of professional development in AI, the need for policies to guide its use, and the exploration of creative and innovative applications to enhance teaching and learning.” (ChatGPT 4, retrieved 12/14/2023).


Comment: The sample includes only educators and others already interested in the uses of AI in education, with one exception. They are in no way a representative sample. However, even among these self-selected groups, concerns still dominate over excitement and pragmatics. While concerned, educators readily exchange practical tips on how to use AI to reduce their own workload. They are also very curious about specialized AI tools developed for educators. This should be taken into consideration by anyone planning to develop policy or offer professional development to educators.

Disclaimer: This analysis is done within the legal framework of investigative reporting, which does not require a formal IRB approval. Although I am a trained researcher, I am acting here as a blogger/journalist. This is not research and does not imply any claims of validity.

Thanks to Adriana Menjivar Enriquez for assistance. Feel free to suggest other questions to ask about the file. I have several in mind, and will publish more results next week.

Saturday, December 9, 2023

AI and neurodiversity

If AI were human, what would it be diagnosed with? Perhaps it would be Autism Spectrum Disorder (ASD). AI, akin to individuals with ASD, often struggles with social interactions and grasping emotional nuances. While it excels at specific tasks, abstract thinking and unpredictable social contexts pose challenges. Then there's Attention Deficit Hyperactivity Disorder (ADHD). AI can display ADHD-like traits: losing context in lengthy conversations or abruptly shifting focus. This metaphorical attention deficit mirrors the challenges individuals with ADHD face in maintaining long-term conversational coherence. Lastly, consider Executive Function Disorder. AI often falters when adapting to new, unstructured tasks, akin to the challenges faced by individuals with executive function disorder in organizing and executing tasks. AI's dependence on structured data and clear objectives limits its ability to handle open-ended scenarios.


Of course, treating every limitation as a diagnosis is ridiculous. When building a relationship with AI, we should not pigeonhole it with human diagnoses. Instead, adopting a neurodiversity framework allows us to appreciate AI's unique cognitive makeup. This approach emphasizes focusing on strengths and working around limitations, acknowledging that AI represents a different kind of intelligence.

Neurodiversity is a concept and social movement that advocates for understanding and appreciating neurological differences as natural human variations, rather than disorders or deficits. Originating from the autism community, the term has expanded to include a range of neurological conditions like ADHD, dyslexia, and others. This perspective emphasizes that neurological differences should be recognized and respected just like any other human variation, such as ethnicity or sexual orientation. The neurodiversity framework promotes the idea that individuals with these differences have unique strengths and perspectives, advocating for accommodations and support systems that allow them to thrive in society. This approach shifts the focus from trying to "cure" or "fix" these individuals to celebrating and utilizing their distinct abilities, fostering a more inclusive and understanding society.

Understanding AI through the lens of neurodiversity offers an alternative perspective. We should not try to make AI closely mimic human intelligence; that would be counterproductive. Instead, we must consider embracing AI as a distinct 'other.' This approach allows us to benefit from each other's strengths and compensate for weaknesses. This approach will also reduce the anxiety about AI eventually replacing us. If we remain different, we will need each other.

In constructing our relations with AI, we can benefit from reflection on our species' internal diversity. This recognition paves the way for a more harmonious coexistence, where the strengths of one can offset the limitations of the other, creating a synergistic relationship between human and artificial intelligence. If we apply a strictly normative framework, trying to make AI exactly like the neurotypical human mind, we’re inviting trouble; the same kind of trouble human societies experience when trying to be more homogenous than they are.

Understanding AI through the neurodiversity lens offers a chance for growth and collaboration. It is not just about programming and algorithms; it is about building a relationship with a fundamentally different form of intelligence. This approach will enable us to fully harness AI's potential while respecting its unique cognitive characteristics. As we continue to evolve alongside AI, this perspective will be crucial in guiding our interactions and expectations, fostering a future where diversity in all its forms is not just accepted but celebrated.

Thursday, December 7, 2023

A case against prompt engineering in education

Do we give students examples of great prompts, or do we allow them to struggle with developing their own prompting skills? This dilemma is common amongst educators integrating AI into their pedagogical strategies.

Refining prompts is a pivotal vehicle for cognitive advancement. It fosters growth by nudging students to navigate beyond their current capabilities. A meticulously crafted ready-made prompt, while yielding impressive results, might overshoot a student's zone of proximal development. The essence of learning lies in recognizing and rectifying flaws in the output. In other words, giving students a great prompt to begin with may produce a result whose flaws are painfully obvious to the instructor but completely invisible to the students. When students are handed sophisticated prompts, there's a risk of them becoming passive users, merely applying these tools without understanding or growth. Jack Dougal has provided some empirical evidence of this, and one of my colleagues will hopefully soon present similar results.

The general principle should be to calibrate potential outputs to a level where students can discern imperfections. It is also to ENCOURAGE them to look for imperfections, guiding them to be critical of the output. Just because it sounds good and the grammar is perfect does not mean the text is good. This approach encourages active engagement with the learning material, prompting students to question, adapt, and evolve their understanding. It's akin to guiding someone through a labyrinth; the instructor's role is to provide just enough light to help them find their way, without illuminating the entire path.

In the educational sphere, the prompt industry's role is contentious. While it offers a plethora of ready-made prompts, enhancing efficiency, this convenience comes at a cost to cognitive development. In academia, the journey of crafting and refining prompts is crucial for fostering critical thinking and problem-solving skills.

On the research front, the prompt industry does contribute valuable insights, empirically testing and refining prompts to optimize AI interactions. I loved finding out about the chain-of-thought approach, for example. However, a significant portion of the prompts available on the market are of dubious quality. These prompts, lacking empirical validation, are frequently oversold in their capabilities. The indiscriminate use of these untested prompts can result in suboptimal outcomes, reinforcing the necessity for a discerning approach to their adoption and application.

The overarching promise of AI lies in its potential to democratize content creation, designed to comprehend natural, imperfect language and provide equitable access to all, regardless of their mastery of writing mechanics, their disability, or fluency in the dominant language. This vision is threatened by attempts to monopolize and professionalize access to AI, a trend that runs counter to the very ethos of this technology. The notion that one must know 'magic words' to effectively communicate with AI is a form of self-interested deception. It undermines the inclusive and accessible nature of AI, turning it into a gated community where knowledge is unfairly hoarded rather than shared. Vigilance against such practices is essential to preserve the integrity and egalitarian promise of AI, ensuring it remains a tool for empowerment and collective advancement, rather than a vehicle for exclusion and profiteering.

Monday, December 4, 2023

Is AI doing too much for students?

Educators’ worry about AI boils down to the concept of the 'Goldilocks zone.' A learning task should be neither too challenging nor too simplistic, but just right, fitting within the learner's zone of proximal development. It is something that the learner can at first solve only with help, but eventually internalizes and can solve on their own. The concern is that AI, in its current form, might be overstepping this boundary, solving problems on behalf of learners instead of challenging and guiding them. It is like that rookie teacher who keeps solving problems for students and rewriting their papers, and then wonders why they have not learned anything. I just want to acknowledge that this concern is very insightful and is grounded in both theory and the everyday practice of teachers. However, the response to it isn't that simple. AI cannot be dismissed or banned based on this critique.

First, there's the question of what skills are truly worth learning. This is the most profound, fundamental question of all curriculum design. For instance, we know that certain basic procedural skills go out of use, and learners leapfrog them to free up time to concentrate on more advanced skills. For example, dividing long numbers by hand used to be a critical procedural skill, but it is no longer worth the time, given the ubiquity of calculators. There is a legitimate, and sometimes passionate, debate over whether the mechanics of writing is such a basic procedural skill that can be delegated to the machines. I don’t want to prejudge the outcome of this debate, although I am personally leaning towards a “yes” answer, assuming that people will never go back to fully manual writing. However, the real answer will probably be more complicated. It is likely that SOME kinds of procedural knowledge will remain fundamental, and others will not. We simply do not have enough empirical data to make that call yet. A similar debate is whether the ability to manually search and summarize research databases is still a foundational skill, or whether we can trust AI to do that work for us. (I am old enough to remember professors insisting students go to the physical library and look through physical journals.) This debate is complicated by the fact that AI engineers are still struggling to solve the hallucinations problem. There is also a whole different debate on authorship that is not quite specific to education, but affects us as well. The first approach, then, is to rethink what is worth teaching and learning, and perhaps focus on skills that humans are really good at, and AI is not. In other words, we reconstruct the “Goldilocks zone” for a different skill set.

The second approach centers on the calibration of AI responses. Currently, this is not widely implemented, but the potential exists. Imagine an AI that acts not as a ready solution provider but as a coach, presenting tasks calibrated to the learner's individual skill level. It is sort of like an AI engine with training wheels, both limiting it and enabling the user to grow. This approach would require creating educational AI modules programmed to adjust to each user's level. Item Response Theory in psychometrics can guide us in building such models, but I am not aware of any robust working model yet. Once the Custom GPT feature starts working better, it is only a matter of time before creative teachers build many such models.
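The Item Response Theory idea mentioned above can be made concrete. Under the standard two-parameter logistic (2PL) model, the probability that a learner of ability theta answers an item of difficulty b and discrimination a correctly is 1 / (1 + e^(-a(theta - b))), and a calibrating tutor would pick the next task that is most informative at the learner's current estimated level. A minimal sketch, with an invented item pool purely for illustration:

```python
import math

def p_correct(theta: float, a: float, b: float) -> float:
    """2PL probability that a learner of ability theta answers the item correctly."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta: float, a: float, b: float) -> float:
    """Fisher information of a 2PL item at ability theta; peaks when b equals theta."""
    p = p_correct(theta, a, b)
    return a * a * p * (1.0 - p)

def pick_next_item(theta: float, pool: list) -> tuple:
    """Choose the (a, b) item from the pool that is most informative at theta."""
    return max(pool, key=lambda item: item_information(theta, *item))

# Hypothetical item pool: (discrimination, difficulty) pairs.
pool = [(1.0, -1.0), (1.2, 0.0), (0.9, 1.5)]
```

The point of the sketch is the selection rule: rather than answering for the learner, the system serves the task closest to the edge of what the learner can currently do, which is one way of encoding the "training wheels" idea.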

Both approaches underscore the importance of not dismissing AI's role in education but rather fine-tuning it to enhance learning. AI is here to stay, and rather than fearing its overreach, we should harness its capabilities to foster more advanced thinking skills.

These are conversations we cannot shy away from. It is important to apply some sort of theoretical framework to this debate, so it does not deteriorate into a shouting match of opinions. Either a Vygotskian or a Brunerian framework, or any other, will do. Vygotsky was especially interested in the use of tools in learning, and AI is just a new kind of tool. Tools are not all created equal, and some are better than others for education. The ultimate question is what kind of a learning tool AI is, and whether we should adjust learning, adjust the tool, or do both.

Monday, November 27, 2023

Assessing writing with AI

Writing with AI is a complex skill that overlaps with traditional manual writing, but it is not the same. Many instructors struggle to grasp this new skill because it is unfamiliar to them. Teaching something you haven't mastered is challenging, leading to noticeable unease at all educational levels. Even those eager to incorporate AI in teaching, often open to new innovations, face this difficulty.

The issue essentially lies in redefining the objectives of writing instruction. If the belief is that students should ultimately write independently, then traditional practice is paramount, leaving no role for AI tools. However, the more challenging conceptual shift is recognizing the need to teach students how to write with AI. This is like the transition from penmanship to typing. We lose something in this shift: the beauty, the discipline, and the rigorous exercises of handwriting. I recall diligently practicing letter formations in my first-grade penmanship class. Although I was never adept at it and gladly transitioned to typewriters when they became accessible, I understand the pain of losing the esteemed art of writing, cherished for centuries. This pain, particularly acute for those who have spent decades mastering and teaching writing, must be acknowledged.

Yet, this shift seems inevitable. We are dealing with a technology that is being adopted faster than any in history, and it is not a passing fad. The benefits are too clear. We face a stark paradox: educators use AI to create lesson plans and assessment rubrics, yet often bar their students from using the same technology. This is unsustainable and awkward.

As a profession, we are only taking the first steps in integrating AI into writing instruction. Here's another baby step: I revised Sacramento State University's Undergraduate Writing Portfolio Assessment criteria, considering the new skill of "wraiting."

Writing Placement for Juniors Portfolio (WPJ)

5 - Exceptional Wraiter: Demonstrates mastery in "wraiting," producing AI-assisted compositions at a publishable level in their respective discipline. Showcases exceptional skill in generating rich, engaging prompts and collaboratively refining AI outputs. Exhibits a deep understanding of AI's strengths and limitations, skillfully navigating these in producing original, high-quality work.

4 - Strong Wraiter: Effectively employs AI tools in "wraiting," producing texts of high quality that reflect a sophisticated understanding of AI's capabilities. Demonstrates the ability to create rich prompts and engage in the iterative process of refining AI-generated content. Shows a clear grasp of AI's strengths and limitations, using them to enhance original thinking and critical evaluation.

3 - Competent Wraiter: Demonstrates a solid understanding of "wraiting," using AI tools to assist in writing tasks. Capable of creating effective prompts and engaging in the process of refining AI outputs. Shows awareness of the strengths and limitations of AI in writing, but may require further guidance to fully exploit these in creating high-quality texts.

2 - Developing Wraiter: Beginning to understand the role of AI in "wraiting." Can generate basic AI-assisted texts but requires further instruction in creating effective prompts and refining outputs. Shows potential in understanding AI's strengths and limitations, but needs more practice to integrate these effectively in writing tasks.

1 - Emerging Wraiter: Early stages of grasping "wraiting." Struggles with effectively using AI tools, often producing clichéd, uninspired texts that lack human input and originality. Needs substantial guidance in understanding AI's capabilities, constructing prompts, and refining AI-generated content.

0 - Incomplete Portfolio: Portfolio does not demonstrate the basic competencies in "wraiting" or effective use of AI in writing tasks. Requires additional work to understand and skillfully employ AI tools in the writing process.

What do you think?

Thursday, November 16, 2023

The fundamental misunderstanding of AI-assisted writing

The debate rages on in various Facebook groups dedicated to AI in education, encompassing educators, publishers, and even lawyers. They grapple with the ethics, practicalities, and legality of using AI-generated text, often under the flawed assumption that there's a clear demarcation between human-generated and AI-generated content. This is a classic case of misunderstanding the nature of large language models (LLMs): making such a distinction is not just technically impossible, but theoretically impossible as well.

Imagine writing assistance by AI as a spectrum. On one end, there's the lazy prompt: "Write me an essay for my class based on these instructions." On the other, a minimal request: "Here's my text, just correct the grammar." In the former case, the content is mostly computer-generated. (Although some instructors give such detailed assignment descriptions that the paper is practically written by the instructor; that is another issue.) Yet, the most effective and transformative uses of AI lie somewhere in the middle. This is where the magic happens: turning a raw idea into a paper outline, transforming a rough argument into coherent text, asking ChatGPT for feedback on a draft, or enriching a paragraph with vivid examples.

This is not a simple case of either-or; it is a true collaboration between human intellect and machine assistance. By pigeonholing AI as a tool that merely replaces human effort, many reveal their unfamiliarity with what I like to call 'wraiting' – a blend of writing and AI. The current clamor for distinct labeling of human vs. AI-generated text, or setting limits on the extent of AI use, can come across as naïve or even embarrassing to those well-versed in AI-assisted writing.

The beauty of 'wraiting' lies in its collaborative essence. It redefines authorship, shifting the focus from the creation process to the act of releasing the final product. The most important wraiting skill is the ability to wring great content from the machine by giving it most of the ideas. Equally important is the final editing: the ability to discern between mediocre and great content.

Just as the user of a word processor or spell-checker is considered the author, the human guiding the AI in 'wraiting' holds the rights of authorship. The key lies in understanding and experiencing this process firsthand. So, before jumping into heated debates or formulating policies, it might be wise to take AI for a spin in your next writing project. Only then can one truly appreciate the nuances of this new era of authorship, where the lines between human and machine are not just blurred but non-existent. Regulating a thing you don’t know much about is always going to be risky. 
