Tuesday, January 30, 2024

The tiny tools issue

The world of AI implementation has three tiers. At the base are user-friendly, ready-to-use AI tools – the digital world's equivalent of instant coffee: one can simply open a chatbot and type a request. Ascending a level, there is the realm of tiny tools like APIs, a middle ground easily accessible to coders but mystifying to the layperson. The apex of this hierarchy is reserved for integrated, complex AI solutions – the grand orchestras of technology, both sophisticated and costly.

The drama in AI implementation, however, is not rooted in the existence of these tiers, but in their portrayal and accessibility. Providers, often driven by material interests, tend to downplay the simplicity and adequacy of the lower tiers. This misrepresentation is not just about pushing expensive solutions; it is a deeper issue of monopolizing knowledge and perpetuating power imbalances. Of course, if one knows how to do something that others do not, they want to make themselves look more essential, so they can sell their expertise.

The key takeaway here is to be a discerning consumer. Before opting for an expensive, integrated solution, consider first if one can do it themselves, and if not, if a tiny tool would suffice. Perhaps a computer science student could craft a solution efficiently and affordably. Or there might be a vendor that sells just the tiny tool needed. This approach is not just about saving resources; it is about maintaining control in an increasingly technology-driven world. Surrendering to high-end solutions can lead to a loss of autonomy, difficult to reclaim once given up.
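To make the middle tier concrete, here is a hypothetical sketch of what a "tiny tool" can amount to: a few dozen lines of Python that send a plain-language task to a language-model API over HTTP. The endpoint, model name, and payload shape follow OpenAI's chat-completions convention, but treat every specific here as an assumption, not a recipe.

```python
import json
import urllib.request

# Assumed endpoint and model name, in the style of OpenAI's chat completions API.
API_URL = "https://api.openai.com/v1/chat/completions"

def build_request(task: str, model: str = "gpt-4o-mini") -> dict:
    """Package a plain-language task as a chat-completion payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": task}],
    }

def ask(task: str, api_key: str) -> str:
    """Send the task to the API and return the reply text (network call)."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_request(task)).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # The reply text sits in the first choice's message.
    return body["choices"][0]["message"]["content"]
```

That is the entire tier: no platform, no integration project, no licensing fee – which is precisely why a computer science student can often deliver it affordably, and why vendors rarely advertise it.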

When faced with AI implementation decisions, balance is crucial. It is essential to recognize where one's needs fit within this spectrum and make choices that harmonize practicality, cost, and control. The most effective tool at our disposal is not always the most complex or expensive one, but often our own critical thinking. By understanding the nuances of these AI tiers, we can make informed decisions that preserve our technological independence and prevent being unwittingly upsold solutions that far exceed our actual needs.

Monday, January 29, 2024

Writing instructors, why are you surprised by AI?

Why do you look surprised? Since the 1970s, there has been a shift in your field. This change was not about refining the minutiae of grammar or punctuation. Rather, it was a movement toward valuing the creative process in writing. Think of pioneers like Donald Graves, Lucy Calkins, and Peter Elbow. They were not merely toying with new ideas; they were fundamentally altering how writing is taught, influencing college-level instruction as well.

The advent of AI technology has accelerated a shift that was already underway. Historically, while there was vocal support for creative and critical thinking, the reality often leaned towards assessing grammar and spelling. It was simpler to grade based on these concrete elements. Judging originality and creativity posed greater challenges, especially when justifying grades during student appeals.

However, it is becoming clear that the reliance on traditional assessment is no longer sustainable. It is time to genuinely embrace what has been acknowledged for decades. The focus should shift more towards teaching originality, creativity, authenticity, discernment, and critical thinking. Ideas should be valued over mechanical accuracy.

A crucial aspect of this evolution is teaching students to write with AI assistance. This approach does not diminish writing standards. Instead, it raises the bar for the final product. Students should learn to use AI as a tool to enhance their writing, not as a substitute for critical thinking or creativity.

Dear writing instructors, the time has come to adapt. And you know how to do it better than anyone else. The gradual shift many of you have been working on is now upon us. This is a moment for re-evaluating, rethinking, and embracing a new phase in education where AI complements and enhances the teaching of writing. The future is here, and it aligns with the trajectory you have been following.

Thursday, January 25, 2024

Prompt patterns

Just sharing a summary of a paper that tried to develop a catalog of prompt patterns. The source:

"A Prompt Pattern Catalog to Enhance Prompt Engineering with ChatGPT" by Jules White, Quchen Fu, Sam Hays, Michael Sandborn, Carlos Olea, Henry Gilbert, Ashraf Elnashar, Jesse Spencer-Smith, Douglas C. Schmidt. Arxiv. https://doi.org/10.48550/arXiv.2302.11382 

  1. Meta Language Creation Pattern: Focuses on creating a custom language for LLMs to improve their understanding of prompts.
  2. Output Automater Pattern: Aims to automate the generation of actionable steps or scripts in response to prompts.
  3. Flipped Interaction Pattern: Involves reversing the typical interaction flow, with the LLM posing questions to the user.
  4. Persona Pattern: Assigns a specific persona or role to an LLM to guide its output generation.
  5. Question Refinement Pattern: Enhances the LLM's responses by refining the user's questions for clarity and focus.
  6. Alternative Approaches Pattern: Encourages the LLM to offer different methods or perspectives for tackling a task.
  7. Cognitive Verifier Pattern: Involves the LLM generating sub-questions to better understand and respond to the main query.
  8. Fact Check List Pattern: Guides the LLM to produce a list of facts or statements in its output for verification.
  9. Template Pattern: Involves using a predefined template to shape the LLM's responses.
  10. Infinite Generation Pattern: Enables the LLM to continuously generate output without repeated user prompts.
  11. Visualization Generator Pattern: Focuses on generating text outputs that can be converted into visualizations by other tools.
  12. Game Play Pattern: Directs the LLM to structure its outputs in the form of a game.
  13. Reflection Pattern: Encourages the LLM to introspect and analyze its own outputs for potential errors or improvements.
  14. Refusal Breaker Pattern: Designed to rephrase user queries in situations where the LLM initially refuses to respond.
  15. Context Manager Pattern: Controls the contextual information within which the LLM operates to tailor its responses.
  16. Recipe Pattern: Helps users obtain a sequence of steps or actions to achieve a desired result.

Each pattern is detailed with its intent, context, structure, key ideas, example implementations, and potential consequences.
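To make a couple of these concrete, here are two illustrative prompt texts written in the spirit of the Persona and Flipped Interaction patterns. The wording is mine, not the paper's.

```python
# Hypothetical prompt in the spirit of the Persona Pattern (no. 4):
# the LLM is assigned a role that shapes its output.
persona_prompt = (
    "Act as a skeptical peer reviewer for a sociology journal. "
    "Read the abstract I paste next and list its three weakest claims."
)

# Hypothetical prompt in the spirit of the Flipped Interaction Pattern (no. 3):
# the LLM asks the questions instead of the user.
flipped_interaction_prompt = (
    "I want to plan a one-day workshop on AI literacy for teachers. "
    "Instead of answering right away, ask me questions one at a time "
    "until you have enough information, then propose an agenda."
)
```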

I want to acknowledge a good attempt, but I am not sure this list is very intuitive or very helpful. In practical terms, we either ask questions or give tasks, defining some output parameters - like genre, audience, style, etc. Still, someone might find it a useful starting point for further thinking. We do need some way of classifying prompts.

Tuesday, January 23, 2024

What is the killer app for AI-powered chatbots?

In a recent interview, I was asked a thought-provoking question about the most impressive application of AI, the one that holds the greatest potential. This was basically a question about the "killer app." The term dates to the early era of mass computing and refers to software so essential that it drives the success of a larger platform or system. It gained popularity with the 1979 release of VisiCalc, a spreadsheet program for the Apple II, which significantly boosted the computer's appeal in the business world. "Killer app" now broadly refers to any software or service that significantly drives the adoption of a technology.

My response named a broad spectrum of AI applications where the core task involves comparing or merging two documents. Consider everyday tasks like grading student papers, which is essentially juxtaposing a grading rubric against student submissions. Or the process of job applications, where one's resume or cover letter is matched with the job description. Even more intricate tasks like reviewing contracts involve a comparative analysis between the contract's text and relevant laws and regulations. Similarly, writing a grant application is a fusion of the request for proposal (RFP) with one's own ideas or previously written articles.

This insight opens up a broader perspective on the nature of our intellectual activities in the workplace. Many of these tasks revolve around blending, merging, and oscillating between two or more texts. If we start viewing our tasks through the lens of 'feeding the AI beast' with relevant documents, we unlock a new way to leverage this astonishing technology for our benefit.
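A minimal sketch of this "feed the beast" workflow, using the grading example: the two documents are simply concatenated into one instruction before being handed to a chatbot. The function name and template are hypothetical; the point is how little machinery the pattern requires.

```python
def merge_documents(rubric: str, submission: str) -> str:
    """Combine a grading rubric and a student paper into one prompt."""
    return (
        "You are grading a student paper against the rubric below.\n\n"
        f"RUBRIC:\n{rubric}\n\n"
        f"SUBMISSION:\n{submission}\n\n"
        "For each rubric criterion, quote the relevant passage, "
        "assign a score, and suggest one concrete improvement."
    )
```

The same three-part shape – document A, document B, a merging instruction – covers resumes against job descriptions, contracts against regulations, and RFPs against draft proposals.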

The implications of this AI capability are profound. It's not just about simplifying tasks; it's about enhancing our cognitive processes. Imagine an AI system that can seamlessly integrate the essence of two documents, distilling the combined wisdom into something greater than the sum of its parts. This isn't just about automation; it's about augmentation. It's the fusion of human intellect with machine precision that could redefine how we approach problem-solving.

Let's delve deeper into the examples. In the educational sector, the grading of papers becomes not just a task of assessment but an opportunity for tailored feedback. The AI, by comparing a student's work with the rubric, can identify nuances that might be overlooked in a manual review. It can offer insights into a student's thought process, learning style, and areas needing improvement. This isn't just grading; it's a gateway to personalized education.

In the corporate world, the process of job applications or contract reviews is transformed. The AI's ability to merge and compare documents means it can align a candidate's skills and experiences with a job's requirements more accurately, potentially revolutionizing recruitment processes. Similarly, in legal settings, reviewing contracts with AI can ensure compliance and mitigate risks more efficiently, saving countless hours and reducing human error.

In short, the real magic of AI lies in its ability to blend and compare documents, a seemingly mundane task that, upon closer examination, reveals itself as a key to unlocking new dimensions of efficiency, creativity, and understanding. 

Monday, January 22, 2024

Why AI is unlikely to replace teachers

The allure of a tech-driven utopia in education is not new. Radios, televisions, the internet, MOOCs – each has been heralded as a harbinger of the traditional teacher's obsolescence. Today, AI steps into this familiar spotlight, with some prophesying a future with fewer educators. Understanding this perspective isn't challenging, given the enormity of public education's budget, the stubborn inequalities it harbors, and its notorious resistance to reform. However, the notion of significantly reducing teacher numbers through AI implementation seems, at best, a distant fantasy.

Chatbots, the latest prodigies of AI, have proven to be exceptional personal tutors. They can tailor information delivery to individual needs, offering a level of customization that traditional education struggles to match. But here's the rub: education is not merely about transferring information. It's about fostering a unique educational relationship that optimizes learning. For all its sophistication, AI lacks the capacity to replicate this.

AI indeed creates a paradise for autodidacts. Those with a natural inclination towards self-directed learning, armed with motivation and discipline, find in AI a boundless resource. However, the majority aren't autodidacts. They thrive in a relational context that not only motivates but also facilitates learning. This is a foundational principle in major learning theories, from Vygotsky's social development theory to Bandura's social learning theory and Bruner's constructivist theory. The invisible labor of a teacher or a college instructor lies in creating and nurturing this context. Presently, there is nothing in AI that can substitute for this critical human element.

Furthermore, educational institutions have become integral to societal fabric, not merely as centers of learning but as community hubs. Imagining what millions of children and young adults would do without the structure of schools and colleges opens a Pandora's box of societal and developmental questions. These institutions require adult presence, not just for educational delivery, which AI might partly assume, but for the overarching environment of care and socialization they provide.

My prognosis? Unlike other industries where automation has resulted in significant workforce reductions, the field of education, particularly the teaching staff, will likely remain unscathed in this aspect. There's no need for panic among educators, but there is a need for adaptation. Learning to harness AI's capabilities will be crucial, not to replace teachers, but to complement them, freeing up time for the more nuanced, relational, and affective aspects of their roles. Additionally, educators must remain agile, adapting curricula to include skills that future employers will value, ensuring students are well-equipped for the evolving workforce.

In essence, AI in education is not a replacement, but a tool – one that, if used wisely, can enhance the educational experience without displacing its most vital component: the human educator.

Thursday, January 18, 2024

Four principles of public and philanthropic support of AI adoption

Governments and philanthropists can play a role in supporting AI diffusion across various sectors. Their involvement is as critical as that of businesses and researchers in driving forward this technological revolution. However, forging a public strategy for AI adoption remains a complex and unresolved task.

The rapid embrace of AI technology calls for a focus on leveraging the collective experiences of its extensive user base, in conjunction with market forces and entrepreneurial innovation. The United States, and California in particular, stands out for its globally admired technology innovation ecosystem. This environment, driven by dynamic market forces and a spirit of entrepreneurship, creates an ideal setting for AI development. Nevertheless, the lack of a cohesive public strategy in managing this evolution might lead to varied and possibly conflicting outcomes and objectives in AI adoption.

At the heart of this matter is the public's significant interest in the effective deployment of AI. The technology holds the potential to substantially boost the economy, revolutionize public services, reshape education, and enhance social welfare systems. Yet, it is essential to balance these advancements with equitable and efficient technology adoption, ensuring that AI contributes to resolving rather than exacerbating societal inequities.

Moreover, the integration of AI in public services presents a dual advantage: improving efficiency and extending service accessibility to a wider population segment. The key challenge is to deploy these technologies inclusively, considering the diverse needs of the community. While the swift adoption of AI offers numerous opportunities, it also demands strategic and thoughtful planning. This strategy must aim not only to capitalize on AI's benefits for economic and service improvements but also to guarantee that its societal integration is equitable and inclusive, aligning technological progress with the greater public interest.

1.  Get real

The first guiding principle in AI adoption is the pursuit of a balanced perspective, essential in navigating between two extreme viewpoints. On one side, there's the dystopian view that envisions AI as a catalyst for catastrophic job losses. This narrative often emerges from a fear of the unknown, harking back to historical instances where technological advancements initially disrupted the job market. However, this perspective tends to overlook how technological evolution has historically opened up new job opportunities and industries. There is also the fear that AI poses an existential threat to humanity. These two mutually exclusive doomsday scenarios are amplified by the media.

On the other side lies the overly optimistic view that sees AI as a cure-all for every challenge, promising a swift transition to labor-free economies where abundance comes without work. This standpoint emerges from recognizing AI's immense potential to improve efficiency, solve complex issues, and bring novel solutions to various sectors. However, it can underestimate the challenges and limitations of implementing AI technologies, such as ethical considerations, the need for comprehensive data, and crucial human oversight.

A more realistic outlook suggests that the future of AI will likely follow historical trends, presenting both opportunities and challenges. Similar to the impact of the internet and mobile technology, AI is expected to enhance productivity and stimulate economic growth, but not bring us quickly into the world without scarcity. This advancement could manifest through more streamlined operations, improved data analysis, and innovation in diverse sectors.

Both extremes discourage pragmatic, thoughtful planning. Society cannot control a change that it cannot fathom. A balanced approach to AI adoption acknowledges AI's significant potential to contribute to productivity and economic growth. Simultaneously, it recognizes the importance of strategic management to facilitate a smooth transition in the job market and society at large. This approach avoids the pitfalls of extreme views, opting instead for a nuanced and realistic understanding of AI's role in shaping the future.

2.  Democratize technology

The second principle in AI adoption emphasizes the democratization of AI technology. This concept is based on the idea that AI's benefits should be broadly accessible, not just limited to a small group of experts. This approach to democratizing technology mirrors developments in fields like web design, which has evolved from a specialized skill for programmers to a more universally accessible tool. The devolution of expertise has been a steady trend, and we must not allow it to reverse with AI.

In AI, democratization means expanding access beyond tech experts to include educational institutions, public agencies, and businesses. This approach prevents potential monopolization by a few vendors who might control the market with proprietary platforms and high licensing fees, which could shift the focus of AI from innovation to profit, limiting its societal benefits.

Democratizing AI fosters grassroots innovation, reducing vendor-dependency, enabling individuals and organizations to develop custom AI solutions for specific needs and challenges. This can spur a wave of creativity and problem-solving in sectors like education, healthcare, social services, and public administration.

Additionally, democratizing AI plays a critical role in reducing the risks of AI reinforcing existing inequalities or biases. When AI is accessible and understandable to a diverse group, it is more likely to be used inclusively, considering a wide range of perspectives and needs.

In essence, democratizing AI is about making it a tool for many, empowering a wide user base to understand, engage with, and apply AI in ways that enhance their work and lives. This approach ensures that AI's benefits are widely distributed and its development reflects a variety of voices and needs.

3.  Regulate fine-tuning

The third principle in AI adoption underscores the vital role of governments and philanthropic organizations in regulating AI's "fine-tuning" process. This principle acknowledges their significant influence in shaping AI's ethical development.

Fine-tuning in AI involves refining algorithms and their outputs to align with specific ethical guidelines and objectives. This step is crucial to ensure AI systems adhere to societal norms. A key part of fine-tuning is filtering out harmful or inappropriate content, such as pornography, conspiracy theories, or explicit violence. This process not only prevents the spread of such content but also ensures AI's positive contribution to society.

However, fine-tuning goes beyond just excluding harmful content. It also includes identifying and rectifying inherent biases within AI systems. AI models, trained on vast datasets, can inadvertently reflect societal biases. Left unchecked, these biases may reinforce or exacerbate societal inequalities. For example, when asked to depict unspecified people, AI image generators have tended to default to white males, reflecting a bias in the training data. Correcting such biases is essential to make AI inclusive and representative of global diversity. Governments should compel IT companies to invest more in fine-tuning and to make their fine-tuning practices more transparent.

Beyond regulation, governments and philanthropists can play an active role in funding AI ethics research, promoting diversity in training data, and setting up bodies to oversee and evaluate AI systems for biases.

4.  Support equity

The fourth principle in AI adoption is about addressing areas where market forces alone may not suffice, particularly concerning the equity implications of AI. This principle calls for targeted support in segments where the private sector might not adequately invest due to limited market incentives.

A critical area of focus is technology for people with disabilities. Market mechanisms often fail to address these needs adequately, as the market for assistive technologies can be too small to lure significant private investment. This gap necessitates government or philanthropic intervention to develop AI solutions that are innovative, inclusive, and accessible to everyone, regardless of their physical or cognitive abilities.

Another area is AI's role in bridging language barriers and aiding language learners and linguistically marginalized communities. Here again, market forces may not be enough to drive the development of AI tools tailored for these groups. Government and philanthropic support is essential in creating AI applications that meet diverse linguistic needs, promoting inclusivity and understanding.

In education, AI's impact is particularly profound. Traditional reliance on written assignments and assessments means integrating AI into education is not just about investment but also about understanding learning theories and pedagogical practices. While entrepreneurs are adept at crafting innovative tech solutions, they may lack the necessary expertise in learning sciences to ensure these tools are effective in a learning context. Thus, additional support in research, development, and training is crucial for AI to positively transform educational practices.

By contrast, sectors like entertainment, which are more adaptable and resource-rich, are likely to manage AI-driven disruption independently. However, more public-oriented sectors such as social services, education, and medicine require substantial support from governments and philanthropic organizations. These sectors are pivotal to societal well-being and equity; their effective navigation of AI integration is crucial for the equitable distribution of AI benefits.

In summary, strategic AI adoption and integration is imperative, especially in sectors where market forces are insufficient. This strategy should include support for education, social services, and disability support to ensure AI serves the public good effectively. The involvement of governments and philanthropic organizations is critical in providing necessary resources, guidance, and regulatory frameworks. This ensures the development and implementation of AI in ethical, equitable, and universally beneficial ways.

Sunday, January 14, 2024

Advice for Entrepreneurs Developing AI Products for Educators

Hundreds if not thousands of start-ups have emerged to leverage the potential of generative AI, which is a good thing. This surge in innovation is crucial for ensuring a fair and beneficial transition to this new technology. Among these companies, many focus on the education sector. It's not just start-ups that are diving into this arena; established companies are also adopting an entrepreneurial approach.

First, let's talk about products that are likely to fail. A common pitfall is exploiting the current lack of AI expertise among teachers and professors. These products act as basic intermediaries, providing minimal assistance in crafting better AI prompts. However, generative AI's true allure lies in its democratic nature and ease of use. It lowers barriers by understanding natural language, eliminating the need for coding or complex interface navigation. Businesses that merely enhance prompt writing and inflate the concept of "prompt engineering" to promote their services are not just unethical but also unsustainable. Their low-value proposition is a short-term strategy at best; users will eventually see through it.

Another error is developing solutions without a deep understanding of educational practices. Merely interviewing a few educators doesn't suffice. To genuinely grasp the needs of the sector, companies should either include experienced educators in their teams or seek continuous, in-depth feedback on their product prototypes. This approach prevents creating solutions that lack a real problem to solve. Unfortunately, few outsiders truly understand the core challenges AI poses in education, with only a handful of products like Khanmigo addressing these issues effectively. One of the core problems for educators is the inability to calibrate AI tools for the Zone of Proximal Development.

Despite these pitfalls, the field of AI in education is ripe for innovation. Instead of creating superficial aids, there's a need for specialized, high-value tools. Each academic discipline has foundational skills critical for further learning, and some aspects of these disciplines can be aided by AI without compromising the development of higher skills. Developing numerous, level-specific assignments that integrate AI use while safeguarding these essential skills is vital. Another significant area for innovation is enhancing student experiences. Many educational institutions, particularly large universities, struggle to provide consistent advising and support in understanding academic regulations, schedule planning, and general wellbeing. Moreover, both K-12 and higher education systems face inefficiencies in their operations. Addressing these real issues and contributing meaningful, high-value solutions is where the true opportunity lies.

Saturday, January 13, 2024

No time to learn AI? Use authentic learning

No time to delve into the world of AI? If so, you're not alone. Many of us feel a pang of guilt for not being able to spare the time to explore generative AI tools. However, it is much easier than you may think.

The trick is to use chatbots for the regular tasks life brings you. Anything, especially tasks you are not looking forward to, is fair game. Think about updating syllabi, brainstorming assignments, developing grading rubrics, and planning lessons. The list extends to crafting administrative emails, organizing research data, summarizing articles, generating content ideas, and preparing meeting agendas. Try asking a chatbot first.

Now, it's important to temper expectations with a dose of reality. In perhaps 70-80% of cases, AI will save you time right off the bat. For the remaining tasks you might not see immediate gains; some turn out to be easier to do by hand. The effectiveness of AI heavily depends on the nature of your work and your willingness to stick with it and learn its nuances.

There's a learning curve, for sure, but it is not very steep. The results are well worth the effort. For instance, AI's ability to generate first drafts, suggest edits, and even brainstorm ideas can significantly streamline your workflow.

However, it's crucial to understand the limitations of AI. It's not a magical solution to all your problems. Think of it more as a collaborative partner that can take on the heavy lifting of routine tasks, allowing you to focus on the more creative and complex aspects of your work. ChatGPT and its cousins shine in the most routine, most boring tasks, leaving you more time for creative work.

The key to effectively integrating AI into your professional life is to start small, use the natural flow of tasks, and gradually expand its role as you become more comfortable with its capabilities. This will allow you to stay ahead of most students. Eventually you will also see how it could be used in instruction. But getting some first-hand experience is the first step. 

Friday, January 12, 2024

AI use is not a sin

The enduring influence of Puritan ethics in American culture presents an intriguing dichotomy. This historical ethos, with its deep roots in hard work and discipline, colors modern perspectives on technology and learning. I am really worried about the disproportionate efforts to catch students using AI, as if using it were somehow sinful on its own.

Puritan ethics, born from 16th and 17th-century religious reformers, celebrated hard work as a moral virtue. This belief, that success must be earned through effort and toil, subtly shapes American attitudes towards technology, including AI in education. Critics of AI in this realm often argue that it makes learning 'too easy', equating ease with moral decay. They yearn for the 'authenticity' of traditional learning methods, where struggle is seen as the only legitimate path to knowledge.

However, it's crucial to acknowledge that learning does indeed require effort; growth is impossible without it. But this effort need not be synonymous with drudgery. Suffering and effort are not interchangeable. The assumption that struggle is inherently valuable and that ease is inherently suspect is a limited view, overlooking the broader purpose of education.

The Puritanical echo in the debate over AI in education is ironic. The ethos was about self-improvement, yet rejecting AI tools seems counterproductive. AI can democratize and personalize education, making it more accessible and tailored to individual needs.

The overuse of ethical judgments in this context reflects a broader issue. Ethics is often oversimplified, leaving little room for the complexities of life. This misuse of ethics, particularly in education, can hinder innovation.

In re-evaluating these inherited ethical frameworks, it's essential to recognize that ease in learning isn't antithetical to the values of hard work and achievement. Education's true goal is empowerment and enlightenment, and AI offers a transformative potential in reaching this goal.

Monday, January 8, 2024

I'll tell you what's unethical (a rant)

Ah, the great ethical quandary of our times in education – the use of AI! Picture this: earnest educators standing as the last bastion of traditional wisdom, decreeing “Thou shalt not use AI,” with a fervor that's almost admirable, if it weren't so quaintly misplaced. This isn't just a classic case of misunderstanding technology; it's like watching someone trying to ward off a spaceship with a broomstick.

Now, let's talk about restrictions. In education, where reason should reign supreme, the rationale for any restriction must be more substantial than "because it’s always been this way." When an educator waves the flag of prohibition against AI, one can't help but wonder: where’s the logic? It’s a bit like saying you shouldn’t use a calculator for fear it might erode your abacus skills.

Here's a thought to ponder: the only justifiable ground for restricting AI use in education is if, and only if, it hinders the development of a foundational skill – one that's essential for crafting more complex abilities required for advanced learning. And, let’s not forget, the burden of proof rests with the person setting the limits. Which skill, exactly, is prevented from being developed by the use of AI? If you can explain it to students, then yes, be my guest, ban away.

AI is a very good tutor. Yes, it makes mistakes sometimes, but it is infinitely patient and always available, no appointment necessary. No need to be embarrassed when asking for the umpteenth example to illustrate an elusive concept. To withhold this resource from students isn't just a tad unethical; it's like hiding the key to a treasure chest of knowledge and saying, “Oops, did I forget to mention where it is?”

So, what's ethical and what's not in this grand AI debate? Anything that facilitates learning and growth is a big yes in the ethical column. Casting aspersions on AI without a valid reason or depriving students of its benefits is unethical.

The larger, real question we should be asking is this: What defines ethical practice in education? Is it clinging to the past because it’s comfortable, or is it embracing the future and all the tools it brings to help our students soar? At the end of the day, what’s truly unethical is anything that hinders progress under the guise of misguided caution. After all, isn't education all about unlocking doors, not closing them?

Saturday, January 6, 2024

What does AI reveal about relational pedagogy?

In the ongoing narrative of education's transformation, AI's integration has prompted a profound reassessment of what constitutes uniquely human skills. Stephen Wolfram astutely observed that AI hasn't so much humanized computers as it has highlighted the machine-like aspects of certain human abilities, such as writing. This insight extends powerfully into the realm of education, reshaping our understanding of teaching and its intrinsic human elements.

Traditionally, teaching has been viewed primarily as a process of instruction, a transmission of knowledge from teacher to student. However, the advent of AI in education challenges this perspective. AI's ability to deliver instruction, personalize learning, and even interact with students reveals that the instructional aspect of teaching is not exclusively human after all. Machines can replicate, and in some cases, surpass human efficiency in these areas. This realization prompts a crucial question: if machines can instruct, what then is the unique value that human educators bring to the table?

The answer lies in the relational aspect of teaching, an area where AI cannot succeed. AI's emergence has inadvertently cast a spotlight on the importance of relationship-building in education, underscoring its irreplaceability. The human teacher's role evolves from being a mere conveyor of knowledge to a mentor, a guide, a catalyst for emotional and social growth. In this light, the human educator's value is redefined, emphasizing those qualities that machines cannot replicate: empathy, emotional intelligence, and the ability to inspire and motivate.

This shift in perspective is part of a broader redefinition of what it means to be human in an age increasingly dominated by machines. As AI takes over tasks that were once thought to require human intelligence, we are compelled to re-examine and emphasize those domains that are uniquely human. The essence of humanity is being recalibrated, focusing more on emotional, social, and creative capacities - areas where humans excel and machines falter.

In the context of education, this recalibration has profound implications. It suggests that the future of teaching lies not in competing with AI in cognitive tasks but in embracing and enhancing the relational, emotional, and creative aspects of education. Teachers, liberated from the routine cognitive aspects of their work by AI, can focus more on developing deep, meaningful relationships with students, fostering their emotional and social growth, and nurturing their creativity.

This evolution does not diminish the teacher's role; rather, it elevates it. Educators become the custodians of those aspects of learning that are quintessentially human. The classroom becomes a place where not just intellectual but emotional and social skills are developed, where students learn not just from machines but from the rich, complex interactions with their teachers and peers.

AI's integration into education does more than just streamline teaching; it prompts a reevaluation of the teaching profession and a redefinition of humanity itself. As AI assumes more cognitive tasks, the unique value of human educators comes into sharper focus, centering on the relational and emotional aspects of teaching. This shift heralds a new era in education, one where the human element is not just preserved but celebrated and elevated, defining a future where humans and machines work in tandem to create a richer, more holistic educational experience.

Wednesday, December 27, 2023

Originality over convention

Writing has long been a tightrope walk between adherence to convention and the pursuit of originality. Historically, deviating from established norms could brand you as uneducated, while a lack of originality risked the label of being clichéd. This delicate balance has been fundamentally disrupted by the advent of AI in writing, or "wraiting" as I like to call it.

In the pre-AI era, convention held significant value. It was a measure of education and intelligence, a yardstick to judge the clarity and correctness of one's thoughts. However, AI's ability to effortlessly follow these conventions has suddenly diminished their value. Originality has emerged as the sole contender in the arena of writing excellence. 

This seismic shift has understandably ruffled feathers. Many derive a sense of pride and authority from mastering and teaching these conventions. Yet, they now find themselves in a world where these skills are increasingly automated. This change isn't subject to debate or democratic process - it's an unstoppable wave reshaping the landscape.

Ironically, while AI excels in adhering to conventions, it's not inherently original. It can replicate, recombine, and reformat existing ideas, but the spark of true originality still lies uniquely within the human mind. This realization should be a beacon for writers in the AI era. The challenge is no longer about mastering the rules of writing but about pushing the boundaries of creativity and originality.

The implications for education are profound. Traditionally, a significant portion of writing education focused on teaching the rules – grammar, structure, formats. Now, these aspects can be delegated to AI tools. This frees educators to focus more on cultivating creativity, critical thinking, and originality. It's a shift from teaching the mechanics of writing to exploring the depths of imagination and expression.

For those resistant to this change, the path ahead may seem daunting. It involves unlearning the supremacy of convention and embracing a world where originality reigns supreme. However, this change is not a loss but an evolution. It's an opportunity to rediscover the essence of writing as an art form, where the value lies not in the adherence to rules but in the ability to transcend them.

In conclusion, the advent of AI in writing presents an opportunity for a paradigm shift. It's a call to writers and educators alike to redefine what constitutes good writing. As we navigate this new landscape, our focus should shift from convention to creativity, from format to imagination, ensuring that the heart of writing remains a distinctly human endeavor.

Wednesday, December 20, 2023

AI Pedagogy, the introduction

  1. An AI-powered chatbot is a tool. Like any other tool, it displaces human skills by aiding them. For example, CAD displaced manual drafting, and the word processor and printer displaced penmanship. Educators have an ethical obligation to prepare students for the world where the tool is used, not for a world where it does not exist. Skill displacement is expected.

  2. Writing with AI, or ‘wraiting,’ is an advanced and complex cognitive skill set whose mastery should be associated with students’ cognitive growth. It partially overlaps with traditional writing but does not coincide with it. Eventually, "wraiting" instruction should replace writing instruction.

  3. The default is to allow or require students to use AI. The only reasonable exception is when the use of AI prevents the development of a truly foundational skill. The pragmatic difficulties of policing the use of AI make it even more urgent to develop a rational justification for any restrictions.

  4. In some cases, the displaceable skill is foundational for learning higher-level skills. For example, basic literacy is not a displaceable skill because it is foundational for many other higher-level literacy skills. Therefore, limitations on the use of certain tools in education may be justifiable, although they may not be arbitrary.

  5. There must be rational criteria for distinguishing between displaceable and foundational skills. An assumption that all skills associated with traditional writing instruction are foundational is just as unreasonable as the assumption that they all are displaceable. The arguments about strict linearity of curriculum are not valid. Just because we used to teach certain skills in a certain progression does not mean that some of these skills cannot be displaced by AI or other tools.

  6. A skill is foundational and non-displaceable if:

    1. It is needed for pre-AI and non-AI tasks, or is needed to operate AI. 

    2. It is demonstrably needed to develop post-AI skills such as original, critical, creative, and discerning thinking (OCCD thinking).

  7. Rather than worrying about students cheating, instructors should make an effort to make their assignments cheat-proof. The key strategies are these:

    1. Asking students to submit their sequences of prompts to assess their development.

    2. Refocusing the evaluation rubric on OCCD thinking, de-emphasizing displaceable skills.

    3. Raising expectations by treating content produced via a lazy prompt as the base-level, failing product.

  8. Each use of AI is unique and raises different questions and concerns. Each should be evaluated separately in instruction. Here are some examples with sample prompts:

    1. Aggregator of information

      1. Tell me what is known about global warming

      2. Which philosophers are most notable in virtue ethics?

      3. Remind me what Cohen’s d is in statistics.

    2. Coach/Tutor/Counselor

      1. Test my knowledge of Spanish

      2. I feel overwhelmed and disengaged. What can I do?

      3. Give me some problems that are likely to be on GRE test, and explain what I did wrong

      4. Teach me how to [...] using Socratic dialogue, where you ask leading questions, and respond depending on my answers. Present your questions one by one

    3. Data processor

      1. Run multiple regression analysis on this data

      2. Summarize transcript, examine it for main themes and do sentiment analysis

      3. Give me keywords for each of these text segments

      4. Put data from this copied webpage into a table. I only need first name, last name, email. 

    4. Brainstorming partner

      1. I am thinking of writing a paper on… Which theories should I rely on? Who are the key authors?

      2. I have this idea… Has anyone else been offering an idea like this? Is it original?

      3. How would you build an argument, what supporting and opposing points should I consider? 

      4. I have these empirical data. What claims can I make based on them? 

    5. Feedback provider

      1. Give me feedback on my paper. Use the rubric it is going to be graded on

      2. What do you think I should do to improve this paper? 

      3. Give me feedback on my lesson plan

    6. Ghost writer

      1. Write a section of my paper; use these key ideas

      2. Elaborate and elucidate this fragment

    7. Editor and copy editor

      1. Give me feedback on the paper I just uploaded. Which parts need elaboration? Which parts may be redundant, and which too wordy?

      2. Revise this segment for clarity

      3. Revise the segment for grammar only

Monday, December 18, 2023

The prevailing mood of the discussion about AI in Education

This is a continuation of the analysis of the data set described in the previous post, just with a different question. Obviously, I do not endorse the bot's opinions or conclusions.

Can you assess the prevailing mood based on the speech patterns?

GPT

The prevailing mood in the digital ethnography of educators' discussions about AI in the classroom can be assessed through an analysis of the speech patterns and thematic content:

  1. Curiosity and Experimentation: There is a strong sense of curiosity and willingness to experiment with AI tools in the educational context. Educators are actively seeking information, sharing experiences, and exploring novel applications of AI in teaching various subjects, including special needs education. This exploratory tone suggests a generally positive and open attitude towards AI in education.
  2. Cautious Optimism: While there's an evident enthusiasm for the potential of AI, it's tempered with caution. Educators express concerns about privacy, data security, and the ethical implications of using AI tools, indicating a balanced view where optimism coexists with awareness of potential risks.
  3. Collaborative Engagement: The interactions reflect a collaborative mood among educators. They are keen on sharing insights, tips, and experiences, indicating a supportive community environment. This collaborative tone is indicative of a positive and community-oriented approach towards problem-solving and knowledge sharing.
  4. Practical Focus: There's a pragmatic focus on how AI can be utilized effectively in teaching and administrative tasks. Educators discuss practical applications such as grading, writing recommendation letters, and creating lesson plans, demonstrating a goal-oriented and practical mood towards AI integration.
  5. Concern and Vigilance: Amidst the enthusiasm, there's a discernible mood of concern, particularly regarding data privacy and the ethical use of AI. Educators are vigilant about the implications of AI use, suggesting a mood of responsibility and conscientiousness in adopting new technologies.
  6. Desire for Professional Growth: The discussions reveal a desire for professional development and a deeper understanding of AI tools. This indicates a mood of ambition and eagerness for growth and learning within the educator community.

In summary, the prevailing mood in the conversation is one of cautious optimism, characterized by a blend of enthusiasm and curiosity about AI's possibilities in education, tempered by a responsible and vigilant approach to its challenges and implications. This mood is underpinned by a collaborative spirit and a practical focus on leveraging AI for educational enhancement.

Can you give a list of the 10 most original ideas?

GPT

Based on the digital ethnography of the educators' discussion, the following can be considered the ten most original ideas for AI application in education:

  1. AI-Powered Grading Assistance: Using AI to assist in grading student work, with a focus on providing detailed feedback and reducing the time educators spend on this task​​.
  2. Historical Chatbots for Interactive Learning: Employing AI chatbots to simulate historical figures or situations, enhancing engagement and understanding in subjects like history and social studies​​.
  3. AI for Writing Recommendation Letters: Utilizing AI to draft initial versions of recommendation letters, thereby streamlining the process while maintaining personalization and authenticity​​.
  4. AI-Generated Teaching Materials and Lesson Plans: Leveraging AI to create customized teaching materials and lesson plans, particularly for language teaching and online education​​.
  5. Chatbots for Special Needs Education: Exploring the use of AI chatbots in special needs education to provide tailored learning experiences and support​​.
  6. AI in Early Childhood Education: Investigating how AI tools can be integrated into teaching fundamental subjects like English and math to young learners in grade 1​​.
  7. AI for Enhancing In-Character Writing: Using AI to provide detailed, historical context and background information to students, enabling them to write more authentically from the perspective of historical or fictional characters​​.
  8. AI-Driven Professional Development for Teachers: Offering professional development programs focused on AI tools, addressing ethical concerns, cheating prevention, and detection of AI-generated work in class​​.
  9. AI for Creating Educational Visuals: Using AI to generate visual aids like logos or images for educational purposes, such as team shirts or classroom decorations​​.
  10. AI in Digital Tech Teaching: Incorporating AI teaching in the curriculum for digital technology classes, introducing K-6 students to the basics of AI and its applications​​.

These ideas reflect a broad spectrum of innovative approaches to integrating AI into various aspects of education, demonstrating the potential of AI to transform and enhance the learning and teaching experience.


AI perceptions among educators

The Method:

We copied the last 30 threads from the ten largest Facebook groups on AI and education. The document, over 1,000 pages long (142,558 words), was uploaded to ChatGPT with a request to perform sentiment analysis. The groups are:

  • ChatGPT for Teachers & Educators
  • AI For Teachers
  • ChatGPT for Educators
  • AI for Teachers
  • The AI Classroom
  • ChatGPT for Teachers
  • Higher Ed discussions of AI writing
  • AI Teacher Resources with ChatGPT, Canva, & Midjourney
  • Higher Ed Learning Collective
  • AI and ChatGPT in Education
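
For readers who want to reproduce this kind of analysis programmatically rather than through the chat interface, the request can be sketched in a few lines of Python. This is a minimal illustration, not the method used here: the actual analysis was done by uploading the document in the ChatGPT interface, the file name below is hypothetical, and the model name is a placeholder.

```python
# Sketch: build a sentiment-analysis request for a corpus of copied threads.
# Only the message-building step is shown; the API call itself is commented out.

def build_sentiment_request(corpus: str, max_chars: int = 400_000) -> list[dict]:
    """Build a chat-completion message list asking for sentiment analysis.

    The corpus is truncated to max_chars as a crude guard against
    exceeding the model's context window.
    """
    return [
        {"role": "system",
         "content": ("You are a qualitative researcher performing sentiment "
                     "analysis on a corpus of educator discussions.")},
        {"role": "user",
         "content": ("Perform a sentiment analysis of the following discussion "
                     "threads and summarize the key sentiments expressed:\n\n"
                     + corpus[:max_chars])},
    ]

# Hypothetical usage with the OpenAI Python SDK:
# corpus = open("threads.txt", encoding="utf-8").read()
# from openai import OpenAI
# reply = OpenAI().chat.completions.create(
#     model="gpt-4", messages=build_sentiment_request(corpus))
# print(reply.choices[0].message.content)
```

A corpus of this size (142,558 words) would exceed most models' context windows in a single request, which is why the truncation guard, or a chunk-and-summarize loop, would be needed in practice.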

The results are below:

“Here’s a summary of the key sentiments expressed:

  1. Privacy and Data Security Concerns: Educators express concerns about privacy issues related to grading student work using AI tools like ChatGPT, citing issues such as data anonymization and the potential for data breaches. Some educators acknowledge the measures taken to anonymize data, while others suggest looking into AI companies that handle security well.
  2. Regulatory Compliance: Awareness of regional differences in data privacy regulations, such as GDPR in Europe, influences educator attitudes towards AI in education. Some educators express understanding of why companies are concerned about data privacy compliance, while others are less worried personally.
  3. Potential Risks and Misuse: There is a discussion about the potential risks associated with AI, including the re-surfacing of data and the reliability of AI outputs. Concerns are raised about AI’s potential to propagate misinformation and its impact on the educational process.
  4. Practical Applications and Benefits: Educators discuss practical uses of AI, such as assisting in grading and providing feedback, and generating teaching materials and lesson plans. AI is seen as a tool that can save time and enhance the learning experience.
  5. AI for Personalized Education: AI is considered useful for tasks like writing recommendation letters and creating personalized reports, highlighting its potential in catering to individual needs.
  6. Enhancing Creativity and Engagement: Some educators are exploring innovative ways to use AI for creative projects and enhancing student engagement, such as creating chatbots for educational purposes and using AI for art creation.
  7. Concerns About Ethical Implications and Academic Integrity: There are discussions about ethical considerations, such as the impact of AI on cheating and academic integrity. Educators ponder how AI might affect the nature of assessment and student work.
  8. Need for Professional Development and Policy Frameworks: The importance of professional development for educators to understand and effectively use AI is emphasized. There are requests for AI use policy templates and guidance on integrating AI into educational practices.
  9. Diverse Perspectives and Experiences: The sentiment among educators varies widely, with some expressing excitement and optimism about AI’s potential in education, while others show caution and skepticism. This diversity highlights the need for continued dialogue and exploration of AI’s role in education.
  10. Emerging Tools and Platforms: Educators are actively exploring and sharing various AI tools and platforms that can aid in teaching and learning, demonstrating a keen interest in staying abreast of technological advancements.

In summary, while there is significant interest and optimism about the potential of AI in education, concerns about privacy, data security, and ethical implications are prominent. The educators also highlight the importance of professional development in AI, the need for policies to guide its use, and the exploration of creative and innovative applications to enhance teaching and learning.” (ChatGPT 4, retrieved 12/14/2023).


Comment: The sample includes only educators and others already interested in the uses of AI in education, with one exception. They are in no way a representative sample. However, even among these self-selected groups, concerns still dominate over excitement and over pragmatics. While concerned, educators readily exchange practical tips on how to use AI to reduce their own workload. They are also very curious about specialized AI tools developed for educators. This should be taken into consideration by anyone planning to develop policy or offer professional development to educators.

Disclaimer: This analysis is done within the legal framework of investigative reporting, which does not require a formal IRB approval. Although I am a trained researcher, I am acting here as a blogger/journalist. This is not research and does not imply any claims of validity.

Thanks to Adriana Menjivar Enriquez for assistance. Feel free to suggest other questions to ask about the file. I have several in mind, and will publish more results next week.

Saturday, December 9, 2023

AI and neurodiversity

If AI were human, what would it be diagnosed with? Perhaps it would be Autism Spectrum Disorder (ASD). AI, akin to individuals with ASD, often struggles with social interactions and grasping emotional nuances. While they excel in specific tasks, abstract thinking or unpredictable social contexts pose challenges. Then there's Attention Deficit Hyperactivity Disorder (ADHD). AI can display ADHD-like traits: losing context in lengthy conversations or abruptly shifting focus. This metaphorical attention deficit mirrors the challenges individuals with ADHD face in maintaining long-term conversational coherence. Lastly, consider Executive Function Disorder. AI often falters when adapting to new, unstructured tasks, akin to the challenges faced by individuals with executive function disorder in organizing and executing tasks. AI's dependence on structured data and clear objectives limits its ability to handle open-ended scenarios.


Of course, treating every limitation as a diagnosis is ridiculous. When building a relationship with AI, we should not pigeonhole it with human diagnoses. Instead, adopting a neurodiversity framework allows us to appreciate AI's unique cognitive makeup. This approach emphasizes focusing on strengths and working around limitations, acknowledging that AI represents a different kind of intelligence.

Neurodiversity is a concept and social movement that advocates for understanding and appreciating neurological differences as natural human variations, rather than disorders or deficits. Originating from the autism community, the term has expanded to include a range of neurological conditions like ADHD, dyslexia, and others. This perspective emphasizes that neurological differences should be recognized and respected just like any other human variation, such as ethnicity or sexual orientation. The neurodiversity framework promotes the idea that individuals with these differences have unique strengths and perspectives, advocating for accommodations and support systems that allow them to thrive in society. This approach shifts the focus from trying to "cure" or "fix" these individuals to celebrating and utilizing their distinct abilities, fostering a more inclusive and understanding society.

Understanding AI through the lens of neurodiversity offers an alternative perspective. We should not try to make AI closely mimic human intelligence; that would be counterproductive. Instead, we must consider embracing AI as a distinct 'other.' This approach allows us to benefit from each other's strengths and compensate for weaknesses. This approach will also reduce the anxiety about AI eventually replacing us. If we remain different, we will need each other.

In constructing our relations with AI, we can benefit from reflection on our species' internal diversity. This recognition paves the way for a more harmonious coexistence, where the strengths of one can offset the limitations of the other, creating a synergistic relationship between human and artificial intelligence. If we apply a strictly normative framework, trying to make AI exactly like the neurotypical human mind, we’re inviting trouble; the same kind of trouble human societies experience when trying to be more homogenous than they are.

Understanding AI through the neurodiversity lens offers a chance for growth and collaboration. It is not just about programming and algorithms; it is about building a relationship with a fundamentally different form of intelligence. This approach will enable us to fully harness AI's potential while respecting its unique cognitive characteristics. As we continue to evolve alongside AI, this perspective will be crucial in guiding our interactions and expectations, fostering a future where diversity in all its forms is not just accepted but celebrated.

Thursday, December 7, 2023

A case against prompt engineering in education

Do we give students examples of great prompts, or do we allow them to struggle with developing their own prompting skills? This dilemma is common amongst educators integrating AI into their pedagogical strategies.

Refining prompts is a pivotal vehicle for cognitive advancement. It fosters growth by nudging students to navigate beyond their current capabilities. A meticulously crafted ready-made prompt, while yielding impressive results, might overshoot a student's zone of proximal development. The essence of learning lies in recognizing and rectifying flaws in the output. In other words, giving students a great prompt to begin with may produce a result whose flaws are painfully obvious to the instructor but completely invisible to the students. When students are handed sophisticated prompts, there is a risk of them becoming passive users, merely applying these tools without understanding or growth. Jack Dougal has provided some empirical evidence of this, and one of my colleagues will hopefully soon present similar results.

The general principle should be to calibrate potential outputs to a level where students can discern imperfections, and to encourage them to look for imperfections, guiding them to be critical of the output. Just because the text sounds good and the grammar is perfect does not mean the text is good. This approach encourages active engagement with the learning material, prompting students to question, adapt, and evolve their understanding. It is akin to guiding someone through a labyrinth; the instructor's role is to provide just enough light to help them find their way, without illuminating the entire path.

In the educational sphere, the prompt industry's role is contentious. While it offers a plethora of ready-made prompts, enhancing efficiency, this convenience comes at a cost to cognitive development. In academia, the journey of crafting and refining prompts is crucial for fostering critical thinking and problem-solving skills.

On the research front, the prompt industry does contribute valuable insights, empirically testing and refining prompts to optimize AI interactions. I loved finding out about the chain-of-thought approach, for example. However, a significant portion of the prompts available on the market are of dubious quality. These prompts, lacking empirical validation, are frequently oversold in their capabilities. The indiscriminate use of these untested prompts can result in suboptimal outcomes, reinforcing the necessity of a discerning approach to their adoption and application.

The overarching promise of AI lies in its potential to democratize content creation, designed to comprehend natural, imperfect language and provide equitable access to all, regardless of their mastery of writing mechanics, their disability, or fluency in the dominant language. This vision is threatened by attempts to monopolize and professionalize access to AI, a trend that runs counter to the very ethos of this technology. The notion that one must know 'magic words' to effectively communicate with AI is a form of self-interested deception. It undermines the inclusive and accessible nature of AI, turning it into a gated community where knowledge is unfairly hoarded rather than shared. Vigilance against such practices is essential to preserve the integrity and egalitarian promise of AI, ensuring it remains a tool for empowerment and collective advancement, rather than a vehicle for exclusion and profiteering.

Monday, December 4, 2023

Is AI doing too much for students?

Educators’ worry about AI boils down to the concept of the 'Goldilocks zone.' A learning task should be neither too challenging nor too simplistic, but just right, fitting within the learner's zone of proximal development. It is something that the learner can at first solve only with help, but eventually internalizes and can solve on their own. The concern is that AI, in its current form, might be overstepping this boundary, solving problems on behalf of learners instead of challenging and guiding them. It is like the rookie teacher who keeps solving problems for students and rewriting their papers, and then wonders why they have not learned anything. I want to acknowledge that this concern is insightful and grounded in both theory and the everyday practice of teachers. However, the response to it is not simple. AI cannot be dismissed or banned based on this critique.

First, there is the question of what skills are truly worth learning. This is the most profound, fundamental question of all curriculum design. We know that certain basic procedural skills go out of use, and learners leapfrog them to free time to concentrate on more advanced skills. For example, dividing long numbers by hand used to be a critical procedural skill, but it is no longer worth the time, given the ubiquity of calculators. There is a legitimate, and sometimes passionate, debate about whether the mechanics of writing is such a basic procedural skill that can be delegated to the machines. I do not want to prejudge the outcome of this debate, although I personally lean towards a "yes," assuming that people will never go back to fully manual writing. The real answer will probably be more complicated: it is likely that some kinds of procedural knowledge will remain fundamental, and others will not. We simply do not have enough empirical data to make that call yet.

A similar debate is whether the ability to manually search and summarize research databases is still a foundational skill, or whether we can trust AI to do that work for us. (I am old enough to remember professors insisting students go to the physical library and look through physical journals.) This debate is complicated by the fact that AI engineers are still struggling to solve the hallucination problem. There is also a whole different debate on authorship that is not specific to education but affects us as well. The first approach, then, is to rethink what is worth teaching and learning, and perhaps focus on skills that humans are really good at and AI is not. In other words, we reconstruct the "Goldilocks zone" for a different skill set.

The second approach centers on the calibration of AI responses. Currently, this is not widely implemented, but the potential exists. Imagine an AI that acts not as a ready-made solution provider but as a coach, presenting tasks calibrated to the learner's individual skill level. It is sort of like an AI engine with training wheels, both limiting it and enabling the user to grow. This approach would require educational AI modules programmed to adjust to each user's level. Item Response Theory in psychometrics can guide us in building such models, although I am not aware of any robust working model yet. Once the Custom GPT feature starts working better, it is only a matter of time before creative teachers build many such models.
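To make the idea concrete, here is a minimal sketch of how such a calibration loop could work under the one-parameter (Rasch) model from Item Response Theory. This is an illustration, not a description of any existing product: the function names and the fixed update step are my own assumptions, and a real system would use a proper ability estimator.

```python
import math

def p_correct(theta, b):
    """Rasch model: probability that a learner with ability theta
    answers an item of difficulty b correctly."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def pick_item(theta, difficulties):
    """Pick the item whose difficulty is closest to the learner's
    estimated ability -- the 'just right' Goldilocks task, where the
    chance of success is near 50%."""
    return min(difficulties, key=lambda b: abs(b - theta))

def update_ability(theta, b, correct, step=0.5):
    """Nudge the ability estimate toward the evidence: up after an
    unexpected success, down after an unexpected failure.
    (A fixed step is a simplification for illustration.)"""
    surprise = (1.0 if correct else 0.0) - p_correct(theta, b)
    return theta + step * surprise
```

A tutoring loop would then repeat: `pick_item` to select the next task, observe whether the learner solves it, and `update_ability` to refine the estimate, so the tasks track the learner's zone of proximal development rather than solving problems for them.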

Both approaches underscore the importance of not dismissing AI's role in education but rather fine-tuning it to enhance learning. AI is here to stay, and rather than fearing its overreach, we should harness its capabilities to foster more advanced thinking skills.

These are conversations we cannot shy away from. It is important to apply some sort of theoretical framework to this debate, so it does not deteriorate into a shouting match of opinions. A Vygotskian or Brunerian framework, or any other, will do. Vygotsky was especially interested in the use of tools in learning, and AI is just a new kind of tool. Tools are not all created equal, and some are better than others for education. The ultimate question is what kind of a learning tool AI is, and whether we should adjust the learning, adjust the tool, or do both.

Monday, November 27, 2023

Assessing writing with AI

Writing with AI is a complex skill that overlaps with traditional manual writing, but it is not the same. Many instructors struggle to grasp this new skill because it is unfamiliar to them. Teaching something you have not mastered is challenging, leading to noticeable unease at all educational levels. Even instructors eager to incorporate AI into their teaching, usually open to innovation, face this difficulty. The issue essentially lies in redefining the objectives of writing instruction. If the belief is that students should ultimately write independently, then traditional practice is paramount, leaving no role for AI tools. However, the more challenging conceptual shift is recognizing the need to teach students how to write with AI. This is like the transition from penmanship to typing. We lose something in this shift: the beauty, the discipline, and the rigorous exercises of handwriting. I recall diligently practicing letter formations in my first-grade penmanship class. Although I was never adept at it and gladly transitioned to typewriters when they became accessible, I understand the pain of losing the esteemed art of writing, cherished for centuries. This pain, particularly acute for those who have spent decades mastering and teaching writing, must be acknowledged. Yet this shift seems inevitable. We are dealing with a technology that is being adopted faster than any in history, and it is not a passing fad. The benefits are too clear. We face a stark paradox: educators use AI to create lesson plans and assessment rubrics, yet often bar their students from using the same technology. This is unsustainable and awkward.

As a profession, we are only taking the first steps in integrating AI into writing instruction. Here's another baby step: I revised Sacramento State University's Undergraduate Writing Portfolio Assessment criteria, considering the new skill of "wraiting."

Writing Placement for Juniors Portfolio (WPJ)

5 - Exceptional Wraiter: Demonstrates mastery in "wraiting," producing AI-assisted compositions at a publishable level in their respective discipline. Showcases exceptional skill in generating rich, engaging prompts and collaboratively refining AI outputs. Exhibits a deep understanding of AI's strengths and limitations, skillfully navigating these in producing original, high-quality work.

4 - Strong Wraiter: Effectively employs AI tools in "wraiting," producing texts of high quality that reflect a sophisticated understanding of AI's capabilities. Demonstrates the ability to create rich prompts and engage in the iterative process of refining AI-generated content. Shows a clear grasp of AI's strengths and limitations, using them to enhance original thinking and critical evaluation.

3 - Competent Wraiter: Demonstrates a solid understanding of "wraiting," using AI tools to assist in writing tasks. Capable of creating effective prompts and engaging in the process of refining AI outputs. Shows awareness of the strengths and limitations of AI in writing, but may require further guidance to fully exploit these in creating high-quality texts.

2 - Developing Wraiter: Beginning to understand the role of AI in "wraiting." Can generate basic AI-assisted texts but requires further instruction in creating effective prompts and refining outputs. Shows potential in understanding AI's strengths and limitations, but needs more practice to integrate these effectively in writing tasks.

1 - Emerging Wraiter: Early stages of grasping "wraiting." Struggles with effectively using AI tools, often producing clichéd, uninspired texts that lack human input and originality. Needs substantial guidance in understanding AI's capabilities, constructing prompts, and refining AI-generated content.

0 - Incomplete Portfolio: Portfolio does not demonstrate the basic competencies in "wraiting" or effective use of AI in writing tasks. Requires additional work to understand and skillfully employ AI tools in the writing process.

What do you think?

Turnitin Is Selling us Snake Oil, or Why AI Detection Cannot Work

The notion of measuring "AI-generated text" as a fixed percentage of an academic submission is fundamentally flawed. This metric i...