Friday, February 9, 2024

The Advising Bot Dilemma

In educational organizations, the integration of AI, particularly through automated advising tools like chatbots, embodies a strategic advancement yet introduces a complex dilemma. These digital advisors, designed to navigate queries ranging from academic programs to student services, highlight a pivotal choice between precision and broad utility.

At one pole, AI bots can be meticulously engineered to handle even vaguely formulated inquiries while providing only correct answers that humans have manually curated. This approach, while ensuring a high level of fidelity, is marked by a slow and expensive development process. For entities with vast knowledge bases or intricate operations, the manual input required could significantly dilute the efficiency gains such tools promise to deliver.

Conversely, AI advisors programmed for wider application operate by not only interpreting queries, but also sourcing answers from a pre-existing repository of documents and websites. This method, though expedient, compromises on accuracy, a drawback that becomes more pronounced within the context of large and diverse information repositories.

A balanced strategy proposes the coexistence of both high and low-fidelity bots within the educational sphere. Low-fidelity bots offer an expedient first layer of support, adept at managing basic inquiries through triage advising. Tailoring these bots to specific domains and incorporating clear disclaimers could mitigate the risk of misinformation, directing students towards accurate resources while alleviating the administrative burden on staff.

For situations where accuracy is paramount, a semi-automatic model emerges as a superior alternative, at least for now. This model envisions a symbiotic relationship between AI systems and human advisors, with AI proposing potential responses and the advisor ensuring their validity. Such a configuration enhances efficiency without compromising the integrity of the advice provided.

AI imperfections may sometimes be tolerated. AI adoption requires a pragmatic cost-benefit analysis. The evaluation hinges on whether the operational efficiencies gained through deploying lower-fidelity systems justify the associated risks. We must compare them not to a very expensive and very reliable alternative, but to not getting any advice at all, or receiving it from roommates and random sources. The decision on whether to limit these systems to straightforward queries or to implement them within defined subject areas requires careful consideration.

Addressing these trade-offs is crucial for harnessing AI's potential in educational settings. This nuanced approach, advocating for a judicious blend of high and low-fidelity advising tools, underscores the importance of strategic planning in AI deployment. It offers a pathway to leverage technological advancements, ensuring they complement rather than complicate the educational mission.

Tuesday, February 6, 2024

AI undermines linguistic privilege

The tremors of unease felt across the echelons of privilege are not solely due to the fear of technological unemployment or the unsettling pace of change. Rather, they stem from a deeper, more introspective anxiety: the threat AI poses to the use of language as a bastion of privilege. For centuries, mastery over the nuanced realms of oral and written speech has served as a subtle yet potent tool of social stratification, a way to gatekeep the corridors of power and influence. But as AI begins to democratize these linguistic capabilities, it inadvertently challenges the very foundations of societal hierarchies, provoking a backlash draped in ethical rhetoric that masks a more self-serving agenda.

Language, in its most refined forms, has long been a marker of education, sophistication, and belonging. To speak with the clipped accents of an upper-class Englishman, to wield the jargon of academia, or to navigate the complex conventions of professional communication has been to hold a key to doors otherwise closed. These linguistic markers function as tacit gatekeepers, delineating who belongs within the inner circles of influence and who remains outside, their voices deemed less worthy. The assertion that one must speak or write in a certain way to be considered intelligent or capable reinforces societal power structures and perpetuates inequities. It's a subtle form of oppression, one that privileges certain dialects, accents, and syntactical forms over others, equating linguistic conformity with intelligence and worthiness.

Enter the realm of artificial intelligence, with its natural language processing capabilities and machine learning algorithms. AI, with its inherent impartiality to the accents, dialects, and syntactical structures it mimics, does not discriminate based on the traditional markers of linguistic prestige. It can generate scholarly articles, craft professional emails, or compose poetic verses with equal ease, regardless of the socioeconomic or cultural background of the user. This leveling of the linguistic playing field poses a direct challenge to those who have historically leveraged their mastery of language as a means of maintaining status and privilege.

Critics of AI often cloak their apprehensions in the guise of ethical concerns, voicing fears about data privacy, algorithmic bias, or the dehumanization of communication. While these issues are undoubtedly important, they sometimes serve to obscure a more uncomfortable truth: the democratizing impact of AI on language threatens to undermine traditional power dynamics. The reluctance to embrace this technology fully may, in part, stem from a reluctance to relinquish the privilege that comes with linguistic mastery.

This resistance to change is not a new phenomenon. Throughout history, technological advancements have often been met with skepticism by those whose status quo they disrupt. The printing press, the telephone, and the internet all faced initial pushback from those who feared the loss of control over information dissemination. Similarly, AI's impact on language is merely the latest battleground in the ongoing struggle between progress and privilege.

Yet, the equalizing potential of AI should not be viewed with apprehension but embraced as an opportunity for societal advancement. By breaking down the barriers erected by linguistic elitism, AI can facilitate more inclusive, diverse forms of communication. It can empower individuals from all backgrounds to express themselves effectively, participate in scholarly discourse, and compete in professional arenas on equal footing. In doing so, AI can help to dismantle some of the systemic barriers that have perpetuated inequality and hindered social mobility.

The anxiety surrounding AI's impact on language reflects broader concerns about the erosion of traditional forms of privilege. As AI continues to advance, it challenges us to reconsider the values we ascribe to certain forms of linguistic expression and to question the fairness of societal structures built upon them. Embracing the democratizing influence of AI on language could lead to a more equitable and inclusive society, where intelligence and capability are recognized in all their diverse expressions, rather than gauged by adherence to arbitrary linguistic norms. In the end, the true measure of progress may not be in the sophistication of our technologies but in our willingness to let go of outdated markers of privilege.

Tuesday, January 30, 2024

The tiny tools issue

The world of AI implementation has three tiers. At the base are user-friendly, ready-to-use AI tools – the digital world's equivalent of instant coffee: one can simply go to a chatbot and start typing. Ascending a level, there is the realm of tiny tools like APIs, a middle ground easily accessible to coders but mystifying to the layperson. The apex of this hierarchy is reserved for integrated, complex AI solutions – the grand orchestras of technology, both sophisticated and costly.

The drama in AI implementation, however, is not rooted in the existence of these tiers, but in their portrayal and accessibility. Providers, often driven by material interests, tend to downplay the simplicity and adequacy of the lower tiers. This misrepresentation is not just about pushing expensive solutions; it is a deeper issue of monopolizing knowledge and perpetuating power imbalances. Of course, if one knows how to do something that others do not, they want to make themselves look more essential, so they can sell their expertise.

The key takeaway here is to be a discerning consumer. Before opting for an expensive, integrated solution, consider first whether you can do it yourself and, if not, whether a tiny tool would suffice. Perhaps a computer science student could craft a solution efficiently and affordably. Or there might be a vendor that sells just the tiny tool needed. This approach is not just about saving resources; it is about maintaining control in an increasingly technology-driven world. Surrendering to high-end solutions can lead to a loss of autonomy, difficult to reclaim once given up.
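To make the "tiny tool" tier concrete, here is a minimal sketch of what such a tool might look like: a few lines of Python that prepare a request body for a chat-completion-style API. The endpoint URL, model name, and payload shape are illustrative assumptions, not any specific vendor's documented interface; the point is only that the middle tier can be this small.

```python
import json

# Hypothetical endpoint -- a real tool would substitute a vendor's URL.
API_URL = "https://api.example.com/v1/chat/completions"

def build_request(question: str, model: str = "some-model") -> str:
    """Return the JSON body a tiny advising tool would POST to the API."""
    payload = {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a concise academic advisor."},
            {"role": "user", "content": question},
        ],
    }
    return json.dumps(payload)

body = build_request("Which courses satisfy the writing requirement?")
```

A student coder could wrap this in a dozen more lines (an HTTP POST and a loop) and have a working departmental helper, which is precisely the kind of solution the integrated-platform sales pitch tends to obscure.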

When faced with AI implementation decisions, balance is crucial. It is essential to recognize where one's needs fit within this spectrum and make choices that harmonize practicality, cost, and control. The most effective tool at our disposal is not always the most complex or expensive one, but often our own critical thinking. By understanding the nuances of these AI tiers, we can make informed decisions that preserve our technological independence and prevent being unwittingly upsold solutions that far exceed our actual needs.

Monday, January 29, 2024

Writing instructors, why are you surprised by AI?

Why do you look surprised?  Since the 1970s, there has been a shift in your field. This change was not about refining the minutiae of grammar or punctuation. Rather, it was a movement toward valuing the creative process in writing. Think of pioneers like Donald Graves, Lucy Calkins, and Peter Elbow. They were not merely toying with new ideas; they were fundamentally altering how writing is taught, influencing college-level instruction as well.

The advent of AI technology has accelerated a shift that was already underway. Historically, while there was vocal support for creative and critical thinking, the reality often leaned towards assessing grammar and spelling. It was simpler to grade based on these concrete elements. Judging originality and creativity posed greater challenges, especially when justifying grades during student appeals.

However, it is becoming clear that the reliance on traditional assessment is no longer sustainable. It is time to genuinely embrace what has been acknowledged for decades. The focus should shift more towards teaching originality, creativity, authenticity, discernment, and critical thinking. Ideas should be valued over mechanical accuracy.

A crucial aspect of this evolution is teaching students to write with AI assistance. This approach does not diminish writing standards. Instead, it raises the bar for the final product. Students should learn to use AI as a tool to enhance their writing, not as a substitute for critical thinking or creativity.

Dear writing instructors, the time has come to adapt. And you know how to do it better than anyone else. The gradual shift many of you have been working on is now upon us. This is a moment for re-evaluating, rethinking, and embracing a new phase in education where AI complements and enhances the teaching of writing. The future is here, and it aligns with the trajectory you have been following.

Thursday, January 25, 2024

Prompt patterns

Just sharing a summary of a paper that tried to develop a catalog of prompt patterns. The source:

"A Prompt Pattern Catalog to Enhance Prompt Engineering with ChatGPT" by Jules White, Quchen Fu, Sam Hays, Michael Sandborn, Carlos Olea, Henry Gilbert, Ashraf Elnashar, Jesse Spencer-Smith, Douglas C. Schmidt. Arxiv. https://doi.org/10.48550/arXiv.2302.11382 

  1. Meta Language Creation Pattern: Focuses on creating a custom language for LLMs to improve their understanding of prompts.
  2. Output Automater Pattern: Aims to automate the generation of actionable steps or scripts in response to prompts.
  3. Flipped Interaction Pattern: Involves reversing the typical interaction flow, with the LLM posing questions to the user.
  4. Persona Pattern: Assigns a specific persona or role to an LLM to guide its output generation.
  5. Question Refinement Pattern: Enhances the LLM's responses by refining the user's questions for clarity and focus.
  6. Alternative Approaches Pattern: Encourages the LLM to offer different methods or perspectives for tackling a task.
  7. Cognitive Verifier Pattern: Involves the LLM generating sub-questions to better understand and respond to the main query.
  8. Fact Check List Pattern: Guides the LLM to produce a list of facts or statements in its output for verification.
  9. Template Pattern: Involves using a predefined template to shape the LLM's responses.
  10. Infinite Generation Pattern: Enables the LLM to continuously generate output without repeated user prompts.
  11. Visualization Generator Pattern: Focuses on generating text outputs that can be converted into visualizations by other tools.
  12. Game Play Pattern: Directs the LLM to structure its outputs in the form of a game.
  13. Reflection Pattern: Encourages the LLM to introspect and analyze its own outputs for potential errors or improvements.
  14. Refusal Breaker Pattern: Designed to rephrase user queries in situations where the LLM initially refuses to respond.
  15. Context Manager Pattern: Controls the contextual information within which the LLM operates to tailor its responses.
  16. Recipe Pattern: Helps users obtain a sequence of steps or actions to achieve a desired result.

Each pattern is detailed with its intent, context, structure, key ideas, example implementations, and potential consequences.

I want to acknowledge a good attempt, but I am not sure this list is very intuitive or very helpful. In practical terms, we either ask questions or give tasks, defining some output parameters such as genre, audience, and style. Still, someone might find the catalog useful as a starting point. We do need some way of classifying prompts.
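The simpler classification suggested above can itself be sketched in a few lines: a prompt is a question or a task, plus optional output parameters like genre, audience, and style. The class and field names below are my own illustration, not anything from the paper.

```python
from dataclasses import dataclass, field

@dataclass
class Prompt:
    """A prompt reduced to its practical parts: a question or a task,
    plus optional output parameters (genre, audience, style, ...)."""
    kind: str                 # "question" or "task"
    body: str                 # the question asked or the task given
    params: dict = field(default_factory=dict)

    def render(self) -> str:
        """Flatten the prompt into the text actually sent to a chatbot."""
        lines = [self.body]
        for name, value in self.params.items():
            lines.append(f"{name.capitalize()}: {value}")
        return "\n".join(lines)

p = Prompt("task", "Summarize the attached article.",
           {"audience": "first-year students", "style": "plain language"})
text = p.render()
```

Two kinds plus a handful of parameters already cover most of the sixteen patterns in practice, which is why a flat catalog feels harder to remember than it needs to be.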

Tuesday, January 23, 2024

What is the killer app for AI-powered chatbots?

In a recent interview, I was asked a thought-provoking question about the most impressive application of AI that holds the greatest potential. This was basically a question about the "killer app." The term "killer app" was coined by pioneers of mass computing to mean software so essential that it drives the success of a larger platform or system. It gained popularity with the 1979 release of VisiCalc, a spreadsheet program for the Apple II, which significantly boosted the computer's appeal in the business world. "Killer app" now broadly refers to any software or service that significantly drives the adoption of a technology.

My response named a broad spectrum of AI applications where the core task involves comparing or merging two documents. Consider the everyday tasks like grading student papers, which essentially is juxtaposing a grading rubric against student submissions. Or the process of job applications, where one's resume or cover letter is matched with the job description. Even more intricate tasks like reviewing contracts involve a comparative analysis between the contract's text and relevant laws and regulations. Similarly, writing a grant application is a fusion of the request for proposal (RFP) with one's own ideas or previously written articles.

This insight opens up a broader perspective on the nature of our intellectual activities in the workplace. Many of these tasks revolve around blending, merging, and oscillating between two or more texts. If we start viewing our tasks through the lens of 'feeding the AI beast' with relevant documents, we unlock a new way to leverage this astonishing technology for our benefit.
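"Feeding the AI beast" with two documents can be as simple as juxtaposing them in a single prompt. The sketch below uses the grading example from above: a rubric and a student submission merged into one request. The template wording and function name are illustrative assumptions, not a prescribed format.

```python
def merge_documents(rubric: str, submission: str) -> str:
    """Juxtapose two documents -- here a rubric and a submission --
    into a single prompt for comparative analysis."""
    return (
        "Compare the submission against the rubric and give feedback.\n\n"
        f"=== RUBRIC ===\n{rubric}\n\n"
        f"=== SUBMISSION ===\n{submission}\n"
    )

prompt = merge_documents(
    "Thesis is clearly stated. Evidence supports each claim.",
    "In this essay I argue that school start times should shift later...",
)
```

The same two-document shape covers the other examples as well: swap in a job description and a resume, or a contract and the relevant regulations, and the structure of the prompt does not change.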

The implications of this AI capability are profound. It's not just about simplifying tasks; it's about enhancing our cognitive processes. Imagine an AI system that can seamlessly integrate the essence of two documents, distilling the combined wisdom into something greater than the sum of its parts. This isn't just about automation; it's about augmentation. It's the fusion of human intellect with machine precision that could redefine how we approach problem-solving.

Let's delve deeper into the examples. In the educational sector, the grading of papers becomes not just a task of assessment but an opportunity for tailored feedback. The AI, by comparing a student's work with the rubric, can identify nuances that might be overlooked in a manual review. It can offer insights into a student's thought process, learning style, and areas needing improvement. This isn't just grading; it's a gateway to personalized education.

In the corporate world, the process of job applications or contract reviews is transformed. The AI's ability to merge and compare documents means it can align a candidate's skills and experiences with a job's requirements more accurately, potentially revolutionizing recruitment processes. Similarly, in legal settings, reviewing contracts with AI can ensure compliance and mitigate risks more efficiently, saving countless hours and reducing human error.

In short, the real magic of AI lies in its ability to blend and compare documents, a seemingly mundane task that, upon closer examination, reveals itself as a key to unlocking new dimensions of efficiency, creativity, and understanding. 

Navigating the AI Gold Rush: Skins, Security, and the Real Value Proposition

 The economic battle surrounding artificial intelligence is intensifying at an unprecedented pace. Major AI players like OpenAI, Google, Met...