Tuesday, January 30, 2024

The tiny tools issue

The world of AI implementation has three tiers. At the base are user-friendly, ready-to-use AI tools – the digital world's equivalent of instant coffee: you can simply go to a chatbot and do your thing. Ascending a level, there is the realm of tiny tools like APIs, a middle ground easily accessible to coders but mystifying to the layperson. The apex of this hierarchy is reserved for integrated, complex AI solutions – the grand orchestras of technology, both sophisticated and costly.
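To make the middle tier concrete, here is a minimal sketch of what a "tiny tool" can look like: a few lines of Python against a chat-completions API. The model name and the task are placeholders of my own, and the snippet assumes the OpenAI Python SDK with an API key in the environment; any comparable API would serve just as well.

```python
# A "tiny tool" in practice: a few lines against an LLM API.
# Assumes the OpenAI Python SDK (pip install openai) and an API key
# in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name; substitute your own
    messages=[
        {"role": "system", "content": "You summarize documents in plain language."},
        {"role": "user", "content": "Summarize this memo in three bullet points: ..."},
    ],
)
print(response.choices[0].message.content)
```

A computer science student, or a motivated amateur, can wrap a snippet like this into a working internal tool in an afternoon – which is precisely the point of the middle tier.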

The drama in AI implementation, however, is not rooted in the existence of these tiers, but in their portrayal and accessibility. Providers, often driven by material interests, tend to downplay the simplicity and adequacy of the lower tiers. This misrepresentation is not just about pushing expensive solutions; it is a deeper issue of monopolizing knowledge and perpetuating power imbalances. Of course, anyone who knows how to do something that others do not has an incentive to appear more essential than they are, the better to sell their expertise.

The key takeaway here is to be a discerning consumer. Before opting for an expensive, integrated solution, consider first whether you can do it yourself, and if not, whether a tiny tool would suffice. Perhaps a computer science student could craft a solution efficiently and affordably. Or there might be a vendor that sells just the tiny tool needed. This approach is not just about saving resources; it is about maintaining control in an increasingly technology-driven world. Surrendering to high-end solutions can lead to a loss of autonomy that is difficult to reclaim once given up.

When faced with AI implementation decisions, balance is crucial. It is essential to recognize where one's needs fit within this spectrum and make choices that harmonize practicality, cost, and control. The most effective tool at our disposal is not always the most complex or expensive one, but often our own critical thinking. By understanding the nuances of these AI tiers, we can make informed decisions that preserve our technological independence and prevent being unwittingly upsold solutions that far exceed our actual needs.

Monday, January 29, 2024

Writing instructors, why are you surprised by AI?

Why do you look surprised? Since the 1970s, there has been a shift in your field. This change was not about refining the minutiae of grammar or punctuation. Rather, it was a movement toward valuing the creative process in writing. Think of pioneers like Donald Graves, Lucy Calkins, and Peter Elbow. They were not merely toying with new ideas; they were fundamentally altering how writing is taught, influencing college-level instruction as well.

The advent of AI technology has accelerated a shift that was already underway. Historically, while there was vocal support for creative and critical thinking, the reality often leaned towards assessing grammar and spelling. It was simpler to grade based on these concrete elements. Judging originality and creativity posed greater challenges, especially when justifying grades during student appeals.

However, it is becoming clear that the reliance on traditional assessment is no longer sustainable. It is time to genuinely embrace what has been acknowledged for decades. The focus should shift more towards teaching originality, creativity, authenticity, discernment, and critical thinking. Ideas should be valued over mechanical accuracy.

A crucial aspect of this evolution is teaching students to write with AI assistance. This approach does not diminish writing standards. Instead, it raises the bar for the final product. Students should learn to use AI as a tool to enhance their writing, not as a substitute for critical thinking or creativity.

Dear writing instructors, the time has come to adapt. And you know how to do it better than anyone else. The gradual shift many of you have been working on is now upon us. This is a moment for re-evaluating, rethinking, and embracing a new phase in education where AI complements and enhances the teaching of writing. The future is here, and it aligns with the trajectory you have been following.

Thursday, January 25, 2024

Prompt patterns

Just sharing a summary of a paper that tried to develop a catalog of prompt patterns. The source:

"A Prompt Pattern Catalog to Enhance Prompt Engineering with ChatGPT" by Jules White, Quchen Fu, Sam Hays, Michael Sandborn, Carlos Olea, Henry Gilbert, Ashraf Elnashar, Jesse Spencer-Smith, Douglas C. Schmidt. arXiv. https://doi.org/10.48550/arXiv.2302.11382

  1. Meta Language Creation Pattern: Focuses on creating a custom language for LLMs to improve their understanding of prompts.
  2. Output Automater Pattern: Aims to automate the generation of actionable steps or scripts in response to prompts.
  3. Flipped Interaction Pattern: Involves reversing the typical interaction flow, with the LLM posing questions to the user.
  4. Persona Pattern: Assigns a specific persona or role to an LLM to guide its output generation.
  5. Question Refinement Pattern: Enhances the LLM's responses by refining the user's questions for clarity and focus.
  6. Alternative Approaches Pattern: Encourages the LLM to offer different methods or perspectives for tackling a task.
  7. Cognitive Verifier Pattern: Involves the LLM generating sub-questions to better understand and respond to the main query.
  8. Fact Check List Pattern: Guides the LLM to produce a list of facts or statements in its output for verification.
  9. Template Pattern: Involves using a predefined template to shape the LLM's responses.
  10. Infinite Generation Pattern: Enables the LLM to continuously generate output without repeated user prompts.
  11. Visualization Generator Pattern: Focuses on generating text outputs that can be converted into visualizations by other tools.
  12. Game Play Pattern: Directs the LLM to structure its outputs in the form of a game.
  13. Reflection Pattern: Encourages the LLM to introspect and analyze its own outputs for potential errors or improvements.
  14. Refusal Breaker Pattern: Designed to rephrase user queries in situations where the LLM initially refuses to respond.
  15. Context Manager Pattern: Controls the contextual information within which the LLM operates to tailor its responses.
  16. Recipe Pattern: Helps users obtain a sequence of steps or actions to achieve a desired result.

Each pattern is detailed with its intent, context, structure, key ideas, example implementations, and potential consequences.
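To see what these patterns look like in practice, here is a small sketch of two of them rendered as plain prompt text. The wording is my own illustration, not drawn from the paper.

```python
# Two of the paper's patterns expressed as reusable prompt strings.
# The phrasing is illustrative, not quoted from the paper itself.

# Persona Pattern: assign the model a role to guide its output.
persona_prompt = (
    "Act as a senior copy editor for an academic press. "
    "Review the following abstract and suggest improvements:\n{text}"
)

# Flipped Interaction Pattern: have the model ask the questions.
flipped_interaction_prompt = (
    "I want to plan a one-week introductory statistics course. "
    "Ask me questions, one at a time, until you have enough "
    "information, then propose a syllabus."
)

print(persona_prompt.format(text="Our study examines..."))
```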

This is a good attempt, but I am not sure the list is very intuitive or very helpful. In practical terms, we either ask questions or give tasks, defining some output parameters – genre, audience, style, and so on. Still, someone might find this catalog a useful starting point for further thinking. We do need some way of classifying prompts.

Tuesday, January 23, 2024

What is the killer app for AI-powered chatbots?

In a recent interview, I was asked a thought-provoking question: which application of AI is the most impressive and holds the greatest potential? This was basically a question about the "killer app." The term was coined by pioneers of mass computing to mean software so essential that it drives the success of a larger platform or system. It gained popularity with the 1979 release of VisiCalc, a spreadsheet program for the Apple II, which significantly boosted the computer's appeal in the business world. "Killer app" now broadly refers to any software or service that significantly drives the adoption of a technology.

My response named a broad spectrum of AI applications where the core task involves comparing or merging two documents. Consider everyday tasks like grading student papers, which is essentially juxtaposing a grading rubric against student submissions. Or the process of job applications, where one's resume or cover letter is matched with the job description. Even more intricate tasks like reviewing contracts involve a comparative analysis between the contract's text and relevant laws and regulations. Similarly, writing a grant application is a fusion of the request for proposal (RFP) with one's own ideas or previously written articles.

This insight opens up a broader perspective on the nature of our intellectual activities in the workplace. Many of these tasks revolve around blending, merging, and oscillating between two or more texts. If we start viewing our tasks through the lens of 'feeding the AI beast' with relevant documents, we unlock a new way to leverage this astonishing technology for our benefit.
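Here is a minimal sketch of this two-document workflow, using grading against a rubric as the example. It assumes the OpenAI Python SDK; the file names, model name, and prompt wording are all placeholders of my own.

```python
# Sketch of the "two documents" workflow: compare a student essay
# against a grading rubric. Assumes the OpenAI Python SDK and an
# API key in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

# Hypothetical input files; substitute your own documents.
rubric = open("rubric.txt").read()
essay = open("student_essay.txt").read()

prompt = (
    "Compare the student essay below against the grading rubric. "
    "For each rubric criterion, quote the relevant passage from the "
    "essay, judge how well it meets the criterion, and suggest one "
    "concrete improvement.\n\n"
    f"RUBRIC:\n{rubric}\n\nESSAY:\n{essay}"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

The same skeleton serves the other examples: swap in a resume and a job description, or a contract and the relevant regulation, and only the prompt changes.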

The implications of this AI capability are profound. It's not just about simplifying tasks; it's about enhancing our cognitive processes. Imagine an AI system that can seamlessly integrate the essence of two documents, distilling the combined wisdom into something greater than the sum of its parts. This isn't just about automation; it's about augmentation. It's the fusion of human intellect with machine precision that could redefine how we approach problem-solving.

Let's delve deeper into the examples. In the educational sector, the grading of papers becomes not just a task of assessment but an opportunity for tailored feedback. The AI, by comparing a student's work with the rubric, can identify nuances that might be overlooked in a manual review. It can offer insights into a student's thought process, learning style, and areas needing improvement. This isn't just grading; it's a gateway to personalized education.

In the corporate world, the process of job applications or contract reviews is transformed. The AI's ability to merge and compare documents means it can align a candidate's skills and experiences with a job's requirements more accurately, potentially revolutionizing recruitment processes. Similarly, in legal settings, reviewing contracts with AI can ensure compliance and mitigate risks more efficiently, saving countless hours and reducing human error.

In short, the real magic of AI lies in its ability to blend and compare documents, a seemingly mundane task that, upon closer examination, reveals itself as a key to unlocking new dimensions of efficiency, creativity, and understanding. 

Monday, January 22, 2024

Why AI is unlikely to replace teachers

The allure of a tech-driven utopia in education is not new. Radios, televisions, the internet, MOOCs – each has been heralded as a harbinger of the traditional teacher's obsolescence. Today, AI steps into this familiar spotlight, with some prophesying a future with fewer educators. Understanding this perspective isn't challenging, given the enormity of public education's budget, the stubborn inequalities it harbors, and its notorious resistance to reform. However, the notion of significantly reducing teacher numbers through AI implementation seems, at best, a distant fantasy.

Chatbots, the latest prodigies of AI, have proven to be exceptional personal tutors. They can tailor information delivery to individual needs, offering a level of customization that traditional education struggles to match. But here's the rub: education is not merely about transferring information. It's about fostering a unique educational relationship that optimizes learning. For all its sophistication, AI lacks the capacity to replicate this.

AI indeed creates a paradise for autodidacts. Those with a natural inclination towards self-directed learning, armed with motivation and discipline, find in AI a boundless resource. However, the majority aren't autodidacts. They thrive in a relational context that not only motivates but also facilitates learning. This is a foundational principle in major learning theories, from Vygotsky's social development theory to Bandura's social learning theory and Bruner's constructivist theory. The invisible labor of a teacher or a college instructor lies in creating and nurturing this context. Presently, there is nothing in AI that can substitute this critical human element.

Furthermore, educational institutions have become integral to the social fabric, not merely as centers of learning but as community hubs. Imagining what millions of children and young adults would do without the structure of schools and colleges opens a Pandora's box of societal and developmental questions. These institutions require adult presence, not just for educational delivery, which AI might partly assume, but for the overarching environment of care and socialization they provide.

My prognosis? Unlike other industries where automation has resulted in significant workforce reductions, the field of education, particularly the teaching staff, will likely remain unscathed in this aspect. There's no need for panic among educators, but there is a need for adaptation. Learning to harness AI's capabilities will be crucial, not to replace teachers, but to complement them, freeing up time for the more nuanced, relational, and affective aspects of their roles. Additionally, educators must remain agile, adapting curricula to include skills that future employers will value, ensuring students are well-equipped for the evolving workforce.

In essence, AI in education is not a replacement, but a tool – one that, if used wisely, can enhance the educational experience without displacing its most vital component: the human educator.

Thursday, January 18, 2024

Four principles of public and philanthropic support of AI adoption

Governments and philanthropists can play a role in supporting AI diffusion across various sectors. Their involvement is as critical as that of businesses and researchers in driving forward this technological revolution. However, forging a public strategy for AI adoption remains a complex and unresolved task.

The rapid embrace of AI technology calls for a focus on leveraging the collective experiences of its extensive user base, in conjunction with market forces and entrepreneurial innovation. The United States, and California in particular, stands out for its globally admired technology innovation ecosystem. This environment, driven by dynamic market forces and a spirit of entrepreneurship, creates an ideal setting for AI development. Nevertheless, the lack of a cohesive public strategy in managing this evolution might lead to varied and possibly conflicting outcomes and objectives in AI adoption.

At the heart of this matter is the public's significant interest in the effective deployment of AI. The technology holds the potential to substantially boost the economy, revolutionize public services, reshape education, and enhance social welfare systems. Yet, it is essential to balance these advancements with equitable and efficient technology adoption, ensuring that AI contributes to resolving rather than exacerbating societal inequities.

Moreover, the integration of AI in public services presents a dual advantage: improving efficiency and extending service accessibility to a wider population segment. The key challenge is to deploy these technologies inclusively, considering the diverse needs of the community. While the swift adoption of AI offers numerous opportunities, it also demands strategic and thoughtful planning. This strategy must aim not only to capitalize on AI's benefits for economic and service improvements but also to guarantee that its societal integration is equitable and inclusive, aligning technological progress with the greater public interest.

1.  Get real

The first guiding principle in AI adoption is the pursuit of a balanced perspective, essential in navigating between two extreme viewpoints. On one side, there's the dystopian view that envisions AI as a catalyst for catastrophic job losses. This narrative often emerges from a fear of the unknown, harking back to historical instances where technological advancements initially disrupted the job market. However, this perspective tends to overlook how technological evolution has historically opened up new job opportunities and industries. There is also fear that AI poses an existential threat to humanity. These two mutually exclusive doomsday scenarios are both amplified by the media.

On the other side lies the overly optimistic view that sees AI as a cure-all for every challenge, promising a swift transition to labor-free economies where we enjoy abundance without work. This standpoint emerges from recognizing AI's immense potential to improve efficiency, solve complex issues, and bring novel solutions to various sectors. However, it can underestimate the challenges and limitations of implementing AI technologies, such as ethical considerations, the need for comprehensive data, and crucial human oversight.

A more realistic outlook suggests that the future of AI will likely follow historical trends, presenting both opportunities and challenges. Similar to the impact of the internet and mobile technology, AI is expected to enhance productivity and stimulate economic growth, but not bring us quickly into the world without scarcity. This advancement could manifest through more streamlined operations, improved data analysis, and innovation in diverse sectors.

Both extremes discourage pragmatic, thoughtful planning. Society cannot control a change it cannot fathom. A balanced approach to AI adoption acknowledges AI's significant potential to contribute to productivity and economic growth. Simultaneously, it recognizes the importance of strategic management to facilitate a smooth transition in the job market and society at large. This approach avoids the pitfalls of extreme views, opting instead for a nuanced and realistic understanding of AI's role in shaping the future.

2.  Democratize technology

The second principle in AI adoption emphasizes the democratization of AI technology. This concept is based on the idea that AI's benefits should be broadly accessible, not just limited to a small group of experts. This approach to democratizing technology mirrors developments in fields like web design, which has evolved from a specialized skill for programmers to a more universally accessible tool. The devolution of expertise has been a steady trend, and we must not allow it to reverse with AI.

In AI, democratization means expanding access beyond tech experts to include educational institutions, public agencies, and businesses. This approach prevents potential monopolization by a few vendors who might control the market with proprietary platforms and high licensing fees – a shift that would reorient AI from innovation to profit and limit its societal benefits.

Democratizing AI fosters grassroots innovation and reduces vendor dependency, enabling individuals and organizations to develop custom AI solutions for specific needs and challenges. This can spur a wave of creativity and problem-solving in sectors like education, healthcare, social services, and public administration.

Additionally, democratizing AI plays a critical role in reducing the risks of AI reinforcing existing inequalities or biases. When AI is accessible and understandable to a diverse group, it is more likely to be used inclusively, considering a wide range of perspectives and needs.

In essence, democratizing AI is about making it a tool for many, empowering a wide user base to understand, engage with, and apply AI in ways that enhance their work and lives. This approach ensures that AI's benefits are widely distributed and its development reflects a variety of voices and needs.

3.  Regulate fine-tuning

The third principle in AI adoption underscores the vital role of governments and philanthropic organizations in regulating AI's "fine-tuning" process. This principle acknowledges their significant influence in shaping AI's ethical development.

Fine-tuning in AI involves refining algorithms and their outputs to align with specific ethical guidelines and objectives. This step is crucial to ensure AI systems adhere to societal norms. A key part of fine-tuning is filtering out harmful or inappropriate content, such as pornography, conspiracy theories, or explicit violence. This process not only prevents the spread of such content but also ensures AI's positive contribution to society.

However, fine-tuning goes beyond just excluding harmful content. It also includes identifying and rectifying inherent biases within AI systems. AI models, trained on vast datasets, can inadvertently reflect societal biases. Left unchecked, these biases may reinforce or exacerbate societal inequalities. For example, image generators have by default depicted unspecified people as white males, reflecting a bias in training data. Correcting such biases is essential to make AI inclusive and representative of global diversity. Governments must compel IT companies to spend more on fine-tuning and to make their fine-tuning practices more transparent.

Beyond regulation, governments and philanthropists may play an active role in funding AI ethics research, promoting diversity in training data, and setting up bodies to oversee and evaluate AI systems for biases.

4.  Support equity

The fourth principle in AI adoption is about addressing areas where market forces alone may not suffice, particularly concerning the equity implications of AI. This principle calls for targeted support in segments where the private sector might not adequately invest due to limited market incentives.

A critical area of focus is technology for people with disabilities. Market mechanisms often fail to address these needs adequately, as the market for assistive technologies can be too small to lure significant private investment. This gap necessitates government or philanthropic intervention to develop AI solutions that are innovative, inclusive, and accessible to everyone, regardless of their physical or cognitive abilities.

Another area is AI's role in bridging language barriers and aiding language learners and linguistically marginalized communities. Here again, market forces may not be enough to drive the development of AI tools tailored for these groups. Government and philanthropic support is essential in creating AI applications that meet diverse linguistic needs, promoting inclusivity and understanding.

In education, AI's impact is particularly profound. Traditional reliance on written assignments and assessments means integrating AI into education is not just about investment but also about understanding learning theories and pedagogical practices. While entrepreneurs are adept at crafting innovative tech solutions, they may lack the necessary expertise in learning sciences to ensure these tools are effective in a learning context. Thus, additional support in research, development, and training is crucial for AI to positively transform educational practices.

By contrast, sectors like entertainment, which are more adaptable and resource-rich, are likely to manage AI-driven disruption independently. However, more public-oriented sectors such as social services, education, and medicine require substantial support from governments and philanthropic organizations. These sectors are pivotal to societal well-being and equity; their effective navigation of AI integration is crucial for the equitable distribution of AI benefits.

In summary, strategic AI adoption and integration is imperative, especially in sectors where market forces are insufficient. This strategy should include support for education, social services, and disability support to ensure AI serves the public good effectively. The involvement of governments and philanthropic organizations is critical in providing necessary resources, guidance, and regulatory frameworks. This ensures the development and implementation of AI in ethical, equitable, and universally beneficial ways.

Sunday, January 14, 2024

Advice for Entrepreneurs Developing AI Products for Educators

Hundreds if not thousands of start-ups have emerged to leverage the potential of generative AI, which is a good thing. This surge in innovation is crucial for ensuring a fair and beneficial transition to this new technology. Among these companies, many focus on the education sector. It's not just start-ups that are diving into this arena; established companies are also adopting an entrepreneurial approach.

First, let's talk about products that are likely to fail. A common pitfall is exploiting the current lack of AI expertise among teachers and professors. Such products act as basic intermediaries, providing minimal assistance in crafting better AI prompts. However, generative AI's true allure lies in its democratic nature and ease of use. It lowers barriers by understanding natural language, eliminating the need for coding or complex interface navigation. Businesses that merely enhance prompt writing and inflate the concept of "prompt engineering" to promote their services are not just unethical but also unsustainable. Their low-value proposition is a short-term strategy at best; users will eventually see through it.

Another error is developing solutions without a deep understanding of educational practices. Merely interviewing a few educators doesn't suffice. To genuinely grasp the needs of the sector, companies should either include experienced educators in their teams or seek continuous, in-depth feedback on their product prototypes. This approach prevents creating solutions that lack a real problem to solve. Unfortunately, few outsiders truly understand the core challenges AI poses in education, with only a handful of products like Khanmigo addressing these issues effectively. One of the core problems for educators is the inability to calibrate AI tools for the Zone of Proximal Development.

Despite these pitfalls, the field of AI in education is ripe for innovation. Instead of creating superficial aids, there's a need for specialized, high-value tools. Each academic discipline has foundational skills critical for further learning, and some aspects of these disciplines can be aided by AI without compromising the development of higher skills. Developing numerous, level-specific assignments that integrate AI use while safeguarding these essential skills is vital. Another significant area for innovation is enhancing student experiences. Many educational institutions, particularly large universities, struggle to provide consistent advising and support in understanding academic regulations, schedule planning, and general wellbeing. Moreover, both K-12 and higher education systems face inefficiencies in their operations. Addressing these real issues and contributing meaningful, high-value solutions is where the true opportunity lies.
