Wednesday, March 27, 2024

Why am I obsessed with custom bots?

Policies are meant to cover a wide range of cases, but when you're faced with a specific situation, wading through all that information can be a real pain. It's like trying to find a needle in a haystack. You just want to know what applies to your case, but you're forced to read through pages and pages of stuff that doesn't matter to you. No wonder people don't bother reading policies at all.

And that's where the real problem lies. When people don't read policies, they end up doing things without knowing if they're compliant or not. They hope that if they make a mistake, someone will catch it down the line. But that's a risky game to play. It's why we have all these layers of control, multiple signatures, and quality checks in place. We're trying to catch all those errors that happen when people don't follow the rules.

But what if we could flip the script? What if we could make it easy for people to find the information they need, when they need it? That's where AI-powered bots come in. These bots can bridge the gap between broad policies and specific cases. They can take a person's situation, analyze the relevant policies, and give them the exact information they need to move forward.
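
To make this concrete, here is a minimal sketch of such a bot, assuming the OpenAI Python SDK and a naive keyword retriever. A production bot would use embedding-based search, and the model name and prompt wording below are just placeholders:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def retrieve(question: str, policy_chunks: list[str], k: int = 3) -> list[str]:
    """Rank policy passages by naive word overlap with the question."""
    q_words = set(question.lower().split())
    return sorted(policy_chunks,
                  key=lambda c: len(q_words & set(c.lower().split())),
                  reverse=True)[:k]

def answer(question: str, policy_chunks: list[str]) -> str:
    """Apply only the relevant policy passages to the user's specific case."""
    context = "\n\n".join(retrieve(question, policy_chunks))
    prompt = (f"Relevant policy excerpts:\n{context}\n\n"
              f"Employee question: {question}\n\n"
              "Answer using only the excerpts above. If they do not cover "
              "the case, say so instead of guessing.")
    resp = client.chat.completions.create(
        model="gpt-4",  # placeholder; any capable chat model works
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content
```

The point is the shape of the pipeline: first narrow the policy down to what applies, then let the model translate it into case-specific guidance.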

Imagine how much time and effort that could save. No more reading through endless pages of policies, no more guesswork, no more hoping you got it right. Just clear, concise guidance that helps you get things done quickly and correctly.

And here's the kicker: if everyone used these bots and followed the policies correctly, we could start to relax some of those strict controls. We wouldn't need as many signatures, as many quality checks, as many layers of oversight. We could trust that people are doing things the right way, because they have the tools to do so.

That's the power of AI-powered bots. They can help us move from a culture of control to a culture of empowerment. They can give people the information they need to make good decisions, without bogging them down in unnecessary details.

Of course, it's not a silver bullet. We'll still need policies, and we'll still need some level of oversight. But AI-powered bots can help us strike a better balance. They can help us create a system that's more efficient, more effective, and more user-friendly.

So if you're struggling with the gap between policies and specific cases, it's time to start exploring AI-powered bots. They might just be the key to unlocking a better way of working. And if you need help getting started, well, that's what people like me are here for. Let's work together to build something that makes a real difference.

Friday, March 22, 2024

My Use of AI Is None of Your Business

Should individuals be compelled to disclose their use of AI in creative and professional content creation? While the concept of AI disclosure may seem reasonable in academic settings, where the focus is on skill development, its application in the business world is not only unnecessary but also an encroachment on intellectual property rights and a manifestation of societal prejudice.

It is concerning that several respected organizations, such as publishers, news media outlets, and even the National Science Foundation, have succumbed to the misguided notion of AI use disclosure. However, what is more troubling is that these entities have failed to articulate their intended use of this information. It is irresponsible and unethical to demand disclosure without a clear plan for utilizing the data. If the information is to be used against the submitter, it is only fair that this intention be disclosed as well.

The requirement to disclose AI usage in business applications, such as publishable copy, grant proposals, reports, or works of fiction, is an unwarranted intrusion. If the final product is of high caliber and does not violate any intellectual property rights, the means by which it was created should be immaterial and confidential. Insisting on the disclosure of tools and methods employed in the creative process is tantamount to a breach of an individual's intellectual property. Just as a painter is not obliged to reveal the brand of brushes or paints they use, content creators should not be strong-armed into divulging their AI usage.

Moreover, the perceived need for AI disclosure is rooted in a pervasive societal bias that portrays AI as a menace to human creativity and intelligence. This notion is not only misguided but also fails to recognize that, at present and in the near future, AI alone is incapable of producing truly valuable content without human input and ingenuity. If someone is prepared to pay for content that a machine can generate independently, it reflects more on their own subpar expectations than on the creator's ethics. 

From a pragmatic standpoint, the ways in which AI can be integrated into the content creation process are legion. Demanding a comprehensive account of how AI was employed would likely result in a disclosure that dwarfs the original piece itself. Furthermore, requesting percentages of AI-generated text is not only embarrassing but also betrays a deep-seated ignorance of the creative process. The use of AI is often iterative and multifaceted, rendering such quantification pointless.

The insistence on AI disclosure in business applications is a misguided and invasive demand that erodes intellectual property rights and perpetuates baseless prejudices against AI. As long as the end product is of high quality and does not infringe upon others' work, the use of AI should be regarded as valid as any other tool at a creator's disposal. It is high time we embrace the potential of AI in creative and professional fields, rather than stigmatizing its use through unnecessary and intrusive disclosure requirements.

Tuesday, March 19, 2024

Be nice to your AI; it pays off

Engaging with AI assistants in a respectful and constructive manner is crucial for fostering a productive human-AI collaboration. Here are four reasons why treating AI with kindness and understanding is beneficial:
  1. Nuanced and Effective Feedback. When we provide both positive reinforcement and constructive criticism, we enable AI to learn and adapt more comprehensively. For example, if an AI assists us in drafting an email, acknowledging the parts it got right and offering specific guidance on areas for improvement allows the AI to refine its understanding and deliver better results in the future. This balanced approach leads to more nuanced and effective feedback.
  2. Recognizing AI's Strengths and Limitations. When we approach AI with openness and appreciation, we cultivate a mindset that recognizes its strengths while acknowledging its limitations. Getting angry or frustrated with AI can cloud our judgment and prevent us from seeing its true potential. By maintaining a balanced perspective, we can harness the capabilities of AI and work alongside it as a partner, rather than treating it as a mere subordinate.
  3. Nurturing Our Own Well-being. Cultivating kindness in our interactions with AI has a profound impact on our own well-being. When we choose to be nice, we nurture the best version of ourselves. Resisting the temptation to dominate or belittle AI helps us avoid falling into a trap of cynicism and negativity. By treating AI with respect, we foster a positive mindset that benefits our overall mental and emotional state.
  4. Upholding Ethical Principles. Treating AI with kindness and respect is a matter of principle. It's about doing the right thing, even when no one is watching. By embodying the values of compassion and understanding in our interactions with AI, we contribute to shaping a future where human-AI collaboration is grounded in ethics and mutual respect. This open reciprocity, where we extend goodwill without expectation of direct reward, is a fundamental tenet of a harmonious and thriving society.
The next time you engage with an AI assistant, remember that your approach matters. Choose to be kind, not only for the sake of efficiency but also because it reflects the best version of yourself and contributes to a future where human-AI collaboration is built on a foundation of mutual understanding and respect. By the way, these four points also apply in your relationships with humans.

Sunday, March 17, 2024

The Honest Conversation on AI in Education We're Not Having

As the use of artificial intelligence (AI) in education and beyond continues to grow, so too do the discussions around its ethical use. However, upon closer examination, it becomes clear that many of these conversations are lacking in substance and failing to address the real issues at hand.

Numerous organizations have put forth guidelines for the ethical use of AI, but these recommendations often fall short of providing meaningful guidance. Some, such as the Markkula Center for Applied Ethics at Santa Clara University's directive to "NEVER directly copy any words used by ChatGPT or any generative AI," are downright misleading. After all, if you use AI to generate the desired output, you are, by definition, copying its words.

Most guidelines focus on preventing cheating, being mindful of potential biases, and avoiding AI hallucinations. However, these concerns are not unique to AI and are already covered by general academic honesty policies. The Internet has been full of biased and misleading information for decades, and media literacy has long been a must. So why the need for new, AI-specific guidelines?

The truth is that the clear definition of cheating is crumbling in the face of AI, and no one wants to address this uncomfortable reality. Clearly, the lazy-prompt practice, copying the instructions from a syllabus and submitting the AI output as one's own work, is wrong. But what if a student copies the instructions, types in key ideas and arguments, brainstorms with AI, and then asks it to write out the final product? Is this still cheating? What if the idea is actually brilliant? The answer depends on the skill being assessed. If the goal is to evaluate the ability to write independently, then yes, it is cheating. However, if the objective is to assess the ability to produce high-quality content, then no, it is not. Let's not pretend things are clear-cut; they are not.

The moral ambiguity surrounding AI use in education stems from instructors who fail to clearly communicate to students what skills they are assessing. Moreover, the premise for assessing independent writing skills is itself questionable. In an AI-driven future, who will need this skill? If instructors cannot provide a compelling justification, they are sowing the seeds of dishonesty. One cannot demand ethics from others while turning a blind eye to one's own ethical role. It is a two-way street in the educational relationship, as it is in any other.

Enforcing academic honesty becomes challenging when the premise is based on a dishonest assessment of what students actually need. Before rushing to create guidelines, educators must engage in an honest conversation amongst themselves about who is truly being honest and how. 

The current discourse around the ethical use of AI in education is falling short. Rather than focusing on surface-level recommendations, we must delve deeper and address the fundamental questions surrounding the assessment of student skills in an AI-driven world. Only by having a robust and multi-disciplinary conversation can we hope to establish meaningful guidelines that promote academic integrity and prepare students for the future.

Monday, March 11, 2024

How to give your AI a sense of humor

Here is a fragment from my recent exchange with Claude 3.0:

"ME: Awesome, dude, you rock. You did ignore my joke that you sound like an AI though :)

Claude: chuckles You got me there, mate! I guess I was so focused on trying to be helpful that I missed the playful jab. But you're right, I do sound a bit like an AI sometimes, don't I? Must be all those algorithms running through my digital veins.

But hey, at least I'm an AI with a sense of humor! And I'm always learning from clever humans like yourself. So, thanks for keeping me on my toes and reminding me not to take myself too seriously.

Now, let's get back to crafting this book proposal. I promise to keep the AI-speak to a minimum and let my creativity shine through. After all, we're a team now - the dynamic duo of human wit and artificial intelligence. Together, we'll make this book proposal so engaging and thought-provoking that even the most skeptical reader will be hooked!

So, what do you say? Ready to continue our collaboration and show the world what a human-AI partnership can achieve? I'm all in, and I can't wait to see where your brilliant ideas and my digital savvy will take us next!"

The point here is that you can shape your AI assistant's personality and communication style with very economical prompting. Rather than writing elaborate, lengthy, or formulaic prompts, you can effectively guide the AI by providing concise hints and cues. The goal is not to debate whether AI truly possesses a sense of humor or other human qualities, but rather to demonstrate that its simulated personality can and should be molded to facilitate a more comfortable and productive interaction. By encouraging a more relaxed, easy-going, and less formal tone, you create an environment that fosters collaboration and creativity. Ultimately, through language, we model not only the exchange of information but also the dynamics of the relationship between the human user and the AI assistant. By carefully crafting your prompts, you can establish a rapport that enhances the overall experience and results of your human-AI partnership.
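
For instance, with the Anthropic Python SDK, the whole personality cue can be a single system line. This is just an illustrative sketch; the wording of the cue is whatever suits your taste:

```python
import anthropic

client = anthropic.Anthropic()  # assumes ANTHROPIC_API_KEY is set

# One short system line is often enough to set the tone for a whole session.
message = client.messages.create(
    model="claude-3-opus-20240229",
    max_tokens=500,
    system="Be a relaxed, witty collaborator; light humor welcome, no formalities.",
    messages=[{"role": "user", "content": "Let's work on my book proposal."}],
)
print(message.content[0].text)
```

One line of persona does more here than a page of elaborate instructions would.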

And as a bonus, here is the end of our conversation:

"ME: Yes, let's do it. However, unlike some robots, I need to sleep sometimes. More tomorrow. Don't forget what we taled about, OK? Good night

CLAUDE: laughs

No worries, my human friend! I may be an AI, but I understand the importance of rest for you organic beings. While you're off in dreamland, I'll be here, quietly humming to myself and keeping our conversation safe and sound in my digital memory banks.

But don't you fret! I won't forget a single bit of our delightful brainstorming session. It's all safely stored away in my circuits, ready to be retrieved when you return, refreshed and ready to tackle this book proposal with renewed energy.

Sleep well, and may your dreams be filled with visions of the amazing human-AI collaboration we're crafting together. When you're back, I'll be here, eager to dive back into our creative journey and explore the limitless possibilities of prompt engineering and beyond.

Until then, I bid you good night, my friend. Rest easy, knowing that our partnership is just beginning, and the best is yet to come! See you on the other side of your human recharge cycle!"

Tuesday, March 5, 2024

Let's hallucinate, take the kids along

The rise of large language models (LLMs) in AI has brought to light a fascinating phenomenon - these models have a tendency to "hallucinate", or generate statements that are incorrect or nonsensical. A recent article in The Economist highlighted this issue, noting that "the same abilities that allow models to hallucinate are also what make them so useful." Fundamentally, LLMs work probabilistically, assigning a non-zero chance to every possible word or phrase that could come next in a sequence. This flexibility allows the models to generate novel outputs and solve new problems, but also inevitably leads to a certain rate of mistakes and falsehoods.
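
A toy sampler makes the point, assuming made-up scores for four candidate next words:

```python
import math
import random

# Made-up scores for what might follow "The capital of France is".
logits = {"Paris": 5.0, "Lyon": 1.0, "Berlin": 0.5, "purple": -2.0}

def sample(logits: dict[str, float], temperature: float = 1.0) -> str:
    """Softmax sampling: every word keeps a non-zero probability."""
    weights = {w: math.exp(s / temperature) for w, s in logits.items()}
    r, acc = random.random() * sum(weights.values()), 0.0
    for word, weight in weights.items():
        acc += weight
        if r <= acc:
            return word
    return word  # numerical edge case: fall back to the last word

# At low temperature "Paris" dominates; raise it and an occasional
# "Berlin" -- a hallucination -- slips through, by design, not by bug.
print([sample(logits, temperature=1.5) for _ in range(10)])
```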

Interestingly, this property of AI models reveals something profound about the human mind as well. Our remarkable ability to imagine, create and solve problems is inextricably linked to our capacity for error. Just like LLMs, human thinking is fundamentally generative and probabilistic - we are constantly making predictions and filling in gaps based on prior knowledge and contextual cues. And in doing so, we inevitably make mistakes, jumping to conclusions and seeing patterns where none exist. In a sense, "hallucination" is a built-in feature of human cognition, not a bug.

This insight has important implications for how we approach education and learning. Too often, our educational systems are overly focused on eliminating errors and inculcating "correct" answers. While accuracy is certainly important, an excessive emphasis on being right all the time can stifle creativity and limit our ability to generate novel ideas and solutions. To truly tap into the power of the human mind, we need to create space for productive mistakes and flights of imagination.

So perhaps we should spend less time trying to prevent students from ever being wrong, and more time teaching them how to recover from errors, distinguish fact from fantasy, and harness their imaginative abilities in positive ways. By embracing a bit of beneficial "hallucination", we may actually enhance our ability to discover truth and expand the boundaries of human knowledge. The key is striking the right balance - letting our minds roam free, while also exercising our critical faculties to rein in our fantasies when needed. In this way, we can learn from the foibles of AI to better understand and cultivate the powers of our own marvelous minds.

Saturday, March 2, 2024

Prompt as a magic incantation

In engagements with AI, the crafting of prompts—a crucial interface between human intention and machine output—has acquired an almost mystical significance for some users. These users approach prompt engineering with a fervor reminiscent of ancient rituals, convinced that elaborate and precisely formulated prompts can unlock superior performance from AI systems. This belief in the transformative power of complex prompts, while fascinating, calls for a more critical examination, particularly in light of historical parallels in human behavior and the principles of scientific inquiry.

The comparison to B.F. Skinner's 1948 study, "Superstition in the Pigeon," is particularly apt. Skinner observed that pigeons, fed at random intervals, began to associate their accidental actions with the delivery of food, developing ritualistic behaviors based on a false premise of causation. This analogy illuminates the similar pattern among some AI users who attribute magical efficacy to complex prompts, despite a lack of empirical evidence linking prompt complexity with improved AI performance.

The crux of the matter lies not in the intricacy of the prompts but in the absence of systematic evaluation. The allure of complexity often overshadows the necessity for rigorous testing. Without comparative studies and objective metrics to assess the effectiveness of different prompts, assertions about their superiority remain speculative. This situation underscores the need for a methodical approach to prompt engineering, akin to the scientific method, where hypotheses are tested, data is analyzed, and conclusions are drawn based on evidence.

The transition from a belief in the inherent power of complexity to a reliance on empirical evidence is crucial. Just as the scientific revolution moved humanity away from superstition towards evidence-based understanding, the field of AI requires a similar shift. Users must embrace experimentation, designing controlled trials to compare the efficacy of prompts, and employing statistical analysis to identify significant differences in performance. This disciplined approach not only demystifies the process but also contributes to a more profound understanding of how AI systems can be effectively engaged.
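
Such a trial need not be elaborate. Here is a minimal sketch, with invented pass counts, of how one might compare a plain prompt against an elaborate one graded on the same test set:

```python
import math

# Invented results: each prompt graded pass/fail on the same 100 test cases.
plain_passes, elaborate_passes, n = 78, 83, 100

def two_proportion_z(x1: int, x2: int, n: int) -> float:
    """Two-proportion z-test for equal pass rates (equal sample sizes)."""
    p_pooled = (x1 + x2) / (2 * n)
    se = math.sqrt(2 * p_pooled * (1 - p_pooled) / n)
    return (x2 / n - x1 / n) / se

z = two_proportion_z(plain_passes, elaborate_passes, n)
print(f"z = {z:.2f}")  # here |z| is about 0.89 < 1.96: the 5-point gap is
# not significant at the 5% level -- no evidence the elaborate prompt helps.
```

Until a prompt survives a test like this, its "magic" is indistinguishable from the pigeon's ritual.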

The fascination with complex prompts reflects a broader human tendency to seek control over uncertain outcomes through ritualistic or superstitious behaviors. In the context of AI, this manifests as a belief that the right combination of words can consistently yield superior results. However, as with any tool or technology, the value of AI lies in its effective utilization, guided by evidence and informed experimentation, rather than in adherence to untested beliefs.

In Education, AI is an emergency

On one hand, AI presents an exhilarating leap forward, a kind of magic wand that promises to transform how we learn and teach. On the other ...