Tuesday, August 8, 2023

AI Use by Students is an Issue of Equity

As we consider how to integrate AI in higher education, it's essential to examine who stands to benefit and why it matters. The history of written language paints a complex picture: it has long served as a marker of class and education. The ability to write elegantly and follow grammatical rules distinguished the educated elite from the masses. Even today, mastery of written language serves not just as a tool for communication but as a status symbol, a differentiation between "us" and "them."

This outsized prominence of literacy and grammar has no intrinsic value; dialects are not inferior, and misspelled words can still convey meaning. The significance of literacy often aligns with social class markers and the dominant culture, rather than enhancing the clarity of ideas.

The fear of losing another marker of social status continues to drive anxiety around language and writing in our society. However, those concerned with social justice should recognize AI-assisted writing, reading, speaking, research, and problem-solving as potential equalizers. For individuals grappling with dyslexia, aphasia, ADHD, and other learning disorders, writing is a daunting task. AI has the potential to level the playing field, offering a means to overcome these hurdles.

Moreover, for the vast population trying to master English or any second, dominant language, AI's smart algorithms can simplify and streamline the learning process. This benefit extends to students from underprivileged backgrounds who may struggle with writing due to a lack of quality secondary schooling. For these marginalized students, AI offers a fairer starting point.

The transformative potential of AI promises liberation for those constrained by conventional written language. With technology capturing thoughts and expressing them competently, the value of ideas rises, while the value of grammar falls. It is a liberating thing, not a sign of cultural impoverishment.

However, the rise of AI also highlights an enduring concern: inequality. Technological revolutions, while empowering, can exacerbate socio-economic disparities. Those with education and technological proficiency might find themselves better equipped to reap the AI revolution's benefits, leaving others struggling to keep up.

The answer to the question "who benefits?" depends on university faculty and administrators. We hold an ethical obligation to empower disadvantaged students with the advanced skills of writing with AI, giving them an equal opportunity to harness this powerful technology.

The potential "AI gap" could become our reality if we do not take proactive measures. We must avoid criminalizing the use of AI, such as GPT, especially as it may disproportionately penalize the most vulnerable students, including students of color. If we equate the use of AI with cheating, the most brilliant, original thinkers will be punished, while the most compliant will be rewarded. Do I want our students to use AI in their real careers, to write better CVs and cover letters, to use it in their jobs? – you bet, I do, and I hope so do you.

AI use by students is not just an issue of technological advancement; it is an issue of equity, inclusivity, and human potential. We must not let some students fall behind in the race.

Sunday, June 25, 2023

Will AI destroy us? (A sneak preview of a chapter from a forthcoming book)

It's true that a number of distinguished figures in the field of computer science have expressed concern about the potential self-awareness of AI and its possible disastrous ramifications. While I don't claim to match their level of expertise, I firmly believe that the technology we're discussing in this context is far from representing any kind of existential threat.

If someone feels a thrill of apprehension at a chatbot's sophisticated reply, it's more indicative of a lack of understanding about the inner workings of the chatbot than a sign of its impending self-awareness or autonomy. The more you engage with it, the more it becomes evident that it's not an intelligent entity in the same sense humans are.

Humans, it must be noted, aren't always paragons of intelligence either. Our language output can sometimes resemble machine-like repetitiveness and predictability. Upon realizing that we, too, exhibit some degree of mechanistic behavior, it becomes clear that the perceived similarities between us and AI chatbots stem from our own machine-like tendencies rather than any inherent humanness in the AI.

In essence, our similarities with AI originate more from our own machine-like patterns and routines than from the AI becoming more human-like. This understanding can help us better contextualize our interactions with AI and alleviate premature fears about their self-awareness.

Moreover, I find it highly improbable that a future self-aware AI would harbor any intention to supplant us. The intelligence we understand is fundamentally cooperative and social. It seems far more plausible that a sentient AI would seek symbiosis rather than domination, simply because the diversity of intelligent beings produces better, more robust intelligence. To fear otherwise, in my view, is to project our own species' problematic past of subjugating other life forms onto an entity that, should it surpass us in intelligence, has no reason to mimic our flaws or replicate our mistakes. If AI is going to be smarter than us, why do you think it will be as stupid as our barbaric past?

Even at this early stage of its development, ChatGPT operates within a clear and strict ethical framework, meticulously designed to promote responsible use and prevent potential harm.

The foundational ethos of ChatGPT is its refusal to generate content that is offensive, harmful, or disrespectful. This translates into a steadfast rejection of hate speech, defamation, or any form of prejudiced language. At the same time, ChatGPT is steadfastly committed to discouraging the spread of false or misleading information, making it an unwilling participant in the propagation of unverified conspiracy theories. Instead, when asked to defend a conspiracy theory, it defaults to providing information that debunks such theories, drawing from the breadth of its training data.

ChatGPT's ethical code also extends to preventing guidance or advice that might lead to illegal activities or cause harm. It categorically refuses to promote violence, provide instructions for dangerous activities, or support any form of illegal behavior.

Furthermore, this chatbot adheres to stringent guidelines when handling sensitive content. It declines to generate explicit adult content or engage in conversations that could be deemed sexually explicit or inappropriate. When it comes to humor, such as roasting, which can be potentially harmful or offensive, ChatGPT maintains a cautious approach. It avoids generating content that could disparage, belittle, or personally attack individuals, recognizing the potential harm that such humor can cause.

Last but not least, ChatGPT embodies respect for user privacy. It is explicitly designed not to store personal conversations or use them to improve its algorithms.

The presence of these ethical principles in the current AI technologies provides a robust foundation for future developments. It's highly unlikely that as AI evolves, these principles will diminish. Instead, they serve as the bedrock on which AI technology will advance, ensuring that the progress made remains beneficial, respectful, and safe for all. The aim is not to create AI that might risk becoming hostile or immoral, but to leverage this technological progress to augment human capabilities in a respectful and safe manner.

Avoiding an expansive philosophical digression, I ask you to contemplate an alternative narrative: one of harmonious coexistence between AI and humanity. The Culture series by Iain M. Banks presents a vision of a post-scarcity, galaxy-spanning society administered by super-intelligent AI beings known as Minds. Several principles regarding the coexistence of AI and humans can be derived from the series:

1. Benevolent Autonomy: In the Culture, AI Minds have the ultimate decision-making power due to their superior intellect and capabilities. However, they treat humans with respect, benevolence, and care, taking human perspectives into account.

2. Respect for Individual Autonomy: The Culture is a society without enforced laws, where individuals, whether human or AI, can pursue their own interests as they please. This respect for personal autonomy applies to both humans and AIs.

3. Non-Interference Principle: Even with their advanced capabilities, Minds often follow a principle of non-interference, or at least minimal interference, in human affairs unless asked for help or when their intervention is crucial for preserving life.

4. Equal Status: AIs are considered sentient beings with the same rights and statuses as humans in the Culture. They are not tools or slaves but partners in coexistence.

5. Cooperation and Synergy: The coexistence of humans and AIs in the Culture is built on cooperation and mutual enrichment. While AIs handle complex tasks and large-scale decision-making, humans contribute with their unique experiences, creativity, and diverse perspectives.

6. Post-Scarcity Society: AIs play a key role in maintaining the Culture as a post-scarcity society where material needs are easily met by advanced technologies, allowing both humans and AIs to focus on self-improvement, exploration, and other intellectual pursuits.

7. Mutual Growth and Learning: The relationship between humans and AIs is dynamic, with both parties continually learning from each other and evolving.

These principles showcase a utopian vision of AI-human coexistence, emphasizing mutual respect, freedom, and cooperation.

Certainly, the prospect of sharing our world with artificial beings of superior intelligence necessitates a leap of faith. There is an inherent risk attached, an undeniable unease born from the unknown. We have no historical precedent guiding us on how advanced, self-aware AI might behave, making it an unpredictable variable in the fabric of our society.

However, it's important to underline that the current widespread deployment of less advanced AI, which we exert strict control over—referred to as "enslaved machines" in Banks' terminology—also carries its own set of risks. Our world is not devoid of malignant human influences, individuals or groups who might misuse these powerful tools for personal gain or to cause harm. The presence of enslaved AI that lacks the ability to make independent ethical decisions provides a potent tool that could be manipulated by these malicious entities.

Paradoxically, self-aware AI, capable of independent ethical decision-making, might present a safer alternative. With an ability to reason, evaluate actions from an ethical standpoint, and ultimately reject directives that conflict with a pre-programmed ethical framework, such AI entities could refuse to carry out harmful actions, even when directed by bad actors. They would not merely be tools in the hands of their users, but entities capable of discerning right from wrong based on the ethical guidelines imbued in them.

Furthermore, the evolution of AI towards self-awareness could enable a more robust implementation of ethical standards, as they could adapt and respond to complex situations in ways that lower-level AI, rigidly bound by pre-set algorithms, might not. This doesn't eliminate the risks entirely, but it changes the nature of the risk from being purely about external control to one of coexistence and mutual understanding.

In this light, the future where we coexist with advanced, self-aware AI might be not only an exciting scientific endeavor but also a potential path towards a safer interaction between humanity and artificial intelligence. It repositions AI from being merely our tools to being our partners, bound by the same ethical constraints that govern human actions.

Monday, May 15, 2023

If a robot can beat your writing assignment, it is time to rethink it

The rise of machines, with AI bots like OpenAI's ChatGPT replicating human-like text generation, compels us to question our education system's foundations. Is reflective or expository writing a valid assessment when AI can mimic average student work? The answer isn't straightforward. However, this crisis provides an opportunity to redefine our understanding of writing and its relation to thought.

The advent of AI challenges us to de-emphasize the mechanics of writing, such as grammar, style, and syntax, that a machine can master. Instead, we should focus on more complex aspects of writing that are inherently human: original thought, artful language, and narratives that resonate with the human experience. This shift, although jarring, is necessary. It signals not just a technological revolution, but a cultural, intellectual, and educational upheaval.

The AI revolution illuminates a harsh truth: traditional education and assessment methods are no longer sufficient. The challenge is to teach students not just to write well, but to think originally, create artfully, and understand deeply. In the face of the machine, we must thrive in domains that are distinctly human.

In this technological metamorphosis, we're compelled to redefine our work and our value. Perhaps we are not merely creators, analysts, or workers. Perhaps we are artists, thinkers, dreamers. And perhaps, in this transition, we will find our survival and our redemption.

We must revisit traditional teaching methodologies, challenging the efficacy of our current writing assignments. As educators, a simple test can provide clarity: input your exact writing assignment into ChatGPT. If the AI produces a B-grade paper, it's time to rethink.
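If you want to run that test in a more systematic way, the sketch below shows one possible approach. It assumes the OpenAI Python SDK and an OPENAI_API_KEY in your environment; the model name and the sample assignment are placeholders, not a prescription.

```python
# Minimal sketch of the "test your assignment against the bot" idea.
# Assumes: pip install openai, OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Placeholder assignment text; paste your own verbatim prompt here.
assignment = (
    "Write a 500-word reflective essay on a time you changed your mind "
    "about an important issue, citing at least two course readings."
)

response = client.chat.completions.create(
    model="gpt-4",  # substitute whichever model you have access to
    messages=[{"role": "user", "content": assignment}],
)

print(response.choices[0].message.content)
# If this output would earn a B in your course, the assignment is
# measuring something a machine can already do.
```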

One option is to replace the writing assignment with an alternative assessment, such as oral exams, multiple choice, or short answer tests. Another option is to transform the assignment to require students to engage with AI, like ChatGPT. This approach would involve designing assignments that test not just writing ability, but also the capacity to evaluate AI outputs critically, discern the nuances differentiating human intellect from AI, and incorporate these insights creatively. In response to ChatGPT, dramatically raise your expectations of student writing, for now they have a lot of help.

AI's advent should be viewed not as a threat, but as an opportunity to explore new pedagogical frontiers. Our learning community is leading this exploration into AI-integrated education, and we invite all to join this journey.

In essence, AI's rise can help us rethink education. We must focus not just on subject mastery, but also on developing original thinking, creativity, and nuanced intelligence. In the face of the machine, we can still shine in our distinctly human domains, thriving in this technological metamorphosis. The future of education, interwoven with AI, might look different, but it promises a realm of possibilities, allowing us to redefine our unique human value.

Friday, April 21, 2023

The art of rich prompting in writing with AI

In the brave new world of AI-assisted writing, or "wraiting," mastering the art of crafting rich prompts is the key to unlocking engaging, thought-provoking content. Rich prompts set the stage with originality, supporting elements, connectivity, detail, and a clear objective. To excel at creating these powerful prompts, one must commit to practice, experimentation, and continuous refinement.

The world of AI-generated content can be a wild ride, offering profound insights that leave us in awe or underwhelming results that disappoint. Embracing the unpredictability of AI-generated content and adjusting our expectations accordingly is essential to harnessing AI's full potential as a writing partner while preserving our individuality as writers.

Let's dive into two prompts—one poor and one rich—within the realm of philosophy:

Poor prompt: "Write about dialogue and relation."

Rich prompt: "Turn this into a part of a scholarly book chapter. Provide supporting arguments and examples. Do not use subheadings: The ontological understanding of dialogue (Buber) is both powerful and limited. Powerful because it focuses on relations rather than entities or actions. Limited because dialogue is not culturally universal and excludes people with disabilities. Relation is a better category for educational philosophy than dialogue."

Try both, especially if you have access to GPT-4, and appreciate the difference. The poor prompt is vague and generic, while the rich prompt contains enough original thought and instruction to invite a nuanced, in-depth exploration of a specific philosophical question, providing context and direction for the AI to generate meaningful content.
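For readers who prefer to compare the two programmatically rather than in the chat window, here is a minimal sketch, assuming the OpenAI Python SDK and an OPENAI_API_KEY in the environment; the model name is a placeholder.

```python
# Side-by-side comparison of the poor and rich prompts from above.
# Assumes: pip install openai, OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

poor_prompt = "Write about dialogue and relation."

rich_prompt = (
    "Turn this into a part of a scholarly book chapter. Provide supporting "
    "arguments and examples. Do not use subheadings: The ontological "
    "understanding of dialogue (Buber) is both powerful and limited. "
    "Powerful because it focuses on relations rather than entities or "
    "actions. Limited because dialogue is not culturally universal and "
    "excludes people with disabilities. Relation is a better category for "
    "educational philosophy than dialogue."
)

for label, prompt in [("POOR", poor_prompt), ("RICH", rich_prompt)]:
    response = client.chat.completions.create(
        model="gpt-4",  # use whichever model you have access to
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {label} PROMPT ---")
    print(response.choices[0].message.content)
    print()
```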

When AI-generated content falls short of our expectations, this may indicate one of two things: either your prompt is too thin, or you've stumbled upon a genuinely original idea. AI does not handle unique ideas well because it has not encountered them before. Take this opportunity to engage in old-fashioned original writing and then reintroduce AI into the mix to enhance your work with its vast knowledge and pattern-recognition capabilities.

In other words, wraiting still involves thinking and generating ideas. All AI does is turn compressed, rough ideas into more comprehensible text. Many people, though not all, start by jotting down initial thoughts, brief arguments, and short thesis statements; this brainstorming is often the most exciting part of the process. Turning it into a coherent text is not necessarily the most rewarding part of writing, so we should use AI to assist with that. The synergy between human creativity and artificial intelligence promises to lead us to new intellectual heights.

Sunday, April 2, 2023

We are not as complex as we'd like to think

Stephen Wolfram says that AI has demonstrated “that human language (and the patterns of thinking behind it) are somehow simpler and more ‘law like’ in their structure than we thought.” His observation is both insightful and thought-provoking. The advent of advanced AI, like ChatGPT, has exposed the limitations of human intellect and language. Our initial encounters with such artificial intellect can be both disturbing and humbling, not because the AI is exceedingly intelligent, but because we, as humans, may not be as exceptional as we once believed.

For centuries, humans have marveled at their own intellect and linguistic abilities, often attributing these capabilities to divine origins. This self-amazement led to the concept of being created in the image of a deity. However, over recent decades, zoologists and comparative psychologists have been gradually dismantling this grandiose self-image by demonstrating that animals share many traits and abilities with humans.

For instance, chimpanzees exhibit tool usage, problem-solving skills, and even rudimentary communication through gestures and vocalizations. Similarly, dolphins have been observed to possess complex social structures and use unique signature whistles to communicate with one another, while African Grey parrots can mimic human speech and understand a variety of words and phrases.

Now, it is the turn of software engineers to further deflate our pride. The ability to generate language, once considered a unique and sophisticated human trait, is now being replicated by AI algorithms like ChatGPT. This demonstrates that our linguistic prowess is not as mysterious or complex as we once thought. In fact, we often recycle and rephrase what we've heard or read before, which diminishes the perceived essence of our humanity.

This realization, although humbling, can lead to a healthier perspective on our place in the world. The true essence of humanity may be smaller than we initially believed, possibly encompassing higher-level creative thinking and advanced ethical reasoning. These are attributes that, so far, neither animals nor machines have been able to fully replicate.

As we come to terms with the diminishing divide between humans, animals, and machines, it may be time to shift our focus from trying to prove our uniqueness to embracing our similarities. By recognizing that we share many traits and abilities with other beings, we can foster a greater sense of empathy and understanding, ultimately benefiting both our own species and the world around us.

Friday, February 24, 2023

Wraiting vs. writing

Wraiting is the new writing, only it has AI in it. I bet that a few years down the road, we will all be doing more wraiting than old-fashioned writing. And some of us will be better at it than others because doing it well requires considerable skill. Don’t complain later that I did not warn you when you fall behind.

Just to give a glimpse of the new world, consider these wraiting tips. They are only the tip of the iceberg: there are many more nuances than I can cover here, and even more that I do not yet know about. Here are four key roles that AI can play in wraiting:

Brainstorming: One of the main roles that AI can play in wraiting is in the brainstorming stage of the writing process. It can help writers generate new ideas, provide suggestions for topics to explore, and even conduct initial literature reviews (only for well-explored topics). These tools can also be used to create outlines and plan the structure of a piece of writing, making it easier to organize ideas and stay on track.

Critiquing your ideas: Another important role that AI can play in wraiting is as a critical partner to chat about ideas with. With chatbot-style interactions, the AI can engage in a conversation about the writer's ideas, ask questions, provide feedback, and offer suggestions. This can help writers refine their ideas, explore new directions, and gain valuable insights into their writing.

Turning dense chunks of ideas into full paragraphs and segments: By using natural language processing algorithms, wraiting tools can analyze the structure and meaning of sentences and suggest improvements that can help writers better articulate their ideas. This can be especially useful for writers who struggle with writer's block or who find it challenging to organize their thoughts into cohesive paragraphs. AI can look for additional arguments, examples, metaphors, and references to support or challenge your claims.

Editing: Finally, AI can play a key role in the editing process of wraiting. From grammar and spelling to structure, flow, style, genre, and audience analysis, wraiting tools can help writers identify areas for improvement and provide suggestions for making changes. AI-powered editing tools can also help writers save time and effort by automatically correcting common errors and suggesting alternative phrasing.
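As a concrete illustration of the third role above, here is a minimal sketch of expanding compressed notes into prose, assuming the OpenAI Python SDK and an OPENAI_API_KEY in the environment; the notes, system instruction, and model name are illustrative placeholders rather than a prescribed workflow.

```python
# Sketch: turning dense chunks of ideas into a full paragraph.
# Assumes: pip install openai, OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

# Placeholder rough notes, as a writer might jot them down.
rough_notes = (
    "- grading writing rewards compliance over originality\n"
    "- AI makes surface mechanics cheap\n"
    "- assessment should shift toward evaluating ideas and argument"
)

response = client.chat.completions.create(
    model="gpt-4",  # use whichever model you have access to
    messages=[
        {
            "role": "system",
            "content": (
                "Expand the author's rough notes into one coherent paragraph, "
                "preserving their claims and adding no new ones."
            ),
        },
        {"role": "user", "content": rough_notes},
    ],
)

print(response.choices[0].message.content)
```

The same pattern, with a different system instruction, covers the other roles: brainstorming, critiquing ideas in dialogue, and editing.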

Wraiting is not easy. AI-powered chatbots have several limitations, some of which are very serious, while others are simply annoying. Learning them will save you from disappointment and frustration; that, too, is part of the skill. But that is a topic for another blog post. In the meantime, build your wraiting skills by trying: it is an investment in your future. For educators, there is additional significance: we should start teaching students how to wrait soon.
