Tuesday, May 21, 2024

"First try with AI"; On the advantages of organic learning

Some people advocate for structured training programs and dedicated time for AI learning, but an organic approach is both more effective and more efficient.

The secret to successfully incorporating AI into your work is to simply start using it for your next task. Rather than setting aside special time for AI learning, dive right in and explore how AI can assist you in your current projects. Need to do something? Write a memo, a long email, a letter, a grant proposal? "First Try with AI."

What do you have to lose? The worst-case scenario is that you waste a little time if AI proves unhelpful for that particular task. In most cases, however, you will discover its usefulness and its potential to save you time, even if it doesn't complete the task entirely.

It's important to recognize that AI never does everything for you. Only the most mindless, bureaucratic, compliance-related content may be primarily handled by AI. However, for the majority of tasks, you will intuitively learn the right mix of human and AI ingredients to create the best soup. This organic learning process allows you to understand the strengths and limitations of AI within the context of your specific work.

There is nothing wrong with taking courses to learn about AI, but it is worth noting that assignments in such courses often lack authenticity. Those are "pretend tasks." Even after completing a course, you would still need to learn how to transfer your new skills into real-world contexts. In contrast, an organic approach to AI learning allows you to immediately apply your knowledge within the context of your work, resulting in a more motivated, deeper, and faster learning experience.

As you gradually incorporate AI into your daily tasks, you will naturally develop a better understanding of when and how to leverage its capabilities, and where to mitigate its shortcomings. This hands-on, contextual learning approach will not only help you become more proficient in using AI but also enable you to identify new opportunities for its application within your organization.

For educational contexts, we know there is a strong correlation between instructors' personal use of AI and their willingness to let students use it in class. We don't trust what we do not understand, which explains the unreasonably strong worries about cheating. There will be no classroom use without instructors' personal use first. Once teachers start using AI for their own purposes, their anxiety levels go down, and their creativity frees up to invent classroom uses.

Monday, May 13, 2024

Turnitin Is Selling Us Snake Oil, or Why AI Detection Cannot Work

The notion of measuring "AI-generated text" as a fixed percentage of an academic submission is fundamentally flawed. The metric implies a homogeneous substance, akin to measuring the alcohol content of a beverage. However, my recent survey suggests that perceptions of academic integrity around AI use are far from homogeneous. The survey asked educators to evaluate the ethical implications of using AI for twelve different tasks in writing an academic paper, ranging from researching to brainstorming to editing to writing full sections.

The findings revealed significant variance in responses. While many respondents were comfortable with AI aiding in brainstorming ideas, they expressed reservations or outright disapproval of AI writing entire paragraphs or papers. This disparity underscores a critical issue: there is no consensus in the academic profession on what constitutes acceptable AI assistance in learning. More strikingly, within each individual's responses, there was considerable variation in how different AI uses were assessed.

Consider the implications of a tool like Turnitin reporting "50% AI-generated" content. What does this figure actually represent? It lacks context about how the AI-generated content was incorporated. For instance, a paper could be largely original, with only minor edits made by AI at the end, potentially showing a high percentage of AI contribution. Conversely, a student might contribute minimally to an essentially AI-written paper, making slight modifications to reduce the AI-detected percentage. The reported percentages can thus run opposite to the actual ethics of each case: the largely original paper scores high, while the essentially AI-written one scores low.
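To make the arithmetic concrete, here is a toy sketch in Python. The numbers are invented and the sentence-counting "detector" is purely hypothetical (this is not Turnitin's actual method); the sketch only shows how the same kind of percentage can run opposite to the ethics of each case.

def percent_ai_flagged(flagged: int, total: int) -> float:
    """Share of sentences a hypothetical detector flags as AI-written."""
    return 100 * flagged / total

# Paper A: largely original, 40 sentences; an AI style pass at the end
# rewrote 24 of them, so most sentences get flagged.
paper_a = percent_ai_flagged(flagged=24, total=40)  # 60.0

# Paper B: essentially AI-written, 40 sentences; the student lightly
# paraphrased 28 of them, so most sentences evade the flag.
paper_b = percent_ai_flagged(flagged=12, total=40)  # 30.0

print(f"Paper A: {paper_a}% AI, Paper B: {paper_b}% AI")
# The largely original paper scores twice as "AI" as the AI-written one.

Whatever a real detector's algorithm, nothing in a single percentage distinguishes these two very different uses of AI.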

The pursuit of better detection technology misses the point. The issue is not with the detection capabilities but with the construct itself. The very idea of "AI-generated text" as a unified concept is problematic. A depression inventory works because its various items converge on a single underlying construct; the twelve AI uses in my survey show no such convergence, so no single percentage can capture them. The current approach, which treats all AI contributions as equivalent, is akin to judging a book's genre by counting its words. I wish Turnitin and other commercial "AI detectors" would show just a little more integrity and stop selling us snake oil. They must know their claims are bogus, because "AI-generated text" is not a valid construct to measure.

Instead of focusing obsessively on detecting AI-generated content, we need to shift our perspective. We should expect and require students to use AI as part of their learning process. The challenge then becomes developing assignments that measure not only content knowledge but also the meta-AI skills and competencies necessary to navigate and leverage these tools effectively. This approach acknowledges the complexity of AI's applications and ensures it is used responsibly, promoting a learning environment that respects both the potential and the limitations of artificial intelligence.

Monday, May 6, 2024

In Education, AI Is an Emergency

On one hand, AI presents an exhilarating leap forward, a kind of magic wand that promises to transform how we learn and teach. On the other hand, beneath this glam surface lies a grittier reality, one where the very essence of learning could be at risk.

In education, the core value lies in the process itself. The act of wrestling with ideas, constructing arguments, and stumbling over complex problems is where true learning happens. If a student turns to AI to write an essay, they might technically meet the assignment's requirements, but they've bypassed the intellectual struggle critical to internalizing knowledge. This worry has only deepened in the wake of the pandemic, which already strained educational norms and exposed glaring disparities. Introducing AI into this mix feels like throwing a wrench into an already delicate machine, risking the dilution of the educational experience and fostering a generation more adept at using tools than thinking independently.

Addressing this is no minor feat. It is not about rejecting AI's benefits outright, but rather about steering its use with a careful hand. Educators must become architects of a new curriculum that anticipates AI's influence and actively incorporates it in enriching ways. Perhaps this means designing projects where AI is expected to be used, but the projects are still challenging and still generate growth.

However, such a transformative approach to curriculum development is a colossal task, one that varies across academic disciplines and levels of education. Educators need robust support systems, time to experiment and innovate, and backing from policies that understand and address these unique challenges. Governments and educational leaders must be partners in crafting policies that nurture educationally effective and responsible AI use.

As I reflect on this development, I am struck by the immensity of the challenge before us. It is not just about adapting to a new tool; it is about redefining the very foundations of how we teach and learn. It is about finding a way to harness the power of AI without sacrificing the soul of education. This is a journey that will require bold experimentation, deep collaboration, and a willingness to embrace the unknown. But it is a journey we must undertake, for the stakes are too high to ignore. The future of education hangs in the balance, and it is up to us to shape it with wisdom, courage, and a steadfast commitment to the human experience of learning.

Friday, May 3, 2024

Public Money, Private Glory?

At tech events, where AI CEOs bask in the adoration, there's a conspicuous absence in the narrative: the role of public funding and public research. These technologies haven't just sprung up through the ingenuity and perseverance of a few brilliant minds; they're the culmination of substantial public investment. Yet you'd be hard-pressed to hear a word of thanks to taxpayers or governments at these glittering presentations.

The problem with this omission is twofold. Firstly, it promotes a misleading story of technological development—one where breakthroughs seem to happen through sheer brilliance rather than collaborative, incremental progress supported by public funding. This narrative can skew public perception, suggesting that technological advancement might somehow spontaneously occur without structured support. It makes the process seem more magical than methodical, glossing over the reality that innovation is usually more marathon than sprint, and certainly not a solo race.

Secondly, this narrative concentrates excessive admiration, and thus influence, in the hands of tech leaders. Celebrated as visionary and almost superhuman, these individuals often come to wield significant power, not just over their companies but within society itself. Yet, while they may be exceptional in their fields, they frequently lack grounding in the social sciences and humanities, or experience in broader human affairs, areas crucial for understanding the implications of the technologies they unleash. This can lead to decisions that prioritize innovation over social impact or public welfare, a risky imbalance.

The superstar culture in technology isn't just an issue of misrepresentation. It has practical consequences, potentially leading policymakers and the public to undervalue the importance of ongoing governmental support for research. If tech advancements are viewed as products of individual genius rather than results of public investment and collaboration, governments and voters might feel justified in cutting funds to these areas, mistakenly believing the private sector will fill the gap. This could slow innovation and shift the global tech landscape, especially towards countries that maintain robust public funding for research.

Acknowledging the role of public funding in technology isn't about diminishing the achievements of tech leaders—it's about painting a more complete and accurate picture of innovation. This more nuanced understanding could foster better-informed decisions regarding funding, education, and policy, ensuring the ecosystem that nurtures new technologies remains dynamic and well-supported.

Ultimately, recognizing the collective contributions to technological advancements isn't just about giving credit where it’s due. It's about ensuring a balanced narrative that neither idolizes the individual innovator nor underestimates the foundational role of public investment. By correcting this imbalance, we can encourage a more sustainable, equitable approach to technology development—one that's grounded in reality and attentive to the broader implications of rapid technological change.

Four Myths About AI

AI is often vilified, with myths shaping public perception more than facts. Let us dispel four common myths about AI and present a more bala...