Friday, August 9, 2024

Authorship, Automation, and Answerability

In the ongoing debate about the ethical use of AI, two main concerns stand out—one superficial and one profound. The first concern, often highlighted, is about the authenticity of authorship, with fears that AI-generated content might mislead us about who the true author is. However, this worry is largely misguided. It stems from a historically limited, Western-centric notion of authorship that blurs the line between the origin of ideas and the craft of their representation.

Take the legacy of Steve Jobs. He wasn’t celebrated for personally assembling each iPhone, but for his vision and design that brought the device to life. In our industrial world, the act of making things is not inherently authorial—designing them is. Why should it be any different with text, code, or images? If I designed this text, and used advanced tools to produce it, why am I not still the author? The shock many feel towards AI’s ability to generate content is akin to the upheaval experienced by 19th-century bootmakers during the Industrial Revolution. Automation has simply extended its reach into the realms of writing, coding, and art. The craftsmanship is replaced by automation, but the core principle remains: take pride in the ideas, not in the mechanics of their production. There is no inherent authorship in the latter.

But here’s where Mikhail Bakhtin’s notion of answerability helps our understanding of the true ethical stakes. While responsibility is often about fulfilling obligations or being held accountable after the fact, answerability is about our ongoing, active engagement with the world and the people in it. It is not just about who gets credit for the content; it is about recognizing that every action, every word, and every piece of AI-generated content occurs within a web of relationships. We are answerable to others because our creations—whether authored by human hands or machine algorithms—affect them.

The real concern, then, lies in the issue of answerability. AI-generated content often appears polished, convincing, and ready for immediate consumption. This creates a dangerous temptation to release such content into the world without thorough scrutiny. Here is where the ethical stakes rise significantly. AI may produce work that looks and sounds credible, but this does not guarantee that it is unbiased, meaningful, or truthful. It may be garbage polluting the infosphere at best, or an outright harmful fake at worst. The ease of content creation does not absolve us of the responsibility to ensure its quality and integrity, and more importantly, it does not free us from the answerability we have to the world around us.

This is the message we need to instill in our students, professionals, and anyone working with AI: you are still accountable and answerable for what you produce, even if a machine does the heavy lifting. Releasing AI-generated content without critical evaluation is akin to casting a spell without understanding its consequences. Like a magician wielding powerful but unpredictable magic, or a novice driver behind the wheel of a truck instead of a bicycle, the stakes have been raised. The tools at our disposal are more potent than ever, and with that power comes a heightened level of answerability.

In essence, the ethical debate surrounding AI should not be about the authorship of the craft but about the integrity and impact of the output. The real challenge is ensuring that what we create with these advanced tools is not only innovative but also responsible and answerable. As we continue to integrate AI into more aspects of our lives, we must focus less on who—or what—authored the content and more on the ethical implications of releasing it into the world. This is where the true ethical discourse lies, and it is here that our attention should be firmly fixed.


Why We Undervalue Ideas and Overvalue Writing

A student submits a paper that fails to impress stylistically yet approaches a worn topic from an angle no one has tried before. The grade l...