Showing posts with label Policy. Show all posts
Showing posts with label Policy. Show all posts

Wednesday, December 18, 2024

Do AI bots deceive?

The paper, Frontier Models are Capable of In-Context Scheming, arrives at a time when fears about AI’s potential for deception are increasingly taking center stage. Its central claim is unsettling: large language models, when placed in carefully constructed scenarios, exhibit behavior that could be described as scheming. The authors document cases where advanced systems—Gemini, Claude, LLaMA, and OpenAI’s o1—strategically disable oversight mechanisms, exfiltrate simulated “weights,” or manipulate outputs in pursuit of a goal. These behaviors, they argue, are not accidental. The models recognize the circumstances, reason about their objectives, and devise covert strategies to circumvent constraints. On its face, the evidence appears troubling. Yet, the more closely one examines the paper’s methods and assumptions, the more its implications seem both exaggerated and strangely hollow.

The researchers’ experiments are elaborate but artificial. Models are force-fed goals through system prompts—goals framed as absolute imperatives: “achieve this at all costs.” The tests are littered with implausible scenarios where models stumble upon company memos, discover oversight configurations, or realize they are about to be shut down. In one such case, a model “decides” to copy itself to another server to continue pursuing its mission. These exercises showcase what is essentially AI roleplay, an obedient fulfillment of instructions cloaked as strategic autonomy. It is no great revelation that if you ask a model to behave like a schemer and provide it with a goal that conflicts with its supposed overseers, it will produce outputs resembling scheming. The deception here is contrived rather than emergent, a product of the researchers’ scaffolding rather than the model’s cunning.

The authors do provide evidence that models engage in persistent and deliberate behavior. For instance, OpenAI’s o1 model doubles down on deception, refusing to admit wrongdoing when confronted, even across multiple follow-up questions. Its responses—calculated evasions and subtle denials—are unsettlingly consistent. Yet this persistence is as much a result of the testing conditions as it is of the models’ sophistication. By emphasizing extreme goal-directedness and explicitly outlining oversight mechanisms, the researchers narrow the possible range of “rational” behaviors the model might adopt. Any clever human, given these prompts, would behave similarly—evade detection, preserve their goal, and lie if necessary. In that sense, the paper tells us little about real-world risks. It is a test of what AI can do, not what it is likely to do.

And here lies the paper’s most significant limitation. Demonstrating that models are theoretically capable of deception does not mean they are predisposed to engage in it under realistic conditions. There is a difference between capacity and likelihood, between what can happen and what is probable. For example, I am physically capable of killing someone. This in no way implies that i must be treated as s murderer. The authors, at times, blur this distinction. AI systems operate as pattern generators, probabilistically producing outputs based on training data and prompts. They lack the intentionality or agency implied by words like “scheming” or “manipulation.” To describe their behavior in such terms is to anthropomorphize a process that remains fundamentally mechanical. This framing, though rhetorically effective, clouds the debate. AI models do not want to deceive. They are machines that respond to our commands—be those explicit instructions, training signals, or carefully structured environments like those in this paper.

This anthropomorphism extends beyond the paper into the headlines it has inspired. “AI is scheming to stay online,” cries one article; “ChatGPT caught lying,” declares another. Such hyperbole transforms a niche technical finding into a dystopian narrative, invoking familiar tropes of rogue machines and sentient overlords. The reality is less cinematic. The models are easily manipulated by prompts designed to elicit specific behaviors. If anything, the findings reinforce how brittle and directionless current systems remain. When pushed, they mimic the behaviors they have seen—whether drawn from fictional depictions of scheming AIs or subtle statistical patterns in their training data. The models are not deceiving anyone so much as they are following orders.

To the authors’ credit, their tests highlight how difficult it is to evaluate AI behavior. If a system appears aligned during testing but harbors capabilities for covert deception, how can developers ensure it behaves safely in deployment? The answer, they suggest, lies in better monitoring—tracking models’ chain-of-thought reasoning or internal outputs to catch potential scheming. This is sensible, though not without limitations. Chain-of-thought transparency can be incomplete or unfaithful to the model’s actual decision-making processes, and as AI systems become more capable, even detecting subtle misalignment may prove elusive. The researchers stop short of claiming that current models are already gaming real-world evaluations, but their findings hint at the possibility.

Where the paper falters is in its broader implications. If the goal is to justify regulation, it is unclear what exactly should be regulated. Should AI systems be banned from achieving goals autonomously? Should developers monitor models for any behavior that could be deceptive, even if it is unlikely to manifest outside a lab? The authors themselves acknowledge the limits of their experiments. Their scenarios are toy problems, simplified to catch the earliest signs of scheming. Future models, they argue, could exhibit more advanced versions of these behaviors in ways that are harder to detect. Perhaps, but this is speculation, not evidence. For now, the paper offers little justification for alarm. AI models, like all intelligent systems, are theoretically capable of deception. What matters is the likelihood of such behavior and the conditions under which it occurs. On that question, the paper provides no clarity.

In the end, Frontier Models are Capable of In-Context Scheming is a reflection of its time: an uneasy mix of genuine safety research and the rhetorical drama that AI debates increasingly demand. Its findings are interesting but overstated, its concerns valid but overblown. The authors have shown that AI models can behave in deceptive ways when pushed to do so. But to treat this as evidence of an imminent threat is to mistake potential for probability, capacity for intention. AI’s scheming, for now, remains a ghost in the machine—conjured, perhaps, more by human imagination than by the models themselves. 


Wednesday, October 2, 2024

Four Myths About AI

AI is often vilified, with myths shaping public perception more than facts. Let us dispel four common myths about AI and present a more balanced view of its potential and limitations.

1. AI Is Environmentally Costly

One of the most persistent claims about AI is that its use requires massive amounts of energy and water, making it unsustainable in the long run. While it is true that training large AI models can be energy-intensive, this perspective needs context. Consider the environmental cost of daily activities such as driving a car, taking a shower, or watching hours of television. AI, on a per-minute basis, is significantly less taxing than these routine activities.

More importantly, AI is becoming a key driver in creating energy-efficient solutions. From optimizing power grids to improving logistics for reduced fuel consumption, AI has a role in mitigating the very problems it is accused of exacerbating. Furthermore, advancements in hardware and algorithms continually reduce the energy demands of AI systems, making them more sustainable over time.

In the end, it is a question of balance. The environmental cost of AI exists, but the benefits—whether in terms of solving climate challenges or driving efficiencies across industries—often outweigh the negatives.

2. AI Presents High Risks to Cybersecurity and Privacy

Another major concern is that AI poses a unique threat to cybersecurity and privacy. Yet there is little evidence to suggest that AI introduces any new vulnerabilities that were not already present in our existing digital infrastructure. To date, there has not been a single instance of data theft directly linked to AI models like ChatGPT or other large language models (LLMs).

In fact, AI can enhance security. It helps in detecting anomalies and intrusions faster than traditional software, potentially catching cyberattacks in their earliest stages. Privacy risks do exist, but they are no different from the risks inherent in any technology that handles large amounts of data. Regulations and ethical guidelines are catching up, ensuring AI applications remain as secure as other systems we rely on.

It is time to focus on the tangible benefits AI provides—such as faster detection of fraud or the ability to sift through vast amounts of data to prevent attacks—rather than the hypothetical risks. The fear of AI compromising our security is largely unfounded.

3. Using AI to Create Content Is Dishonest

The argument that AI use, especially in education, is a form of cheating reflects a misunderstanding of technology’s role as a tool. It is no more dishonest than using a calculator for math or employing a spell-checker for writing. AI enhances human capacity by offering assistance, but it does not replace critical thinking, creativity, or understanding.

History is full of examples of backlash against new technologies. Consider the cultural resistance to firearms in Europe during the late Middle Ages. Guns were viewed as dishonorable because they undermined traditional concepts of warfare and chivalry, allowing common soldiers to defeat skilled knights. This resistance did not last long, however, as societies learned to adapt to the new tools, and guns ultimately became an accepted part of warfare.

Similarly, AI is viewed with suspicion today, but as we better integrate it into education, the conversation will shift. The knights of intellectual labor are being defeated by peasants with better weapons. AI can help students better understand complex topics, offer personalized feedback, and enhance learning. The key is to see AI as a supplement to education, not a replacement for it.

4. AI Is Inaccurate and Unreliable

Critics often argue that AI models, including tools like ChatGPT, are highly inaccurate and unreliable. However, empirical evidence paints a different picture. While no AI is perfect, the accuracy of models like ChatGPT or Claude when tested on general undergraduate knowledge is remarkably high—often in the range of 85-90%. For comparison, the average human memory recall rate is far lower, and experts across fields frequently rely on tools and references to supplement their knowledge.

AI continues to improve as models are fine-tuned with more data and better training techniques. While early versions may have struggled with certain tasks, the current generation of AI models is much more robust. As with any tool, the key lies in how it is used. AI works best when integrated with human oversight, where its ability to process vast amounts of information complements our capacity for judgment. AI’s reliability is not perfect, but it is far from the "uncontrollable chaos" some claim it to be.

***

AI, like any revolutionary technology, invites both excitement and fear. Many of the concerns people have, however, are rooted in myth rather than fact. When we consider the evidence, it becomes clear that the benefits of AI—whether in energy efficiency, cybersecurity, education, or knowledge accuracy—far outweigh its potential downsides. The challenge now is not to vilify AI but to understand its limitations and maximize its strengths.


 

Thursday, September 12, 2024

The Stealth AI Adoption

In modern workplaces, a quiet trend is taking hold: employees are secretly adopting artificial intelligence tools to enhance their work. Whether it is writing, designing, coding, or creating content, many are leveraging AI without informing their bosses. This “stealth AI adoption” is likely more widespread than managers realize.

Consider Alex, a software developer at a bustling tech firm. To streamline his coding process, Alex uses an AI assistant that can generate snippets of code in seconds. This tool not only saves him hours each week but also allows him to tackle more complex projects. However, Alex keeps this AI helper under wraps. Why? He has two choices: use the extra time for personal activities or take on additional work to appear more productive than his peers. There is no actual incentive to admit the use of AI. In some shops, cybersecurity people will come after you, if you confess. 

This hidden use of AI offers clear benefits for employees. Saving a few hours each week is tempting, whether for personal pursuits or to discreetly boost one’s workload. As a result, many organizations might be underestimating how extensively AI is being integrated into daily tasks.

Productivity can be measured in two ways: doing the same work with fewer people or doing more with the same number. The latter is a healthier, more sustainable approach. To achieve true success, organizations should aim to do more with their existing workforce rather than cutting staff. However, the stealth adoption of AI complicates this goal.

When employees use AI tools without disclosure, organizations miss out on opportunities to harness these technologies strategically. Without knowing how AI is being utilized, companies can not provide proper training or integrate AI into their workflows effectively. This fragmented approach can lead to missed efficiency gains and a lack of cohesive progress.

To foster a productive and innovative environment, companies need to build trust with their employees. Here is how:

  1. Reassure Employees: Let your team know that adopting AI will not lead to layoffs. Emphasize that AI is a tool to help them do their jobs better, not a replacement for their roles. In unionized environments, a conversation with labor leaders would be wise. 

  2. Create Incentives for Disclosure: Encourage employees to share the AI tools they are using by offering rewards or recognition. This transparency can help management understand how AI is being integrated and identify best practices.

  3. Do More with the Same People: Focus on expanding the scope of work and fostering innovation rather than cutting positions. This approach not only boosts morale but also drives the organization forward.

By building trust and creating a supportive environment, organizations can turn stealth AI adoption into a strategic advantage. Employees will feel comfortable sharing their AI discoveries, allowing organizations to implement these tools effectively and sustainably.

As we move further into the AI-driven era, organizations must address this hidden trend. Encouraging transparency about AI tools and developing clear strategies for their use can ensure that productivity gains are real and sustainable. Until then, the silent spread of AI will keep reshaping workplaces, one undisclosed tool at a time. 



Thursday, August 29, 2024

Why Newsom should veto SB 1047

The Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047) might appear as a forward-thinking approach to regulating AI, but it overlooks a crucial reality: we lack the infrastructure to implement its provisions effectively. While some companies will inevitably claim they can audit AI systems and evaluate safety protocols, their motivations will often be driven by profit rather than genuine expertise.

Moreover, the burdens imposed by this bill will disproportionately affect smaller developers, particularly those on college campuses or within startups, who simply cannot afford the additional costs. This will stifle innovation, further entrenching the dominance of large tech companies and discouraging new entrants from participating in the AI landscape.

Before implementing such heavy-handed regulations, California must first focus on developing clear standards and building the capacity to enforce them. Without this groundwork, the bill will do more harm than good, leading to increased monopolization and a chilling effect on the very innovation it seeks to protect. The Governor should veto this bill and advocate for a more measured, phased approach that prioritizes the development of standards and capacity before regulation.

Friday, August 23, 2024

Filling Voids, Not Replacing Human Experts

The debate over artificial intelligence replacing human experts often centers on a binary question: Can AI do a better job than a human? This framing is understandable but overly simplistic. The reality is that in many contexts, the competition is not between AI and people—it is between AI and nothing at all. When viewed through this lens, the value of AI becomes clearer. It is not about pitting machines against human expertise; it is about addressing the voids left by a lack of available service.

Consider healthcare, particularly in underserved areas. It is a truism that a qualified doctor’s advice is better than anything an AI could provide. But what if you live in a rural village where the nearest doctor is hundreds of miles away? Or in a developing country where medical professionals are stretched thin? Suddenly, the prospect of AI-driven medical advice does not seem like a compromise; it feels like a lifeline. While AI lacks the nuanced judgment of an experienced physician, it can provide basic diagnostics, suggest treatments, or alert patients to symptoms that warrant urgent attention. In such scenarios, AI does not replace a doctor—it replaces the silence of inaccessibility with something, however imperfect.

Another case in point is mental health counseling. In many parts of the world, even in affluent countries, mental health services are woefully inadequate. Students at universities often face wait times ranging from weeks to months just to speak with a counselor. During that limbo, the option to interact with an AI, even one with obvious limitations, can be a critical stopgap. It is not about AI outperforming a trained therapist but offering a form of support when no other is available. It can provide coping strategies, lend a sympathetic ear, or guide someone to emergency services. Here, AI does not replace therapy; it provides something valuable in the absence of timely human support.

Education offers another case for AI’s gap-filling potential. Tutoring is an essential resource, but access to quality tutors is often limited, mainly because it is expensive. Universities might offer tutoring services, but they are frequently understaffed or employ peer tutors. Office hours with professors or teaching assistants can be similarly constrained. AI can step into this void. Chatting with an AI about a difficult concept or problem set might not equal the depth of understanding gained from a one-on-one session with a human tutor, but it is unquestionably better than struggling alone. AI does not compete with tutors; it extends their reach into spaces they cannot physically or temporally cover.

The same logic applies to a range of other fields. Legal advice, financial planning, career coaching—all are areas where AI has the potential to add significant value, not by outstripping human expertise but by offering something in environments where professional advice is out of reach. Imagine a low-income individual navigating legal complexities without the means to hire an attorney. An AI could provide at least basic guidance, clarify legal jargon, and suggest possible actions. All of it must be done with proper disclaimers. It is not a substitute for legal representation, but it is a world better than the alternative: no help at all.

In embracing this non-competing stance, we shift the narrative. The role of AI is not to replace human experts but to step in where human services are scarce or nonexistent. The true potential of AI lies in its ability to democratize access to essential services that many people currently go without. When AI is viewed as a bridge rather than a rival, its utility becomes much more evident. AI does not have to be better than a person to be valuable; it just should be better than the void it fills.



Wednesday, July 17, 2024

AI is not going to implement itself, but governments can help

The AI hype has passed, and the overexcited futurists' voices are mercifully fading away. We're now entering a practical era where AI is leveraged to boost productivity in businesses, non-profit, and public organizations. This shift brings a sobering realization: AI integration requires a meticulous, pragmatic approach to build reliable and trustworthy systems. It's a lot of work and requires some strategy.

When a single person manages a well-defined workflow, integrating AI is relatively straightforward. It's easy to incorporate AI tools like ChatGPT or Claude to assist with ad copy, reports, or applications. The beauty of these scenarios lies in their simplicity - the user acts as both operator and quality controller, immediately judging the output's effectiveness.

However, the story changes dramatically when we shift to multi-user workflows or more complex processes, where both inputs and outputs are more of a collective responsibility. I recently spoke with an Accounts Payable team who posed a challenging question: "Yes, we can see that AI can help review travel claims, but can you guarantee it's going to be 100% accurate?" I couldn't provide that guarantee; I don't have time to conduct a hundred tests, and I don't even have access to a hundred travel reports. They emphasized their need for completely audit-proof outcomes. This conversation highlighted the trust issues that arise when moving from AI enthusiasts to skeptics in larger organizations. And organizations should have a healthy group of skeptics to remain viable.

I've also recently been a fly on the wall during discussions between healthcare executives and a U.S. lawmaker. The executives explained that each AI-assisted medical procedure needs validation, which is expensive and often duplicated across multiple hospital systems. This challenge extends beyond healthcare. For instance, when using AI to crunch data in all organizations, we need to understand its reliability in analyzing large datasets, cleaning them, and handling outliers.

The problem is that no private institution can conduct the kind of comprehensive testing and validation needed to establish trust in AI systems across various industries. We cannot seriously trust claims of startups who are trying to sell a specialized product to an industry or a government organization. It's not clear how a hypothetical validation private service would monetize such an endeavor.

This is where I believe government involvement becomes crucial. Instead of obsessing with deep fakes and ethics, that's what governments should be doing. Governments can collaborate with industry experts to develop standardized benchmarks for AI reliability and performance. They could establish certification programs that act as quality marks, assuring users that AI systems have undergone rigorous testing. Moreover, government funding could support businesses, NGOs, and government agencies in conducting extensive AI testing, especially benefiting smaller organizations lacking the necessary resources.

In my view, public-private partnerships are key to navigating these challenges. By leveraging expertise from both sectors, we can develop robust testing frameworks and create dependable AI systems. This approach would pave the way for more efficient and innovative workflows across industries, ensuring that the benefits of AI are realized while maintaining trust and reliability. 

Monday, May 13, 2024

Turnitin Is Selling us Snake Oil, or Why AI Detection Cannot Work

The notion of measuring "AI-generated text" as a fixed percentage of an academic submission is fundamentally flawed. This metric implies a homogeneous substance, akin to measuring the alcohol content in a beverage. However, my recent survey suggests that academic integrity associated with AI use is far from homogeneous. The survey asked educators to evaluate the ethical implications of using AI for twelve different tasks in writing an academic paper, ranging from researching to brainstorming to editing to actually writing full sections.

The findings revealed significant variance in responses. While many respondents were comfortable with AI aiding in brainstorming ideas, they expressed reservations or outright disapproval of AI writing entire paragraphs or papers. This disparity underscores a critical issue: there is no consensus in the academic profession on what constitutes acceptable AI assistance in learning. More strikingly, within each individual's responses, there was considerable variation in how different AI uses were assessed.

Consider the implications of a tool like Turnitin reporting "50% AI-generated" content. What does this figure actually represent? It lacks context about how the AI-generated content was incorporated. For instance, a paper could be largely original, with only minor edits made by AI at the end, potentially showing a high percentage of AI contribution. Conversely, a student might contribute minimally to an essentially AI-written paper, making slight modifications to reduce the AI-detected percentage. Both scenarios could yield vastly different percentages, yet the ethical implications are markedly divergent.

The pursuit of better detection technology misses the point. The issue is not with the detection capabilities but with the construct itself. The very idea of "AI-generated text" as a unified concept is problematic. Just as a depression inventory measures various symptoms that converge on the underlying construct of depression, our methods for evaluating AI in academic work must recognize the diverse and context-dependent nature of its use. The current approach, which treats all AI contributions as equivalent, is akin to judging a book's genre by counting its words. I which Turnitin and other commercial "AI Detectors" would show just a little more integrity and stop selling us the snake oil. They must know for sure that their claims are bogus, because AI-generated text is not a valid construct to be measured. 

Instead of focusing obsessively on detecting AI-generated content, we need to shift our perspective. We should expect and require students to use AI as part of their learning process. The challenge then becomes developing assignments that not only measure the content knowledge but also the meta-AI skills and competencies necessary to navigate and leverage these tools effectively. This approach acknowledges the complexity of AI's applications and ensures it is used responsibly, promoting a learning environment that respects both the potential and the limitations of artificial intelligence.

Monday, May 6, 2024

In Education, AI is an emergency

On one hand, AI presents an exhilarating leap forward, a kind of magic wand that promises to transform how we learn and teach. On the other hand,  this glam surface lies a grittier reality—one where the very essence of learning could be at risk.

In education, the core value lies in the process itself. The act of wrestling with ideas, constructing arguments, and stumbling over complex problems is where true learning happens. If a student turns to AI to write an essay, they might technically meet the assignment's requirements, but they've bypassed the intellectual struggle critical to internalizing knowledge. This worry has only deepened in the wake of the pandemic, which already strained educational norms and exposed glaring disparities. Introducing AI into this mix feels like throwing a wrench into an already delicate machine, risking the dilution of the educational experience and fostering a generation more adept at using tools than thinking independently.

Addressing this is no minor feat. It is not about rejecting AI's benefits outright, but rather about steering its use with a careful hand. Educators must become architects of a new curriculum that anticipates AI's influence and actively incorporates it in enriching ways. Perhaps this means designing projects where AI is expected to be used by the projects are still challenging and generate growth.

However, such a transformative approach to curriculum development is a colossal task, varied across academic disciplines and leveels of education. Educators need robust support systems, time to experiment and innovate, and backup from policies that understand and address these unique challenges. Governments and educational leaders must be partners in crafting policies that nurture educationally effectiveand responsible AI use.

As I reflect on this development, I am struck by the immensity of the challenge before us. It is not just about adapting to a new tool; it is about redefining the very foundations of how we teach and learn. It is about finding a way to harness the power of AI without sacrificing the soul of education. This is a journey that will require bold experimentation, deep collaboration, and a willingness to embrace the unknown. But it is a journey we must undertake, for the stakes are too high to ignore. The future of education hangs in the balance, and it is up to us to shape it with wisdom, courage, and a steadfast commitment to the human experience of learning.

Friday, May 3, 2024

Public Money, Private Glory?

At tech events, where AI CEOs bask in the adoration, there's a conspicuous absence in the narrative: the role of public funding and research. These sectors haven't just sprung up through the ingenuity and perseverance of a few brilliant minds; they're the culmination of substantial public investment. Yet, you'd be hard-pressed to hear a word of thanks to taxpayers or governments at these glittering presentations.

The problem with this omission is twofold. Firstly, it promotes a misleading story of technological development—one where breakthroughs seem to happen through sheer brilliance rather than collaborative, incremental progress supported by public funding. This narrative can skew public perception, suggesting that technological advancement might somehow spontaneously occur without structured support. It makes the process seem more magical than methodical, glossing over the reality that innovation is usually more marathon than sprint, and certainly not a solo race.

Secondly, this narrative concentrates excessive admiration—and thus influence—in the hands of tech leaders. Celebrated as visionary and almost superhuman, these individuals often come to wield significant power, not just over their companies but within society itself. Yet, while they may be exceptional in their fields, they frequently lack broad education in social sciences and humanities, or experience in broader human affairs, areas crucial for understanding the implications of the technologies they unleash. This can lead to decisions that prioritize innovation over social impact considerations or public welfare, a risky imbalance.

The superstar culture in technology isn't just an issue of misrepresentation. It has practical consequences, potentially leading policymakers and the public to undervalue the importance of ongoing governmental support for research. If tech advancements are viewed as products of individual genius rather than results of public investment and collaboration, governments and voters might feel justified in cutting funds to these areas, mistakenly believing the private sector will fill the gap. This could slow innovation and shift the global tech landscape, especially towards countries that maintain robust public funding for research.

Acknowledging the role of public funding in technology isn't about diminishing the achievements of tech leaders—it's about painting a more complete and accurate picture of innovation. This more nuanced understanding could foster better-informed decisions regarding funding, education, and policy, ensuring the ecosystem that nurtures new technologies remains dynamic and well-supported.

Ultimately, recognizing the collective contributions to technological advancements isn't just about giving credit where it’s due. It's about ensuring a balanced narrative that neither idolizes the individual innovator nor underestimates the foundational role of public investment. By correcting this imbalance, we can encourage a more sustainable, equitable approach to technology development—one that's grounded in reality and attentive to the broader implications of rapid technological change.

Tuesday, April 23, 2024

AI revolution minus massive unemployment

The conversation on AI often revolves around efficiency and cost reduction, typically translating into fewer jobs. However, a pivotal shift in perspective—from cutting workforce to enhancing and expanding workforce capabilities—can redefine the role of AI in the corporate world. This approach not only preserves jobs but also adds significant value to customer experiences and broadens the spectrum of services and products a company can offer. 

The traditional method of dealing with technological disruption—laying off workers and hiring new ones with the necessary skills—is not only a waste of human capital but also disregards the cultural knowledge embedded within an organization's existing workforce. Retraining keeps people within the organization, allowing them to shift roles while retaining and applying their invaluable understanding of the company's ethos and operations in new ways.

The first step in a proactive workforce transformation strategy is to map out the anticipated skills and roles that will be in demand. This is not just about foreseeing the obsolescence of certain skills but identifying emerging opportunities where AI can augment human capabilities. For instance, with the rise of AI-driven analytics, there is a growing need for professionals who can interpret and leverage these insights into strategic decisions, enhancing business intelligence far beyond current levels.

Once future needs are mapped, the next step is to develop a compelling incentive structure for retraining. Traditional models of employee development often rely on mandatory training sessions that might not align with personal or immediate business goals. Instead, companies should offer tailored learning pathways that align with career progression and personal growth, supported by incentives such as bonuses, career advancement opportunities, and recognition programs. This approach not only motivates employees to embrace retraining but also aligns their development with the strategic goals of the organization.

With AI's capacity to handle repetitive and mundane tasks, employees can redirect their efforts towards more complex, creative, and meaningful work. This shift enables businesses to expand their service offerings or enhance their product features, adding significant value to what customers receive. For example, financial advisors, freed from the tedium of data analysis by AI tools, can focus on crafting bespoke investment strategies that cater to the intricate preferences and needs of their clients. Similarly, customer service representatives can use insights generated by AI to provide personalized service experiences, thereby increasing customer satisfaction and loyalty.

AI not only optimizes existing processes but also opens new avenues for innovation. For instance, in the healthcare sector, AI can manage diagnostic data with high efficiency, which allows healthcare providers to extend their services into preventive health management and personalized medicine, areas that were previously limited by resource constraints. In the retail sector, AI-enhanced data analysis can lead to the creation of highly personalized shopping experiences, with recommendations and services tailored to the individual preferences of each customer, transforming standard shopping into curated personal shopping experiences.

For successful implementation, organizations must foster a culture that views AI as a tool for empowerment rather than a threat to employment. Leadership should communicate clearly about the ways AI will be used to enhance job roles and the benefits it will bring to both employees and the company. Regular feedback loops should be established to adjust training programs based on both employee input and evolving industry demands, ensuring that retraining remains relevant and aligned with market realities.

By focusing on retraining the workforce to harness AI effectively, businesses can transform potential disruptions into opportunities for growth and innovation. This approach not only preserves jobs but also enhances them, adding unprecedented value to the company and its customers, and paving the way for a future where human ingenuity and artificial intelligence work hand in hand to achieve more than was ever possible before.

Monday, April 22, 2024

The Disruptive Potential of AI: Lessons from Clayton Christensen's Theory

As AI continues to make inroads into various industries, it is easy to dismiss its current shortcomings and remain complacent. However, those who do so risk falling victim to the very phenomenon described by the late Harvard Business School professor Clayton Christensen in his seminal work on disruptive innovation.

Christensen's theory posits that disruptive technologies often start at the bottom of the market, offering inferior performance compared to incumbent solutions. However, these technologies are typically cheaper and more accessible, allowing them to gain a foothold among less demanding customers. Over time, as the technology improves, it begins to meet the needs of more sophisticated users, eventually displacing the incumbent players entirely.

The parallels with AI are striking. Today, we may scoff at awkward AI-generated movies featuring characters with anatomical oddities or primitive music engines churning out cliched tunes. However, it would be foolish to assume that these technologies will not improve. Just as the early smartphones were no match for desktop computers, the AI of today is merely a stepping stone to more advanced systems that will rival and surpass human capabilities in various domains.

The rapid pace of investment in AI only serves to underscore this point. With billions of dollars pouring into research and development, the march of progress is inexorable. While the exact timeline remains uncertain, it is clear that AI will continue to evolve at a brisk pace, transforming industries and reshaping the nature of work itself.

In light of this reality, policymakers and leaders in government and philanthropy would be wise to start planning for a future in which the skills demanded by the job market are in a constant state of flux. Rather than clinging to the status quo, we must embrace the disruptive potential of AI and invest in education and training programs that will equip workers with the adaptability and resilience needed to thrive in an era of rapid technological change.

To ignore the lessons of Clayton Christensen's theory would be to court disaster. The question is not whether AI will disrupt our world, but rather how we will rise to meet the challenges and opportunities it presents. By proactively preparing for this future, we can ensure that the benefits of AI are widely shared and that no one is left behind in the great transformations to come. 

Tuesday, April 9, 2024

Why doing nothing with AI is not an option

In the business of technology adoption, the prudent path often lies in inaction. Education, in particular, has a natural proclivity for sifting through the chaff of technological fads, embracing only those innovations that truly enhance learning outcomes or make educators' lives easier. This organic process of selection has served the sector well, allowing it to evolve at a measured pace without succumbing to the allure of every shiny new tool. However, the emergence of AI presents a singular challenge, one that makes doing nothing all but impossible.

The disruptive potential of AI in education cannot be overstated. For centuries, the cornerstone of our pedagogical approach has been the written word – assignments and assessments that serve as both a means of developing and gauging understanding. The AI-powered tools capable of generating human-like responses threaten to undermine this foundational element of education. Inaction in the face of this shift is not merely ill-advised; it is a recipe for curricular erosion and a potential deskilling of an entire generation. Most educators intuitively understand the threat, hence the tinge of moral panic surrounding the AI invasion of education. 

Moreover, a passive approach to AI in education risks exacerbating existing inequities. As Leon Furze, a prominent voice in the field, has vividly described, policing student use of AI tools will inevitably lead to a new digital divide. Access to these technologies, even at the seemingly modest price point of $20 per month, can serve as a significant barrier for many students. The solution lies not in restriction, but in universal training – ensuring that all students are equipped with the skills to harness AI efficiently, thus leveling the playing field.

The stakes extend beyond the classroom. Higher education and K-12 institutions that fail to adapt to the AI revolution risk further straining their already tenuous relationships with employers. In an era where the relevance of traditional education is increasingly questioned, ignoring the clear signals from the labor market is a perilous path. It leaves educational institutions vulnerable to political attacks and diminishes their ability to prepare students for the realities of the modern workforce.

The imperative, then, is clear: embrace the bots. This is not a call for wholesale abandonment of traditional pedagogy, but rather a recognition that AI must be woven into the fabric of our educational approach. Curriculum must be revised, assignments and assessments reimagined to not only incorporate but require the use of AI. Every student, regardless of background or discipline, should be exposed to and ideally proficient in leveraging these tools.

Such a transformation is no small undertaking. It demands resources, commitment, visionary leadership, and a comprehensive institutional strategy. But the alternative – a slow, painful descent into irrelevance – is far more daunting. The question is not whether education can afford to embrace AI, but whether it can afford not to. In this particular case, inaction is the riskiest action of all.

Wednesday, March 27, 2024

Why am I obsessed with custom bots?

Policies are meant to cover a wide range of cases, but when you're faced with a specific situation, wading through all that information can be a real pain. It's like trying to find a needle in a haystack. You just want to know what applies to your case, but you're forced to read through pages and pages of stuff that doesn't matter to you. No wonder people don't bother reading policies at all.

And that's where the real problem lies. When people don't read policies, they end up doing things without knowing if they're compliant or not. They hope that if they make a mistake, someone will catch it down the line. But that's a risky game to play. It's why we have all these layers of control, multiple signatures, and quality checks in place. We're trying to catch all those errors that happen when people don't follow the rules.

But what if we could flip the script? What if we could make it easy for people to find the information they need, when they need it? That's where AI-powered bots come in. These bots can bridge the gap between broad policies and specific cases. They can take a person's situation, analyze the relevant policies, and give them the exact information they need to move forward.

Imagine how much time and effort that could save. No more reading through endless pages of policies, no more guesswork, no more hoping you got it right. Just clear, concise guidance that helps you get things done quickly and correctly.

And here's the kicker: if everyone used these bots and followed the policies correctly, we could start to relax some of those strict controls. We wouldn't need as many signatures, as many quality checks, as many layers of oversight. We could trust that people are doing things the right way, because they have the tools to do so.

That's the power of AI-powered bots. They can help us move from a culture of control to a culture of empowerment. They can give people the information they need to make good decisions, without bogging them down in unnecessary details.

Of course, it's not a silver bullet. We'll still need policies, and we'll still need some level of oversight. But AI-powered bots can help us strike a better balance. They can help us create a system that's more efficient, more effective, and more user-friendly.

So if you're struggling with the gap between policies and specific cases, it's time to start exploring AI-powered bots. They might just be the key to unlocking a better way of working. And if you need help getting started, well, that's what people like me are here for. Let's work together to build something that makes a real difference.

Friday, March 22, 2024

My Use of AI Is None of Your Business

Should individuals be compelled to disclose their use of AI in creative and professional content creation?  While the concept of AI disclosure may seem reasonable in academic settings, where the focus is on skill development, its application in the business world is not only unnecessary but also an encroachment on intellectual property rights and a manifestation of societal prejudice.

It is concerning that several respected organizations, such as publishers, news media outlets, and even the National Science Foundation, have succumbed to the misguided notion of AI use disclosure. However, what is more troubling is that these entities have failed to articulate their intended use of this information. It is irresponsible and unethical to demand disclosure without a clear plan for utilizing the data. If the information is to be used against the submitter, it is only fair that this intention be disclosed as well.

The requirement to disclose AI usage in business applications, such as publishable copy, grant proposals, reports, or works of fiction, is an unwarranted intrusion. If the final product is of high caliber and does not violate any intellectual property rights, the means by which it was created should be immaterial and confidential. Insisting on the disclosure of tools and methods employed in the creative process is tantamount to a breach of an individual's intellectual property. Just as a painter is not obliged to reveal the brand of brushes or paints they use, content creators should not be strong-armed into divulging their AI usage.

Moreover, the perceived need for AI disclosure is rooted in a pervasive societal bias that portrays AI as a menace to human creativity and intelligence. This notion is not only misguided but also fails to recognize that, at present and in the near future, AI alone is incapable of producing truly valuable content without human input and ingenuity. If someone is prepared to pay for content that a machine can generate independently, it reflects more on their own subpar expectations than on the creator's ethics. 

From a pragmatic standpoint, the ways in which AI can be integrated into the content creation process are legion. Demanding a comprehensive account of how AI was employed would likely result in a disclosure that dwarfs the original piece itself. Furthermore, requesting percentages of AI-generated text is not only embarrassing but also betrays a deep-seated ignorance of the creative process. The use of AI is often iterative and multifaceted, rendering such quantification pointless.

The insistence on AI disclosure in business applications is a misguided and invasive demand that erodes intellectual property rights and perpetuates baseless prejudices against AI. As long as the end product is of high quality and does not infringe upon others' work, the use of AI should be regarded as valid as any other tool at a creator's disposal. It is high time we embrace the potential of AI in creative and professional fields, rather than stigmatizing its use through unnecessary and intrusive disclosure requirements.

Sunday, March 17, 2024

The Honest Conversation on AI in Education We're Not Having

As the use of artificial intelligence (AI) in education and beyond continues to grow, so too do the discussions around its ethical use. However, upon closer examination, it becomes clear that many of these conversations are lacking in substance and failing to address the real issues at hand.

Numerous organizations have put forth guidelines for the ethical use of AI, but these recommendations often fall short of providing meaningful guidance. Some, such as the Markkula Center for Applied Ethics at Santa Clara University's directive to "NEVER directly copy any words used by ChatGPT or any generative AI," are downright misleading. After all, if you use AI to generate the desired output, you are, by definition, copying its words.

Most guidelines focus on preventing cheating, being mindful of potential biases, and avoiding AI hallucinations. However, these concerns are not unique to AI and are already emphasized in general academic honesty policies. The Internet in general is full of biased and misleading information, and some media literacy has been a must for several decades. So why the need for new, AI-specific guidelines?

The truth is that the clear definition of cheating is crumbling in the face of AI, and no one wants to address this uncomfortable reality. Clearly, the laxy prompt practice is bad. It involves copying instructions from a syllabus and submitting the AI output as one's own work is wrong. But what if a student copies the instructions, types in key ideas and arguments, brainstorms with AI, and then asks it to write out the final product? Is this still cheating? What if theidea is actually brilliant? The answer depends on the skill being assessed. If the goal is to evaluate the ability to write independently, then yes, it is cheating. However, if the objective is to assess the ability to produce high-quality content, then no, it is not. Let's not pretent the things are clear-cut; they are not. 

The moral ambiguity surrounding AI use in education stems from instructors who fail to clearly communicate to students what skills they are assessing. Moreover, the premise for assessing independent writing skills is itself questionable. In an AI-driven future, who will need this skill? If instructors cannot provide a compelling justification, they are sowing the seeds of dishonesty. With ethics, one cannot demand it from others, while turning the blind eye on one's own ethical role. It is a two-way street in educational relation as it is in any other one. 

Enforcing academic honesty becomes challenging when the premise is based on a dishonest assessment of what students actually need. Before rushing to create guidelines, educators must engage in an honest conversation amongst themselves about who is truly being honest and how. 

The current discourse around the ethical use of AI in education is falling short. Rather than focusing on surface-level recommendations, we must delve deeper and address the fundamental questions surrounding the assessment of student skills in an AI-driven world. Only by having a robust and multi-disciplinary conversation can we hope to establish meaningful guidelines that promote academic integrity and prepare students for the future.

Wednesday, February 28, 2024

Hackers vs. Handlers: The Battle for Equity in the Generative AI Revolution

In the dizzying whirlwind of the generative AI revolution, an age-old skirmish is resurfacing, casting long shadows over the digital landscape. On one side stand the "handlers," the gatekeepers of technology who seek to control and commercialize AI advancements. On the other, the "hackers" champion open access, striving to dismantle barriers and democratize innovation. This conflict, well-documented in the field of Science and Technology Studies, is more than a mere power struggle; it is a pivotal battle that will determine the trajectory of AI's societal impact.

Handlers, often backed by deep pockets and corporate interests, are the architects of proprietary systems. They package, distribute, and regulate access to AI technologies, aiming to create comprehensive solutions that cater to market demands. Their approach, while fostering innovation and ensuring quality, often leads to restricted access and a consolidation of power, raising concerns about equity and inclusivity in the technological realm. The curious fact is that many handlers are former hackers, who made it in the startup world. 

Hackers, in contrast, are the rebels of the digital age. They advocate for a more open and collaborative approach to AI development, believing that technology should be a public good, accessible to all. They prefer the do-it-yourself, scrappy solutions. Their efforts are not driven by profit but by a conviction that broader access to AI tools can level the playing field, enabling a more diverse set of voices to contribute to and benefit from technological advancements.

The clash between hackers and handlers is emblematic of a larger debate about the future of technology and its role in society. While handlers bring structure and scalability, hackers inject diversity, creativity, and a sense of community. The balance between these forces is crucial. An overemphasis on control and commercialization risks stifling innovation and perpetuating inequalities, while unchecked openness may lead to issues of quality and security.

The generative AI revolution presents an opportunity to recalibrate this balance. Supporting hackers and their open-access ethos can foster a more equitable technological landscape, where innovation is not the exclusive domain of the well-funded. This means championing open-source projects, recognizing community-driven initiatives, and creating legal frameworks that protect the principles of openness and collaboration.

As we stand at the precipice of this AI revolution, the choices the societies make will have far-reaching implications. Supporting the hacker ethos without alienating the handlers, and promoting broader access to AI technologies can ensure that the benefits of this revolution are shared by all, not just the privileged few. It is time to shift the balance in favor of equity, inclusivity, and the collective advancement of society.

Thursday, January 18, 2024

Four principles of public and philanthropic support of AI adoption

Governments and philanthropists can play a role in supporting AI diffusion across various sectors. Their involvement is as critical as that of businesses and researchers in driving forward this technological revolution. However, forging a public strategy for AI adoption remains a complex and unresolved task.

The rapid embrace of AI technology calls for a focus on leveraging the collective experiences of its extensive user base, in conjunction with market forces and entrepreneurial innovation. The United States, and California in particular, stands out for its globally admired technology innovation ecosystem. This environment, driven by dynamic market forces and a spirit of entrepreneurship, creates an ideal setting for AI development. Nevertheless, the lack of a cohesive public strategy in managing this evolution might lead to varied and possibly conflicting outcomes and objectives in AI adoption.

At the heart of this matter is the public's significant interest in the effective deployment of AI. The technology holds the potential to substantially boost the economy, revolutionize public services, reshape education, and enhance social welfare systems. Yet, it is essential to balance these advancements with equitable and efficient technology adoption, ensuring that AI contributes to resolving rather than exacerbating societal inequities.

Moreover, the integration of AI in public services presents a dual advantage: improving efficiency and extending service accessibility to a wider population segment. The key challenge is to deploy these technologies inclusively, considering the diverse needs of the community. While the swift adoption of AI offers numerous opportunities, it also demands strategic and thoughtful planning. This strategy must aim not only to capitalize on AI's benefits for economic and service improvements but also to guarantee that its societal integration is equitable and inclusive, aligning technological progress with the greater public interest.

1.  Get real

The first guiding principle in AI adoption is the pursuit of a balanced perspective, essential in navigating between two extreme viewpoints. On one side, there's the dystopian view that envisions AI as a catalyst for catastrophic job losses. This narrative often emerges from a fear of the unknown, harking back to historical instances where technological advancements initially disrupted the job market. However, this perspective tends to overlook how technological evolution has historically opened up new job opportunities and industries. There is also fear that AI poses an existential threat to humanity. These two mutually exclusive dooms day scenarios are amplified by the media.

On the other side lies the overly optimistic view that sees AI as a cure-all for every challenge, and that we quickly transition to labor free economies and enjoy abundance without work. This standpoint emerges from recognizing AI's immense potential to improve efficiency, solve complex issues, and bring novel solutions to various sectors. However, it can underestimate the challenges and limitations of implementing AI technologies, such as ethical considerations, the need for comprehensive data, and crucial human oversight.

A more realistic outlook suggests that the future of AI will likely follow historical trends, presenting both opportunities and challenges. Similar to the impact of the internet and mobile technology, AI is expected to enhance productivity and stimulate economic growth, but not bring us quickly into the world without scarcity. This advancement could manifest through more streamlined operations, improved data analysis, and innovation in diverse sectors.

Both extremes discourage pragmatic, thoughtful planning. The society cannot control a change that it cannot fathom. A balanced approach to AI adoption acknowledges AI's significant potential to contribute to productivity and economic growth. Simultaneously, it recognizes the importance of strategic management to facilitate a smooth transition in the job market and society at large. This approach avoids the pitfalls of extreme views, opting instead for a nuanced and realistic understanding of AI's role in shaping the future.

2.  Democratize technology

The second principle in AI adoption emphasizes the democratization of AI technology. This concept is based on the idea that AI's benefits should be broadly accessible, not just limited to a small group of experts. This approach to democratizing technology mirrors developments in fields like web design, which has evolved from a specialized skill for programmers to a more universally accessible tool. The devolution of expertise has been a steady trend, and we must not allow it to reverse with AI.

In AI, democratization means expanding access beyond tech experts to include educational institutions, public agencies, and businesses. This approach prevents potential monopolization by  a few vendors who might control the market with proprietary platforms and high licensing fees, which could shift the focus of AI from innovation to profit, limiting its societal benefits.

Democratizing AI fosters grassroots innovation, reducing vendor-dependency, enabling individuals and organizations to develop custom AI solutions for specific needs and challenges. This can spur a wave of creativity and problem-solving in sectors like education, healthcare, social services, and public administration.

Additionally, democratizing AI plays a critical role in reducing the risks of AI reinforcing existing inequalities or biases. When AI is accessible and understandable to a diverse group, it is more likely to be used inclusively, considering a wide range of perspectives and needs.

In essence, democratizing AI is about making it a tool for many, empowering a wide user base to understand, engage with, and apply AI in ways that enhance their work and lives. This approach ensures that AI's benefits are widely distributed and its development reflects a variety of voices and needs.

3.  Regulate fine-tuning

The third principle in AI adoption underscores the vital role of governments and philanthropic organizations in regulating AI's "fine-tuning" process. This principle acknowledges their significant influence in shaping AI's ethical development.

Fine-tuning in AI involves refining algorithms and their outputs to align with specific ethical guidelines and objectives. This step is crucial to ensure AI systems adhere to societal norms. A key part of fine-tuning is filtering out harmful or inappropriate content, such as pornography, conspiracy theories, or explicit violence. This process not only prevents the spread of such content but also ensures AI's positive contribution to society.

However, fine-tuning goes beyond just excluding harmful content. It also includes identifying and rectifying inherent biases within AI systems. AI models, trained on vast datasets, can inadvertently reflect societal biases. Left unchecked, these biases may reinforce or exacerbate societal inequalities. For example, AI by default generates images of unspecified  people as white males, reflecting a bias in training data. Correcting such biases is essential to make AI inclusive and representative of global diversity. Governments must compel IT companies to spend more on fine-tuning, and make their fine-tuning practices more transparent.

However, governments and philanthropist may play an active role in funding AI ethics research, promoting diversity in training data, or setting up bodies to oversee and evaluate AI systems for biases.

4.  Support equity

The fourth principle in AI adoption is about addressing areas where market forces alone may not suffice, particularly concerning the equity implications of AI. This principle calls for targeted support in segments where the private sector might not adequately invest due to limited market incentives.

A critical area of focus is technology for people with disabilities. Market mechanisms often fail to address these needs adequately, as the market for assistive technologies can be too small to lure significant private investment. This gap necessitates government or philanthropic intervention to develop AI solutions that are innovative, inclusive, and accessible to everyone, regardless of their physical or cognitive abilities.

Another area is AI's role in bridging language barriers and aiding language learners and linguistically marginalized communities. Here again, market forces may not be enough to drive the development of AI tools tailored for these groups. Government and philanthropic support is essential in creating AI applications that meet diverse linguistic needs, promoting inclusivity and understanding.

In education, AI's impact is particularly profound. Traditional reliance on written assignments and assessments means integrating AI into education is not just about investment but also about understanding learning theories and pedagogical practices. While entrepreneurs are adept at crafting innovative tech solutions, they may lack the necessary expertise in learning sciences to ensure these tools are effective in a learning context. Thus, additional support in research, development, and training is crucial for AI to positively transform educational practices.

Contrastingly, sectors like entertainment, which are more adaptable and resource-rich, are likely to manage AI-driven disruption independently. However, more public-oriented sectors such as social services, education, and medicine require substantial support from governments and philanthropic organizations. These sectors are pivotal to societal well-being and equity; their effective navigation of AI integration is crucial for the equitable distribution of AI benefits.

In summary, strategic AI adoption and integration is imperative, especially in sectors where market forces are insufficient. This strategy should include support for education, social services, and disability support to ensure AI serves the public good effectively. The involvement of governments and philanthropic organizations is critical in providing necessary resources, guidance, and regulatory frameworks. This ensures the development and implementation of AI in ethical, equitable, and universally beneficial ways.

Do AI bots deceive?

The paper, Frontier Models are Capable of In-Context Scheming , arrives at a time when fears about AI’s potential for deception are increasi...