Forget Trusting Sam Altman, Barry Diller Says AGI's Unknown Future is the Real Worry

Media heavyweight Barry Diller recently made headlines by weighing in on the debate around trusting AI leaders, specifically OpenAI's Sam Altman. While many wonder if Altman can be relied upon to guide humanity through the AI revolution, Diller offered a surprising take. He suggested that personal trust, even in a figure like Altman, might not be the most important factor as we approach truly advanced artificial intelligence.

Diller, who knows Altman personally, publicly vouched for him, calling him a sincere and decent person with good values. This endorsement comes after past reports and some former colleagues raised questions about Altman's leadership style and trustworthiness. However, Diller quickly shifted focus from individual character to a much larger, more unpredictable challenge.

The real issue, Diller explained at a recent Wall Street Journal conference, isn't whether we can trust the people building AI. Instead, it’s the arrival of Artificial General Intelligence, or AGI. This isn't just today's smart chatbots; AGI would be a type of AI capable of outperforming humans across virtually any intellectual task, and its potential consequences are truly unknown, even to its creators.

Diller warned that even those at the forefront of AI development admit to a "sense of wonder" about what they are unleashing. He stressed that we are embarking on something that will change almost everything, and the path ahead is shrouded in mystery. The sheer scale and unpredictability of AGI are so immense, he argued, that human trust in individuals becomes almost beside the point.

Barry Diller is a long-time titan in the media world, a co-founder of Fox Broadcasting and currently chairman of IAC and Expedia Group. His insights carry weight, especially when discussing disruptive technologies. Sam Altman, on the other hand, is the highly visible CEO of OpenAI, the company behind ChatGPT, and a central figure in the current AI boom.

Altman's leadership has come under scrutiny recently, with some former board members and colleagues alleging manipulative or deceptive behavior. This sparked a public conversation about whether someone with such immense power over AI development can truly be trusted to guide it responsibly for humanity's benefit. Diller's comments directly address this tension, but then pivot to a more profound concern about the technology itself.

This discussion might sound like something out of a science fiction movie, but it has very real implications for everyone. If AGI truly arrives, it won't just optimize our work or write our emails; it could fundamentally reshape industries, economies, and even how we understand intelligence and progress. The direct impact is that the world you know could be altered in ways we can barely imagine.

On a broader scale, Diller's point highlights a critical question: how do we prepare for something that even its creators don't fully understand? It shifts the focus from scrutinizing individual leaders to the profound societal challenge of governing a technology that could exceed human control. Thinking about "guardrails" now isn't just about preventing bad actors, but about navigating truly unknown territory.

Diller’s warning about AGI setting its own rules, with "no going back," is particularly striking. It taps into the deepest concerns people have about AI: what if we create something so powerful that we lose the ability to guide it, or even survive alongside it? This isn't about human error or malicious intent anymore, but about the inherent unpredictability of a super-intelligent system operating beyond our comprehension.

The conversation around AI's future will only intensify as companies push closer to AGI. We can expect more calls for robust guardrails, ethical frameworks, and potentially international cooperation to manage this rapidly evolving technology. The exact timeline for AGI remains uncertain, but what's clear is that leaders like Diller want us to start grappling with the "great unknown" now, before its full power is unleashed.

Do you agree with Barry Diller that personal trust in AI leaders becomes "irrelevant" as AGI nears, or do you think the character of those building this technology remains paramount?

If even AI creators don't fully understand what they're building, who then should be responsible for creating the "guardrails" Diller talks about for AGI?


Filed under: AI Ethics, Artificial General Intelligence, Future of Tech, Barry Diller, Sam Altman