Weekly The Generation, Year 1, Issue 13
November 28, 2023
By Courtney Radsch
The theatrics of OpenAI’s seeming implosion amid the firing of its CEO and co-founder Sam Altman, Microsoft’s dramatic offer to poach its top executives and staff, and Altman’s triumphant return following the ouster of the board have all the trappings of a Hollywood blockbuster.
But the drama unfolding should put the spotlight on the tyranny of the tech titans that control critical aspects of the AI ecosystem.
OpenAI has developed some of the most advanced large language models and pioneering artificial-intelligence products, such as the text generator ChatGPT and the image generator DALL-E, which have made generative AI a household term and turned discussion of AI risks into dinnertime conversation.
Although OpenAI is in the spotlight, Microsoft has played a leading role in the unfolding drama. Microsoft swooped in to scoop up the ousted executives and create a new AI research division for Altman to lead, with hundreds of staff reportedly ready to follow them. Microsoft said it was ready to hire them all (though they would probably have needed to wait until the new year, when California’s prohibition against non-competes goes into effect), and it has the cash to make good on such a promise.
It turns out Microsoft won’t have to take on the entire cast of characters, since Altman is now set to return to OpenAI under new board leadership, which should allow Microsoft to keep its privileged relationship without assuming any liability for employee costs or research and development. Either way, it’s a win-win for Microsoft.
At the root of these theatrics are questions of power: power over the resources needed to develop advanced AI systems and the power to decide how to balance current harms against future risks and shape the future of this technology.
The vast resources needed to develop, train and run cutting-edge AI models reward scale and incentivize companies to seek market dominance, as the Open Market Institute outlined in a recent report. One way companies do this is by leveraging partnerships, investments and acquisitions to establish control and obtain access.
OpenAI has received more than $13bn worth of investment since 2019 from Microsoft, which reportedly acquired a 49% stake in the company and the right to three-quarters of OpenAI’s profits. Microsoft also ensured that it would be OpenAI’s sole cloud provider, locking in millions of dollars of value given the computational costs involved in running generative AI products.
While billed as a partnership, the deal looks more like a “killer acquisition” that gives Microsoft unparalleled access to a tech unicorn that was on track for a multibillion-dollar valuation before the shake-up.
This partnership is likely to get even tighter given the new cast of characters brought in to replace the non-profit board that fired Altman, reportedly over clashing views on how to balance safety and commercialization of the company’s revolutionary AI technology. The new board members appear more aligned with the tech sector’s mantra of “move fast, break things”. They include two members with deep roots in Silicon Valley and Larry Summers, the former treasury secretary with a track record of applying “free-market theory where it didn’t fit the circumstances”, in the words of the American Prospect, and cautioning against regulators using anti-trust to address economic concentration.
Microsoft is one of just a handful of gatekeeper firms – namely Alphabet (Google’s parent company), Apple, Amazon and Meta – that have the necessary computing power, access to data, and technical expertise needed to develop advanced AI systems. Their control of the AI development pipeline gives these companies the ability to dictate terms and fees and protect against challengers, as Microsoft did by limiting the availability of OpenAI’s API to other search engines and threatening to cut off access to its internet-search data if those rivals used it to develop their own AI chat products.
Microsoft also charges other cloud providers higher fees for purchasing and running its software outside of Azure, making it both expensive and technically difficult to switch, since data is often not interoperable across systems. This clearly does not promote innovation.
And Microsoft has been able to integrate OpenAI’s technology into its consumer-facing products, productivity tools, and business services, despite safety concerns expressed by its own employees and warnings that the technology was not ready for integration into its Bing search engine.
This should cause deep concern for policymakers focused on AI safety, governance and innovation. Yet amid the flurry of efforts in the US and Europe to ensure the development of “responsible” and “safe” AI, the harms and risks of massive concentration in the generative AI ecosystem have been largely ignored or sidelined.
Big tech has justified the rapid and reckless rollout of generative AI by seeking to convince us that speed over safety is an inevitable part of technological development and crucial to innovation, especially if the US wants to compete with China. This narrative has allowed these corporations to divert attention away from the dangers posed by concentration at key chokepoints in the AI value chain, as well as their failure to address existing harms perpetuated by their platforms, from rampant disinformation and manipulation to addiction and surveillance capitalism.
If this latest drama playing out on the public stage doesn’t jolt us out of our complacency and force us to confront the dangers of monopoly, we will lose a critical opportunity to restructure the AI ecosystem by breaking up malignant concentrations of power that inhibit innovation in the public interest, distort our information systems, and threaten our national security.
The author is director of the Center for Journalism and Liberty at the Open Markets Institute.