Argument
An expert's point of view on a current event.

What if Regulation Makes the AI Monopoly Worse?

In an industry already primed for concentration, creative alternatives for safeguarding the public interest are needed.

By Bhaskar Chakravorti, the dean of global business at Tufts University’s Fletcher School of Law and Diplomacy.
British Prime Minister Rishi Sunak welcomes Italian Prime Minister Giorgia Meloni during the AI Safety Summit in Bletchley, England, on Nov. 2, 2023. Joe Giddens/Getty Images

Apart from being artificial intelligence’s breakout year, 2023 was also the year when the AI community splintered into various tribes in the race to steer the technology’s development: accelerationists, doomers, and regulators.

By year’s end, it seemed as if the accelerationists had won. Power had consolidated with a handful of the largest of the Big Tech companies investing in the hottest of start-ups; generative AI products were being rushed out; and doomers, with their dire warnings of AI risks, were in retreat. The regulators were in hot pursuit of the accelerationists with uncharacteristic agility, unveiling bold regulatory proposals and, with a year of many elections and an anticipated surge in AI-powered disinformation ahead, corralling bills to rush into law.

Ironically, though, the regulators may have added to the wind at the accelerationists’ backs: New regulations may inadvertently entrench the accelerationists’ market power.

How can it be that regulators tasked with preserving the public interest could take actions that might make matters worse? Do we now need different regulations to rein in an even more powerful industry? Are there creative alternatives for safeguarding the public interest?

Consider, first, the reasons why the AI industry is already primed for concentration.


First and foremost, AI development is being led primarily by industry, not government. Despite the technology being held up as a national priority, the leading AI-producing country, the United States, leans on businesses for its dominance. The U.S. private sector’s share of the biggest AI models spiked from 11 percent in 2010 to 96 percent in 2021.

This industry-centrism is not just a U.S. phenomenon. In recent negotiations over the European Union’s AI regulations, Germany, France, and Italy opposed restrictions that might put their own emerging private sector champions at a disadvantage. In China, too, the major AI players are private companies, albeit ones that are closely monitored by the state. The reasons are structural: Businesses already own the inputs critical to developing this technology, such as talent, data, computational power, and, of course, capital.

While the biggest companies have large AI development teams, a handful of smaller enterprises are the most dynamic innovators in developing foundational models—yet they draw on a few large companies for other critical inputs. They need massive datasets, computational resources, cloud computing access, and hundreds of millions of dollars.

It is no surprise, therefore, that the companies that already own such resources—Nvidia, Salesforce, Amazon, Google, and Microsoft—are the biggest investors in the leading AI start-ups. Last year, investments exceeding $18 billion by Microsoft, Google, and Amazon represented two-thirds of all global venture investment in generative AI companies, with just three innovators—OpenAI, Anthropic, and Inflection—as the beneficiaries of their largesse.

The investments flow back into the large companies in many ways: The AI developers turn to Nvidia for graphics processing units or to the cloud providers, such as Amazon and Microsoft, to run the models. On top of that, Google and Microsoft are racing to integrate the AI models into their core products to defend their most important franchises.

In an industry already primed for concentration, regulations risk further consolidating power in the hands of a few. AI regulation is emerging as a global patchwork. China was an early mover, but last year was when regulators on either side of the Atlantic got serious about setting up guardrails. An October 2023 White House executive order on AI safety will be acted on by several government agencies this year, while the EU’s AI regulation, announced at the end of 2023, will be voted on in early 2024.

Many experts expect a “Brussels effect,” with the EU laws influencing regulators and industry standards elsewhere; at the same time, we should expect variations around the world, given the political ramifications of artificial intelligence. The African Union may institute an AI policy this year, while the United Kingdom, India, and Japan are expected to take a more laissez-faire approach.

Now, consider the potential effects of different features of AI regulations.

First, there’s the issue of inconsistent rules. In the United States, with no national legislation from Congress and only a free-standing White House executive order, states are making their own AI regulations. A California bill would require AI models using more than a certain threshold of computing power to be held to transparency requirements. Other states regulate AI-aided manipulated content—for instance, South Carolina is considering legislation to ban deepfakes of candidates within 90 days of an election, and Washington, Minnesota, and Michigan are advancing similar election-related AI bills.

Such state-by-state differences put smaller companies at a disadvantage, as they lack the resources and legal support to comply with multiple laws. And the burdens increase as one considers the global patchwork.

Then there are the red-teaming requirements. Both the executive order and the EU regulations require generative AI models above a certain risk threshold to publish results from structured testing—simulated “red team” attacks, in which hackers impersonate adversaries to probe for security vulnerabilities. This is very different from the less expensive way that many tech start-ups have tested product safety: release early versions, let users catch and report bugs, and issue updates. The preemptive approach is not only costly but also requires different forms of expertise—legal, technical, and geopolitical. Moreover, a start-up is unlikely to be able to vouch for externally sourced AI models. All of these factors tilt the field toward the large players.

In addition, both the White House executive order and the EU regulations on general-purpose AI models call for the “watermarking” of AI-generated content, which involves embedding information into AI-produced works to clearly identify them as AI-generated. While this seems reasonable, watermarking is not foolproof, and so-called black hat hackers can bypass it. Verifying watermarks can also be technically and legally cumbersome for smaller companies that rely on external content.

An analysis from the Center for Data Innovation—a U.S. tech industry-funded think tank—calculated that a small or medium-sized enterprise deploying one of the higher-risk AI models covered by the European Union’s proposed rules could incur compliance costs as high as 400,000 euros (about $435,000). Even if the numbers are debatable, the core concern remains: AI regulations add costs that disproportionately burden smaller firms, potentially acting as a barrier to entry.

Without recommending even more regulation (albeit of the antitrust kind) to counter the effects of regulation, what can be done? Or should we simply let natural market forces take their course?


One answer is to encourage new sources of competition. In a fast-developing space such as artificial intelligence, we can expect especially creative entrants. The leadership turmoil at OpenAI, the company that developed ChatGPT, is an indicator of the dissension within the industry—and competitors will likely emerge in response to perceived gaps.

The availability of open-source AI models can give such entrants a chance to compete. Even the U.S. Federal Trade Commission acknowledges the possibility. Such open-source models also create opportunities for competitors beyond the United States, each of which could leverage distinctive features and creative concepts. Open-source models, however, are not a panacea, as many that were initially open have closed over time; Meta’s LLaMA model, for example, was transparent about its dataset, but this changed with the release of Llama 2, which didn’t reveal its training data.

Meanwhile, though foundational model-building is among the most prized components of artificial intelligence today, there is already intense rivalry here: Google’s Gemini, released in late 2023, is challenging OpenAI’s generative AI suite, as are open-source models.

Even the concentrated infrastructure layer could witness competition. Microsoft, Google, and Alibaba are challenging Amazon’s lead in cloud services, and a major chipmaker, AMD, is challenging Nvidia’s near-monopoly on AI chips, on top of competition from Amazon and Chinese chipmakers. Opportunities for differentiation could migrate to applications and services that use upstream models and infrastructure and tailor AI to downstream user needs, making the technology more competitive.

The computational needs of AI applications could also decrease, for several reasons. Chips could become more efficient, and many applications could turn to smaller, more specialized models through a process of knowledge distillation, cutting down the need for “large” language models that have to be trained on massive datasets controlled by a few firms.

These developments would reduce the dependence on the few Big Tech players. If they do play out, though, such market forces could take time. Policymakers could consider other steps in the near term—via preemptive negotiation.

Leaders from the large AI companies are proactively arguing for regulation so that they can participate in rule-making. This provides policymakers with leverage to negotiate alternative deals with the largest players. For example:

  • Deploying industrial innovation models from history: Policymakers could draw inspiration from the 1956 U.S. federal consent decree involving AT&T and the Bell System. AT&T had a national telecommunications monopoly and was at the leading edge of technological innovation. But keeping prized assets critical to the national interest locked up in private hands did not look good for the company. The consent decree kept the monopoly intact; in exchange, AT&T’s Bell Labs was required to license all its patents royalty-free to others. Notably, Mervin Kelly, then the president of Bell Labs, preemptively agreed to this arrangement.
  • Borrowing from existing public investment frameworks: The public sector could invest in AI development done collaboratively with the major companies, applying the Bayh-Dole Act (also known as the Patent and Trademark Law Amendments Act). The act permits businesses to retain ownership of the resulting inventions while also giving the government a license to use the intellectual property generated for public purposes.
  • Using DPI’s public utility principles: Policymakers can also seek inspiration from the model of digital public infrastructure (DPI), which has captured the imagination of many in the technology-for-development community. The DPI model envisions “public rails” on top of which digital applications can be built by anyone. AI models could be requisitioned by governments as such public rails.
  • Applying progressive taxation principles: One way to subsidize the regulatory burdens on smaller companies is to levy a tax on AI-related revenues while also ensuring that the tax rate increases with company size.

AI industry concentration has implications beyond the usual ones—that is, the potential abuse of market power by dominant firms. Access to user data is a problem that has already dogged Big Tech. With AI, there are new worries. Fewer firms means that a narrower set of applications gets prioritized and that biases in datasets and algorithms are reinforced. Overreliance on too few companies also means that systemic failures can spread quickly: Financial instability can cascade into a global crisis via financial networks reliant on a few key AI platforms, and cyberattacks targeted at widely used AI platforms can simultaneously expose many organizations and sectors.

2024 will be the year we see more AI rules of the road. Let’s make sure those rules don’t leave only a few in the running while a much-needed AI tribe—the entrants—misses its on-ramps.

Bhaskar Chakravorti is the dean of global business at Tufts University’s Fletcher School of Law and Diplomacy. He is the founding executive director of Fletcher’s Institute for Business in the Global Context, where he established and chairs the Digital Planet research program.

