The AI Regulation Paradox

Regulating artificial intelligence to protect U.S. democracy could end up jeopardizing democracy abroad.

By Bhaskar Chakravorti, the dean of global business at Tufts University’s Fletcher School of Law and Diplomacy.
A photo illustration shows the severed head of a Greek statue with cyber tech wires coming out of the opening of its neck.
Matt Chase illustration for Foreign Policy

While former U.S. President Donald Trump’s third indictment cites his spreading of “pervasive and destabilizing lies about election fraud,” we can rest assured that the indictment will be followed by a fresh avalanche of disinformation. After all, Trump has already been on a roll for the upcoming election season. In May, he posted a fake video of CNN host Anderson Cooper saying that President Joe Biden “predictably continued to spew lie, after lie, after lie.”

To be fair, Trump is not alone in his creative endeavors. The presidential campaign of Florida Gov. Ron DeSantis, who is challenging Trump for the 2024 Republican nomination, tweeted out a video advertisement featuring artificial intelligence-generated images of Trump kissing and hugging Anthony Fauci, the former chief medical advisor to the president, who is reviled by the far right. Meanwhile, “I actually like Ron DeSantis a lot,” confesses former Secretary of State Hillary Clinton in yet another fake video that went viral. “He’s just the kind of guy this country needs, and I really mean that.”

It takes little to piece together digital falsehoods such as these, and artificial intelligence (AI) has added a fresh boost of creativity to the disinformation industry. Now anyone can become a political content creator thanks to new generative AI tools such as DALL-E, Reface, FaceMagic, and scores of others. Indeed, Meta just announced plans to release its new generative AI technology for public use, opening the door to an explosion of such “creativity.”

The democratization of the disinformation process may well be the most serious threat yet to the functioning of U.S. democracy—an institution already under attack. Even the AI overlords are worried: Former Google CEO Eric Schmidt warned that “you can’t trust anything that you see or hear” in the elections thanks to AI. Sam Altman, CEO of OpenAI, the company that gave us ChatGPT, told U.S. lawmakers that he is nervous about the future of democracy.

Lawmakers have heard the message and plan to scramble the jets. Worried that “democracy could enter an era of steep decline,” Senate Majority Leader Chuck Schumer has proposed a new framework for AI regulation. In the House, Rep. Yvette Clarke of New York has introduced a bill that would require politicians to disclose when they use AI in their political ads, and similar legislation is pending in the Senate. Election officials in states such as Michigan and Minnesota want to make it illegal to knowingly spread false election-related information. And some lawmakers have seemed receptive to the idea of creating an entirely new federal agency to regulate AI.

But there’s a catch—regulating AI to protect U.S. democracy could actually end up jeopardizing democracy abroad. Here is why: The louder the voices of lawmakers in commercially and politically important markets such as the United States and the European Union—where regulators have been even more vigorous in their efforts to rein in technology-powered disinformation—the more likely it is that disinformation will proliferate in the rest of the world. The aggregate effect amounts to a paradox of regulating disinformation: The more you regulate it in the West, the worse it gets globally.

There are many factors feeding the paradox. The primary carriers of disinformation, the major social media platforms, have steadily dismantled the staff responsible for catching it. This means that severely depleted teams must tend to the squeakiest of wheels—that is, lawmakers and regulators in the United States and the EU. The result is that there aren’t enough resources left to monitor content in the rest of the world. On top of this, the major social media platforms are distracted by other pressing matters. And all of this coincides with 2024, a year packed with elections in places far from the United States.


Consider the 2024 electoral context first. It is a big year for the testing of democratic institutions, not just in the United States but also across the world, especially in places with energetic disinformation industries.

In Asia, there are countries such as India, with its rich history of disinformation-fueled political campaigns; Indonesia, where fake news purveyors exploited religious and ethnic divides across a nation that had to manage the world’s largest single-day election; and South Korea, where a bill to stamp out fake news was shelved after concerns were raised about discouraging critiques of powerful candidates. All three have elections forthcoming in 2024.

Meanwhile, more than a dozen countries in Africa have elections planned for 2024, and disinformation is widely deployed during such times. It plays a role in South African elections, where, as writer Robyn Porteous puts it, “news stories that’ve been doctored just right to masterfully blend truth and fiction … attempt to inspire outrage and mistrust.” At the other end of the continent, Egypt has a history of government repression ostensibly meant to stamp out disinformation but in reality designed to stifle dissent; most recently, a student and human rights advocate was accused of spreading fake news and sentenced to jail for writing about the plight of Egypt’s Coptic Christian minority. Both South Africa and Egypt have major elections in 2024.

The disinformation problem is rampant in Latin America as well. Mexico, for example, has a history of disinformation about public opinion polls amplified by mainstream media. In Peru, fact-checkers could barely keep up with the volume of disinformation flooding previous elections. And 2024 is an election year in Mexico and Peru as well as several other countries in the region.

With this context in mind, one would think that social media platforms would be setting up election war rooms—as they have done in the past—and putting in place plans to catch disinformation before it spreads. Instead, companies across the tech sector have had other pressing business to attend to, such as propping up profitability. Revenues have fallen, stock prices have dropped, and cutting costs and attracting more users have become near-term imperatives. This means reducing staff and making cuts in parts of the company that do not directly contribute to increasing revenue, bringing in new users, or encouraging existing users to post messages that engage others.


Meta offers a case in point. CEO Mark Zuckerberg declared 2023 the “year of efficiency.” The trust and safety team that moderates content on Meta’s platforms—which include Facebook, Instagram, and WhatsApp—was drastically reduced, a fact-checking tool that had been in development for six months was shelved, and contracts with external content moderators were canceled.

Simultaneously, Meta is looking to revive its user base with new engagement opportunities. For example, it launched Threads as a Twitter competitor with stricter content guidelines (to align with those on Instagram), which meant that the company’s increasingly scarce content moderation resources were stretched even thinner as they were prioritized for the high-profile launch. And as mentioned above, Meta plans to make its large language model, Llama 2, free to the public. Yet the company has so far announced no substantive plans to moderate the content that might find its way onto social media platforms as a result of this decision.

These developments at Meta are mirrored by the dismantling of content moderation resources elsewhere across the industry. Under Elon Musk, Twitter (now called X) has decimated its content moderation teams, steadily lifted restrictions, and restored accounts that had been suspended. YouTube announced its intention to lift a ban on videos making false claims about the 2020 U.S. election. By February, Google had cut the team that monitors misinformation and toxic speech by a third. In fact, those who work in trust and safety have found that there aren’t many job openings left in their area of expertise.


In light of these developments, it is natural to ask if there are alternative solutions. First, can algorithms do the jobs of people and take on content moderation at massive scale? Unfortunately, the answer seems to be no. According to a study by the Transatlantic Working Group at the University of Pennsylvania’s Annenberg Public Policy Center, algorithms are neither reliable nor effective in content moderation; for now, human intervention is indispensable.

Second, even with smaller content moderation teams, can they be redeployed in a “just-in-time” manner to different geographies depending on where the need surges? Here, too, there are challenges. The content moderation workforce is not globally fungible; to be effective in any given geography, the team must be trained in local languages, colloquialisms, locally relevant dog whistles, coded language, mores, and contexts. The content moderation teams on U.S. social media platforms are primarily English-speaking.

Third, can we trust that companies will allocate their content moderation resources to where the needs are greatest? Past experience suggests not. The Wall Street Journal uncovered internal Facebook materials showing that in 2020, 87 percent of content moderation time was spent on posts from the United States. Yes, there were elections in the United States, but 2020 was also an election year in Egypt, Poland, Sri Lanka, and Tanzania, among scores of other countries. This allocation is grossly out of proportion given that 90 percent of Facebook users are outside the United States.

All of this boils down to an inescapable reality: As regulatory pressure builds in the United States—or, for that matter, in the European Union—and given that the economic returns in these markets are so much more attractive, the rational decision for platforms is to over-allocate content moderation resources to the West at the expense of the rest of the world.

The consequence is a sobering one: With fewer resources for content moderation, particularly in nations across the developing world with elections forthcoming, the volume and intensity of disinformation are likely only to increase as generative AI tools become more widely available—with the potential for greater harm, since many of the institutional safety nets present in the West may be absent.

This is, in essence, the logic behind the paradox of regulating AI and disinformation: The desire to regulate content in one part of the world leads to less content moderation elsewhere. Since the “elsewhere” is the larger user base—one with greater social and political risks and fewer options for fact-checking or seeking alternative sources of news and information—global democracy suffers.

The world is gearing up to pay a high price for U.S. democracy—due in large part to platforms headquartered in the United States. There can be only one solution: U.S. (and EU) lawmakers must pass laws to regulate not only the content hosted by the platforms but also the investments that platforms make in content moderation and how those resources are deployed across the world. For instance, platforms could be required to maintain a threshold of content moderation staff for each country in proportion to its number of users and its level of disinformation risk.

There is little doubt that Trump’s travails and lasting popularity raise the risk in the United States. But ultimately, we must consider these platforms and their global impact. Moreover, lawmakers ought to keep an essential mantra in mind: U.S. democracy is not safe if global democracy is thrown to the wolves.

Bhaskar Chakravorti is the dean of global business at Tufts University’s Fletcher School of Law and Diplomacy. He is the founding executive director of Fletcher’s Institute for Business in the Global Context, where he established and chairs the Digital Planet research program.
