How GPT Mania Could Harm AI Innovation

The scramble to win the GPT race could divert essential resources from the development of more socially meaningful uses of AI.

By Bhaskar Chakravorti, the dean of global business at Tufts University’s Fletcher School of Law and Diplomacy.
A photo taken on March 31 in Manta, Italy, shows a computer screen with the home page of the artificial intelligence OpenAI website, displaying its ChatGPT bot. Marco Bertorello/AFP via Getty Images

Since OpenAI released its artificial intelligence chatbot ChatGPT in November, GPT mania has reached dizzying heights. It has accelerated public awareness of AI in ways that few would have thought possible even a year ago. Its adeptness at cracking the LSAT exam or writing legal briefs in sonnet form may not impress poets, but it has caught tech experts by surprise. It has led to a chatbot dogfight between digital tortoises, such as Microsoft and Google, not otherwise given to dogfights. It has made unlikely bedfellows of the likes of Elon Musk, Yuval Noah Harari, and Steve Wozniak, who issued a call for a six-month timeout on the training of larger, even more advanced GPT models.

GPT—which stands for generative pre-trained transformer—is a type of generative AI. It works by using a neural network trained on enormous amounts of data to perform many tasks as if they had been done by a highly efficient and knowledgeable human assistant. These include answering questions in easily understandable narratives, creating well-composed text or vivid images based on prompts, and even generating food recipes or lines of code.

The explosion of interest in GPT has fired up AI, a technology that Alphabet chief executive officer Sundar Pichai promises will be more profound than the discovery of fire. No doubt, GPT has myriad potential uses: It can be fun to play with, and it can boost labor productivity and deliver measurable economic impact. But the rapid scramble by companies across the board to join the GPT race could also burn up essential resources, diverting them from the development of AI’s most socially meaningful uses.

We are running the risk of overlooking other, more essential uses of AI, such as accelerating drug development, detecting hard-to-spot cancers from diagnostic images, advancing climate action and conservation initiatives worldwide, and enhancing precision agriculture techniques. All of these and so many other potential applications of AI need care, attention, and resources.


While AI has made inroads into many facets of our lives, much of the technology’s capabilities and translation into meaningful applications are still works in progress. AI resources are scarce, and allocation is a zero-sum game. This means that when Google issues a Code Red alert and reprioritizes its AI program to defend against the threat that GPT poses to its lucrative search business, and other AI giants see new opportunities to break Google’s stranglehold, other AI applications will get lower priority.

Consider the most important scarce resource: the humans behind AI. A typical AI project team calls upon many highly specialized skills, from data science, machine learning, and engineering expertise to product management and design capabilities. There is already an acute shortage of trained professionals with such skills and, to make matters worse, much of that talent pool is concentrated in a few locations. The San Francisco Bay Area accounts for the lion’s share of AI talent, followed by a long tail of other urban hubs. The shortage will get worse as generative AI projects soak up the talent.

A second set of gating factors, inevitably, is money and the interests of the organizations that invest in AI. GPT is an enormously expensive application: Responding to prompts requires massive amounts of computing power running on extremely expensive servers, and the specialized chips that power the computations are made primarily by a single firm, Nvidia, so they do not come cheap. Bing, Microsoft’s search engine, for instance, has a global market share of less than 3 percent—a figure that might improve slightly because of its GPT-powered chatbot—and yet Microsoft must spend an estimated $4 billion on infrastructure to serve responses to current Bing users.

According to some experts, the post-GPT era of AI innovation will be defined by two companies alone: Google-owned DeepMind and Microsoft-funded OpenAI, the creator of ChatGPT. And both Alphabet and Microsoft, the lead runners in the GPT race, are steadily upping the ante: Google with a $300 million investment in Anthropic, a leading generative AI startup, and Microsoft with a transformational investment, reportedly of $10 billion, in OpenAI. Private capital tends to move as a herd and, unsurprisingly, funding in this area has exploded. Other AI applications may just have to wait their turn or be abandoned entirely.

Can governments step in to fill the breach and reprioritize AI toward national and societal interests? Unfortunately, governments—even that of the United States, which is locked in a race for AI dominance against China—must allocate budgets across competing priorities. Despite declarations of commitment to global leadership in AI, even the U.S. government is piggybacking on private sector investments. In 2021, nondefense U.S. government agencies allocated $1.5 billion to AI, similar to the amount allotted by the European Commission. In contrast, global commercial investment in AI exceeded $340 billion that year, the U.S. private sector’s share of the biggest AI models rose to 96 percent, and its share of PhDs in AI-related fields rose to 70 percent. To make advances in AI, scale matters. This means that stewardship of the technology has devolved to a handful of powerful companies that see the greatest gains. As a result, we will see a narrowing of applications to meet immediate commercial imperatives.

There is a third crucial bottleneck that determines the direction of AI: available data and how it is used for training algorithms. Deep learning, which is critical for the coming phase of AI advancement, gets better with bigger datasets, but many datasets are being withdrawn from public availability or funneled into proprietary databases. On top of this, data is geographically dispersed and not uniformly accessible across national borders, with data governance rules varying from country to country and data localization regulations on the rise.

The growing rift between the two AI superpowers, the United States and China, will splinter data and research resources even further. Historically, the two have actually worked together: The number of AI research collaborations between U.S. and Chinese teams has quadrupled since 2010, though the pace of new collaborations has slowed. The intense focus on the GPT race in both countries has led to even further rifts between Washington and Beijing.

Yet another risk of GPT—and one that all AI algorithms share—is that it could propagate bias, misinformation, and harmful content. All of this means there is an even greater need for oversight and, possibly, regulation. But the work of aligning AI with ethical and human values was already under-resourced, and the large companies have let go of large portions of their ethical AI staff, while those who remain are being refocused on the GPT race. GPT’s rapid emergence has also created a divide among AI governance experts, fragmenting a critical oversight resource.


It is worth asking whether GPT mania will come at a cost in lost AI: other, more socially meaningful applications that get delayed, compromised, or simply put aside because of the zero-sum nature of AI resources.

To make this more tangible, consider an analysis of numerous socially valuable uses of AI by the management consulting firm McKinsey. It identified human health, for example, as one of the areas with the highest potential for AI to make a positive difference. That finding is validated by the healthcare initiatives that consistently appear in the “AI for Good” portfolios of major corporate players such as Intel, Google, and Microsoft. An impressive array of healthcare-related AI projects will be prominently showcased this July at the International Telecommunication Union’s AI for Good Global Summit.

It is hard to argue for a more essential use of AI: Applications in healthcare directly affect human life and well-being. But it is easy to miss the significance. A major breakthrough for AI in healthcare arrived last year with only modest fanfare—a whimper compared with the bang that accompanied ChatGPT’s arrival—when an AI system called AlphaFold demonstrated that it could predict the structure of almost every protein cataloged by science. This could revolutionize the discovery of new medicines, bringing efficiency to a process that today costs billions of dollars, takes decades to produce safe and effective drugs, and ends up denying treatment to so many.

But we are far from enjoying the fruits of this monumental achievement. Translating AlphaFold’s breakthrough into lifesaving products will take continued focus and a consistent dedication of resources: AlphaFold must be paired with painstaking experiments, further modeling of protein interactions, and AI algorithms for drug design. It is essential that the AI powers that be devote the necessary critical mass of resources to such applications. Unfortunately, such applications must compete with generative AI projects. Since it does not directly affect Google’s search business, one doubts whether AlphaFold—developed by Google-owned DeepMind—stands an even chance against all things GPT in the contest for resources.

AlphaFold is not the only major healthcare-related AI project in the works. Algorithms are being developed for diagnosing many different conditions, from remotely gauging mental health symptoms to early detection of breast cancer on mammograms. They all need to be fed and nurtured.

The pandemic gave us an opportunity to appreciate the power of AI in healthcare when it delivers as well as what happens when it fails. AI accelerated the journey to the pandemic’s endgame. Algorithms aided researchers in understanding SARS-CoV-2, the virus that causes COVID-19, and helped predict how to elicit an immune response, thereby contributing to the discovery of vaccines in record time. AI was also key to determining clinical trial sites and rapidly analyzing the vast amounts of trial data.

But AI in healthcare also ran into a wall because there was not enough time to put the necessary resources into it. During the surge of infections in 2020, for example, there were numerous AI-led experiments to diagnose COVID-19 from symptoms, which could have saved thousands of lives at a time when rapid coronavirus tests were in short supply and healthcare systems were overwhelmed. But those attempts all failed, pointing the way to the work that needs to be done and the resources that ought to be invested to ensure AI is ready for the next pandemic. Unfortunately, GPT mania means there is no guarantee that we will have invested in such AI systems in time for the next global health catastrophe.

There are numerous other problem areas where consistent dedication of resources into AI can be transformative, and save lives and livelihoods. Consider its uses in combating the effects of climate change. AI can be deployed to improve energy efficiency in highly energy-intensive buildings and facilities, design smart cities that are citizen-centric and sustainable, and predict wildfires and take preventive action.

Yet another area that is ripe with possibility is using AI for better disaster response. In emergencies such as earthquakes, it is essential to be able to map the terrain afterward to help first responders direct their resources efficiently and appropriately. In 2015, in the aftermath of the Nepal earthquake, AI was used to spot urgent rescue needs and crucial infrastructure vulnerabilities. More recently, humanitarian teams in Turkey and Syria put the very same technology to work identifying the areas most devastated by the February 2023 earthquake and mobilizing rescue teams.

It is well known that new technology goes through a hype cycle. In the case of GPT, though, we must reckon with a phenomenon far worse: a hyperbole cycle. Pichai’s fire analogy is matched by Musk’s warning that with AI, “we are summoning the demon.” Pichai is clearly pushing his teams at Google to “rise to the moment” and go all in on GPT. Even Musk, who had called for a GPT timeout, went ahead and launched his own GPT venture, possibly to compete with whatever Pichai’s teams put out.

No matter which of these tech visionaries gets it right or wins, there is little doubt that GPT is hungry. And it will gobble up scarce AI resources. We are racing to a future where GPT-powered machines will write hilarious wedding speeches and give us clocks that spit out a poem every minute. But we also need AI to save and extend lives to ensure that there are more weddings and even more minutes in our lifetimes.

Bhaskar Chakravorti is the dean of global business at Tufts University’s Fletcher School of Law and Diplomacy. He is the founding executive director of Fletcher’s Institute for Business in the Global Context, where he established and chairs the Digital Planet research program.
