
Look at AI, not ChatGPT

Few paid attention to the fact that the first alert of a mysterious new virus out of Wuhan, China, came through AI


Almost overnight, artificial intelligence (AI) has broken out of techie talk circles and registered with regular humans. Thanks to that awkwardly named generative AI, ChatGPT, we now know that anyone with access to the internet can turn in a B-grade machine-generated essay, that the jobs of teachers and admissions officers have become harder, that other jobs may become redundant, and that the age of disinformation-at-scale is upon us. Many are experiencing this sudden arrival of AI into public view with a degree of discomfort.

It is one thing for regular humans to fret over new technology, but the discomfort is also being felt by the tech overlords responsible for ushering in this artificial reality. When companies like Microsoft and Google, with some of the world’s smartest people on their payrolls, rush out half-baked products, one thing becomes clear: Instead of enhancing human intelligence, AI may be testing — and laying bare — its fault lines. Let me offer some examples.

Faultline one — move fast and do stupid things: As soon as ChatGPT became the tech sensation of 2022, Microsoft was champing at the bit to capitalise on its early investment and add some of that ChatGPT zing to its flagging search engine, Bing. The first outing was problematic: The new Bing confessed a desire to hack computers and spread misinformation, professed love for a New York Times journalist, and compared another reporter to Hitler. For good measure, it said the reporter was “too short, with an ugly face and bad teeth”.


In parallel, Google rushed out its own response to ChatGPT, called Bard. While Bard’s answers to queries were less entertaining than those of Bing, a single mistake in responding to a question about the James Webb Space Telescope sent Google’s parent Alphabet’s shares plummeting, costing the company $100 billion in lost market value.

Why would Microsoft and Google — ordinarily hyper-cautious, slow-moving colossi — put their reputations and stock prices at risk? Microsoft may have seen it as a chance to appear nimble for a change and to inject some competition into the search business. At the very minimum, it was trying to get the reigning king of search, Google, to — in Microsoft CEO Satya Nadella’s words — “come out and show that they can dance”. Is coaxing Google onto the dance floor worth putting your market value on the line? Of course, even without Microsoft’s coaxing, Google, the world’s biggest spender on AI, clearly felt pressure to do something — anything — to respond to the explosive interest in ChatGPT. It came out to dance with Bard, clearly reluctantly and clearly without practised dance moves.


Faultline two — detract from more meaningful issues: The limits of human intelligence are evident not only in the fumbles of tech giants. The frothy coverage of ChatGPT has shown the media’s myopic understanding of the AI landscape. With their incessant chatter about chatbots, reporters and commentators (present company included) may be adding to public unease about AI, and that chatter comes at the cost of insufficient coverage of more societally meaningful uses of the technology. This has consequences: Media narratives in buzzy tech areas drive attention and misallocate scarce resources.

What would be an example of a more societally meaningful area of AI? How about AI that affects human health, where its contributions could be a matter of life and death?


With far less fanfare than that accompanying ChatGPT, health-related AI crossed a major milestone last year: An AI system, AlphaFold, showed it could predict the structure of almost every protein catalogued by science. This could open the door to breakthroughs in the discovery of medicines and bring efficiencies to processes that cost billions, take decades and deny treatment to so many people.

Why didn’t AlphaFold merit the wall-to-wall media coverage that accompanied ChatGPT, Bing and Bard? For one, its implications are harder for readers to grasp. Second, it hasn’t delivered immediately usable results. Finally, since we are programmed to appreciate end-products, it is easy to look past the many breakthroughs and technological miracles, such as AI, that go into making an end-product a reality.

The end-products of AI in healthcare take time and require consistent focus and dedication of resources. AlphaFold, for example, is a predictive tool. To make meaningful advances, its predictions must be paired with numerous other approaches, such as painstaking experiments and the modelling of protein interactions. AI algorithms for drug design need lots of data to train on, and that data must be released from disparate sources, in different formats, owned by different institutions.

Opening those troves of data, with the appropriate privacy protections and regulatory oversight, will be critical to unlocking other AI advances in human health — algorithms for identifying patients at risk of opioid overuse, remotely gauging mental health symptoms or catching signs of breast cancer on mammograms.


Faultline three — short attention and shorter memories: Yet another limitation of human intelligence is that our attention is ephemeral and our memories are short. A case in point is the Covid-19 pandemic, now marking only its third anniversary.

Few of us paid attention to the fact that the first alert of a mysterious new virus out of Wuhan, China, came through AI. Data-scraping systems raised a red flag before the humans at the WHO got wind of the impending disaster. At the other end, the search for a vaccine was accelerated by algorithms: Researchers got help from AI in understanding the SARS-CoV-2 virus better and in predicting how to elicit an immune response. AI was key to selecting clinical trial sites and analysing the vast amounts of trial data. In the thick of the pandemic, there were scores of AI experiments to diagnose Covid from symptoms; those attempts, while noble in intent, failed. Yet if you ask most people today whether there was a connection between AI and the beginning and end of the acute pandemic, you will probably get a blank stare.

AI’s salespeople are aware of the limits of human intelligence as they try to get our attention. Sundar Pichai declared that AI will be more profound than fire. Not to be outdone, his colleague, ex-Chief Business Officer for Google X, Mo Gawdat, said, “We’re creating God.” Always the contrarian, Elon Musk said, “We’re summoning the demon.”

It is time we paid attention to the right uses of AI and applied more intelligence to how we direct money, talent, data access and regulatory and ethical resources, so that we end up with less demon and more god, and usher in a technology that can set the world on fire and, if we aren’t careful, burn us all down.


The writer is Dean of Global Business at The Fletcher School at Tufts University and founding executive director of Fletcher’s Institute for Business in the Global Context. He is the author of The Slow Pace of Fast Change.

First uploaded on: 15-03-2023 at 07:22 IST