
With elections in at least 83 countries, will 2024 be the year of AI freak-out?

Regulatory panic could do more harm than good. Rather than rushing out poor risk management today, rules should anticipate the greater risks that lie ahead.

From the regulation-loving Europeans and the regulation-allergic Americans to India’s minister of state for electronics and IT, rules to tackle AI-created disinformation are being rushed out. (Representational image)

The year 2024 has been billed by Time magazine as the “ultimate election year” — the largest ever global exercise in democracy, with close to half the world’s population engaging in elections. This year, democracy’s defenders are fretting not only over the run-of-the-mill election demons, but also over newer demons of the digital kind. Since 2023 was the year of AI frenzy, there is a worry that AI will turbocharge 2024’s election-related naughtiness. Think: Deepfakes, disinformation, robocalls and other dastardly forms of digital voter manipulation.

It is no surprise, therefore, that governments around the world have leapt into action. From the regulation-loving Europeans and the regulation-allergic Americans to India’s minister of state for electronics and IT, Rajeev Chandrashekhar — who claims to have “woken up earlier” than others to AI’s dangers — rules to tackle AI-created disinformation are being rushed out. I worry that in the scramble to nab the AI bogeyman during this year of elections, we may end up making the wider problems of AI worse. Here are three ways that can happen:

First, a disinformation surge. January’s election-related experience has already raised alarms. Consider the case of Bangladesh Nationalist Party leader Tarique Rahman, whose manipulated video showed him suggesting a toning down of support for Gaza’s bombing victims — a surefire way to lose votes in a Muslim-majority country. Facebook’s owner Meta took its own sweet time to take the fake video down. Granted, Tarique Rahman is no Taylor Swift — whose fake pornographic images caused quicker social media blackouts — but, surely, a bit more swiftness in catching the deepfakery would have been a nice gesture to the Bangladeshi voter.


It is not surprising that Meta would take its time, with its content moderation staff cut way back as part of the massive layoffs of 2023. With elections taking place in so many different countries, the company has to choose where to allocate its thinned-out resources. Meta’s scaling back is just the tip of the iceberg — content moderation teams at other social media companies have also been decimated. Under pressure from more consequential markets, such as the US, the EU and, in a pinch, India, they will attend to the squeakiest of wheels. This means voters in much of the rest of the world, like in Bangladesh, may have to fend for themselves. There are at least 83 elections being held this year. Ironically, the volume of disinformation worldwide could surge overall precisely because of the pressure to catch disinformation coming from a few powerful governments.

Second, the growing might of the already mighty. The piling up of AI regulations could lead to a second paradoxical outcome — reinforcing AI industry concentration. To get a sense of that concentration, consider this: Just three companies, OpenAI, Anthropic and Inflection, cornered two-thirds of all the investments made in generative AI last year, and all that money came from just three other companies, Microsoft, Google and Amazon. The EU regulations, as well as the Executive Order from the White House, have seemingly sensible requirements for AI products, such as “watermarking” to clearly label AI-created content, or requiring the results of “red-teaming” exercises — simulated attacks by adversaries — to detect safety and security vulnerabilities in AI models. However, watermarking requirements are problematic: the watermarks aren’t foolproof, and smaller companies that rely on external sources cannot verify such sourced content. Red-teaming exercises are expensive once you account for staffing and process complexities along with legal and documentation costs. Such regulations will only serve to lock in the power of the already powerful, creating barriers to entry and making compliance infeasible for scrappy start-ups.


Concentration in the AI industry hands power to a few companies, possibly locking in ethical lapses to which they may be blind, allowing risks and biases to proliferate without oversight or competitive checks, and letting black-box systems gain control of consequential decisions.

Third, the perils of earnest guidelines. Given these concerns about ethics, risk, and transparency in AI development, many regulators and civil society groups have been at work putting frameworks and guidelines in place. But these guidelines themselves could be problematic.


For example: Whose ethics and values should inform the framework? We live in polarised times, where we disagree over politics, religion, economics, and much more. Societies differ over fundamental ethical questions, such as: Should free speech be prioritised or should we have “guardrails” on expression to protect the weak? Even the idea of prioritising regulation based on levels of risk — as the EU regulations propose — can be contentious. Some believe AI’s risks are existential, while others argue that such dire warnings distract us from more immediate, higher-likelihood risks. Members of Prime Minister Narendra Modi’s Economic Advisory Council themselves have argued that even the idea of risk management is risky in the case of AI. They argue that AI is non-linear, evolving and unpredictable — a “complex adaptive system” — and that applying the blunt instrument of pre-set risk frameworks would be foolhardy.

For transparency to be a meaningful requirement, we need audits of AI systems. But who will perform them, and where are the laws that make them mandatory? A landmark New York law that requires employers using automated employment decision tools to audit them for race and gender bias was found to be toothless by a recent Cornell study. In the meantime, companies such as IBM and OpenAI have been volunteering their own transparency mechanisms. But we should be wary of foxes drafting reports on the state of the henhouse.

So, what should be done? First, we should remember that democracy has many demons to battle even before we get to the AI demon. Already, political candidates have been jailed in several parts of the world, bomb threats have gone out, cellphone networks have been shut down, candidates have warned of bedlam if they lose or are taken off the ballot, and vote-buying and ballot-stuffing are still part of the toolkit. Its novelty notwithstanding, AI-sorcery may, on the margin, not rank among the biggest mischief-makers this year.

Second, we should certainly take the electoral risks of AI seriously, but also keep in mind the risks presented by rushed efforts to manage those risks. There has been a scramble among regulators to rein in AI ahead of the elections, setting up 2024 to be the year of the AI freak-out after 2023’s AI frenzy. It would be better if these well-intentioned regulators understood the unintended consequences of rushed regulations.


Third, AI regulators ought to think several steps ahead and formulate rules that anticipate the greater risks of tomorrow. Voters in elections beyond 2024 will be grateful for such foresight.

The writer is Dean of Global Business at The Fletcher School at Tufts University

First uploaded on: 18-02-2024 at 12:08 IST