
AI and democracy: Threats and opportunities

As artificial intelligence (AI) reshapes culture, information, and communication, and influences democratic processes, a group of international experts is preparing to release a “Global Policy Brief” to help policymakers use AI responsibly.

By Stefano Leszczynski and Linda Bordoni

The “Global Policy Brief”, a guiding document drawn up by eight international experts, focuses on addressing the urgent global challenge posed by AI’s role in elections.

The document will be unveiled at the Summit for Action on Artificial Intelligence, scheduled for February 10-11, 2025, in Paris, in the presence of world leaders.

In an interview with Vatican Media, Catherine Régis, Professor at the Université de Montréal and Director at IVADO, noted that 2024 is considered the “year of elections”, with more nations heading to the polls than in any other year in recent history, amid growing acknowledgement of AI's impact on the democratic process.

“We thought it was the right year to reflect on lessons regarding AI interference in elections. What can we learn from this? What can we do better?” she said.

Florian Martin-Bariteau, an internationally renowned expert on technology policy, explained the need to address the issue through global cooperation, pointing out that the stakes are global with instances of AI-fuelled disinformation and foreign interference having surfaced in regions spanning Europe, North America, and Latin America.

“No single country, or even regional alliances like the EU, can tackle this alone. Every democracy is at risk. To counter this global threat, we need international collaboration and concrete solutions,” he said.

AI as a tool: A double-edged sword

The experts noted that AI carries both promise and peril for democratic systems. It has the potential to enhance political participation and transparency, but it can also amplify misinformation campaigns and facilitate surveillance tools that undermine elections.

“We can’t just point fingers at a few large corporations,” Martin-Bariteau added. “There are many small startups around the world creating AI tools that amplify threats to democracy. Technology isn’t neutral; people decide how systems are designed.”

This, Martin-Bariteau and Régis argue, is why policymakers must step in: to ensure that AI developers act responsibly and consider societal harms when designing their systems.

From content moderation failures on platforms like TikTok or X to the targeting of vulnerable groups, they stress that AI’s design choices have far-reaching consequences.

Defending Democracy

Pope Francis has often spoken of a “Third World War fought in pieces”. Many analysts agree the defence of democracy amid AI’s rapid development is a part of this broader battle.

Reflecting on the fragility of democracies under pressure, Régis explained that “Democracy is a complex system. It demands transparency, energy, and continuous dialogue. AI adds an extra layer of complexity, one that could either strengthen democracies or make them even more fragile.”

Martin-Bariteau pointed out that responses must be multi-stakeholder, engaging governments, civil society, and the private sector alike, and he noted that the challenges transcend national borders.

“This is not just about one country or region. The solutions we propose must work globally,” he said.

Concrete action for policymakers

The two experts agreed that the Global Policy Brief is more than a reflection: it’s a call to action. It urges governments to pool resources, enforce stricter accountability for AI developers, and leverage existing international frameworks to create robust protections for democratic integrity.

“We need global cooperation,” Régis concluded. “We already have international structures in place. Let’s inject AI expertise into these systems to tackle this challenge head-on.”



19 December 2024, 16:31