INVI's chief analyst: The three red lines for the use of AI in politics

INVI's chief analyst Sofie Burgos-Thorsen was interviewed by AI Portalen magazine about the opportunities and pitfalls of using artificial intelligence in politics and public administration.

The starting point is the sensational case from Albania, where the government has appointed the AI Diella as the minister responsible for automating the selection of public tenders. This means that an AI will now make decisions we would normally leave to humans.

Sofie is—to put it mildly—quite skeptical about the Diella concept and instead presents three "red lines" for the use of AI in politics:

  • No AI without clear human responsibility.

  • No automation without transparency and access to appeal.

  • No symbolic roles that confuse technology with democracy.

Read the entire interview at AI Portalen. See an excerpt below:

To understand why Albania's experiment is problematic, it is necessary to dig a little deeper into the idea behind it: the idea that technology can be neutral.

In public debate, a dichotomy is often drawn between technology and politics. Technology is presented as objective, data-driven, and free from the human errors and emotions that plague political decision-making processes. Politics, on the other hand, is perceived as subjective, dirty, and characterized by compromises and special interests. In this narrative, AI becomes an attractive alternative—a machine that can deliver the flawless, impartial decisions that democracy apparently finds so difficult to produce.

But according to Sofie Burgos-Thorsen, this notion is fundamentally flawed. AI systems are not neutral observers of reality. They are products of human choices—from the selection of training data to the design of algorithms and the definition of what “success” means for the system.

“Visions of technology as neutral or objective technical elements are simply based on a false premise. Regardless of which AI technologies we use, they are trained on specific data and have specific values built into them.”

Training data is, by definition, historical. It reflects the past—with all its inequalities, prejudices, and power structures. When an AI system is trained on this data, it does not just learn to recognize patterns. It also learns to reproduce the biases embedded in the data. And when the system is then presented as neutral, these biases take on a new and dangerous legitimacy.

Sofie Burgos-Thorsen has no doubt about what is at stake:

“Neutral technology simply does not exist. And the myth that there is such a thing as completely unbiased, neutral technology that can make decisions without being influenced by all kinds of human judgment is one of the most dangerous fallacies for democracy. This Albanian case may reinforce that myth.”
