Who controls AI? The seductive dance between human and algorithm (4 of 8)

This image evokes discomfort. Rightly so? Or is it exactly what's needed to have the uncomfortable conversation?

Discussions about AI often focus on what it can do (blog 3), how it works (blog 2), and why it’s needed (blog 1). But the “who” question is just as important—perhaps even more fundamental. Asking who controls AI touches on core principles of power and authority. AI isn’t just technology: it’s increasingly embedded in decision-making traditionally reserved for institutions under the separation of powers.

1. Who controls AI?

We’re using AI more and more—as a helper, a mirror, a decision accelerator. But as the technology advances, one question remains underexplored: who controls AI? Who sets the rules? Who provides oversight? And who takes responsibility when things go wrong?

Where lawmakers once made laws, governments implemented policy, and judges ruled, algorithms will increasingly shape decisions about welfare, employment, healthcare, education, and security, in the private sector as much as the public one. This makes it crucial to understand who governs, monitors, and adjusts these systems.

This blog explores the governance question around AI:

  • Which institutions are in control?
  • Which people are making decisions—and who is affected?
  • And how do we strike a balance between technology and society?

2. Two models in China and the US, one concern: who governs AI?

The question of who governs AI receives different answers around the world, with major consequences for democracy, power, and public values. The United States and China represent two contrasting models, each with fundamental implications for how those values are protected or eroded.

US: Market leads, government observes

In the US, control over AI largely lies in the hands of a few major tech companies such as OpenAI, Google, Meta, and Microsoft. They drive innovation, often without transparency or democratic accountability. The government plays a limited, corrective role through policy efforts like Biden’s AI Executive Order, but struggles to keep up with the speed and scale of private development.

This market-driven model fosters rapid innovation but puts public values at risk. Responsibility for failures is rarely clearly defined, ethical guidelines are mostly voluntary, and legal frameworks lag behind. The result is a democratic deficit: decisions about AI are made without sufficient public oversight, leading to social inequality, bias, exclusion, and increasing dependence on opaque systems.

China: AI as an extension of state power

In China, AI serves as a strategic tool for both economic growth and social control. The government sets the AI agenda through national plans like the Next Generation AI Plan, with tech companies such as Tencent, Alibaba, and Baidu acting as enforcers of state policy. In exchange for data access and funding, they follow central government directives closely.

AI is used for behavioral steering, surveillance, and social stability. Citizens have little to no say; technologies like facial recognition and social credit scores are widespread. Human rights and individual freedoms are subordinated to state goals. Transparency and public accountability are largely absent.

Market or state—citizens left out

Despite their ideological differences—market versus state—both systems share similar risks: concentrated power, a lack of democratic legitimacy, and the neglect of fundamental rights. In the US, public values are squeezed by profit motives; in China, technology is used for systemic control. In both cases, citizens risk remaining mere users without real agency.

3. Europe between ambition and dependency

Europe presents a unique patchwork of political, cultural, and institutional diversity. This complexity makes it hard to act as a united front in the AI domain. While the US and China can centrally direct their AI paths, Europe lacks a cohesive engine. This stems from three persistent tensions:

  • Cultural fragmentation: Europe consists of dozens of countries with distinct identities. Just when AI calls for cross-border collaboration, nations are retreating into national reflexes: renewed regional autonomy, resistance to globalization, and fear of losing control. This hinders the formation of a shared vision or investment strategy.
  • Institutional sluggishness: The EU is a slow-moving bureaucratic system where policy takes time and compromise. Innovation and execution are scattered across member states. The AI Act is a step forward but requires national implementation and enforcement, leading to inconsistency and delays.
  • Normative leadership without means: Europe strongly emphasizes ethics and regulation but lacks control over the technology itself. Most AI models and infrastructure come from the US or China, and many European startups relocate to regions with more funding and fewer rules.

This creates a painful paradox: Europe wants to set the rules but relies on others’ tools. Its moral leadership is sincere but powerless without investment, technological autonomy, and talent retention.

What Europe risks losing

  • Public services rely on American infrastructure.
  • Innovation and talent migrate to less regulated regions.
  • Europe sets standards but lacks technological clout.
  • Lack of access to scalable, European datasets hampers development.
  • Young talent rarely sees Brussels as a place to make an impact.

What Europe can learn from the Dutch industrial revolution

The situation recalls the Netherlands of the 19th century, which largely missed out on the Industrial Revolution. At the time, the focus lay on regional interests, guilds, and trade; there was no sense of urgency or national leadership. Conservatism and bureaucratic inertia led to stagnation, until foreign pressure forced change.

The parallels with today’s AI developments are striking:

Aspect               | 19th century – Netherlands | 21st century – Europe without action
Governance focus     | Local, guild-oriented      | Fragmented AI policy
Sense of urgency     | Lacking                    | AI as a future issue, not a systemic one
Political leadership | Conservative               | Reactive, lacking a mandate
Culture              | Moderation, restraint      | Risk-averse, ethics as the end goal
Outcome              | Industrialized too late    | Risk of dependence on US/China

The parallels with the past serve as a warning: without shared vision, investment, and leadership, Europe risks missing yet another fundamental technological shift. But fatalism is not an option. The key lies in action at all levels—from citizens to policymakers, from local institutions to European bodies. So who needs to step up—and how?

4. So who can do what?

AI is changing the rules of decision-making and society. That’s why talking about regulation alone isn’t enough—we also need to examine who’s actually in control. From citizens to parliaments, this requires action, responsibility, and collaboration at all levels.

Curious how AI affects your area of interest? The topics below explore the role AI plays in each domain, complete with examples.

    European citizens as carriers of public values

    AI is not a neutral technology. The values, inequalities, and assumptions we embed in our societies come back amplified through algorithms. This makes AI not just a technical issue, but also a political and social one: who is seen, heard, and recognized—and who isn’t?

    Interestingly, many concerns among European citizens are remarkably similar, especially among people with comparable social roles. Parents, teachers, or healthcare workers across different countries share worries about digital inequality, the loss of human contact, and lack of transparency. A teacher in the Netherlands often relates more to a colleague in Spain than to a tech executive in their own country. Social position connects people more than nationality.

    Yet AI is mainly discussed within national frameworks, even though the technology itself crosses borders. It’s time to organize civic engagement on a European level—through shared projects, direct connections, and new forms of participation. Not just to be heard, but to help shape the digital future together.

    Citizens are the bearers of public values. Their involvement is essential for democratic legitimacy. That requires breaking down caricatures and combating polarization—not by labeling people, but by actively listening and learning from one another.

    Local politics: use AI to listen better

    Why shouldn’t Amsterdam learn from how Munich handles school absenteeism? Or Groningen take inspiration from a waste collection algorithm in Copenhagen? AI reveals shared challenges—based on data, not ideology. Not because it’s in a party platform, but because it works.

    Local governments can use AI as a benchmarking tool: to see what works elsewhere in similar neighborhoods. Often, off-the-shelf AI is better, faster, and cheaper than hiring an external consultant.
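
    To make the benchmarking idea concrete: below is a minimal sketch in Python of how comparable neighborhoods might be matched across cities. The neighborhood names echo the examples above, but every feature value is invented for illustration; real benchmarking would draw on standardized statistics.

```python
# Minimal sketch: matching comparable neighborhoods across cities.
# All feature values are invented for illustration.
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.preprocessing import StandardScaler

# Per neighborhood: [school absenteeism %, youth unemployment %, green space m2 per resident]
neighborhoods = {
    "Amsterdam-Nieuw-West": [8.2, 11.5, 18.0],
    "Munich-Hasenbergl": [7.9, 10.8, 21.0],
    "Copenhagen-Tingbjerg": [9.1, 12.3, 25.0],
    "Groningen-Beijum": [6.5, 9.7, 30.0],
}

names = list(neighborhoods)
# Put all features on the same scale before measuring similarity.
X = StandardScaler().fit_transform(np.array(list(neighborhoods.values())))

# For each neighborhood, find its closest peer (k=2: itself plus one other).
_, idx = NearestNeighbors(n_neighbors=2).fit(X).kneighbors(X)
for i, name in enumerate(names):
    print(f"{name} -> closest peer: {names[idx[i][1]]}")
```

    A match like this never says which policy to copy; it only says where to look first.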

    AI also creates opportunities to organize policy differently—from the bottom up, rather than top-down. Municipalities no longer need to rely on standardized surveys with Net Promoter Scores as the outcome. Instead, residents can answer open-ended questions: What’s going on in the neighborhood? What’s working? What’s missing? Where once this input was hard to process, AI now helps recognize patterns, find connections, and provide feedback: here’s where pain lies, here’s where potential exists.

    In this way, AI becomes more than a technological tool. It becomes a democratic instrument—a listening machine that gathers signals, synthesizes insights, and offers direction—not through multiple-choice menus, but through nuance, lived experience, and stories.
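
    What could such a listening machine look like under the hood? A minimal sketch, assuming responses arrive as plain text: group them into themes with off-the-shelf text clustering. The example responses and the number of themes are illustrative; a production system would use multilingual language models and keep human reviewers in the loop.

```python
# Minimal sketch: clustering open-ended resident feedback into themes.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

responses = [
    "The playground is unsafe after dark, there is no lighting",
    "Streetlights on our block have been broken for months",
    "Broken lighting near the park makes evening walks scary",
    "We need more benches where older residents can meet",
    "The community centre closed and people feel isolated",
    "Neighbours rarely meet since the community centre shut down",
]

# Turn free text into weighted word vectors (TF-IDF).
vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(responses)

# Group similar responses; two themes is a guess for this toy dataset.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(matrix)

# Describe each theme by its most characteristic words.
terms = vectorizer.get_feature_names_out()
for c in range(kmeans.n_clusters):
    top = [terms[i] for i in kmeans.cluster_centers_[c].argsort()[::-1][:4]]
    size = int((kmeans.labels_ == c).sum())
    print(f"Theme {c}: {', '.join(top)} ({size} responses)")
```

    The output is a starting point for conversation, not a verdict: it remains up to people to decide what the themes mean and what to do about them.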

    Public institutions: take the lead, build trust

    AI is often seen as something big and technical—something civil society organizations can’t influence. But it’s precisely institutions like schools, healthcare providers, municipalities, and housing corporations that can use AI to strengthen public values and restore a human touch.

    That starts with setting clear requirements: for transparency, human oversight, and appeal procedures. If public institutions decide who uses AI, how it’s used, and whether it’s auditable, then technology can shift from being a risk to a tool for justice.

    AI can also break through old logics. Think of a housing corporation lobbying for years to change policy. What if instead they used AI to locally develop solutions—focusing on affordable housing, tenant flow, or sustainability—with direct impact? Technology becomes a driver of social innovation rather than a tool for passive policy implementation.

    Even at the micro level, AI adds value. Take letters sent to residents after the death of a loved one: AI can review template texts for distant or confusing language and improve them before they reach grieving families. It’s a small example, but it shows how empathy can be scaled.
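
    As a sketch of where such a review could start, the snippet below flags overly long sentences and formal jargon in a template letter. The jargon list, the length threshold, and the example text are all invented for illustration; a real tool would use a language model and leave the final wording to communication staff.

```python
# Minimal sketch: flagging distant or confusing language in a template letter.
import re

JARGON = {"pursuant", "aforementioned", "herewith", "notwithstanding"}
MAX_WORDS = 25  # sentences longer than this are flagged as hard to read

letter = (
    "Pursuant to article 7, the rental agreement of the aforementioned "
    "tenant is herewith terminated. We ask you to vacate the property "
    "within 28 days notwithstanding any pending requests."
)

# Split on sentence boundaries and check each sentence separately.
for sentence in re.split(r"(?<=[.!?])\s+", letter):
    words = sentence.lower().split()
    flags = []
    if len(words) > MAX_WORDS:
        flags.append(f"long sentence ({len(words)} words)")
    hits = sorted(term for term in JARGON if term in words)
    if hits:
        flags.append("formal jargon: " + ", ".join(hits))
    if flags:
        print(f"- {sentence}\n  -> {'; '.join(flags)}")
```

    Crude as it is, a check like this shows where a letter will land hard on a grieving reader; the rewriting itself remains human work.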

    Finally, AI can help deepen participation. No more predictable town halls or closed-question surveys, but open space for people to express what really matters. AI organizes these stories, identifies themes, and feeds insights back transparently. This enables a new form of listening that enriches policy—not with numbers, but with meaning.

    Professionals will soon write for AI too

    Policy documents are still mainly reviewed by people, but AI is increasingly becoming the first reader. Policy advisors, lawyers, and communications staff will soon write not only for colleagues or executives, but also for algorithms that check logic, compare reasoning, and flag inconsistencies or bias. AI rewrites sentences, exposes weak arguments, and suggests alternative scenarios.

    This makes policymaking more precise—but also more vulnerable: documents are scrutinized faster, sometimes without context. Writers must be more explicit: Why this choice? What assumptions? What’s missing?

    The same applies to regulators. If algorithms can analyze policy documents, they can also support supervision—more systematically, but also more strictly. But who decides what constitutes “good reasoning”? And what standard applies?

    For communication professionals, this marks a fundamental shift. Clarity, honesty, and ethics become not just content choices but strategic tools to maintain trust—in an era where AI reads first, and people follow.

    AI doesn’t ask questions for form’s sake. It exposes form—and demands new responsibility from everyone who writes, evaluates, or explains.

    Parliaments: break the inertia, steer with AI

    AI is set to fundamentally change how laws are created, reviewed, and amended. Legislative processes, which now rely heavily on individual cases, expert advice, and political judgment, can be enriched by AI’s ability to analyze thousands of rulings, complaints, implementation reviews, and policy documents at once. Not as a substitute for political judgment, but as an added layer that reveals where policy clashes with reality, where patterns emerge, and where new interpretations arise—even before the courts weigh in.

    This enables a new kind of legislative practice: one that learns from signals rather than reacting to crises. AI can identify where laws have disproportionate effects, where they infringe on fundamental rights, and where exceptions have turned into systemic gaps.

    Yet lawmaking today often grinds to a halt. Politics relies heavily on advisory rounds, feasibility checks, consultations, and impact assessments—from planning bureaus to the Council of State. The process advises, delays, and broadens—but rarely accelerates. AI offers a way to make lawmaking both faster and more careful: by identifying friction points sooner and responding more quickly to public concerns.

    AI can also elevate international comparisons. Parliaments can clearly and systematically see how other countries handle similar issues, where choices conflict with European standards, and which interpretations improve legal certainty.

    All this calls for a parliamentary culture not just open to technology, but actively engaging with it. A politics that uses AI to make laws fairer, more precise, and more human-centered—not abandoning human judgment, but no longer drowning in slowness disguised as diligence.

    European Union: move beyond funding logic, toward a fair playing field

    If Europe is serious about AI, it needs to let go of the idea that a big fund is the answer. Time and again we see such funds primarily benefit consultancies, established companies, or players who know how to navigate the paperwork—not the people actually building solutions. Instead of collaboration, it breeds competition. Instead of innovation, promising ideas get stuck in forms, procedures, and delays. Bureaucracy suffocates innovation before it can breathe.

    A fund without clear and fair rules increases inequality: it strengthens the already strong and leaves the rest behind. The result is fragmentation, not acceleration. Policy noise instead of strategic vision.

    If we want to advance AI, we must flip the approach: don’t start with money—start with the playing field. Invest in shared infrastructure—European data platforms, compute power, testing environments—that are freely accessible. Not through tenders or contests, but as collective goods.

    Create space for experimentation. Allow public and civil organizations to test what works without spending 18 months navigating a grant process. What works can grow. What doesn’t still yields insight—and therefore value.

    Lastly, establish clear rules. As long as no one knows exactly what responsible AI entails or who enforces it, companies remain cautious and citizens distrustful. Transparent standards build trust, clarity, and fairness.

    Europe doesn’t need a fund everyone fights over. It needs a field where everyone can participate, learn, and scale. Not a luxury—but a strategic necessity.

The roles may differ—but the goal is shared: protecting public values in a world where technology operates more and more autonomously. That raises a fundamental question: as AI learns and evolves, who stays in control?

5. AI learns on its own—but we must steer

AI doesn’t just change how we work, learn, and communicate—it shifts power. Whoever develops, trains, and deploys algorithms shapes how societies function and how decisions are made. The question “who controls AI?” isn’t a technical footnote—it’s a democratic core issue, with direct consequences for public values, institutions, and citizens.

As long as governance lies primarily in the hands of companies or states with different norms, public interests risk being pushed to the margins. Europe may have moral ambitions, but still lacks the clout to realize them. Citizens want to participate, but rarely feel heard. National politics often confuses slowness with diligence—and watches passively instead of looking ahead.

The choice is now ours. Either we let go of the wheel—or we build a technological system guided by public values, civic engagement, and political courage.

AI may be self-learning—but giving it direction is still up to us.

Previous blogs

  1. Ethical AI starts not with rules, but with reflection.
  2. AI calls for redesign from the citizen’s perspective.
  3. AI and Europe: power shifts, institutions shake.
