In this series, we explore the role of ethics in the age of AI — not as a final step, but as the starting point for renewal. Because what if technology doesn’t just call for rules, but also for reflection? This blog is about why Europe must now dare to choose a different path.
1. Introduction: Why reflection on ethical AI is needed
There’s widespread commentary about how Europe is falling behind in global AI development — with pace, capital, scale, and technological leadership concentrated in the US and China. Meanwhile, Europe is primarily focused on regulating a technology still very much in flux.
There’s truth to that portrayal. Europe is slower, less ambitious, and less aggressive in scaling up. And perhaps that’s a problem. But maybe it’s not.
Is AI purely a technological breakthrough? Is it only about computing power, infrastructure, and algorithms? Or is something deeper at play — something more fundamental?
What makes this transition so loaded isn’t just its speed or complexity, but the way it quietly embeds itself into our societies: invisible, omnipresent, irreversible. That’s where tension arises. A growing number of people feel uneasy about what’s happening. Not because they fear technology, but because they feel they’re losing something harder to define — a sense of where we stand morally.
This discomfort isn’t fear of progress. It’s a form of existential confusion. How do we reflect on what is right when a system learns from behavior and optimizes based on past outcomes? How do we safeguard human dignity when models increasingly predict our actions — and we feel less and less free to choose otherwise?
2. The US approach: Ethics as an add-on
In Silicon Valley and broader American tech culture, technology is often viewed through a fundamentally instrumental lens. Technology isn't a question; it's an answer: to scalability, efficiency, and especially market success. In this model, humans aren't moral agents, but users, data points, cogs in an optimizable chain.
Here, ethics isn't a foundational concern but an afterthought, an appendix. It's bolted on only after the system works, ideally once it's already being sold. Moral reflection becomes an optional feature for those who care about it.
AI isn’t seen as a mirror of human complexity, but as a brilliantly scalable brain — a system that thinks, predicts, and automates. Most AI development is rooted in the logic of control, cost-cutting, and acceleration. What’s often missing are the deeper layers of being human — inner contradictions, value conflicts, ambiguity, self-image, and context.
Underlying this worldview is a deep belief in the market as the ultimate validator of value. What works, sells. What sells, scales. What scales, becomes truth. Market success thus becomes the measure of moral legitimacy: the question is not whether something is right, but whether it sells.
The damage caused by this logic is rarely part of the design. Think of how social media reduces girls to appearance and attention, or how boys disappear into game worlds where control, competition, and addiction blend into identity. These aren’t accidents — they are systemic results of platforms optimized for screen time, which zero in on our vulnerabilities.
3. China: Ethics as a tool of control
In China, the approach is markedly different. AI is treated not primarily as a market product or a matter of user experience, but as a tool for societal regulation. Technology is inseparably linked with political objectives such as stability, predictability, and collective progress.
The citizen is not a consumer or a user, but a functional part of a larger system. Behavior is not merely observed but actively directed. AI is not a personal assistant, but a means of reinforcing — or discouraging — certain behaviors.
Ethics, in this context, is not ignored but functionally framed. The question is not “what is good?” but “what contributes to order?” Morality is equated with effectiveness — and effectiveness is defined by the system’s goals. Consider the social credit system, where behaviors — from online posts to financial habits — are scored to determine access to education, travel, or services.
This too is a form of meaning-making — but one where subjective experience, personal choice, and individual reflection are largely absent. AI is part of an infrastructure that maps, evaluates, and adjusts behavior. Deviating from it means losing something: access, trust, opportunity. And for those outside the system, the lack of alternatives and constant pressure can result in quiet, deep exhaustion.
4. The European approach: Draft, delegate, defer
And then there’s Europe. Our first instinct is often legalistic. We draft rules, define frameworks, and identify risks. Think of the AI Act, impact assessments, or oversight structures. That’s not a bad thing — regulation is necessary, especially with technologies that influence society at scale and speed.
But there's a tension. Lawyers, often the first to engage with technologies like AI, belong to a profession with little exposure to the underlying technical concepts: data science, algorithmic systems, and machine learning rarely appear in legal education. This can lead to well-intentioned but impractical policies, like cookie laws no one takes seriously, or privacy rules that hinder innovation before it begins.
The result is a system driven more by institutional reflex than by vision:
- Workshops are held — and assumed to be enough. Innovation becomes a calendar item, not a practice.
- Task forces are created — but without mandates or budgets. Everyone is “doing something,” yet no one is responsible.
- National governments wait for Brussels, local ones for The Hague — and all defer to privacy authorities. Ultimately, everyone waits for everyone else.
- AI gets “checked off” to meet innovation goals — while systems evolve but thinking does not.
- Ethics goes to a commission, tech to IT. What’s left is a twilight zone where accountability gets passed around.
- Experiments happen — but without direction and mostly in low-impact areas. Pilots without vision, prototypes that reinforce the status quo.
- We speak of risks, EU funding opportunities, or environmental harm — but action rarely follows. Fear of making mistakes outweighs the will to create meaningful change.
This isn’t about cynicism or bad intentions. It’s behavior shaped by a culture of risk avoidance, box-checking, and caution — fed by fear of failure, habits of compliance, and a shortage of moral courage. It feels safer to treat AI like a package at the door: inspected from the outside, given labels and rules — but never opened, for fear of what might be inside.
5. Who do we show up for, really?
Across all these approaches — whether driven by market dominance, behavioral control, or legal restraint — runs a common thread: a deep tension between system and person. Between scale and meaning, between control and conscience. Each continent has chosen a path, but none have resolved this tension.
For many young people, technology is no longer a neutral mirror, but one that distorts. They learn to position themselves, to manage visibility, to curate a persona — professional, political, idealistic. Over time, that image begins to replace the person beneath. When identity is reduced to validated opinions, polished content, profile data, likes and followers, it’s no surprise that AI can eventually take over that performance with ease.
For organizations, the tension lies elsewhere, but it runs just as deep. They wrestle with a simple yet uncomfortable question: who do we serve, and why? The public sector is meant to serve citizens, but does it? Companies claim to put customers first, yet often act otherwise. Many institutions run on accountability, risk management, and reputation; self-preservation often trumps the will to be truly meaningful. Somewhere in this machinery of rules, systems, and procedures, personal responsibility disappears.
And so we arrive at the question we can no longer avoid: What do we truly expect from AI, if we already struggle to show who we are — and who we are here for? After all, technology doesn’t learn from our ideals, but from our behavior. If we build systems on distance, control and fear of failure, those systems will reflect exactly that. AI becomes not a mirror of our humanity, but of our absence of it.
And yet, maybe there is hope in that. AI doesn't replace us; it holds up a mirror. Not to decide for us, but to confront us with what we may have lost along the way. And if we dare to look, really look, it can help us remember what makes us human, and what truly matters.

