When AI Rules Meet Real Life

A Decision Has Already Been Made
Somewhere right now, a loan application has just been rejected by a model no human fully understands. A job candidate’s résumé has been filtered out overnight, automatically, while everyone was asleep. A family has been flagged as a fraud risk by a government system nobody can explain. A worker’s schedule has been generated by an algorithm that has never met them.
These are not hypothetical scenarios. They are documented cases from courts, regulatory investigations, and audit reports. The algorithm has already crossed the line from tool to decision-maker. And yet in most of these cases, nobody — not the organization using the system, not the person affected, not the regulator — could fully answer a very basic question: why did it decide that?
This is the heart of what we call Responsible AI. This essay is about the gap between the principles we have written down and the practical conditions needed to honour them — and about one modest, necessary step toward closing it.
The Law Has Arrived. The Practice Has Not.
For years, conversations about AI ethics felt abstract. Words like “fairness”, “transparency”, and “accountability” appeared in countless documents. They sounded good. They changed almost nothing in practice, because nobody could explain what they meant in a factory, a hospital, a bank, or a municipality.
The European Union’s AI Act has changed the register. It is binding law, not a guideline. It places real obligations on organizations that deploy AI in high-risk contexts — hiring, credit, healthcare, law enforcement. Crucially, the responsibility for monitoring AI behavior falls on the deploying institution, not the tech company that built the system.
This is the right principle. But the AI Act is not without serious critics. An early draft required deployers to involve civil society and affected communities in impact assessments. That requirement was removed from the final text. Law enforcement agencies are not even required to disclose publicly that they use high-risk AI. And 52 civil society organizations have warned against watering down its provisions in the name of competitiveness. Regulation is necessary. But a regulation that excludes the people most affected from its oversight mechanisms is a regulation built primarily for the powerful.
The problem is double: institutions are being asked to monitor systems they cannot see inside — and the people those systems affect have been largely written out of the accountability process. A framework designed to protect rights, but built without the people whose rights are at stake, will protect paperwork first and people second.
Principles Without Purchase
Most organizations using AI are not reckless. They have ethics policies and voluntary codes. They genuinely want to do the right thing. The difficulty is that wanting to be responsible and being able to demonstrate responsibility are two very different things.
Most AI systems in business and public administration are commercial products — black boxes. The hospital sends data, the model returns a score. The employer uploads a CV, the platform returns a ranking. What happens in between is proprietary, invisible to the deploying organization. You see what goes in. You see what comes out. The rest is darkness.
The Dutch childcare benefits scandal is the starkest illustration of where this leads. A government algorithm flagged tens of thousands of families — disproportionately from minority backgrounds — as fraud risks. It ran for years. No record existed to show how any individual family had been scored. Accountability was, in the words of one official investigation, “a black hole.”
The same structural failure appears in more everyday settings. ProPublica’s investigation of COMPAS — used in US courts to assess reoffending risk — found that when the algorithm was wrong, it was wrong differently for Black and white defendants: Black defendants were almost twice as likely to be wrongly flagged as high-risk. Two groups can look equally treated in aggregate while one absorbs a disproportionate share of the harm. Without case-level information, no one inside the organization can detect this until it is too late.
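To see how that can happen, a small sketch in Python may help. The records below are invented for illustration, not drawn from ProPublica's analysis; they simply show that two groups can share the same overall accuracy while one group carries twice the rate of wrongful flags.

```python
# Illustrative only: invented records, NOT the ProPublica/COMPAS data.
# Each record is (flagged_high_risk, actually_reoffended).

group_a = [(1, 1), (1, 1), (1, 1), (1, 1), (0, 1),   # reoffenders: 4 flagged, 1 missed
           (1, 0), (1, 0), (0, 0), (0, 0), (0, 0)]   # non-reoffenders: 2 wrongly flagged
group_b = [(1, 1), (1, 1), (1, 1), (0, 1), (0, 1),   # reoffenders: 3 flagged, 2 missed
           (1, 0), (0, 0), (0, 0), (0, 0), (0, 0)]   # non-reoffenders: 1 wrongly flagged

def accuracy(records):
    """Share of cases where the flag matched the actual outcome."""
    return sum(1 for flag, outcome in records if flag == outcome) / len(records)

def false_positive_rate(records):
    """Share of people who did not reoffend but were flagged as high risk."""
    non_reoffenders = [flag for flag, outcome in records if outcome == 0]
    return sum(non_reoffenders) / len(non_reoffenders)

for name, records in [("Group A", group_a), ("Group B", group_b)]:
    print(name, "accuracy:", accuracy(records),
          "wrongly flagged rate:", false_positive_rate(records))
# Both groups reach 0.7 accuracy, yet Group A's wrongly-flagged rate (0.4)
# is double Group B's (0.2): equal in aggregate, unequal in harm.
```

Aggregate statistics would report both groups as served equally well; only the case-level records reveal who absorbs the errors.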
This is the translation problem — the chasm between abstract principles and concrete operational practice. In AI-assisted decision-making, the chain of responsibility often breaks: a human operator reviews a system’s output but feels unable to override it because they cannot evaluate it; a regulator asks for records and receives aggregate statistics rather than anything actionable. Practitioners working on ISO/IEC 42001 alignment have found exactly this: wanting to comply is not the same as knowing how.
A Different Kind of Challenge: Democratic Organizations
This gap cuts deepest in organizations that are not simply optimizing for profit but are trying to remain true to a social mission — cooperatives, democratic institutions, civic organizations.
Consider the Mondragon cooperative network, one of the world’s largest worker-owned ecosystems, based in the Basque Country. Over seventy years, Mondragon has navigated waves of industrial change while maintaining collective ownership, democratic governance, and shared prosperity as non-negotiable values. It is no coincidence that ASETT — the Arizmendiarrieta Social Economy Think Tank — was born in this same territory, with technology and AI explicitly among its core areas of inquiry.
In a worker cooperative, the person who sweeps the floor and the person who signs contracts are, in principle, equal members. Democratic governance is not a feature of these organizations — it is their reason for existing. Decisions are made collectively, through deliberation, over time. The process of reaching a decision is part of the mission. Slow by design. Accountable by design.
AI does not naturally respect this. It was not designed for it. The dominant logic of AI development — efficiency, optimization, scale — is in direct tension with the slow, deliberate, value-laden process of cooperative governance. When a worker cooperative adopts an AI scheduling system or an AI-assisted hiring tool, it is not simply adding a new instrument to its operations. It is potentially importing a set of assumptions about decision-making that were built for a different kind of organization entirely. CICOPA — the international organization of industrial and service cooperatives — has named the tension directly: the tools available were not designed with democratic governance in mind. CECOP, representing 50,000 cooperatives and over a million workers across Europe, goes further: automation is a structural challenge to the democratic practices that define cooperative identity.
The deeper risk is subtle. As AI-assisted decisions accumulate and become routine, the capacity for collective deliberation quietly erodes. People stop questioning because the algorithm already has an answer. Expertise concentrates in whoever controls the data. What researchers have called the “immune system” of democratic institutions — the tacit practices and shared knowledge that allow them to absorb shocks — weakens case by case. Some voices in the social economy are beginning to name this openly: the sector cannot adopt AI uncritically, but it cannot afford to ignore it either.
The erosion is largely invisible. A system that produces good-enough outputs is hard to challenge — until members realize they have lost the capacity to know whether it is working or not. By then, the deliberative infrastructure is already weakened. This is not a future threat. It is happening now, in real organizations, at a pace that outstrips the slow, necessary work of cooperative governance. For organizations with cooperative values, this is not an abstract concern. It is a question of institutional survival.
The Capacity to Question
So what can be done? There is no complete solution. But there is a practical contribution that a field called Explainable AI, or XAI, can make — not by solving the black-box problem, but by doing something more modest and immediately useful: making algorithmic behavior questionable.
In the social economy, accountability is always relational. You are accountable to your members, to your community, to the values your organization was built to serve. A system that cannot be questioned is not compatible with that accountability — no matter what the vendor’s documentation says.
The problem is that questioning an algorithm requires something to question. Right now, most organizations see outcomes but have no basis for understanding whether those outcomes reflect consistent, value-aligned behavior. Two similar people receive different results. A pattern emerges. But without any way to probe the system, the pattern cannot be investigated or corrected.
XAI tools offer a partial remedy. They cannot open the black box. But they can probe it — generating information about which aspects of a case seem to drive outcomes, whether the system behaves consistently across comparable situations, whether its patterns align with the organization’s stated values. This is less a diagnostic for individual cases than a capacity for organizational interrogation: a set of instruments that makes it possible to ask hard questions of a system that would otherwise remain silent. And critically, these instruments work from the outside — they do not require access to the vendor’s source code or model internals, which means they are available to the organizations that actually bear responsibility, not just to those who built the system.
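What does probing from the outside look like in practice? Below is a minimal sketch, assuming nothing more than query access to a vendor's scoring interface. The vendor_score function is a hypothetical stand-in, not any real product's API; the pattern it illustrates, one-attribute sensitivity checks and comparisons of near-identical cases, is among the simplest forms of black-box interrogation.

```python
# Minimal sketch of outside-in probing. Assumes only query access to a scoring
# system; vendor_score is a hypothetical stand-in, not any real vendor's API.

def vendor_score(applicant: dict) -> float:
    """Placeholder for the black-box call (in practice, an API request)."""
    score = 0.5
    score += 0.2 if applicant["contract"] == "permanent" else -0.1
    score -= 0.15 if applicant["postcode"] in {"48003", "48004"} else 0.0
    return round(score, 3)

def sensitivity(applicant: dict, field: str, alternatives: list) -> dict:
    """Hold everything else fixed, vary one field, record how the score moves."""
    return {value: vendor_score({**applicant, field: value}) for value in alternatives}

base = {"contract": "temporary", "postcode": "48003", "tenure_years": 4}

# Which aspects of the case seem to drive the outcome?
print("postcode:", sensitivity(base, "postcode", ["48001", "48003", "48004"]))
print("contract:", sensitivity(base, "contract", ["temporary", "permanent"]))

# Does the system treat near-identical cases consistently?
neighbour = {**base, "tenure_years": 5}
print("comparable cases:", vendor_score(base), "vs", vendor_score(neighbour))
```

Nothing in the sketch requires the vendor's cooperation: the deploying organization supplies the cases, reads the scores, and keeps a record of what it asked and what came back.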
A workers’ council debating an automated assessment tool needs more than vendor assurances. A general assembly considering whether an AI scheduling system treats members fairly needs more than aggregate statistics. XAI tools do not answer these questions. They generate the information that allows the organization to begin asking them seriously. The Bank for International Settlements has concluded that probing a system from the outside is often the only realistic path available to deploying organizations. It is imperfect. But it is navigable — and it points toward organizational capacity rather than technical compliance.
Not a Solution. A Beginning.
These tools do not make a biased system fair. Probing a model can surface a discriminatory pattern — postcodes that correlate with ethnicity, names that signal nationality — but it will not remove it. Making the pattern visible is necessary. It is not sufficient.
Nor do these tools replace human judgment. What they surface tells you something important about how a system tends to behave — not whether that behavior is right. A human, with domain knowledge and ethical responsibility, still has to interpret the findings and decide what to do. The tools open a conversation; they do not close it. And they do not resolve the deeper cooperative governance challenge: building the capacity to interrogate a system is not the same as building the workforce-wide AI literacy and deliberative culture needed to act on what you find.
What these tools create is a starting point. They give the workers’ council something to examine. They give the auditor something to trace. They give the person who was rejected, flagged, or passed over a basis for asking why — and for contesting what they find.
This is what the platform cooperativism movement has been calling Solidarity AI: not AI used by cooperatives, but AI whose governance, data, and purpose are genuinely co-owned — AI that structurally strengthens cooperation rather than eroding it. The difference is not technical. It is political. It is about who decides, who benefits, and who can challenge the system when it goes wrong. Not after the fact, in a court. But in real time, in the assembly, in the workers’ council, in the ordinary practice of democratic governance.
For that challenge to be real, the system must be questionable. For it to be questionable, there must be something to question — patterns of behavior that can be examined, debated, and contested by the people the system affects. Building that capacity is not a one-time compliance exercise. It is an ongoing institutional commitment.
For organizations rooted in a social mission, this is not a compliance checkbox. It is a condition for trust — the kind that allows people to deliberate together, to override the algorithm when their values demand it, and to know that the technology serves the mission rather than replacing it.
That is what Responsible AI looks like when it moves from principle to practice. Imperfect. Partial. Necessary. One step at a time.
About the author:
Dorleta Urrutia-Onate is a 2026/2027 ICDE Fellow and Ph.D. student at the University of Deusto, Bilbao, Basque Country.