Beyond Ethical AI

The Challenge of Enduring Alternatives

Artificial intelligence is often presented as inevitable and increasingly essential, spreading across economic and social domains in ways that appear difficult to resist or redirect. Framed in this way, its expansion is treated less as a political choice than as a technical necessity. Yet the material conditions underpinning its development are widely recognized as exploitative – both in the labor practices that sustain data production and model training, and in the extractive supply chains tied to mineral sourcing in the Global South, often shaped by enduring colonial and neo-colonial power relations. Despite this, these foundations remain largely unchanged, sustained by the concentration of power over AI.

This concentration is evident in the structure of the AI ecosystem itself, where a small number of firms dominate its core infrastructure: Amazon, Microsoft, and Google control cloud computing and deployment; NVIDIA supplies key hardware; and companies such as OpenAI, Anthropic, and Meta lead in model development. What is often presented as technological progress is, in fact, also a consolidation of economic power within a highly oligopolistic system. Such dominance not only entrenches control over access, standards, and innovation, but also enables these actors to shape policy and regulatory agendas in their own interests. More than simply dominating markets, it actively structures technological trajectories, steering them in line with prevailing logics of capital accumulation.

In this sense, infrastructure is not merely technical but institutional and political, influencing what technologies are developed, how they are optimized, and whose interests they ultimately serve. When organized around scale, efficiency, and revenue extraction, it tends to marginalize or foreclose alternatives oriented towards public value, democratic accountability, or collective well-being. Alternatives are nonetheless emerging in the form of cooperative, public, and community-led approaches that seek to reconfigure ownership and governance. However, these initiatives continue to operate within the same capital-intensive ecosystem and remain materially dependent on the structures they aim to challenge. 

These dynamics highlight a central issue: the extent to which alternative AI can remain distinct when embedded in an ecosystem organized around fundamentally different priorities.

Why Endurance Matters

Beyond their emergence, a further concern is whether such alternatives can endure under these conditions. AI systems are rapidly expanding into domains that have traditionally relied on human judgment, including healthcare, finance, and public administration. At the same time, issues around data extraction, platform concentration, and algorithmic governance are increasingly influencing policy debates and organizational strategies, often in ways that reinforce existing asymmetries of power and control.

Many proposed responses fall into two broad categories: efforts to regulate existing approaches and efforts to build alternatives. Both are important, but the question of durability remains unresolved. Alternative models must operate within the same broader institutional and infrastructural environment, often relying on resources, standards, and platforms governed by dominant actors. As a result, the challenge is not only how such initiatives are designed, but how they persist within conditions that may pull them in different directions.

The focus, therefore, shifts from the possibility of alternatives to the terms of their survival. What does durability look like for non-extractive or public-oriented AI? While alternatives may be relatively easy to imagine, assessing when and how they can remain viable – and resist pressures such as financialisation, dependency, and corporate capture – is far more difficult, especially as they grow, professionalise, or become embedded in larger institutional ecosystems.

About the Project

My work sits at the intersection of political economy, digital infrastructure, and governance, exploring how legal, financial, and organizational systems influence emerging technologies. I research alternative models – such as cooperatives, community enterprises, and mission-driven organizations – that aim to support more democratic, inclusive, and sustainable forms of development, particularly through community-led and participatory approaches.

Following my PhD on mission-led businesses such as B Corps, I became skeptical of the ability of firms embedded in growth- and profit-oriented market structures to deliver meaningful social impact. While B Corp certification introduces standards around social and environmental performance, accountability, and transparency, it does not fundamentally alter ownership arrangements or the underlying pressures of capital accumulation. This led me to question whether profit and purpose can genuinely be reconciled within conventional corporate forms, and to examine how such approaches remain vulnerable to weak governance and leadership failures. I then shifted my focus towards organizational forms that embed participation more directly, including employee-owned firms and cooperatives, and their role in community development and local wealth building.

More recently, I have looked at how community organisations engage with digital technologies and AI – not just as tools, but as socio-technical arrangements that shape relationships with communities and stakeholders. As an ICDE Fellow, I now focus on the conditions under which alternative models of ownership and governance can structure how AI is developed and used. In particular, I explore how community-owned, cooperative, public, or commons-based AI can remain non-extractive and resist forms of corporate capture within an increasingly concentrated digital economy. At its core, the project asks:

What conditions enable alternative AI systems to resist capture and sustain non-extractive or public-oriented goals over time?

This is less about designing ideal systems in the abstract and more about analysing the conditions under which they emerge, operate, and persist in practice. Doing so also requires moving beyond a narrow firm-level lens to examine how inter-organisational collaborations – across public institutions, civil society, and alternative organisational forms – might sustain forms of cooperative intelligence that are not reducible to competitive or proprietary models. At stake is whether such arrangements can endure without being absorbed, reshaped, or gradually subordinated to dominant market logics in ways that erode their original aims.

From “Ethical AI” to Institutional Conditions

Much of the current discussion around “ethical AI” focuses on improving the properties of individual models: making them more transparent, reducing bias, or strengthening accountability mechanisms. These are important interventions, but they operate largely at the level of the model or system itself. Even where approaches are smaller-scale or locally deployable, they do not on their own address the broader institutional and infrastructural contexts in which these systems are developed and used. A growing body of work instead emphasises the need to situate AI within its wider political and economic context.

Taking this limitation seriously requires shifting attention outward, towards the institutional and material environments in which these technologies are developed and deployed. From this angle, emphasis shifts away from evaluating isolated systems and towards understanding how structural conditions shape what remains possible.

For instance, how do different forms of financing influence organisational priorities as projects grow, and under what circumstances do they enable or constrain non-extractive orientations? What kinds of governance arrangements can withstand external pressures – such as financialisation or corporate capture – without drifting from their original aims? How does reliance on large-scale platform providers inform both technical and economic decision-making, and with what implications for autonomy and dependency? And in everyday practice, how are decisions made about when to rely on automated systems and when to override them, and whose knowledge and authority govern those decisions?

These questions point towards a broader concern: even where organisations adopt alternative ownership or governance structures, their ability to remain non-extractive may depend on the legal, financial, and infrastructural arrangements in which they operate, including pressures related to financialisation, dependency, and corporate capture.

Research in Progress: Following the Tensions

Rather than starting from a fixed framework or set of prescriptions, this project takes an empirical approach, grounded in a series of interviews with people working across a range of domains, including cooperative and community-led technology initiatives, public digital infrastructure, legal and governance design, alternative funding schemes, and technical work on decentralised or federated systems.

The aim is not to document individual organisations in isolation, but to identify recurring patterns – especially where tensions, trade-offs, or structural constraints emerge – and to consider how these might be navigated collectively. This involves examining whether such initiatives can move beyond isolated organisational successes to form interconnected modes of production, and whether they can operate prefiguratively, demonstrating what non-extractive alternatives might look like through their everyday practices.

In some cases, these tensions relate to external pressures, such as financial dependencies, infrastructural limitations, or requirements imposed by funders, regulators, or platform providers. In others, they concern how organisations interpret and act upon the outputs of the technologies they build and deploy, including how responsibility is allocated when automated processes are used in decision-making.

What is becoming increasingly apparent is that these dynamics rarely operate in isolation. Changes in one area can have cascading effects elsewhere, shaping both organisational behaviour and technological outcomes in ways that are not always immediately visible – for example, where shifts in funding models lead to changes in governance practices or technical design choices.

At this stage, these observations remain provisional. The goal is not to advance a settled framework, but to better understand the conditions under which alternative configurations can remain non-extractive, coordinate across organisations, and endure over time – and where they become more fragile than they initially appear, particularly under pressures to scale, secure funding, or integrate with existing infrastructure.

An Open Invitation

This project is very much a work in progress, and it depends on engagement with a wide range of perspectives. Input from those working in practice, policy, and research is essential to understanding how these dynamics play out in real-world settings.

I am interested in hearing from individuals with experience in areas such as cooperative and community-led technology initiatives, public digital infrastructure, governance and institutional design, alternative funding schemes, and technical work on decentralised systems. Perspectives that highlight constraints, trade-offs, or unintended consequences – especially cases where initial goals have been gradually reshaped – are particularly valuable.

This work does not seek to produce a fixed framework, but to develop a more nuanced understanding of the conditions and constraints under which alternative approaches can emerge, remain non-extractive, and endure – and where they may encounter structural limits, including points at which compromise or adaptation becomes unavoidable.

Beyond Academia: What This Might Mean in Practice

While this research engages with ongoing debates in political economy and governance, it is also motivated by a set of practical questions that extend beyond academic contexts.

For policymakers, it points to a need to think not only about regulating AI, but also about how digital infrastructure itself is designed and sustained. This includes questions around forms of financing, legal frameworks, and governance arrangements, and how these shape accountability and long-term institutional trajectories, including who retains control over data, compute, and decision-making authority.

For cooperatives and community organisations, it suggests that ownership, while important, may be only one part of a broader picture. Issues such as capital structure, decision-making processes, and everyday institutional practices may also determine how organisations respond to external pressures and whether they can sustain non-extractive orientations in the longer term, not least under circumstances of resource scarcity or growth pressures.

For technologists, it raises the question of how AI systems reflect the institutional settings in which they are developed and used. Design choices do not operate in isolation; they interact with organisational structures and influence how technologies are interpreted, trusted, and acted upon in practice, particularly in how trade-offs between efficiency, accountability, and human oversight are negotiated.

For the broader public, it opens up a more general conversation about what AI is for, and how its purposes are defined. Rather than assuming a single trajectory centred on optimisation and efficiency, it may be worth asking what alternative forms of coordination, care, or collective decision-making might look like – and what institutional conditions would be required to support them, such as forms of participation, oversight, and shared ownership.

Towards a Different Framing of AI

Rather than treating AI as an autonomous force, this project asks how far changing ownership and governance structures can alter the trajectories it currently follows. AI systems are deeply embedded in social, economic, and political contexts, reflecting the incentives, constraints, and power relations that organise their development and deployment.

This suggests that ownership alone may be insufficient as a point of intervention. It may be that the future of AI is shaped just as much by the broader configuration of infrastructure, governance, and political economy within which these systems operate. If underlying infrastructures, dependencies, and incentive structures remain intact, shifts in ownership or governance may have only limited effects. On this view, both what gets built and what endures depend not only on design or ownership, but on how these wider arrangements are structured and whose interests they continue to serve.

Understanding these relationships does not provide easy answers. It does, however, reframe the problem: alternative technological futures depend less on isolated design choices and more on the conditions that allow them to persist. These conditions shape not only the viability of different organisational forms, but also their capacity to resist or diverge from dominant market logics, especially under pressures to scale, standardise, or commercialise.

The challenge, then, is not simply to imagine alternatives, but to identify where they can realistically endure – and where they are likely to be reshaped by the very dynamics they seek to contest.

About the author:
Malu Villela is a 2026/2027 ICDE Fellow and Lecturer in Management at Essex Business School, University of Essex.