Reflections on the Cooperative AI Conference

I am just returning from our conference on Cooperative AI in Istanbul. Part of what I love about conferences—what keeps me doing this, convening these global PCC events—is meeting people who are both smart and resilient. And I love the little nuggets I take from them: ideas, concepts, provocations, fragments of projects that I find myself chewing on.

What came through again and again across the four days was a shared understanding: critique is essential, but critique alone does not push us further. It must be paired with experiments, infrastructures, solidarities, and the courage to build. This year, we also had one unexpected group of participants: the cats of Istanbul, drifting in and out of sessions as if checking on our progress.

Meanwhile, a larger question hovered in the background of the entire gathering: What excites people? What keeps them engaged in this movement? What follows is a look at how people in very different settings—from Hollywood writers' rooms to small Latin American cities—are redefining what “AI from below” can mean.

This year, that question was shaped by the text that framed our gathering—Cooperative AI?, the essay Stefano Tortorici and I published just weeks before we all arrived in Florya. In it, we argue that AI today is not neutral infrastructure but a magnifier of capitalism’s deepest fractures. It accelerates labor exploitation, sharpens inequalities, and militarizes everyday life. And while the policy conversation keeps circling around ethics, transparency, and governance frameworks, the real issue often slips out of view: Who owns the infrastructure of AI? Who benefits? Who is harmed? Who decides?

That is why the cooperative movement matters here—not because co-ops are perfect, but because they are one of the few global traditions with living experience in democratic governance, mutual aid, and resisting enclosure. Yet our text is honest about the tensions: co-ops sometimes scale by becoming more capitalist than cooperative; they reproduce inequalities they seek to undo; even Mondragon is both inspiration and cautionary tale. As we wrote, the movement sits in a long dialectic “between prefiguration and constraint,” capable of shielding communities while also vulnerable to co-optation.

Still, cooperatives remain what Erik Olin Wright once called “laboratories of prefiguration”—real-world experiments where different technological and social relations can be enacted long before they become mainstream.

In Istanbul, we picked up that thread: Cooperative AI is not AI used by cooperatives. It is AI that structurally strengthens cooperation. It is AI whose data, compute, governance, and purpose are co-owned; AI that supports collective intelligence rather than extraction; AI that is built with, by, and for communities. That vision—the refusal of enclosure, the collective reclaiming of technological agency—hung in the air throughout the conference. People weren’t just imagining alternatives; they were building them.

We saw this in projects like ROOL LLM, developed by the Hypha Worker Co-operative in Toronto as an open, configurable language model integrated directly into their cooperative workflows and governed collectively rather than by any single company, and in Austin Robey’s new post-Ampled project, Subvert—a musician- and label-owned platform cooperative designed as a democratic alternative to Bandcamp.

The conference started with the documentary In the Belly of AI (directed by Henri Poulain and co-written with Antonio Casilli), which exposes the hidden human and ecological costs that make today’s AI systems appear effortless—data labelers, ghost workers, water-hungry data centers, and extractive infrastructures stretched across the Global South. It reminded many of us in Istanbul that we are not standing outside AI critiquing it; AI is built on human labor.

Melissa Terras used her talk to push back against the idea that “AI is evil,” arguing instead that the real problem lies in the business models behind mainstream AI systems. She emphasized that AI itself is “just lines of code,” and that cooperatives offer a concrete mechanism for building responsible AI because their governance structures make extractive practices impossible: members simply would not allow environmentally harmful processing or exploitative data use. Terras showed how this plays out inside READ-COOP, where librarians, archivists, historians, and technologists co-develop tools, maintain unusually high rates of member participation, and guide features through deep community engagement. She argued that the seven cooperative principles map directly onto what responsible AI frameworks claim to want—justice, autonomy, beneficence, explicability—but rarely know how to implement. Terras also warned that Big Tech increasingly co-opts the language of cooperation without practicing it, while true AI cooperatives face structural challenges: no option to sell out, no angel investors to bail them out, and reliance on volunteer labor until sustainability is reached. Still, she insisted that cooperative governance provides one of the only viable pathways to building AI tools that serve communities rather than extracting from them.

In our Harvard Business Review article, we outlined five concrete interventions cooperatives can make in the AI landscape.

First, they can democratize data governance, as seen in health-data co-ops like MIDATA or sectoral models like Pescadata, which put communities—not corporations—in control of data access and use.

Second, co-ops can bridge research, civil society, and policy, translating elite AI debates into community spaces through institutions such as ICDE, Aapti Institute, or Code for Africa.

Third, cooperatives can advance AI education, addressing not just technical literacy but the power asymmetries that shape how AI is deployed and who benefits.

Fourth, they can build alternative ownership models, creating shared data layers, cooperative cloud infrastructure, and democratic governance structures that challenge the dominance of AI oligopolies.

Finally, co-ops can adapt AI for cooperative ends, applying the seven cooperative principles to ensure that AI systems serve collective well-being, reinvest surplus into communities, and embed solidarity—an ethical value largely absent from mainstream AI frameworks. These five interventions show how cooperatives can shape AI development in material, actionable ways.

Ana Margarida Esteves set the tone by arguing that AI cannot be separated from the ecologies, relations, and worldviews that shape it. She called for dynamic relational ontologies—rejecting the fixed, hierarchical assumptions of mainstream AI and insisting that technology must be understood as part of a living web of relationships. AI reflects the beliefs of its builders, she noted; if those builders imagine the world as hierarchical, their systems will reproduce those hierarchies. To show this, she drew on Indigenous cosmologies, feminist materialisms, pluriversal ontologies, and thinkers like Fred Turner and Guy Debord to illustrate how cultural ideas imprint themselves onto technological systems. For Esteves, cooperative AI must therefore begin from a fundamentally different worldview—one rooted in connection, care, and community, informed by Indigenous wisdom, feminist practices, and community-led understandings of interdependence—so that the technologies we build help people grow together rather than drift apart.

Marcelo Vieta advanced the idea of AI as a labour commons by drawing on the traditions of autogestión, worker-recuperated enterprises, and critical theories of technology. Based on decades of research in Argentina, Italy, and Canada, he described autogestión as a lived practice in which workers “collectively create, control, and sustain their productive and creative life.” From this foundation, he introduced the labour commons—developed with Fari Razzolini—which reframes labor as a shared resource whose value is collectively governed and reinvested in community life. For Vieta, cooperative AI can follow the same logic: data, algorithms, and digital infrastructure governed democratically, rather than as sites of extraction.

He linked these traditions to the philosophy of technology, revisiting Greek techne as craft grounded in proportion and care, and contrasting it with Herbert Marcuse’s “technological rationality,” which reduces nature and labor to material for optimization. Drawing on Marcuse, Andrew Feenberg, and Ursula Franklin’s distinction between holistic and prescriptive technologies, he argued that cooperative AI can become redemptive technology when rooted in democratic participation. Using Marshall McLuhan’s tetrad, he showed how AI currently intensifies labor exploitation and risks reversing into dehumanization if left unchecked. His conclusion was pragmatic: worker cooperatives already using algorithmic dispatch—like The Drivers Cooperative—demonstrate that treating data and algorithms as common-pool resources is not theoretical but already underway.

Kaya Genç offered one of the clearest and bravest portraits of how AI is being mobilized within Turkey’s political landscape. In his talk on Solidarity, Equity, and Cooperativism in Turkey in the Age of AI, he showed how what is often presented as “national AI”—a supposedly moral and patriotic project—is in fact an extension of long-standing hierarchies. Rather than functioning as a neutral technological ambition, he argued, it reproduces the exclusions already embedded in the social fabric, sidelining women, Kurds, Alevis, queer and trans communities, and refugees.

In the session on the Poetry of Solidarity, Genç opened with poems like Nâzım Hikmet Ran’s “I Want to Be Mechanized” (1923) and “If You Ask Me” (“Beni Sorarsan,” 1977). It was moving to see how nearly a dozen participants spontaneously pulled up poems of their own to share.

In one of the workshops, led by Morshed Mannan and Tara Merk, participants wrestled with what a community- or cooperative-owned data center might actually look like. Remarkably, every group independently envisioned a distributed architecture: small, household- or building-level nodes that pool storage and compute into a resilient, federated network rather than a single, monolithic facility. Some imagined solar-powered micro-nodes; others sketched multi-story cooperative data hubs with living roofs and public-interest mandates. People debated everything from naming and governance to financing models, environmental constraints, and the hard question of “who counts as the community.” What emerged was less a single blueprint than a shared impulse to build infrastructure that is democratically governed, ecologically grounded, and collective by design. The excitement came from watching participants realize that these systems are not utopian abstractions, but designable, buildable, and already in existence.
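For readers who like to make such ideas concrete, here is a minimal sketch in Python of the pattern every group converged on: many small nodes pooling storage and compute into one federated whole. It is purely illustrative, not a design from the workshop; every name in it (Node, Federation, the capacity fields) is hypothetical, and a real system would need networking, replication, financing, and governance logic far beyond this.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """One household- or building-level micro-node in the federation (hypothetical)."""
    name: str
    storage_gb: float          # storage this member pools into the network
    compute_tflops: float      # compute this member pools into the network
    solar_powered: bool = False

@dataclass
class Federation:
    """A cooperative 'data center' modeled as a pool of small nodes, not one facility."""
    nodes: list[Node] = field(default_factory=list)

    def join(self, node: Node) -> None:
        self.nodes.append(node)

    def pooled_storage_gb(self) -> float:
        return sum(n.storage_gb for n in self.nodes)

    def pooled_compute_tflops(self) -> float:
        return sum(n.compute_tflops for n in self.nodes)

    def survives_loss_of_any_node(self) -> bool:
        # Crude stand-in for resilience: with three or more nodes,
        # no single node is a single point of failure.
        return len(self.nodes) >= 3

# Three modest solar-powered nodes instead of one monolithic facility.
coop = Federation()
for name in ("apartment-block", "library", "community-hall"):
    coop.join(Node(name, storage_gb=2000, compute_tflops=1.5, solar_powered=True))

print(coop.pooled_storage_gb())          # 6000 (GB pooled across members)
print(coop.pooled_compute_tflops())      # 4.5 (TFLOPS pooled across members)
print(coop.survives_loss_of_any_node())  # True
```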

Payal Arora focused on the steady optimism she encounters among young people in the Global South. She noted that this outlook does not come from her own position—“it’s very easy for me… I’m privileged, I have a good job”—but from the refugees, women workers, and youth she works with, who are often more optimistic than she is. This optimism appears in settings shaped by constraint, whether political, economic, or social, and it emerges from how people use digital tools to navigate these limits.

Rafael Grohmann showed how worker-led AI governance—through unions, co-ops, and collectives—emerges when workers mobilize discursive, economic, and institutional power to shape or resist the use of AI in their sectors. Drawing on research with screenwriters, voice actors, and tech cooperatives, he traced the tensions between cultures of AI experimentation and cultures of refusal, as well as the possibilities and limits of intersectionality as a source of worker power. He also highlighted a parallel phenomenon: women AI consultants in small Latin American cities acting as intermediaries and mediators in local AI economies, revealing new forms of grassroots engagement with technology from the margins.

Giuseppe Guerini, president of Cooperatives Europe, addressed whether digital platforms and AI risk making cooperatives obsolete by outlining four lenses: how co-ops use AI, why they use it, who controls the data and technology, and what rules govern it. He noted that cooperatives already deploy AI across sectors—logistics, manufacturing, energy, healthcare, education—much like other enterprises, but differ in how people participate and how benefits are shared: profits are reinvested, reserves strengthened, and work and services improved. For Guerini, cooperatives adopt AI not simply for efficiency but to ensure digitalization reinforces democracy, participation, and justice—what he calls “responsible innovation,” in contrast to Big Tech’s extractive model. By developing digital mutuality, cooperatives can help humanize digital transformation and advance economic democracy in the digital age.

In my own talk in Istanbul, I described the Solidarity Stack as a cooperative architecture that stretches from the earth to the cloud, because with AI we have to think and innovate all along the supply chains. I argued that no single fix—certainly not legislation alone—can solve the problems workers face; the stack has to be built bottom-up, through coordination among cooperatives, social movements, unions, municipalities, and community groups. I emphasized that many in the room were already working on single points of that stack—energy, data centers, civic tech, labor organizing, open models—but that these dots only matter if we link them. And I made clear that this is not an alternative to industrial-scale LLMs; it is work done in the cracks of that system, where cooperatives can still claim agency and build counter-infrastructure. Looking at how every layer of today’s AI systems is controlled by a handful of firms, I proposed the Solidarity Stack as a way to build community-owned energy and minerals, cooperative data centers instead of hyperscale clouds, organized and fairly paid data labor, open federated models, and shared governance across civil society. These pieces already exist—in Ashton’s cooperative data center, in Transkribus, the platform of the AI cooperative READ-COOP, in NeedsMap’s civic digital infrastructure here in Turkey—but cooperatives need to act now. As a tangible next step, I urged everyone to form small Solidarity Stack Circles—three or four people committing to take concrete action together after the conference—to begin stitching these fragments into a living, cooperative AI ecosystem.

The ICDE Fellows brought a distinctly grounded and analytical energy to the conference, offering a snapshot of what the fellowship has become: a global laboratory where doctoral researchers remake cooperative studies for the algorithmic age. Stefano Tortorici articulated the political tension that sits beneath all this work: “Cooperatives lack a democratic political voice. There is not a serious political debate on their role. ICA fails to represent co-ops politically. Many exploit workers and have lost their alternative drive.” The other contributions were rooted in real contexts: Adolfo Acosta spoke on platform cooperatives and Mexico’s strained health-care system; Jeongone (Joh) Seo on municipally run ride-hailing in South Korea; Anne-Pauline De Cler on decentralized food-system governance in France; Akkanut Wantanasombut on Thailand’s platform co-op Tamsang-Tamsong; Kenzo Seto on the limits of Brazilian federal legislation with regard to platform co-ops; Vangelis Papadimitropoulos on the philosophical politics of AI; and Dorleta Urrutia-Oñate on using AI capabilities to fund cooperative development. Ganesh Gopal brought the room back to the lived realities of Kerala’s digital economy—from drivers waiting in Kochi’s choking traffic and youth builders experimenting in Tinkerhub to the fragile promise of Kerala Savaari and the 7,500-member Cable TV Operators Association, already operating as a de facto distributed communications cooperative. What united them was a shared method: theory braided with practice, research co-produced with workers, and a refusal to treat cooperatives as nostalgia. Together, they showed why the ICDE fellowship is one of the movement’s strongest engines, producing scholars who are also builders, organizers, and critics, and reminding us that cooperative digital futures are not abstractions but infrastructures being built and defended every day.

Nicholas Bequelin’s session, China’s Alternative Vision of AI, offered a grounded, often surprising corrective to the stereotypes that dominate Western discussions of China. Rather than presenting a monolithic or ideologically coherent “model,” he walked the audience through concrete differences in mindset, language, policy, and political structure. Starting with the Chinese characters for artificial intelligence (“person” + “work”) and computer (“electricity” + “brain”), he illustrated how Chinese terminology foregrounds materiality and labor in ways English obscures. He emphasized that China is not pursuing AGI fantasies or Mars-colonization mythology; its orientation is relentlessly practical: apply AI to manufacturing, economic problems, and modernization, with a long-standing belief that technology is the engine of national development. Crucially, he argued that decisions on AI in China are taken at the very top of the political system—by a leadership in which two-thirds hold engineering degrees—producing a state that openly claims responsibility for steering technology, unlike the U.S. discourse that casts AI as an unstoppable force beyond political control. He also detailed how China’s push toward open-weight models like DeepSeek has pressured global AI firms to follow suit, while geopolitical rivalry with the U.S. shapes chip embargoes, rare-earth retaliation, and a narrative in Washington that “we cannot regulate AI because China won’t.” What animated the room was his insistence on nuance: China’s rapid AI advances, its practical focus, and its authoritarian constraints coexist with human aspirations for dignity and solidarity—even if organizing independently is extraordinarily difficult and risky.

If you want to see the conference through everyone’s eyes, we’ve gathered a small gallery of impressions—moments people captured in between sessions. You can browse it here.

My deepest thanks to the entire NeedsMap team for their extraordinary support under very difficult conditions.