Voices from Istanbul

Across four days in Istanbul, participants from more than thirty countries surfaced an urgent, shared concern: AI is being shaped by very few actors, with profound consequences for democracy, labor, culture, and the planet. But far from a lament, the gathering offered counter-narratives rooted in cooperative governance, diversity, and solidarity as the foundations of an alternative AI ecosystem.

Power

Many speakers warned that today’s AI landscape is defined by extreme centralization. Amanda Claro noted that AI is being shaped primarily by cisgender white men in the Global North who own big tech. “AI is turning towards their world vision, not ours.” Kaya Genç described private tech moguls as “scary people” whose models “go with the flow,” even when the flow turns fascist.

Scholars like Maurilio Pirone, Nicholas Bequelin, Antonio Casilli, and Payal Arora echoed this: power is locked up by a few tech companies, backed by the world’s most powerful states, whether China, the US, other Western countries, the UAE, or Saudi Arabia. The training data itself reproduces colonial, racial, and class hierarchies. As Veronyka Gimenez put it, “we are facing digital colonialism.” Bequelin argued that “The main problem with the direction of AI today is that the power to shape that direction is within the hands of very few people, and these very few people also have the backing of the most powerful states.”

Morshed Mannan cautioned that “What concerns [him] most about artificial intelligence at this current stage is that it is hard to get away from. We are embedded in systems in which AI is everywhere. … The usual legal tools … have become all the more important, but as we know from the earlier developments with digital technologies, such as ways in which we try to break up big tech companies or whether we try to regulate them, all of these proposals that have been made have become harder and harder to do because they are so pervasive.”

Yet participants also pointed to a widening constellation of alternatives. Vincent Line De Clercq argued that cooperatives can bring “multiplicity of voices, … from infrastructure or the application layer, from data trust to training, renewable energies, co-ops have shown that they can represent a variety of voices and show a diversity in applications.” Arora stressed that with ninety percent of the world’s young people living in the Global South, the cultural center of the internet is already shifting.

Harms

The harms participants raised were diverse but overlapping. Stuart Fulton emphasized the ecological costs of hyperscale AI, pointing to water and energy use in data center expansion. Melissa Terras warned of the cultural damage done when generative AI becomes synonymous with AI itself.

She commented: “For a lot of people, they believe that generative AI is the only AI. Generative AI is so problematic and is so extractive and is so bad for culture and creativity, and for humankind, it’s so bad for the economy, that what we’re seeing is so many people become anti-AI because they’re anti-generative AI. But there’s so much good that AI can be doing. Machine learning in medicine, machine learning in the area I work in, libraries, archives, and museums to help tidy up and promote and give better access to the past and people’s histories.”

For Ana Margarida Esteves, the deeper issue is how AI hides the violence embedded in the commodification of life—making extractive relations seem “sanitized” and acceptable. Many, including Kenzo Soares and Casilli, highlighted the invisible labor powering AI, especially in the Global South, where millions perform data labor for ten hours a day “for one dollar.”

Alternatives

Despite these concerns, the conference was alive with concrete alternatives. Participants repeatedly stressed that cooperatives and solidarity movements must intervene at every layer of the AI stack.

Trebor Scholz described the Solidarity Stack, a framework that connects existing “dots”—from cooperative mining to tech co-ops (from the earth to the cloud), platform co-ops, community-run data centers, and cooperative research institutions—into a viable ecosystem. “Not to take down OpenAI,” he noted, “but to build feasible alternatives.” Scholz invited all at the conference to become part of that stack.

Speakers from Mondragon in Spain, Germany, Thailand, and Brazil shared examples of community-led data projects, federated data collaboratives, worker-owned tech development, and new cooperative models for data governance. Dorleta Urrutia of Mondragon University noted that the core challenge is “cultural, not just technological,” emphasizing the need to ensure AI “supports people instead of distancing them from decisions.”

Others, like Akkanut Wantanasombut, Ela Kagel, and Andi Argast, stressed that democratic governance must be built from the ground up. Kagel captured the energy of the moment: communities are already “building their own language models,” developing small data centers, and creating local agreements for data governance.

Even while acknowledging these alternatives, Mannan warned “that … there are areas in which we shouldn’t be using AI at all.”

Alliances

A recurring theme was that cooperatives alone cannot shift the direction of AI. Stefano Tortorici argued that to reclaim AI, cooperatives must “forge alliances with social movements, unions, and the broader public,” and take explicit political positions on crises. He said: “AI raises the stake to a point where cooperatives can no longer wait to take a radical political stance on the multiple crises of our times, from environmental disasters to wars, to all kinds of oppressions along the lines of gender, class, race, colonialism, and beyond. We really think that to reclaim AI, cooperatives must really build lasting solidarity across movements.”

Rafael Grohmann underscored that “worker-led AI governance” — through unions, co-ops, and community-based movements — is key to building Indigenous, feminist, and Global South–led AI systems. Michelle Nie emphasized the role of civil society in governing cloud infrastructure as a public utility.

Nicholas Bequelin put it succinctly: “The key to positive change is A, to set a horizon, and B, to mobilize political energies towards that horizon.”

Future

Across the conversations, a shared vision emerged: AI must be reclaimed as a public, democratic, and cooperative technology. That means governing data collectively, embedding ethics and care into technical design, building federated infrastructures, and nurturing a culture of cooperation from local communities to transnational federations.

Unal Ornekr reminded participants that the cooperative tradition is rooted in “human and social service,” and remains one of the few global movements capable of generating large-scale, people-centered alternatives.

The conference left participants both sober and inspired. As Uygar Ozesmi asked, “Is this the world we want?” If not, a different kind of AI must be built: one rooted in solidarity and democratic governance, and steered by collective imagination.