There can be no just transition without challenging the AI apparatus that is accelerating the social and environmental crisis, writes Dan McQuillan
Let’s cut to the chase: AI is an anti-worker and anti-community technology. A few moments outside the miasma of AI hype are enough to reveal this; just look at the way generative AI is heralded as the replacement for 40% of jobs, or the way machine learning is applied to welfare with the presumption that the poor are always trying to cheat the system. It’s not that these are nasty applications of an otherwise productive technology. AI can do some clever tricks, but at heart it’s just a form of statistical pattern finding. ChatGPT extrudes text based on probabilistic similarity to its training data, but there’s no causality or common sense, and all of AI’s other classifications and predictions work the same way. AI’s product is fake text and fake knowledge, even when it seems to get it right.
Outside of true believers, the agendas marshalling behind AI are mixtures of profit-taking and politics. When the social contract threatens to splinter under decades of underinvestment, any talk of restructuring can be diverted by promising to “harness the incredible potential of AI to transform our hospitals and schools”.1 The real effect of such shoddy solutionism is a net transfer of assets and control. Actual AI isn’t a sci-fi future but the precaritisation of jobs, the continued privatisation of everything and the erasure of actual social relations. AI is Thatcherism in computational form.
Extractivism
One reason that AI power is so concentrated is its dependency on eye-wateringly vast amounts of computation and data. Grinding out models that generate plausible output means setting a huge number of parameters (GPT-3, the model behind the original ChatGPT, has 175 billion) and computing power of the order of 10^26 calculations (that’s 1 followed by 26 zeroes!). The corresponding volume of training data that’s required explains the corporate ambition to ingest the entire internet and every other dataset on Earth. These computations aren’t an abstract activity but occur on racks and racks of servers arrayed in giant data centres. All the hyperbolic fantasies about AI’s abilities are grounded in very material and non-negotiable needs: megawatts of energy to power the servers, millions of gallons of water to cool them, and the mineral resources needed to make the chips themselves. It’s not for nothing that the market capitalisation of Nvidia, the main manufacturer of AI processors, recently outstripped that of Amazon and Tesla combined.

At the back end, the forces powering AI aren’t futuristic but all too familiar: extractivism, the siphoning of raw materials from the Global South, and an uncomfortably tight entanglement with the fossil fuel industry. Where Big Tech’s claims to be green refer to real renewables rather than dodgy carbon offsets, it’s because they’re diverting the available renewable energy away from communities. Even capturing big chunks of the electricity grid isn’t enough, and the talk now is of nuclear reactors and nuclear fusion. Meanwhile, dwindling water sources are depleted further and the demand for more chips depends on conflict minerals from the Democratic Republic of Congo.
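To get a feel for the scale involved, a rough sketch can use the widely cited approximation that training a dense transformer costs about 6 FLOPs per parameter per training token (a scaling heuristic from the machine learning literature, not a figure from this article; the 300 billion token count for GPT-3 is from its published description):

```python
# Back-of-the-envelope training compute, using the common approximation
# FLOPs ≈ 6 × parameters × training tokens.

def training_flops(parameters: float, tokens: float) -> float:
    """Approximate floating-point operations to train a dense transformer."""
    return 6 * parameters * tokens

# GPT-3: 175 billion parameters, ~300 billion training tokens
gpt3 = training_flops(175e9, 300e9)
print(f"GPT-3: roughly {gpt3:.2e} FLOPs")  # on the order of 10^23

# The 10^26 scale mentioned above is hundreds of GPT-3-sized training runs
print(f"10^26 FLOPs is about {1e26 / gpt3:.0f}x GPT-3's training compute")
```

Even this crude arithmetic makes the point: each jump in model scale multiplies the demand for the servers, energy and water described above.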
Workers’ and People’s Councils
Despite AI’s foundational shortcomings and toxic effects, it has the full-throated backing of institutions from the EU to the Tony Blair Institute. Even many labour organisations seem unwilling to call it out, for fear of seeming Luddite in the face of an ‘inevitable’ technology. Any real challenge to AI’s intensification of existing social injustice isn’t going to come from these entities but from the workers and communities who are most affected. That’s why, in my book Resisting AI, I called for the formation of workers’ and people’s councils to act as a brake on the harmfulness of unrestrained technosolutionism.
One of the primary appeals of AI for both states and corporations is its claim to generalisability; by abstracting away all the specific context, the underlying mechanics of neural network pattern finding can apparently be applied to any desired prediction or emulation. For corporates this is a product for every market, and for institutions, a magic wand for any wicked problem that evades structural change or engagement with the views of the marginalised.
The purpose of workers’ and people’s councils is to constitute a form of mutual and relational accountability that not only refuses to be sidelined but asserts the right to have a view about alternative and more convivial futures. These councils are ways of generating the difficult questions about AI. In that sense, they start from the specifics of lived experience and from the practice-based and tacit understandings of how things really get done, of all the negotiation, adaptation and cooperation that make everything from manufacturing to invisibilised care work actually happen. This feminist ethics of care is a position from which to call out the harms of fake automation and algorithmic optimisation, and their careless and often cruel consequences. But this is also the basis for bigger questions about large-scale technical and social transformation. An urgent task for these councils is to ask what constitutes a just transition for ordinary people when existing social and environmental crises are accelerated by AI.
Just Transition
The idea of a just transition is highly applicable to AI. The origins of the term lie with workers’ movements in the USA in the 1970s and 1980s who were faced with the large-scale automation of their jobs. Now, of course, it’s expanded to mean the idea of justice for workers and communities under the necessary transformation to decarbonised economies. AI entangles these social and environmental consequences, and does so in ways that refocus our attention on aspects of our system that never went away; in particular, on colonial relations of power and the unquestioned fetishisation of ‘growth’ (GDP) at all costs. Silicon Valley and AI are all about scale and a commitment to unrestricted growth. Along with the size of the models, the use of energy and resources and the global theft of data, the sheer scale of data labelling and sanitisation needed to train AI is only economically feasible because the labour is outsourced to the Global South. AI is a form of computation but it is also an apparatus, in the sense of a configuration of concepts, investments, policies, institutions and subjectivities that act in concert to produce a certain kind of end result.
With its absolute compulsion to expand and enclose, the apparatus of AI embeds the logics of empire. Questioning these logics, as workers’ and people’s councils will need to do, leads to wider questions about decolonial and degrowth perspectives. Degrowth doesn’t mean stunting economic activity but radically changing it to reflect the primacy of social and environmental values and an end to the exploitation of people and planet. The various means of achieving this, from localised production to an ethics of socio-ecological care, align well with forms of direct democracy in the workplace and the community. This kind of restructuring is also decolonial in that it delegitimises ongoing expansion and extractivism, and recognises the need to replace racialised divisions with international solidarity.
Prefigurative Change
No serious attempt to challenge AI can pretend it’s going to be easy, precisely because AI is so deeply rooted in the underlying matrix of our political economy. AI isn’t simply a problematic technology but an apparatus that is shaped by the injustices of our existing social relations and which, in turn, reshapes and intensifies them. It’s a form of machinery forged by the mineral and energy flows that powered industrialism and imperialism, and which returns us to the same geopolitical scramble for raw materials.
This feedback and entanglement might seem to complicate the possibilities for progressive action. It’s certainly hard to conceive of a top-down model of change that doesn’t very quickly get bogged down in military-industrial interdependencies. It could be said that the situation isn’t even one of interactions but intra-actions; of articulations of the social, the environmental and the technical that only exist in relation to each other, that emerge and co-constitute each other. However, there is a way to stay aligned with values and purposes while acknowledging that the exact path to them is so complex as to be unknowable, and that is to adopt a prefigurative approach.
Prefigurative social movements embody within their ongoing praxis the forms of social relations, decision-making, culture and human experience that are the ultimate goal. From the perspective of workers’ and people’s councils, this suggests adopting an approach to technologies that aligns as closely as possible to environmental sustainability and social equity at every step. In practice, of course, this will be an imperfect and iterative process, but it’s a starting point for challenging the harms of AI while at the same time “forming the structure of the new society within the shell of the old”.2
Insorgiamo
Prefigurative resistance to AI can find inspiration in other attempts to rein in harmful tech through actions that are social and environmental at the same time. In the 1970s, the shop stewards’ combine at Lucas Aerospace drew up an entire plan to transition from the arms trade to the production of kidney dialysis machines, wind turbines and hybrid vehicles. In the present day, the workers at the GKN factory in Florence, Italy occupied their axle-manufacturing workplace when a hedge fund wanted to shut it down, and have crowdfunded a shift to making cargo bikes and photovoltaic panels under the partisan slogan ‘Insorgiamo!’ (‘We rise up!’).
The ultimate test of acceptability for any technology, as it was in the time of the Luddites, is whether it is hurtful to the common good. When communities and people’s councils challenge the degradation of healthcare, education and welfare through the forced assimilation of AI, they can invoke principles of what I call ‘decomputing’; the refusal of dependency on large-scale infrastructures, the elevation of relational, embodied and democratic decision-making over reductive automation, and the rejection of a world of data assets in favour of the commons and commonality.
AI, which seems like the apotheosis of modernity and the final achievement of science fiction dreams, may in fact be one of the markers of the end of neoliberalism. Such is its exponential demand for scale that generative AI, for example, is already starting to consume itself; an uncanny metaphor for global industrialism as a whole. We are well able to imagine forms of technology that support more convivial and adaptive ways of life, and which draw on Indigenous understandings of our inseparability from, and mutual dependency on, all other aspects of our planetary lifeworld. Indigenous communities, too, are facing the renewed colonialism of data centres and extractivism. Resistance to AI is a point of convergence for decolonial, feminist, labour and climate movements and an affirmation that we still have worlds to win.
Dan McQuillan has worked with people with learning disabilities and mental health issues, created websites with asylum seekers, run social tech camps in Kyrgyzstan and Sarajevo, and worked for Amnesty International and the NHS. He has a PhD in Experimental Particle Physics, and recently authored Resisting AI: An Anti-fascist Approach to Artificial Intelligence.