The environmental impact of AI is only half the story
The energy it uses is staggering, but the real threat is what it’s being used for: silencing truth, protecting power, and manipulating the narrative
Yes, the numbers are frightening. Running large language models like ChatGPT requires enormous amounts of electricity, and with it, water to cool the data centres. Each prompt may seem small, but multiplied across millions of users, the resource use adds up fast. Globally, data centres now consume more energy than some entire countries. Some projections suggest AI-driven demand could double the sector's energy use in just a few years. The emissions tied to this growth are comparable to aviation's, but they're rising much faster. Some analysts are already warning of an impending energy crisis as AI workloads expand.
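To make that scaling point concrete, here is a minimal back-of-envelope sketch in Python. The per-prompt energy and water figures and the daily query volume are illustrative assumptions, not measured values; the point is only how quickly small per-query costs compound.

    # Back-of-envelope scaling of per-prompt resource use.
    # All figures below are illustrative assumptions, not measurements.
    ENERGY_PER_PROMPT_WH = 3.0      # assumed watt-hours per query
    WATER_PER_PROMPT_ML = 20.0      # assumed millilitres of cooling water per query
    PROMPTS_PER_DAY = 500_000_000   # assumed global daily query volume

    daily_energy_mwh = ENERGY_PER_PROMPT_WH * PROMPTS_PER_DAY / 1e6   # Wh -> MWh
    annual_energy_twh = daily_energy_mwh * 365 / 1e6                  # MWh -> TWh
    daily_water_litres = WATER_PER_PROMPT_ML * PROMPTS_PER_DAY / 1e3  # mL -> L

    print(f"{daily_energy_mwh:,.0f} MWh per day, ~{annual_energy_twh:.2f} TWh per year")
    print(f"{daily_water_litres:,.0f} litres of water per day")

Even under these deliberately modest assumed numbers, prompts alone reach roughly half a terawatt-hour a year and ten million litres of water a day, before counting training runs or idle infrastructure.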
But while we debate water use and emissions, something even more dangerous is unfolding. AI is not just heating the planet. It is warping how we understand it. Artificial intelligence is being used to spread disinformation, manipulate public opinion, protect entrenched industries, and undermine the movements fighting for climate justice. And it is doing all of this with a speed, scale, and subtlety that makes resistance harder and truth more fragile.
Over the past year, we’ve seen a surge in AI-generated content muddying the waters of public debate. Articles with fake authors, posts that mimic the voice of experts, opinion pieces that downplay ecological collapse - all created not by people, but by algorithms. At the same time, comment sections and public consultations are increasingly filled with scripted responses, some of them clearly machine-written. In a world already overwhelmed by information, the addition of synthetic noise makes it harder than ever for meaningful voices to break through.
The lie is no longer handcrafted. It is mass-produced. What once required a team of lobbyists or communications specialists can now be automated with a few prompts. The content is polished, persuasive, and nearly impossible to trace. AI-generated talking points are already being tested across news sites, ads, and political messaging. And the tools are only getting better. As generating disinformation becomes cheaper and more accessible, societies face urgent questions about how to defend themselves against manufactured doubt.
But denial is only one side of the story. The other is distraction. Corporations are now using AI to automate greenwashing. Sustainability reports, corporate climate statements, glossy advertising campaigns - many are now shaped by generative AI. These tools are trained to mimic the language of responsibility. They know how to sound ethical, how to signal progress, how to wrap inaction in the language of care. The result is a flood of content that looks like commitment but hides the status quo. A simulation of progress, at scale.
Alongside this comes the rise of synthetic people. Entire online personas are now being built by algorithms. These aren't just bots spamming links. They have profile pictures, backstories, political views. They engage in climate discussions. They respond to activists. They argue against regulation. This tactic, once used sparingly through paid troll farms, is now being scaled through AI. The effect is a distortion of public perception. It creates the illusion of a balanced debate where there is overwhelming scientific consensus. It drowns out urgency. It isolates organisers. It makes the movement feel smaller than it is.
These tools do more than speak. They shape what we see. Social media platforms, powered by opaque algorithms, already reward outrage, certainty, and simplicity. Now, with generative AI feeding those systems, the volume of polarising content increases. Complex issues like climate change are reduced to memes and bait. Nuanced analysis gets buried. Clickbait thrives. And every piece of content, from every side, is optimised for engagement, not accuracy, not truth, not justice.
Behind all of this lies a deeper problem. These technologies are not neutral. They are being developed by corporations (and tech bros) funded by venture capital, designed to generate profit, and shaped by the interests of those who already hold the most power. Their logic is extractive. Their values are economic. Their priorities are not aligned with ecological survival or collective liberation. When we allow AI to mediate how we see the world, we are outsourcing our understanding to systems built to protect the status quo.
Artificial intelligence, as it currently exists, is not being used to dismantle destructive systems. It is being used to reinforce them. It gives polluters new tools, new narratives, new shields. It adds another layer of complexity to the fight for truth. Another arena where power plays out in silence, code, and scale.
And yet, there may still be hope. Across the world, people are refusing to give up that fight. Educators, journalists, scientists, artists, and organisers are pushing back. New watchdogs are emerging to track disinformation, expose greenwashing, and demand transparency. Technologists are calling for public interest AI, built with values like accountability, cooperation, and care. Communities are building their own platforms, amplifying their own voices, and telling their own stories, not because they are louder, but because they are real.
We cannot automate our way out of a crisis caused by extraction, enclosure, and disconnection. But we can organise. We can remember what machines cannot feel. We can build relationships that algorithms cannot simulate, and movements that data cannot predict. The struggle for climate justice is also a struggle for clarity. For memory. For meaning.
Because when machines lie, people are the ones who suffer. The crisis is not artificial. Our response can’t be either.
It is no wonder that capitalists would create a tool that manipulates us and furthers the goal of total extraction for the benefit of the few, while selling it to the rest of us as the next big thing! The irony is that its energy needs and self-interested obfuscation could tip the balance just enough to exacerbate the climate crisis, speeding up our demise before any check can be placed on its use.
I would be really interested to hear from anyone who has done useful work with AI to help fight climate change, whether in research, analysis, or planning.