
Not All AI Is Created Equal: Understanding AI vs. Generative AI

Every time I log onto social media these days, I come across posts (or articles much like this one) where people declare that AI is bad. Sometimes it’s about jobs, sometimes it’s about misinformation, but one of the louder refrains lately has been that AI is bad for the environment. And when people say that, I notice they don’t usually mean all of AI, even though “AI”, taken literally, means all of it. They just mean generative AI: ChatGPT, image generators, video generators, and whatever new model has just been released.

But the more I read these takes, the more I find myself pausing and thinking: wait a second… is all AI bad? What even is AI, anyway? And more importantly, why does it seem like people have started equating “AI” with just “ChatGPT”?


Those might seem like far too many questions, but we can slow down and untangle them pretty easily.


A Longer History Than We Remember

Artificial intelligence, if we strip it back to basics, is a whole field in computer science that has one big, ambitious goal: to make machines behave in ways that resemble human intelligence. That might sound lofty, but it’s really a wide umbrella. AI can be a robot exploring the surface of Mars, or a piece of software figuring out how to recommend a movie to you, or even the autocorrect system on your phone predicting the word you’re about to type.


These things certainly don’t look or feel like “AI” in the futuristic, sci-fi sense, but they all belong to that lineage. AI isn’t new; it’s been with us for decades, long before people were generating pictures of cats in Renaissance costumes or asking ChatGPT to write them an email. And if you stretch your imagination, its roots go back even further, into humanity’s long fascination with creating machines or systems that can imitate us.


The idea of building artificial intelligence (though without that term) has been around for centuries in myths, stories, and early philosophies, but let’s keep things grounded in actual computing history, because that’s where the story really picks up. Also, because, to be honest, I’m no scientist myself.


If you start in the late 20th century, you’ll find all these fascinating milestones that don’t usually make it into today’s “AI is destroying the world” conversations. In 1997, IBM’s Deep Blue famously beat Garry Kasparov, the reigning world chess champion, in a match that was broadcast everywhere. That was one of the first moments many people realised that a machine could outthink us at something we considered a hallmark of intelligence.


That same year, Dragon Systems launched one of the earliest usable speech recognition programs for Windows. Suddenly, talking to your computer wasn’t just science fiction. By 2000, Cynthia Breazeal had developed Kismet, a robot head with expressive eyes and eyebrows and ears that could mimic emotions in a way that startled people who encountered it. It was clunky, sure, but it hinted at a new kind of interaction with machines. 

Two years later, the Roomba came out. It seems so ordinary now, a vacuum cleaner that scoots around the living room on its own, but back then it was quietly revolutionary. Then NASA’s Spirit and Opportunity rovers, launched in 2003, touched down on Mars in early 2004 and began exploring the terrain semi-autonomously, making decisions in environments where no human could intervene in real time.


Through the 2000s and 2010s, AI slid even more deeply into our lives, mostly without us noticing. Netflix and Facebook started using recommendation algorithms to keep us hooked; suddenly, the idea that “the computer knows me better than I know myself” was a business model. In 2010, Microsoft launched the Kinect for Xbox, which let you use your body as the controller. In 2011, IBM’s Watson won Jeopardy! against two human champions, which felt like another cultural moment where people sat up and said, “Whoa, this is getting real.” That same year, Apple gave us Siri, the voice assistant that would live in our pockets and normalise the idea that you could talk to your phone and it would talk back. And all of this was AI.


The funny thing is, none of these technologies sparked the kind of panic or outrage we see today. No one was tweeting angrily about Siri draining the power grid, or the Roomba killing creativity, or Watson taking away jobs from game show contestants. For the most part, people were impressed, sometimes even charmed. AI was a useful, even delightful, background force. 


So what changed? 


Enter Generative AI

Generative AI is different, I’ll admit. Unlike traditional AI, which often focuses on prediction, recognition, or optimisation, generative AI produces new content. Ask it to write a story and it will. Ask it to generate a picture of the Pope in a puffer jacket, and it can. Ask it to code you an app or compose music in the style of Mozart, and it won’t hesitate. 

That leap into creation has made it visible in a way traditional AI never was. You can’t really “see” Netflix’s recommendation algorithm at work, but you can see, immediately, the essay ChatGPT writes for you or the image Stable Diffusion spits out. It feels magical, and because it feels magical, it has also become the controversial face of “AI” in public discourse, for with great visibility comes great scrutiny.


Unlike traditional AI systems, whose energy needs are comparatively modest, training generative AI models requires staggering amounts of computational power. These models are not built overnight; they take weeks or months of training on clusters of thousands of GPUs running nonstop, consuming massive amounts of electricity. Noman Bashir at MIT has said that a generative AI training cluster can use seven or eight times more energy than typical computing workloads. And that’s just training one model. Multiply that across all the companies building, retraining, and deploying models, and the environmental impact becomes undeniable. People are right to worry about that.
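To get a feel for the scale involved, here is a quick back-of-envelope sketch in Python. Every number in it (cluster size, per-GPU power draw, run length, data-centre overhead, household consumption) is an illustrative assumption of mine, not a figure from any real training run:

```python
# Back-of-envelope estimate of the electricity a large training run might use.
# All figures below are illustrative assumptions, not measurements of any real model.

GPUS = 10_000            # assumed cluster size
WATTS_PER_GPU = 700      # assumed draw per high-end accelerator, in watts
DAYS = 90                # assumed length of the training run
PUE = 1.2                # assumed data-centre overhead (cooling, networking, etc.)

hours = DAYS * 24
kwh = GPUS * WATTS_PER_GPU * hours * PUE / 1000  # watt-hours -> kilowatt-hours

# Rough comparison: an average US household uses about 10,500 kWh per year.
households = kwh / 10_500

print(f"Estimated training energy: {kwh:,.0f} kWh")
print(f"Roughly the annual electricity of {households:,.0f} US households")
```

Under these made-up but plausible numbers, a single run lands in the tens of millions of kilowatt-hours, comparable to the yearly electricity of over a thousand households. The exact figures will vary enormously by model and hardware; the point is only that the arithmetic gets big very quickly.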


Why the Distinction Matters

Still, lumping all AI into the same bucket by saying “AI is bad” flattens the nuance we need to actually think about this responsibly. It would be like saying “medicine is bad” because some drugs have dangerous side effects, or “cars are bad” because gas-guzzling SUVs pollute more than bicycles. The truth is more complicated.


Traditional AI is in medical diagnostics, in agriculture, in accessibility technologies for the visually impaired. It powers language translation apps that let people connect across cultures, and it models climate change patterns to help us understand and prepare for the future. These applications don’t carry the same outsized energy demands as generative AI, and dismissing all of them under the label of “bad” risks undermining research and innovation that could make the world tangibly better.


And this is why the distinction matters. It isn’t just a semantic quibble. If we collapse “AI” into “generative AI,” then our conversations about regulation, about sustainability, about what kind of future we want, all get skewed. Policymakers might overregulate or underregulate the wrong things. The public might resist technologies that could save lives because they associate them with ChatGPT spitting out mediocre poems. And researchers working on genuinely world-changing projects, like AI systems that detect cancer earlier than human doctors, might lose funding or support because the entire field has been tarred with one broad brushstroke.


At the same time, we can’t just wave away the environmental cost of generative AI. The genie is out of the bottle: people aren’t going to stop using it. Students, writers, businesses, programmers…they’re all finding it useful, even indispensable, and if history is any guide, once a technology becomes woven into everyday life, we rarely give it up. 

Cars polluted, but we didn’t abandon them. Smartphones were resource-intensive to produce, but we didn’t stop buying them. We adapted, slowly and imperfectly, by looking for cleaner fuels, greener supply chains, and more efficient devices, and even then, these are still battles we are fighting, or more accurately, have given up on fighting. 


Maybe the same will happen with generative AI. Maybe the real challenge now is not whether we should use it, but how we can use it without burning the planet down. That might mean building more energy-efficient models, investing in renewable power for data centres, or rethinking which uses of generative AI are necessary and which are frivolous.

And so I keep circling back to this: not all AI is bad. Not all AI is the same. Making distinctions between traditional AI and generative AI, between high-impact and low-impact uses, helps us advocate better, think more clearly, and avoid throwing the baby out with the bathwater. It also helps us imagine a future where we don’t demonise the entire field, but instead ask harder, more useful questions: 


How do we protect the planet while keeping the benefits? How do we build systems that are worth their energy cost? How do we balance human creativity with machine assistance without losing sight of what makes us human?


I don’t have all the answers. I suspect no one does yet. But I do know this: painting with too broad a brush never helps. If we want to move forward responsibly, we need nuance, we need distinctions, and we need to resist the temptation to collapse a rich, varied, decades-long field into a single, monolithic villain. Because at the end of the day, artificial intelligence is not just ChatGPT, and the story of AI is bigger, older, and more complex than any one tool that happens to be trending right now.




ABOUT US

We are a student-led non-profit organization devoted to making a positive impact in the world through education and empowerment. Our objective is simple: to bridge the gap and give equal access to critical resources in the fields of the Sustainable Development Goals (SDGs); Science, Technology, Engineering, and Mathematics (STEM); and sustainability. We are committed to creating a brighter, more sustainable future for everyone with a team of motivated and forward-thinking students from all walks of life. We hope to enable individuals, schools, and communities to succeed in an ever-changing world by breaking down knowledge barriers, encouraging creativity, and instilling a sense of responsibility for our planet. Join us as we strive for a more sustainable, fair, and knowledge-driven future.


© 2035 by EquiSTEM International.
