"Why the AI hype is a net negative for humanity"

Pre-Introduction

First, I want to let you know that this post will be made into a YouTube video, so you don't have to read it. It's more of a raw collection of ideas for me to use in the video, and it will be updated as needed, so be warned.

Second, when I use the term AI here, I am not referring to the OG neural networks, evolutionary algorithms, genetic programming and the like. I use it the way the media does, meaning LLMs like GPT, LLaMA and such. For example, I am a huge fan of evolutionary algorithms for optimisation problems and love to use them, so I am not against algorithmic learning at all. I am not even an opponent of LLMs in general, because the technology is actually pretty interesting. But those huge tech companies, with their hype-driven VC funding and deceiving spokespersons, are just a pain in the ass, especially because the media shoves every statement they make straight in your face, and I can't figure out how to filter all this stuff out of my news feeds. It's freaking everywhere.

Third, when I say 'net negative,' I don't mean any SCP K-Class world-ending scenarios or anything like that. What I mean is that the influence of huge tech companies over your daily life will become even more invasive than it is now. This is also your reminder to install an adblocker, SponsorBlock, and a privacy plugin in your browser, and to support your favorite content creators with a few bucks via Patreon, Ko-fi, or direct transfer. This is a much better income source for the creators and much less demanding for your brain cells. Louis Rossmann explains why.

Introduction

Hi everyone,

First of all, I want to share my current position and explain why I think the current hype around the AI bubble is a net negative for humanity. Afterwards, I'll back up these views with some solid data (this is where the video format will shine because I can draw pretty* pictures for you). The problems can be grouped into a few different categories:

  1. Energy – Why the huge energy consumption of these models is not really good for humanity.
  2. Products – What AI products actually exist.
  3. The Arts – How AI affects artists and their work.
  4. Workplace – How workplaces will change for the worse.
  5. Insane amounts of data – Why LLM-generated data is actually bad for AI.

I'll back up every claim I make with sources, and I'll also mark everything that's my own opinion as such.

LLMs, how do they work?

If you want a detailed overview of how LLMs (or what is nowadays known as AI) work, I highly recommend 3Blue1Brown's playlist on neural networks. Those videos are an excellent primer on the topic. But the TL;DR is: LLMs are text prediction machines that calculate, based on their (insanely huge) training data, which word should come next in the response being generated. The really interesting part is why this concept works so well. Keep in mind that this is a gross oversimplification, but it is the heart of the technology. This alone makes LLMs not self-aware, despite what some people might want you to believe for some reason. I hope this simple explanation demystifies the concept of LLMs for some people.
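
To make the "text prediction machine" idea concrete, here's a toy sketch in Python. It's just a bigram model that counts which word follows which (no neural network, no attention, none of the scale that makes real LLMs interesting), but the generate-one-word-at-a-time loop has the same shape:

```python
# A toy next-token predictor: counts which word follows which in the
# training text, then generates by repeatedly sampling the next word.
# Real LLMs replace the count table with a huge neural network, but the
# generation loop itself has the same shape.
import random
from collections import defaultdict

training_text = (
    "the cat sat on the mat the dog sat on the rug "
    "the cat chased the dog the dog chased the cat"
)

# "Training": count how often each word follows each other word.
counts = defaultdict(lambda: defaultdict(int))
words = training_text.split()
for current, following in zip(words, words[1:]):
    counts[current][following] += 1

def next_word(current):
    """Sample the next word proportionally to how often it followed
    `current` in the training data."""
    candidates = counts[current]
    choices = list(candidates.keys())
    weights = list(candidates.values())
    return random.choices(choices, weights=weights)[0]

# "Inference": start with a prompt word and keep predicting.
word = "the"
output = [word]
for _ in range(10):
    word = next_word(word)
    output.append(word)

print(" ".join(output))  # e.g. "the cat sat on the rug the dog chased ..."
```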

Problems

Let's not waste any more time and look at some problems that my dum dum smooth brain, and many other much smarter people than I am, found with AI.

Energy

There are multiple problems with the amount of energy consumed by a state-of-the-art LLM. Training alone for a model like GPT-3 consumes as much energy as about 130 US homes use in an entire year.

This is a pretty huge number, but if the LLM were finished consuming energy afterwards, it wouldn't be so bad. Sadly, LLMs are never done; each interaction with such a model consumes power. Luccioni et al. have shown that, for example, generating an image takes as much energy as charging your phone from zero to hero. And in my experience, people do not generate just one image, because they do something called prompt engineering (which is probably the most cringe-inducing term ever created).
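
Here's the back-of-the-envelope arithmetic behind that homes comparison. Both inputs are rough published estimates (not my measurements), so treat the result as ballpark only:

```python
# Back-of-the-envelope check on the "130 US homes" comparison.
# Both numbers are rough published estimates, not my own measurements:
# ~1,287 MWh for the GPT-3 training run (Patterson et al., 2021) and
# ~10,500 kWh per year for an average US household (EIA ballpark).
gpt3_training_kwh = 1_287_000
home_kwh_per_year = 10_500

homes = gpt3_training_kwh / home_kwh_per_year
print(f"GPT-3 training ~= {homes:.0f} US homes running for a full year")
# -> roughly 120-130 homes, depending on which estimates you plug in
```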

I'm by no means an environmental activist, but I know that climate change is a problem we have to solve as a civilization, and it seems a bit counterproductive to use these incredibly wasteful technologies for toy projects. This is something we’ll discuss in the next section.

Products

To be honest, the points I discuss in this section are all my opinion, because I can't cite a product that I don't know of. But if you search the internet, which AI-powered products really need to exist? Most of the products I can think of or find on the internet fall into these categories:

  1. Programming auto-completion
  2. Chatbots
  3. Text summaries
  4. Text generation
  5. Image generation

The first one is pretty common for me to come across because I do some of the good old programming myself. And like I said, this is just my own opinion, but whenever I have a programming problem I can't solve, the AI always recommends nonsensical solutions that look right but are wrong. This is incredibly bad, because if programmers and software engineers let their guard down for a second and trust their little AI companion too much, you get a pretty bad result. These models are trained on open-source repositories, and that code is written by humans. If that code contains errors, it means those errors were not detected by humans. You see where this is going? You end up with erroneous code that is hard for humans to detect. If you write the code yourself, at least you know what you were thinking when writing it.
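
To show what I mean, here's a hypothetical example (hand-written for illustration, not actual AI output) of code that looks right, passes a quick glance and the happy-path test, and is still wrong:

```python
# Hypothetical example of "looks right, is wrong" code, the kind of
# subtle bug that's easy to wave through when you trust the completion.
def moving_average(values, window):
    """Return the moving average of `values` over `window` elements."""
    averages = []
    for i in range(len(values)):
        chunk = values[i:i + window]          # BUG: the last chunks are shorter...
        averages.append(sum(chunk) / window)  # ...but we still divide by `window`
    return averages

print(moving_average([10, 10, 10, 10], 2))
# -> [10.0, 10.0, 10.0, 5.0]
# The final value is silently wrong because the last chunk has only one
# element. Every "normal" test case looks fine, so the bug survives review.
```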

Chatbots are borderline unusable. I've chatted with the bot on Microsoft Azure, a platform that also sells these chatbots. If the bot does decide to respond with a message that contains actual content, it's the same response you would get from the simplest of search engines (without AI super-duper powers).

And the funniest part is the addition of disclaimers like "AI-generated content may be incorrect." Bruh, then don't use it if you can't generate correct answers. Why are we paying so much for this? Imagine buying an axe for wood chopping: you can swing it 4 times as fast, but it has a 5% chance of missing and a 2% chance of chopping off your arm. Would you pay a premium for this axe? I think not. If I buy a tool, I want it to work properly, or at least have someone to talk to who can fix it if it goes haywire. But AIs are inherently unfixable because you can't properly test them. There are always edge cases that you haven't thought about that break the system.
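
Just for fun, we can put numbers on that axe (the percentages are the made-up ones from my analogy, nothing empirical):

```python
# Putting numbers on the axe analogy. All inputs are the made-up
# percentages from the text, nothing empirical.
speedup = 4.0   # swings per hour compared to a normal axe
p_miss = 0.05   # a swing that does nothing
p_maim = 0.02   # a swing that ends your wood-chopping career

effective_speedup = speedup * (1 - p_miss)
expected_swings_before_disaster = 1 / p_maim

print(f"effective speedup: {effective_speedup:.2f}x")                   # 3.80x
print(f"expected swings before disaster: {expected_swings_before_disaster:.0f}")  # 50
# 3.8x faster sounds great until you notice you lose an arm,
# on average, every 50 swings.
```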

Text summarisers are another category of programs made with AI. But you get the same problem as with all the other AI products discussed: you get a summary of a long text, and now you don't know whether the AI left out any important facts or misinterpreted something like irony or a language idiom. So you have to read the text yourself anyway. What's the point of having the summary then? At least we can have fun with stuff like Apple Intelligence, or should I say Apple Not-Intelligence? Hue hue hue.

Text optimisation is the only tool I can see some merit in, because LLMs are trained on such huge amounts of text data that they are pretty good at finding typos, punctuation, and grammar errors (you know this is hand-written because my English grammar sucks; greetings from Germany). But you have to create the text in the first place for it to be reviewed. Otherwise, you'll end up with a text about things that don't exist (I'm pretty sure Tom Scott did a video with an AI-written script about stuff that doesn't exist, but I can't find it anymore).

Image generation is not usable at the moment. All AI-generated images look like AI images (if that makes sense); they need a lot of touch-ups after generation, and for some reason, they are super easy to detect, though I don't know why. The only good part about AI images is that you can see which companies don't value the time of artists and instead generate AI images rather than paying a real person to do the job properly, as seen in an example further down.

All in all, these products do not need to exist. I don't know why they exist, and I don't know who uses them, because for me, they are either unethical, bad, or even harmful.

Products I didn't know of

This is an addendum.

I asked my incredibly large *cough* following over on Bluesky (shameless plug) if they knew about products that are AI-powered, and I got at least two entries that are actually pretty nice.

DeepL and Grammarly both build on the core of what makes an LLM good. They don't predict the stock market, try to deceive you into thinking they're sentient, write bad code, scam you out of your money, or rage-bait you into engagement by spamming completely brain-rotten social media posts. They just do what their training data is best for: creating and analysing text structure. You, as a person, have to feed them real human input, and they can analyse it and help improve it by acting as a database of text structure built from billions of examples of human-written words.

The Arts

In the conclusion of the last section, I used the word "unethical," and this is why I (and many others) find AI unethical. Tech giants like OpenAI, Meta, and Amazon have used the entire internet as a training source for their models. LLMs now produce output based on those texts, yet they've never asked for permission. The same applies to images. (It did make me smile when OpenAI cried about DeepSeek stealing their data, though.) Using texts or images created by these models in a commercial manner is, in my opinion, a punch in the face of every artist who has ever put a piece of art on the internet.

Just to be clear, I know that tools like MidJourney can create astonishing art (and also really bad art), and in my opinion, it's absolutely no problem if you use those tools in your personal life to create some background art for your Dungeons and Dragons group. But as soon as they are used commercially, it becomes really unethical because you're using the art of other people (due to the training data) to enrich yourself. I mean, look at this shit. Why does a company not pay a photographer at a real Formula 1 race a single dollar for a better picture? Or just buy a stock image of a race start? The race obviously doesn’t matter because the AI-generated one doesn't mirror anything real in the first place. This one really infuriated me.

And I hope my fellow artists out there know about tools like Glaze and Nightshade, which are not proven to work, but at least Sam Altman, the CEO of ClosedAI, had to say something about them, so they probably do something.

But if you want the perfect video about what AI does to creative thinkers, look at this video by Freya Holmér. It pretty much sums it all up. If you want it a bit more crude, here's MoistCr1TiKaL on an AI artist.

Workplaces

I can only speak for my own professional field (software engineering) here, but I can see a real shift in the work environment. People are using Devin, Copilot, or ChatGPT to code by speedily tabbing through predictions without really checking what was generated in the first place. They end up with a ton of Docker configurations that seem to work, but afterwards, nobody knows what random containers are running on the system. I mean, I can see why people like this so much. You get stuff done fast, and in many cases, you get okay-ish to good results (if you're using mainstream tech stacks).

Since I can only speak from my own experience, I have to ask: what iron-willed person is mentally strong enough not to succumb to the habit of thinking, "Okay, this worked the last few times, why should I check this piece of generated code in detail?" In my opinion, if you're presented with a shortcut, you're more likely to take it. Previously, you'd think about solutions for a given problem, searching the internet for people with the same issue who found solutions. But now, why not just press the button to get a solution for your problem, even if you can't verify it? It's faster.

And that's probably what people find so fascinating about AI: you could potentially create software a lot faster. If AI worked the way the hype beneficiaries want you to believe, this would actually be a game-changer. But between what the hype promises, what people believe, and what actually works lies quite a chasm. And I don't mean a small one.

Just look at what Sam Altman, CEO of ClosedAI, has to say about AGI (Artificial General Intelligence), a term that was widely used for human-like, self-learning intelligence. He just redefined what AGI means: now it refers to agents that are good at one simple task, like writing emails or entering meetings into your schedule. Which is a pretty hefty downgrade from human-like self-learning AI, if not the exact opposite. lol

Insane amounts of data

Lastly, I want to talk about the learning process of LLMs. They learn from an immense amount of data, and they do so continuously. The huge benefit of earlier models was that all the data they were fed was actually human-created. Nowadays, over 90% of all data ever created was created in the last two years, and I'm just guessing here, but the timing suggests that AI is a huge part of it. If I remember correctly, there's also a video by Kyle Hill on the topic, in which he states that humanity now creates more data every two weeks than it did from the invention of the printing press up to GPT-3.

The problem for new LLMs is that they now have to learn from data that they themselves have created, which is a big no-no; machine learning specialists call this 'model collapse.' So you actually need a way to distinguish AI-generated text from human-generated text. This is pretty hard to do, because the whole point of AI-generated text is to be as human-like as possible. You see the problem? There is no way to prevent LLMs from getting stuffed full of their own generated data.
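
You can watch a toy version of model collapse happen in a few lines of Python. This is a gross simplification (no LLM involved, just words being resampled from the previous generation's output), but the feedback loop has the same shape:

```python
# Toy "model collapse": each generation is trained only on output
# sampled from the previous generation.
import random
from collections import Counter

# Generation 0: "human-written" data with a diverse vocabulary.
data = ["common"] * 80 + ["uncommon"] * 15 + ["rare"] * 5

for generation in range(1, 201):
    # "Train" the next model on the previous model's output:
    # sample 100 words with replacement from the last generation.
    data = random.choices(data, k=100)
    if generation % 50 == 0:
        print(f"gen {generation:3d}: {dict(Counter(data))}")

# Rare words can only be lost, never regained: once "rare" drops out
# of a generation, no later generation can produce it again. Run this
# a few times: the vocabulary almost always narrows, and given enough
# generations a single word takes over completely.
```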

Funny bits

At least we get some comedy out of the hype.

AI company does not want you to use AI

A reader brought my attention to the following article on 404media.co. This is funny because they created an AI model that's supposed to produce text indistinguishable from human-written text. You reap what you sow, or something like that.

Nepenthes

Someone built a funny little tool that tries to trap web crawlers in an infinite maze of URLs and slow-loading web pages. I can't find the repository where this bad boy is slumbering, but I think it's a nice little idea. Article on heise.de
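
I have no idea how the real Nepenthes is implemented, but the core idea fits in very few lines. Here's my own minimal sketch of a crawler tarpit (assuming Python with Flask installed): every page responds slowly and links only to more randomly named pages, so a crawler that follows links never runs out of "content".

```python
# Minimal crawler-tarpit sketch in the spirit of Nepenthes (my own toy,
# not the actual tool): slow pages that only link to more fake pages.
import random
import string
import time

from flask import Flask  # assumes Flask is installed

app = Flask(__name__)

def random_slug(length=8):
    """Generate a random lowercase page name."""
    return "".join(random.choices(string.ascii_lowercase, k=length))

@app.route("/maze/<slug>")
def maze(slug):
    time.sleep(2)  # waste the crawler's time with a slow response
    links = "".join(
        f'<a href="/maze/{random_slug()}">{random_slug()}</a><br>'
        for _ in range(10)
    )
    return f"<html><body><p>You are in {slug}.</p>{links}</body></html>"

if __name__ == "__main__":
    app.run()
```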

Conclusion

AI is here, and it's not going anywhere in the near future. I have to admit, there are use cases that are well-suited for AI, such as categorisation problems like sorting the photos in a collection, or enhancing your Dungeons and Dragons night with some nice artwork for your group. LLMs are also pretty good at supporting you while writing, with correct grammar, word choices and writing tips in general. But where I think the whole AI thing takes a turn for the worse is in commercial applications, especially generative ones. They spit in the face of every person whose work got sucked into the training data of those LLMs without consent, they just don't work as promised, they produce wrong results, or they even do harmful things (let's hope nobody implements an AI flight system in the near future).

And the worst part of it is the discrepancy between what AI is actually capable of and what some people think it can do. It's wild how different, and how incompatible, those two ways of thinking about AI are. Just look at every LinkedIn post created in the last year. You'll get the gist.

So, if you're in a position where you have to make business decisions, don't blindly believe everything the hype is trying to sell you. Make your own educated decision. Learn how these things actually work, and decide for yourself whether you believe Mr. Sam Altman from ClosedAI when he tries to sell you the concept of AGI after redefining it for the seventh time to fit his company's current models.

Sorry for the rant about the AI stuff, but this is the stuff that keeps me up at night. I'm a millennial (but I identify as gen-z skibidi toilet fr fr), and I had the pleasure of growing up with the internet when it first came into existence. It was slow, there was no social media as we know it today, there were forums full of people passionate about a single topic, and web pages looked like complete garbage. I fucking loved it. Nobody gave a single flying f about monetisation or economic growth. It was all made by real people for real people. No SEO cheating, no rage bait, no click bait. It was wonderful.

I've come to terms with the fact that tech monopolies like Google, with their search engine, have set the status quo for visibility on the internet, and that you have to obey their rules if you want to be seen.

But what I can't stand is that nowadays people spam the internet with all their AI-generated stuff just to grab one or two bucks of ad revenue (another call to install an adblock plugin in your browser), making the experience worse for all of us. And as a single person, you can't do anything against it. I don't think I have the resources to build a new search engine that only contains passion projects (but maybe I'll try). I don't have the resources to train an AI model that detects other AI stuff and filters it out (I would try, but this is impossible without a huge amount of moolah). Maybe it would be possible to crowd-source an okay-ish filter the same way SponsorBlock does it, but that's probably wishful thinking. Sadly, the internet as we old peeps knew it is ceasing to exist.

ripinterwebz

Sorry for the sad tone at the end, but I don't know how to solve these problems. Maybe this isn't a problem at all, and I'm the only one who liked the old internet better. Hit me up with your thoughts over on Bsky if you know something we could do to preserve at least a little bit of humanity on the internet.

Have a good one!

P.S. I think it's incredibly funny to call OpenAI 'ClosedAI' because nothing about it is open anymore. #cringehumour