AI Is A Moving Goalpost
Artificial Intelligence does not exist. Artificial General Intelligence does not actually mean anything. Both terms come from science fiction writing, and both have been co-opted by companies selling Large Language Model technology.
Machine Learning is a technology, and it underlies ALL of this. Good and bad.
Both AI and AGI are marketing terms, and in the last 4 years, each has had its meaning changed, multiple times, by the people who are selling them.
Large Language Models
Large Language Models (LLMs), whether sold as “Image Generation”, “Chat”, “Coding”, or anything else, are all, regardless of the company behind them, prone to making things up. This is inherent to the way machine learning itself actually works. It is all proprietary, and each one does things a tiny bit differently, but underlying all of it is a random number generator.
It is absolutely important, by design, that two people can ask the same software the exact same question and get different outputs, in the same way that if two people ask ME the same question on two different days, they are likely to get a unique response from me. I will say the same thing, but I’m definitely not going to phrase it the same way.
It is also absolutely important that something in the LLM be able to scan the random output and tell the LLM when it is about to emit a random word that doesn’t make sense. So: grammar filters, sentence filters, topic filters, and in some cases full mechanisms to pull back output already presented and start over.
These filters are often also LLMs, trained specifically to argue with the initial LLM. It has also recently leaked that, as a money-saving measure, at least one popular LLM was using straight regular expressions to narrow in on topic categories and user mood.
Why throw the compute of an LLM at an input when a simple text search can tell you the user is not happy with the output?
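As a sketch of what that cost-saving trick could look like (the patterns below are invented for illustration; nothing about the actual leaked implementation is known beyond “regular expressions”):

```python
import re

# Hypothetical patterns, invented for illustration only.
MOOD_PATTERNS = {
    "frustrated": re.compile(r"\b(wrong|useless|broken|again)\b", re.I),
    "happy": re.compile(r"\b(thanks|great|perfect)\b", re.I),
}

def classify_mood(user_text: str) -> str:
    """Cheap text search in place of an LLM call."""
    for mood, pattern in MOOD_PATTERNS.items():
        if pattern.search(user_text):
            return mood
    return "neutral"

print(classify_mood("This answer is wrong again."))  # frustrated
```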
Stochastic Parrots
A lot of people repeat this phrase, or argue against it, without understanding its basic meaning. First, stochastic comes from statistics; it describes a process built on random choices weighted by probability. Parrot is more straightforward: it literally means to repeat without understanding.
So, someone asks a question, and the first thing to accomplish is to search the training data for correlations to what was said. Then, literally, start with a list of words that begin the matched training text. Take that list, weighted by frequency, and make a random choice among them: the most-used words are the most likely picks, even against the randomness, because of the weighting.
Continue this for each word, across the entirety of the matching articles. This is roughly how most language models worked up until about 2015 or so (give or take). A lot of people had fun laughing at the stupid malformed blobs that would come out of these things.
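Here is a minimal sketch of that word-by-word process, as a toy bigram model in Python. It is nowhere near a real system, but the weighted-random next-word step is the same basic idea:

```python
import random
from collections import Counter, defaultdict

random.seed(0)

# Tiny stand-in for "training data"; a real system ingests far more.
training = "the cat sat on the mat and the cat ate the fish".split()

# Count which word follows which: the "matched training" table.
next_words = defaultdict(Counter)
for prev, nxt in zip(training, training[1:]):
    next_words[prev][nxt] += 1

word = "the"
output = [word]
for _ in range(8):
    counts = next_words.get(word)
    if not counts:
        break
    # Weighted random choice: the most-used continuation is the most
    # likely pick, but any seen continuation can come out.
    word = random.choices(list(counts), weights=list(counts.values()))[0]
    output.append(word)

print(" ".join(output))
```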
The first AIs that could reliably beat chess masters were Stochastic Parrots in the same sense: take the history of all recorded chess games, find the most common next winning moves, and apply a weighted random choice.
GAN Enters the Chat
Starting around 2014, a new concept in Machine Learning came about: the Generative Adversarial Network.
In basic concept, this takes the Stochastic Parrot’s output and feeds it into a second, separately trained system, one specifically made to try to spot “fake”.
It didn’t take long for researchers to start to apply adversarial (discriminative) filters across the LLM landscape.
For a different way to say this, here’s a paragraph from the Wikipedia Article on GAN:
The generative network generates candidates while the discriminative network evaluates them. This creates a contest based on data distributions, where the generator learns to map from a latent space to the true data distribution, aiming to produce candidates that the discriminator cannot distinguish from real data. The discriminator’s goal is to correctly identify these candidates, but as the generator improves, its task becomes more challenging, increasing the discriminator’s error rate.
The goal is that the output improves to the point that the discriminator can no longer tell that it is wrong.
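To make the contest concrete, here is a minimal sketch in Python using PyTorch (assumed available). The “real” data is just a 1-D Gaussian rather than images or text, and both networks are tiny:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Generator: maps latent noise to a candidate sample.
G = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
# Discriminator: scores a sample, real vs. generated.
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0   # the "true" data distribution
    fake = G(torch.randn(64, 1))            # generator's candidates

    # Discriminator turn: learn to label real as 1 and generated as 0.
    d_loss = (loss_fn(D(real), torch.ones(64, 1))
              + loss_fn(D(fake.detach()), torch.zeros(64, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator turn: produce candidates the discriminator calls "real".
    g_loss = loss_fn(D(G(torch.randn(64, 1))), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

print(f"generated mean: {G(torch.randn(1000, 1)).mean().item():.2f} (target 3.0)")
```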
The fact that LLMs all now use GAN techniques does not mean that they are no longer Stochastic Parrots. It just means that they are ever tuned to sound more convincing.
Not Fixing, But Patching The Broken
Up to about 3 years ago, AI-generated images tended to have missing, merged, or extra fingers. It became a joke on the internet. There were multiple articles that confidently told the public that AI couldn’t do hands, and that “you can always tell it’s AI because …”. So, of course, every image generator started training its adversarial filtering to look for wrong hands. While it is still possible to find models that do bad hands, the latest ones no longer do.
Insane-looking eyes became the next confident tell; that was also fixed.
But of course, none of this was fixed at the source; it’s just that an image generator now has to go through four, five, or six times as many images internally to get to each image it is willing to present. It’s little wonder that even AFTER an LLM has been trained, the compute cost of running it keeps going up.
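Some back-of-the-envelope arithmetic shows why. The pass rates below are invented, not measured, but compounding filters multiply the internal generations per presented image:

```python
import random

random.seed(1)

# Hypothetical per-filter pass rates; every patched "tell" adds a filter.
FILTER_PASS_RATES = {"hands": 0.7, "eyes": 0.8, "garbled text": 0.6}

def passes_all_filters() -> bool:
    # An image is only presented if every filter lets it through.
    return all(random.random() < p for p in FILTER_PASS_RATES.values())

attempts, accepted = 0, 0
while accepted < 100:
    attempts += 1            # every attempt costs a full generation
    if passes_all_filters():
        accepted += 1

# Combined pass rate is 0.7 * 0.8 * 0.6, so roughly 3x the generations.
print(f"{attempts} internal generations for {accepted} presented images")
```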
I have a few things that I use as “tests” to see if LLMs have gotten better. I won’t disclose which ones I use. They are all “easy to patch”, which is to say, a filter can be created to look for each of them. Filtering out a wrong result doesn’t actually fix the real problem; it just sets a filter on one specific issue, and takes away another tell.
Layering of Specialized LLMs
Now, some LLMs are called Thinking Models, in that they appear to output what they are thinking, but that isn’t what is really happening.
Instead, we take an LLM that is trained to break the input into facts and actions, and output that breakdown.
Then that output is fed into another LLM, maybe in chunks, which will attempt to simulate the thought actions and output those.
Then all of those outputs are sent into yet another LLM, often specifically chosen based on the prior steps, which is meant to combine everything into a final output. A rough sketch of the shape of this pipeline follows.
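This is only a hedged sketch of that staged shape, with llm_call() as a hypothetical stand-in for whatever inference API each stage actually uses:

```python
def llm_call(model: str, prompt: str) -> str:
    # Hypothetical stand-in for a real inference API; returns a canned
    # string here so the pipeline shape can run end to end.
    return f"[{model} output for: {prompt[:40]}]"

def thinking_pipeline(user_input: str) -> str:
    # Stage 1: a model trained to break the input into facts and actions.
    plan = llm_call("decomposer", user_input)
    # Stage 2: simulate each "thought action", possibly in chunks.
    traces = [llm_call("reasoner", chunk) for chunk in plan.split("\n")]
    # Stage 3: a combiner model (often chosen based on the prior steps)
    # merges the traces into the final answer shown to the user.
    return llm_call("combiner", "\n".join(traces))

print(thinking_pipeline("Why is the sky blue?"))
```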
Coding LLMs can now have internal compilers for programming languages, and verify that output passes through the compiler without throwing errors. Sometimes they still get the language wrong and hand back valid JavaScript for a C++ problem (this has been complained about online, so it will probably be patched out soon enough).
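A compile-check filter of that sort is easy to sketch. Assuming gcc is on the PATH, something in this spirit would catch the wrong-language case:

```python
import pathlib
import subprocess
import tempfile

def compiles_as_c(source: str) -> bool:
    # Feed the candidate output to a real compiler and treat a zero exit
    # code as passing. -fsyntax-only checks without producing a binary.
    with tempfile.TemporaryDirectory() as tmp:
        path = pathlib.Path(tmp) / "candidate.c"
        path.write_text(source)
        result = subprocess.run(
            ["gcc", "-fsyntax-only", str(path)], capture_output=True
        )
        return result.returncode == 0

print(compiles_as_c("int main(void) { return 0; }"))  # True
print(compiles_as_c("console.log('hello');"))         # False: JavaScript
```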
Real Damage
I cannot write this without acknowledging that LLMs use a huge amount of energy and directly dissipate it as heat. As an aside, a computer is a very “efficient” space heater, in that almost all of the energy its transistors use is directly output as heat.
This means more energy must be used to cool these things off. Heat sinks, fans, or liquid impellers: at some point in the loop, that heat has to be dissipated. Hotter air or hotter water, plus the energy cost of running the cooling stacks. None of this works without air conditioning in some form.
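To put a hedged number on it: data-center overhead is usually expressed as PUE (Power Usage Effectiveness), the ratio of total power drawn to power spent on computing. A toy calculation, with every figure assumed rather than measured:

```python
# Rough arithmetic, not a measurement. Nearly all electricity a server
# draws becomes heat, and the cooling plant adds overhead on top.
it_load_mw = 50     # hypothetical data-center IT load
pue = 1.3           # assumed Power Usage Effectiveness; real values vary

total_mw = it_load_mw * pue
cooling_mw = total_mw - it_load_mw
print(f"{cooling_mw:.0f} MW of extra power just to move "
      f"{it_load_mw} MW of heat back outside")
```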
All of this has impact. Water for cooling that evaporates into the atmosphere, or phase-change chemicals that leak into it over time. The more heat involved, the more this happens, and the more “accidental” damage occurs.
Even an LLM that is open source and run locally had to be trained in a very power-hungry data center. It does help that most of the open models come from China, a country that is actually VERY good at using renewable energy, but the extra heat still has to go somewhere.
I don’t think this is as huge as some folks make it out to be, but I also don’t think it is “nothing” as many “AI Advocates” would like us to believe.
Also, of course, there are known instances of US data centers running on-site fossil-fuel generators because the official local power distribution doesn’t have enough capacity. As long as that keeps happening, I absolutely understand those who shun all of AI as an environmental disaster.
My 2026 Opinion
My observations, generally, are that LLMs are being trained and tuned to sell non-experts on the idea that AI is good enough to replace the expertise a business leader might have formerly paid someone else for.
While there are arguments back and forth about AI being responsible for mass layoffs across knowledge industries, it cannot be disputed that some managers have replaced some human labor with AI, with varying degrees of success.
I strongly feel that the people who actually do the training and tuning of LLM systems have given up on the idea of getting a true AGI, or even a truth-telling AI, out of the current line of ever-improving LLM products. The random underpinning of LLMs, combined with Adversarial Network filters that are good enough to convince even some technical people that these systems have gained consciousness, means that actual progress toward a truly Thinking Machine is going to be hindered by the marketing need to keep up the improvements of the stochastic parrot processing that sits behind the current product lines.
Not that AI will never be possible, just that the current approach is fundamentally flawed. The approach itself will need a reset at the very basic building blocks. Doing this will look very backward on the human-interface side. I don’t think that phase can be productised, because it will take the apparent maturity of the output back 20 years.
All of this to say: I hate AI. I really like machine learning as a fundamental technology. When machine learning is specialized, it has proven able to solve some really interesting and difficult problems. When searching for protein folds, generating a huge pool of false fold patterns (which are then easily discarded as wrong) causes little harm, because in the end, finding new “right” answers manually is MUCH more expensive.
But a General Knowledge LLM marketed as AI is so fundamentally broken that I see no point in trying to make it part of my own life. At least not in 2026.