The long wait for our AI overlords

I enjoy using ChatGPT, Bing, Stable Diffusion, DALL-E, Kobold AI, and more, especially when the generated content hits the nail on the head or gives me something unexpected yet exactly what I wanted. When it gets it just right, it's quite a rush. And if it's wrong, we just try again.

In that regard, it feels like a slot machine that rewards the happiness center in our brain, compelling us to pull that lever repeatedly. When we get a terrible result, it's the large language model's (LLM's) fault. When we get a good result, it's either because the model is intelligent or because we gave it the right prompt.

In both cases, we attribute the outcome to things that are not actually a factor. The LLM just knows the statistical likelihood of which words need to follow which other words. Give it a prompt that starts with "If it looks like a duck, swims like a duck, and quacks like a duck," and it will most likely finish the sentence with "then it probably is a duck." That is why I always say it is a goose: at a young age, I thought the only difference between the two was size.

This is what a large language model knows and does: it predicts which words (or parts of words) can complete the previous input. And since the model has statistics from the most excellent encyclopedia in the world, the internet, it knows many variations. It also knows which replies are better than others, since most humans express gratitude when they value a helpful response. And most "AI tutorials" start with sentiment analysis.
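To make that concrete, here is a minimal sketch of next-word prediction using a toy bigram model. Real LLMs are neural networks over subword tokens, and the corpus and `predict_next` helper below are purely illustrative assumptions, but the statistical idea is the same: count which word tends to follow which, then pick the most likely continuation.

```python
# A toy bigram model: a minimal sketch of next-word prediction.
# Real LLMs use neural networks over subword tokens; this only
# illustrates the underlying statistical idea.
from collections import Counter, defaultdict

corpus = (
    "if it looks like a duck swims like a duck "
    "and quacks like a duck then it probably is a duck"
).split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequent continuation of `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

print(predict_next("a"))     # -> "duck"
print(predict_next("like"))  # -> "a"
```

With enough text, those counts become the "statistical likelihood" described above; scale the corpus up to the whole internet and swap the counting for a neural network, and you are in LLM territory.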

Sentiment analysis is about identifying the sentiment of a text, such as happiness or anger. That way, one can focus only on the responses that express gratitude. Of course, not all humans are paragons of truth. Some just want to see the world burn and approve of language that harms others. The good news is that there is a statistical difference in how they use language. Humans are intelligent, and subterfuge is nothing new, but even dog whistling, saying something so that only a select few know the true meaning of the words, can be detected in, again, statistically relevant ways.
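As a rough illustration, here is what the first step of such a tutorial might look like, using the sentiment pipeline from the Hugging Face transformers library. The example replies are made up, and the library's default model only distinguishes positive from negative, so spotting gratitude specifically would require a more targeted model.

```python
# A minimal sketch of sentiment analysis with the Hugging Face
# transformers library; the default pipeline model classifies
# text as POSITIVE or NEGATIVE with a confidence score.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")

replies = [  # made-up examples of user feedback
    "Thank you, that answer was exactly what I needed!",
    "This response is useless and wrong.",
]

for reply, result in zip(replies, classifier(replies)):
    # Each result is a dict like {"label": "POSITIVE", "score": 0.99}.
    print(f"{result['label']:>8}  {result['score']:.2f}  {reply}")
```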

Today's artificial intelligence might show intelligent behavior, but it remains nothing more than algorithms and data applied to language. It doesn't include rational reasoning, among other things. It is only comparable to human intelligence if we assume that human intelligence is nothing more than algorithms and data. You know what? Let's assume that human intelligence is nothing more than algorithms and data.

Suppose human intelligence is nothing more than algorithms and data. In that case, the human brain is a marvel of efficiency compared to most household appliances. A brain draws around 12 to 20 watts of power, which is approximately 20% of the body's metabolic load. So if we do some unscientific math, our brain could consume at most 100 watts if our body put everything it has toward it. The downside is that no other organ would get the energy it requires.
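Here is that unscientific math spelled out. The 20 watts and 20% are the rough estimates from above, not precise measurements.

```python
# Back-of-the-envelope: if ~20 W is ~20% of the body's metabolic
# load, the whole body runs on roughly 100 W.
brain_watts = 20      # upper end of the 12-20 W estimate
brain_share = 0.20    # the brain's rough share of total metabolism

total_body_watts = brain_watts / brain_share
print(total_body_watts)  # 100.0 -> the brain's ceiling if every other organ were starved
```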

That power consumption is comparable to an average laptop or desktop computer, and both would be unable to deliver anything near what a human can perform. Even ChatGPT requires a massive data center to perform the computation that imitates intelligence.

From a biological view, it is not uncommon for a species to trade energy efficiency for capability. One theory about the origin of DNA is that it evolved from RNA, which viruses and other simple life forms still use. DNA was less efficient but could support more complex functions. Assuming this theory is true, one of the requirements for DNA was that there be plenty of food to sustain it.

AI's food consists of electrical power, and there is a severe shortage of it. Worse, in parts of the world, electricity is considered unreliable. So the only way for AI to become the dominant form of life is for it to become very efficient or for there to be enough power.

And even then, it needs mechanisms for reproduction (factories) and evolution (mutation, natural selection, and more). All of these are currently facilitated entirely by humans, not by the machines themselves. All of this means that humanity is holding all the keys.

As such, I find it hard to imagine that our AI overlords are around the corner. We may be seeing one of the very first steps, but until AI becomes self-sufficient, I see no reason to fear it more than humanity. And even then, it's trained on the best of humanity, but that may be part of our fear.