OpenAI CEO aims to secure massive investment for AI computer chip production

AI systems like ChatGPT are not genuinely intelligent, experts say

Sam Altman, CEO of OpenAI, is reportedly seeking up to US$7 trillion in investment to produce the vast quantities of computer chips needed to run AI systems.

Altman also recently said that the world will need more energy in the AI-saturated future he envisions — so much more that some kind of technological breakthrough like nuclear fusion might be necessary.

Altman clearly has big plans for his company’s technology, but is the future of artificial intelligence really that rosy? As a long-time “artificial intelligence” researcher, I have my doubts.

Today’s AI systems – especially generative AI tools like ChatGPT – are not truly intelligent. Nor is there any evidence that they can become intelligent without fundamental changes to the way they work.

What is AI?

One definition of artificial intelligence is a computer system that can “perform tasks commonly associated with intelligent beings.”

This definition, like many others, is somewhat ambiguous: should we call spreadsheets AI because they can perform calculations that would once have been a high-level human task? What about the factory robots that have not only replaced humans, but in many cases surpassed us in their ability to perform complex and delicate tasks?

While spreadsheets and robots can indeed do things that used to be the domain of humans, they do so by following an algorithm—a process or set of rules for approaching and completing a task.
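To make this concrete, here is a deliberately simple sketch (written in Python purely for illustration, not taken from any real product) of what “following an algorithm” means: a fixed sequence of rules applied to inputs, with no judgement involved.

```python
# Illustrative only: an "algorithm" is a fixed set of rules applied mechanically to inputs.
# This toy example mimics a spreadsheet-style task: compounding interest over a number of years.

def compound_interest(principal: float, annual_rate: float, years: int) -> float:
    """Apply the same rule once per year and return the final balance."""
    balance = principal
    for _ in range(years):
        balance = balance * (1 + annual_rate)  # the rule, applied without any understanding
    return balance

print(round(compound_interest(1000.0, 0.05, 10), 2))  # -> 1628.89
```

The program gets the right answer every time, but it no more “understands” money than a pocket calculator does.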

One thing we can say is that there is no such thing as “AI” in the sense of a single system that can perform a range of intelligent functions the way a human does. Rather, there are many different AI technologies that can do quite different things.

Making decisions vs producing results

Perhaps the most important difference is between “discriminative AI” and “generative AI”.

Discriminative AI helps with decision-making, such as whether a bank should give a small business a loan, or whether a doctor should diagnose a patient with disease X or disease Y. AI technologies of this kind have been around for decades, and bigger and better ones pop up all the time.

On the other hand, generative AI systems – ChatGPT, Midjourney, and their relatives – produce outputs in response to inputs: in other words, they make things up. Essentially, they are exposed to billions of data points (such as sentences) and use these to guess a likely response to a prompt. The response may often be “true” to the source data, but there are no guarantees.
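The following toy sketch (a word-pair counter in Python, nothing like the scale or architecture of ChatGPT) illustrates the underlying principle: count what tends to follow what in the training data, then offer the statistically likely continuation.

```python
# Toy illustration of "guess the likely continuation": count which word follows which
# in a tiny corpus, then predict the most frequent follower. Real generative systems use
# transformer networks trained on billions of examples, but the principle is similar.
from collections import Counter, defaultdict

corpus = [
    "the cat sat on the mat",
    "the cat chased the mouse",
    "the dog sat on the rug",
]

next_word_counts = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for current, following in zip(words, words[1:]):
        next_word_counts[current][following] += 1

def continue_text(prompt_word: str) -> str:
    """Return the most frequently observed next word, with no notion of truth."""
    counts = next_word_counts[prompt_word]
    return counts.most_common(1)[0][0] if counts else "<unknown>"

print(continue_text("the"))  # -> "cat", simply because "the cat" is most common in the data
```

Notice that the model can only echo patterns in its data; it has no way of knowing whether the continuation it offers is true.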

For a generative AI system, there is no difference between a “hallucination” – a false response invented by the system – and a response a human would judge to be true. This appears to be an inherent flaw of the technology, which uses a type of neural network called a transformer.

AI, but not intelligent

Another example shows how the “AI” goalposts are constantly moving. In the 1980s I worked on a computer system designed to provide expert medical advice on laboratory results. It was recorded in the US research literature as one of the first four medical “expert systems” in clinical use, and in 1986 an Australian government report described it as the most successful expert system developed in Australia.

I was quite proud of this. It was a landmark in artificial intelligence, performing a task that normally required highly trained medical specialists. However, the system was not smart at all. It was really just a lookup table of sorts that matched lab test results to high-level diagnostics and patient management.
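To show what I mean by a lookup table, here is a tiny Python sketch of that style of system. The rule thresholds and wording below are hypothetical, invented for illustration; they are not the rules of the actual system I worked on.

```python
# Hypothetical sketch of a rule-based "expert system": hand-written rules map a lab
# result to canned advice. The thresholds and text are invented for illustration only.

def interpret_tsh_result(tsh: float) -> str:
    """Match a (hypothetical) TSH lab value against fixed rules and return advice."""
    if tsh < 0.4:
        return "TSH low: consistent with hyperthyroidism; consider a free T4 test."
    if tsh <= 4.0:
        return "TSH within the reference range: no further thyroid testing indicated."
    return "TSH high: consistent with hypothyroidism; consider a free T4 test."

print(interpret_tsh_result(7.2))  # -> the "high" rule fires and prints its advice
```

Every answer such a system gives was, in effect, written in advance by a human expert; the computer merely selects which piece of advice applies.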

Today, there is technology that makes it very easy to build such systems, so there are thousands of them in use around the world. (This technology, based on research by my colleagues and me, is provided by an Australian company called Beamtree.)

Because they do the tasks of highly trained experts, such systems certainly count as “AI”, but they are still not intelligent at all (although the more complex ones may use many thousands of rules to find their answers).

The transformer networks used in generative AI systems still operate according to rules, even though there may be millions or billions of them and they cannot easily be explained in human terms.

What is true intelligence?

If algorithms can produce dazzling results like ChatGPT without being intelligent, what is real intelligence?

We could say that intelligence is insight: the ability to judge that something is or is not a good idea. Think of Archimedes, who leapt from his bath shouting “Eureka” because he had an insight into the principle of buoyancy.

Generative AI has no insight. ChatGPT cannot tell whether its answer to a question is better than Gemini’s. (Gemini, until recently known as Bard, is Google’s competitor to OpenAI’s GPT family of tools.)

Or to put it another way: a generative AI might produce amazing Monet-esque paintings, but if it had been trained only on Renaissance art, it would never invent Impressionism.

Generative AI is exceptional, and humans will undoubtedly find widespread and highly valuable uses for it. It already provides very useful tools for transforming and representing (but not discovering) data, and tools for turning definitions into code are already in routine use.

These are getting better and better: for example, Google’s just-released Gemini seems to try to minimize the hallucination problem by using search and then re-expressing the search results.

However, as we get to know generative AI better, we see more clearly that it is not really intelligent; there is no insight. It’s not magic, but a very clever magician’s trick: an algorithm that is the result of extraordinary human ingenuity.
