
GPT-3 Artificial Intelligence Tool Demonstrates Reasoning Abilities Comparable to College Undergraduates

Scientists have discovered that GPT-3, the widely used AI tool, possesses reasoning abilities comparable to those of college undergraduate students.

The artificial-intelligence large language model (LLM) was asked to solve reasoning problems typical of intelligence tests and of standardized exams such as the SAT, which colleges and universities in the United States and other countries use to make admissions decisions.

Researchers at the University of California, Los Angeles (UCLA) asked GPT-3 to predict the next shape that followed a complex arrangement of shapes. They also asked the AI to answer SAT analogy questions while making sure the AI had never encountered these questions before.

They also asked 40 UCLA undergraduates to solve the same problems.

In the shape-prediction test, GPT-3 solved 80 percent of the problems correctly, well above the human average of just below 60 percent and within the range of the highest human scores.

“Surprisingly, GPT-3 not only performed as well as humans, but also made similar mistakes,” said UCLA psychology professor Hongjing Lu, senior author of the study published in the journal Nature Human Behaviour.

In solving SAT analogies, the AI tool was found to outperform the average human score. Analogical reasoning means solving unfamiliar problems by comparing them to familiar ones and extending those solutions to the new cases.

In the questions, test takers were asked to choose pairs of words that have the same type of relationship. For example, in the problem “‘Love’ is to ‘hate’ as ‘rich’ is to what word?”, the solution would be “poor”.

However, the AI did worse than the students at solving analogies based on short stories. These problems involved reading one passage and then identifying another story that conveyed the same meaning.

“Language learning models only try to predict words, so we’re surprised that they can reason,” Lu said. “In the past two years, the technology has taken a big leap from its previous incarnations.”

Without access to the inner workings of GPT-3, which are guarded by its creator OpenAI, the researchers said they were not sure how its reasoning abilities work — whether LLMs are actually beginning to “think” like humans, or are doing something entirely different that merely imitates human thought.

They said they hope to look into this.

“GPT-3 might be thinking like a human in a way. But on the other hand, people didn’t learn by ingesting the whole internet, so the training method is completely different.

“We’d like to know if it really does what humans do, or if it’s something entirely new — true artificial intelligence — which would be amazing in itself,” said UCLA psychology professor Keith Holyoak, a co-author of the study.
