Trained on text data, AI could change social scientific research, AI scientists say. (REUTERS)

AI Scientists Claim Text Data Training Could Revolutionize Social Science Research

According to an article by scientists from the University of Waterloo and the University of Toronto in Canada, together with Yale University and the University of Pennsylvania in the US, Artificial Intelligence (AI) may alter, or even replace, parts of social science research.

“In this paper, we wanted to explore how social science research practices can be adapted, even reinvented, to harness the power of artificial intelligence,” said Waterloo psychology professor Igor Grossmann.

Large language models (LLMs), such as ChatGPT and Google Bard, are increasingly able to simulate human-like responses and behavior because they have been trained on huge amounts of text data, according to the article, published in the journal Science.

They said this offered new opportunities to test theories and hypotheses about human behavior on a large scale and quickly.

They said that the goal of social science research is to obtain a generalized picture of the characteristics of individuals, groups and cultures, and their dynamics.

With advanced artificial intelligence systems, the scientists said, the landscape of data collection in the social sciences, which has traditionally relied on methods such as questionnaires, behavioral tests, observational studies and experiments, could change.

“AI models can represent a wide range of human experiences and perspectives, potentially giving them greater freedom to produce diverse responses than conventional human-participant methods, which may help reduce generalizability concerns in research,” Grossmann said.

“LLMs may be displacing human participants in data collection,” said Pennsylvania psychology professor Philip Tetlock.

“In fact, LLMs have already demonstrated their ability to produce realistic survey responses on consumer behavior.

“Large language models will revolutionize human-based prediction in the next three years,” Tetlock said.

Tetlock also said that in serious policy debates, it would no longer make sense for humans unassisted by artificial intelligence to venture probabilistic judgments.

“I give it a 90 percent chance. Of course, how people react to it all is another thing,” Tetlock said.

The researchers said studies using simulated participants could be used to generate new hypotheses that could then be validated in human populations, although opinion is divided on the feasibility of this application of artificial intelligence.

The scientists warned that LLMs are often trained to exclude the socio-cultural biases that exist in real people. This means that sociologists using AI in this way would be unable to study those biases, they said in the paper.

Researchers need to develop guidelines for managing LLMs in research, said Dawn Parker, co-author of the paper at the University of Waterloo.

“Pragmatic concerns about data quality, fairness and equity for effective AI systems are significant,” Parker said.

“So we need to ensure that social science LLMs, like all scientific models, are open source, meaning that their algorithms and ideally their data are available for anyone to explore, test and modify.

“Only by maintaining transparency and reproducibility can we ensure that AI-supported social science research truly contributes to our understanding of the human experience,” Parker said.
