Research Suggests Artificial Intelligence Could Take Over Human Jobs
Artificial intelligence, and large language models (LLMs) in particular, may change the nature of work, according to researchers.
The study was published in the journal Science.
Grossmann and colleagues found that large language models trained on vast amounts of textual data are increasingly able to simulate human-like responses and behavior. This opens up new possibilities for testing theories and hypotheses about human behavior at an unprecedented scale and speed.
“In this paper, we wanted to explore how social science research practices can be adapted, even reinvented, to harness the power of artificial intelligence,” said Waterloo psychology professor Igor Grossmann.
Traditionally, the social sciences rely on methods such as questionnaires, behavioral tests, observational studies, and experiments. The common goal of social science research is to obtain a generalized representation of the characteristics of individuals, groups, and cultures, and of their dynamics. With advanced artificial intelligence systems, the landscape of social science data collection may change.
“AI models can represent a wide range of human experiences and perspectives, potentially giving them greater freedom to generate diverse responses than conventional human-participant methods, which could help reduce generalizability concerns in research,” Grossmann said.
“LLMs may displace human participants in data collection,” said UPenn psychology professor Philip Tetlock, adding, “In fact, LLMs have already demonstrated their ability to produce realistic survey responses about consumer behavior. Large language models will revolutionize human-based forecasting in the next three years. It won’t make sense for people unaided by artificial intelligence to venture probabilistic judgments in serious policy debates. I give that a 90 percent chance. Of course, how people react to all this is another matter.”
Although opinions differ on the feasibility of this application of advanced artificial intelligence systems, studies using simulated participants could generate new hypotheses that could then be validated in human populations.
But the researchers warn of some potential pitfalls of this approach, including the fact that LLMs are often trained to exclude the socio-cultural biases that exist in real people. This means that sociologists using AI in this way would be unable to study those biases.
Professor Dawn Parker, co-author of the paper from the University of Waterloo, notes that researchers need to develop guidelines for the management of LLMs in research.
“Pragmatic concerns about data quality, fairness, and equitable access to powerful AI systems will be substantial,” Parker said, adding, “So we need to ensure that LLMs used in social science research, like all scientific models, are open-source, meaning that their algorithms and, ideally, their data are available for everyone to scrutinize, test, and modify. Only by maintaining transparency and replicability can we ensure that AI-assisted social science research truly contributes to our understanding of the human experience.”