Stanford Scientists Call for Increased Transparency in AI Development by Tech Companies
A report released by Stanford University researchers on Wednesday assessed the transparency of artificial intelligence foundation models developed by companies such as OpenAI and Google. The authors of the report called upon these companies to disclose additional details, including the data and human effort employed in training these models.
“Over the past three years, it’s become clear that transparency has been declining while capability has been going up,” said Stanford professor Percy Liang, the researcher behind the Foundation Model Transparency Index. “We view this as highly problematic because in other industries, like social media, we have seen that when transparency goes down, bad things can happen.”
Foundation models are artificial intelligence systems trained on massive data sets that can perform a wide range of tasks, from writing to coding. The companies developing foundation models are driving the rise of generative AI, which has captivated businesses of all sizes since the release of ChatGPT, the hit product from Microsoft-backed OpenAI.
In a world that increasingly relies on these models for decision-making and automation, understanding their limitations and biases is critical, the report’s authors say.
The index rated 10 popular models on 100 transparency indicators, such as disclosure of the training data and compute used. All of the models scored “unimpressively”: even the most transparent, Meta’s Llama 2, received a score of 53 out of 100. Amazon’s Titan model ranked lowest at 11 out of 100, while OpenAI’s GPT-4 received 47 out of 100.
The authors of the index hope the report will encourage companies to make their foundation models more transparent and will also serve as a starting point for governments grappling with how to regulate the fast-emerging field.
The index is a project of the Center for Research on Foundation Models, established by the Stanford Institute for Human-Centered Artificial Intelligence.