Nvidia Chief Executive Officer Jensen Huang met with employees in 2020 over risks posed by artificial intelligence. (Bloomberg)

Nvidia CEO Jensen Huang was warned by employees about the potential harm AI could cause to minority groups.

Masheika Allgood and Alexander Tsado left their 2020 meeting with Nvidia Corp. CEO Jensen Huang feeling disappointed. As former presidents of the company’s Black employees group, they had spent a year working with colleagues from various departments on a presentation meant to alert Huang to the risks that artificial intelligence (AI) technology could pose, particularly to minority groups.

The 22-slide deck and other documents reviewed by Bloomberg News pointed to Nvidia’s growing role in shaping the future of artificial intelligence, saying its chips would make AI ubiquitous, and warned that more regulation was inevitable. The discussion included examples of flawed facial recognition technologies the industry uses to power self-driving cars. Their goal, the pair told Bloomberg, was to find a way to deal with the potentially dangerous unintended consequences of artificial intelligence, consequences that are likely to be felt first by marginalized communities.

According to Allgood and Tsado, Huang did most of the talking during the meeting. They didn’t feel he was really listening to them and, more important, they came away without any sense that Nvidia would prioritize work to address bias in AI technology that might put underrepresented groups at risk.

Tsado, who worked as a product marketing manager, told Bloomberg News that he wanted Huang to understand that the problem needed to be addressed immediately: CEOs may have the luxury of waiting, but “I’m a member of underserved communities, so there’s nothing more important to me than this. We’re building these tools and I look at them and think, this isn’t going to work for me because I’m Black.”

Both Allgood and Tsado left the company shortly thereafter. Allgood said she left her job as a software product manager because Nvidia “wasn’t willing to lead in an area that was very important to me.” In a LinkedIn post, she called the meeting “the most devastating 45 minutes of my working life.”

Although Allgood and Tsado have left, the concerns they raised about making AI safe and inclusive still hang over the company and the AI industry in general. The chipmaker has one of the poorest records among major tech companies for Black and Hispanic representation in its workforce, and one of its generative artificial intelligence products has come under fire for failing to account for people of color.

In the meantime, the issues raised by Allgood and Tsado have gained prominence. While Nvidia declined to comment on the details of the meeting, the company said it “continues to devote enormous resources to ensuring that AI benefits everyone.”

“Achieving safe and trustworthy AI is a goal we’re working toward with the community,” Nvidia said in a statement. “It’s going to be a long journey with many conversations.”

One premise of the meeting is not controversial: Nvidia has become absolutely central to the explosive adoption of artificial intelligence systems. Sales of its chips, computers and related software have taken off, sending its shares on an unprecedented rally. It is now the world’s only chipmaker with a trillion-dollar market capitalization.

Once-niche computing is entering everyday life in the form of advanced chatbots, self-driving cars and image recognition. Artificial intelligence models, which analyze patterns in existing data to make predictions intended to mimic human intelligence, are being developed for use in everything from drug development and industrial design to advertising, the military and the security industry. The models are usually trained on massive datasets assembled by collecting information and imagery from the internet, and as the technology spreads, concern about the risks it poses has only grown.

As artificial intelligence evolves into a technology that reaches ever deeper into everyday life, some Silicon Valley workers aren’t embracing it with the same confidence they’ve shown in other advances. Huang and his peers are likely fielding constant appeals from employees who feel they need to be heard.

And while Silicon Valley figures like Elon Musk have expressed fears about AI’s potential threat to human existence, some underrepresented minorities say they face a much more immediate set of problems. They fear that without their participation in creating the software and services, self-driving cars might not stop for them, or security cameras might misidentify them.

“The whole point of bringing diversity to the workplace is that we need to bring our voices and help companies build tools that are better for all communities,” Allgood said. During the meeting, she said, she raised concerns that flawed facial recognition technologies used to power self-driving cars could pose greater threats to minorities. Huang responded that the company would limit the risk by testing the vehicles on highways rather than on city streets, she said.

The lack of diversity and its potential implications are particularly significant at Nvidia. Only one of a sample of 88 S&P 100 companies ranked below Nvidia in the percentage of Black and Hispanic employees in 2021, according to data compiled by Bloomberg from the U.S. Equal Employment Opportunity Commission. Four of the five companies ranking lowest for Black workers are chipmakers: Advanced Micro Devices Inc., Broadcom Inc., Qualcomm Inc. and Nvidia. Even by the standards of the tech industry, which has long been criticized for its lack of diversity, the numbers are small.

During the meeting, Allgood recalled, Huang said that the company’s diversity ensures that its AI products are ethical. At the time, only 1 percent of Nvidia’s employees were Black, a number that hadn’t changed since 2016, according to data compiled by Bloomberg. That compared with 5 percent at Intel Corp. and Microsoft Corp., 4 percent at Meta Platforms Inc. and a Black share of the U.S. population of 14 percent in 2020, the data showed. People with knowledge of the meeting, who asked not to be identified discussing its contents, said Huang was referring to diversity of thought, not specifically race.

A lot has happened since Allgood and Tsado met with the CEO, according to Nvidia. The company says it has done considerable work to make its AI-related products fair and safe for everyone. The AI models it delivers to clients come with warning labels, and it audits the underlying datasets to remove bias. It also aims to ensure that AI stays focused on its intended purpose after deployment.

In emails from March 2020 reviewed by Bloomberg, Huang gave permission to begin exploring some of Allgood’s proposals, but by then she had already given notice.

Shortly after Allgood and Tsado left Nvidia, the chipmaker hired Nikki Pope to lead its internal Trustworthy AI project. Co-author of a book on wrongful convictions and prison sentences, Pope is director of Nvidia’s AI & Legal Ethics Program.

Rivals Alphabet Inc.’s Google and Microsoft had established similar AI ethics groups a few years earlier. Google publicly announced its “AI Principles” in 2018 and has been providing updates on its progress. Microsoft’s AI ethics team counted 30 engineers, researchers and philosophers in 2020, some of whom it laid off this year.

Pope, who is Black, said she does not accept the argument that minorities must be directly involved in building AI models in order for the models to be unbiased. Nvidia examines the datasets the software is trained on, she said, and makes sure they are comprehensive enough.

“I’m satisfied that the models we offer our customers to use and adapt have been tested, and that the groups that will interact with those models are represented,” Pope said in an interview.

The company has created an open-source toolkit called NeMo Guardrails that helps chatbots filter unwanted content and stay on topic. Nvidia now publishes “model cards” with its AI models, which describe what a model does and how it was made, as well as its intended uses and limitations.
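To give a sense of how the toolkit is used, here is a minimal sketch of wrapping a chatbot with NeMo Guardrails. The Colang rule and the model configuration below are illustrative assumptions, not rails Nvidia ships by default:

```python
# A minimal, hypothetical NeMo Guardrails setup. The model entry and the
# Colang rule are illustrative examples, not Nvidia's shipped defaults.
from nemoguardrails import LLMRails, RailsConfig

yaml_content = """
models:
  - type: main
    engine: openai          # assumed backend; any supported engine works
    model: gpt-3.5-turbo
"""

colang_content = """
define user ask off topic
  "what do you think about politics?"

define bot refuse off topic
  "Sorry, I can only answer questions about our products."

define flow off topic
  user ask off topic
  bot refuse off topic
"""

# Build the rails from the inline configuration and wrap the chatbot.
config = RailsConfig.from_content(
    colang_content=colang_content, yaml_content=yaml_content
)
rails = LLMRails(config)

# Off-topic requests are intercepted by the rail instead of reaching the model.
response = rails.generate(
    messages=[{"role": "user", "content": "What do you think about politics?"}]
)
print(response["content"])
```

In this setup, the rail matches the user’s intent against the defined flows before the underlying model is allowed to answer, which is what lets the chatbot stay on topic.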

Nvidia also works with internal affinity groups to diversify its datasets and test models for bias before release. Pope said self-driving car models are now trained on image sets that include parents pushing strollers, people in wheelchairs and dark-skinned people.

Pope and colleague Liz Archibald, Nvidia’s director of corporate communications, who is also Black, said they once had a “tough meeting” with Huang about AI transparency and safety, in which they felt his questions put their work to the test.

“I think his ultimate goal was to press our arguments and examine the logic to help figure out how he could make it even better for the company as a whole,” Archibald said in an email.

Some researchers say minorities are so underrepresented in technology, and especially in artificial intelligence, that without their input algorithms are likely to have blind spots. A paper from New York University’s AI Now Institute linked the lack of representation in the AI workforce to bias in the resulting models, calling it a “diversity disaster.”

In 2020, Duke University researchers created software that converts blurry images into high-resolution ones using StyleGAN, Nvidia’s generative adversarial network, which was developed to produce fake but hyper-realistic-looking human faces and was trained on a dataset of images from the photo site Flickr. As users played with the tool, they found it struggled with low-resolution photos of people of color, including former President Barack Obama and Congresswoman Alexandria Ocasio-Cortez, inadvertently generating faces with lighter skin tones and eye colors. The researchers later said the bias likely stemmed from Nvidia’s model and updated their software.
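The idea behind such an upsampling tool can be sketched briefly: search the generator’s latent space for a face that, when downsampled, matches the blurry input. The code below is a rough illustration of that approach, with a generic pretrained `generator` standing in for StyleGAN; it is not the Duke team’s actual implementation:

```python
# Illustrative sketch of latent-space search for face upsampling.
# `generator` is an assumed stand-in for a pretrained StyleGAN generator
# (latent vector in, face image out).
import torch
import torch.nn.functional as F

def upscale_by_latent_search(lowres, generator, latent_dim=512,
                             steps=500, lr=0.05):
    # Start from a random latent vector and optimize it so that the
    # generated face, once downsampled, matches the blurry input.
    z = torch.randn(1, latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        face = generator(z)  # e.g. a 1x3x1024x1024 image tensor
        down = F.interpolate(face, size=lowres.shape[-2:],
                             mode="bilinear", align_corners=False)
        loss = F.mse_loss(down, lowres)  # distance to the blurry input
        loss.backward()
        opt.step()
    # The optimized latent yields a plausible, not unique, high-res face.
    return generator(z).detach()
```

Because every output is drawn from what the generator learned, any skew in its training photos is reproduced in the reconstructed faces, which is how the bias the users noticed can arise.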

Nvidia notes in its code repositories that its version of the dataset was collected from Flickr and inherits “all the biases of that website.” In 2022, it added that the dataset should not be used “to develop or improve facial recognition technologies.”

According to Pope, the criticized model has since been replaced by a newer one.

Nvidia joins a group of large companies where some minority workers have raised concerns that the new technology poses dangers, particularly to people of color. AI ethics researcher Timnit Gebru left Google after the company pressed her to retract a paper warning about the dangers of training AI models on massive datasets (Gebru said Google fired her; the company said she resigned). She has said that any method that relies on datasets “too large to document” is “inherently risky,” as reported by MIT Technology Review.

Gebru and Joy Buolamwini, founder of the Algorithmic Justice League, published a paper called “Gender Shades” showing that facial recognition technologies make more mistakes when identifying women and people of color. A growing body of research now supports their finding that the datasets behind AI models are biased and can harm minorities. International Business Machines Corp., Microsoft and Amazon.com Inc. have stopped selling facial recognition technology to police agencies.

Read more: People are biased. Generative AI is even worse

“If you look at the history of the tech industry, it’s not a sign of a serious commitment to diversity,” said Sarah Myers West, managing director of the AI Now Institute and co-author of the paper on the lack of diversity in the AI workforce. The industry has a long history of not taking minorities and their concerns seriously, she said.

Shelly Cerio, Nvidia’s head of human resources, told Bloomberg that while the company operated like a startup, worried about its survival, it hired primarily to fill immediate skills needs: as many college graduates as it could find. Now that it is bigger, Nvidia has made diversity a greater priority in its hiring process.

“Have we made progress? Yes,” she said. “Have we made enough progress? Absolutely not.”

The company has improved its hiring of Black employees since 2020: Black representation increased from 1.1 percent that year to 2.5 percent in 2021, the most recent year for which data is available. Asians are the company’s largest ethnic group, followed by white employees.

Pope said the company’s measures don’t “guarantee or eliminate” bias, but they provide rich data that can help address it. She said that at a fast-moving company that has released hundreds of models, scaling up processes to improve safety is one of the challenges of her role.

It will also be years before anyone knows whether this work is enough to keep AI systems safe in the real world. Self-driving cars, for example, are still rare on public roads.

A few weeks before leaving the company, Allgood wrote one last email to Huang, reflecting on her earlier career as a teacher. She wrote that when she took her students on field trips, she relied on parents and volunteers to help manage them, a recognition that no one, no matter how skilled, could handle a group of children in the wild alone.

“AI has moved permanently into the field trip phase,” the email read. “You need colleagues and structure to manage the chaos.”

–With assistance from Jeff Green.

More stories like this are available at bloomberg.com

©2023 Bloomberg L.P.
