AI companies, like social media firms, have no regulatory obligation to disclose their safety practices, leaving their processes behind closed doors. (Pixabay)

Understanding the methods employed by AI companies to combat deepfakes is crucial

The concerns surrounding artificial intelligence are not based solely on future possibilities but also on past experience, particularly the damage done by social media. Facebook and Twitter struggled for years to effectively address misinformation and hate speech, allowing both to spread worldwide. Now deepfakes are becoming a problem on these platforms too, and although Facebook bears responsibility for distributing harmful content, the AI companies whose tools create these deepfakes also have a role in cleaning up the mess. Regrettably, much like social media companies before them, they are doing this work privately and without transparency.

I contacted a dozen generative AI companies whose tools can produce photorealistic images, video, text and voices to ask how they make sure their users follow the rules. Ten responded, and all confirmed that they use software to monitor what their users produce; most said they also had people checking those systems. Hardly any agreed to reveal how many people were tasked with overseeing them.

And why should they? Unlike other industries such as pharmaceuticals, automobiles and food, AI companies are not required by law to disclose their safety practices. Like social media companies, they can be as secretive about that work as they want, and they likely will be for years to come. Europe’s upcoming AI Act has touted “transparency requirements,” but it is unclear whether it will force AI companies to have their safety practices audited the way automakers and food manufacturers must.

In those other industries, strict safety standards took decades to implement. But the world cannot afford to wait that long when AI tools are evolving so fast. Midjourney recently updated its software to create images so photorealistic that they show politicians’ pores and fine lines. At the start of a huge election year, with nearly half the world voting, this gaping regulatory vacuum means AI-generated content could have a devastating impact on democracy, women’s rights, the creative arts and more.

There are ways to start addressing the problem. One is to push AI companies to be more open about their safety practices, and that starts with asking questions. When I reached out to OpenAI, Microsoft, Midjourney and others, I kept the questions simple: how do you enforce your rules using software and people, and how many people are involved?

Most were willing to share several paragraphs describing their processes for preventing misuse (albeit in vague terms). OpenAI, for example, had two teams of people helping to retrain its AI models to make them safer or to respond to harmful output. The company behind the controversial image generator Stable Diffusion said it used safety filters to block images that violated its rules, and that human moderators reviewed the prompts and images that were flagged.
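To make that filter-plus-human-review setup concrete, here is a minimal Python sketch of how such a pipeline might be wired together: an automated scorer blocks obvious violations and routes borderline prompts into a queue for human moderators. The thresholds, keyword list and function names are hypothetical illustrations; none of the companies named here have published their actual implementations.

# Illustrative sketch only: a two-stage moderation pipeline in which software
# scores each prompt, blocks clear violations, and queues borderline cases for
# a human moderator. All names, terms and thresholds are hypothetical.
from dataclasses import dataclass, field
from typing import List

BLOCK_THRESHOLD = 0.9    # hypothetical: auto-reject at or above this score
REVIEW_THRESHOLD = 0.5   # hypothetical: send to a human at or above this score
BANNED_TERMS = ["deepfake of", "non-consensual"]  # stand-in for a trained classifier

@dataclass
class ReviewQueue:
    """Prompts waiting for a human moderator to approve or reject."""
    pending: List[str] = field(default_factory=list)

def risk_score(prompt: str) -> float:
    """Toy score based on keyword hits; a real system would use a model."""
    hits = sum(term in prompt.lower() for term in BANNED_TERMS)
    return min(1.0, 0.5 * hits)

def moderate(prompt: str, queue: ReviewQueue) -> str:
    """Return 'blocked', 'needs_review' or 'allowed' for a generation request."""
    score = risk_score(prompt)
    if score >= BLOCK_THRESHOLD:
        return "blocked"
    if score >= REVIEW_THRESHOLD:
        queue.pending.append(prompt)   # held until a person has looked at it
        return "needs_review"
    return "allowed"

if __name__ == "__main__":
    queue = ReviewQueue()
    for p in ["a watercolor landscape", "a deepfake of a politician conceding"]:
        print(p, "->", moderate(p, queue))
    print("queued for human review:", len(queue.pending))

The point of the sketch is the division of labour it implies: software handles the clear-cut cases at scale, while the ambiguous middle band is exactly where human moderators are needed.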

Only a few companies disclosed how many people were monitoring these systems. Think of those people as internal safety auditors. In social media they are known as content moderators, and they have played a challenging but critical role in vetting content that platform algorithms flag as racist, misogynistic or violent. Facebook has more than 15,000 moderators working to maintain the site’s integrity without stifling users’ freedoms. It is a delicate balance that humans do best.

Of course, thanks to their built-in safety filters, most AI tools do not surface the kind of toxic content that people post on Facebook. But they could still make themselves safer and more trustworthy by hiring more human overseers. In the absence of better software for catching harmful content, which has so far proven lacking, humans are the best stopgap.

Pornographic deepfakes of Taylor Swift and voice clones of US President Joe Biden and other international politicians have gone viral, to name just a few examples, underscoring that AI and tech companies are not investing enough in safety. Admittedly, hiring more people to help enforce their rules is like bringing more buckets of water to a house fire: it will not solve the whole problem, but it will make things better for a while.

“If you’re a startup building a tool with a generative AI component, hiring people at different stages of the development process is very smart and vital,” says Ben Whitelaw, founder of Everything in Moderation, a newsletter about online safety.

Several AI companies admitted they had only one or two human moderators. Runway, the AI video generation company, said its own researchers did the work. Descript, which makes a voice cloning tool called Overdub, said it checked only a sample of cloned voices to make sure they matched the consent statement read out by customers. A spokesperson for the startup argued that reviewing customers’ work would violate their privacy.

AI companies have unparalleled freedom to conduct their work in secret. But if they want to keep the trust of the public, regulators and civil society, it is in their interest to pull back the curtain and show exactly how they enforce their rules. Hiring more people would not be a bad idea either. Focusing too much on the race to make AI “smarter,” so that fake photos look more realistic, text reads more fluently and cloned voices sound more convincing, risks driving us deeper into a dangerous, confusing world. Better to beef up and disclose those safety standards now, before everything gets much harder.
