Changes to Twitter's content policies have led to a dramatic spike in hateful, violent and inaccurate posts. (REUTERS)

Advertisers' Return to Twitter Hindered by Rise in Harmful Content

Researchers have found that Elon Musk’s purchase of Twitter and his subsequent changes to its content policies have led to a significant increase in offensive, aggressive, and misleading posts on the platform. That trend has become the primary obstacle for Twitter’s newly appointed CEO, Linda Yaccarino, who must ease advertisers’ concerns about it in order to boost revenue and pay down the company’s debt.

Musk and Yaccarino have touted updates to the site’s policies, such as allowing advertisers to prevent their ads from appearing next to certain types of content. Still, ad sales have fallen by half since Musk took control of the company in October, he said this week. This is partly because companies do not believe that significant progress has been made in solving the problem.

“Musk is not keeping his promises to advertisers, and their ads are appearing next to some really harmful content,” said Callum Hood, director of research at the Center for Countering Digital Hate.

During Musk’s tenure, hate speech against minority communities has increased, according to the CCDH. The Anti-Defamation League has recorded a rise in reports of harassment and in extremist content, and Media Matters has documented an increase in Covid-19 misinformation.

Responding to the researchers’ reports, Twitter said that much of the offending content has since been assessed and addressed, in some cases by labeling, demoting or removing posts. According to the company, more than 99.99% of tweet impressions, or tweet views, are of content that does not violate Twitter’s rules.

After this story was published, Yaccarino tweeted, calling the researchers’ results “flawed, misleading and outdated.”

Twitter has made a number of changes to its approach to content safety under Musk, including loosening its rules, laying off trust and safety workers, reinstating accounts previously banned for violating the platform’s policies, and removing verification badges from high-profile accounts that declined to pay for the checkmark. In addition to driving away advertisers, these changes have alienated many users. According to a survey by Pew Research, one in four Twitter users said they are unlikely to stay on the platform next year.

Twitter’s Yaccarino, who started in June, has been pitching brands a “free speech, not reach” strategy, encouraging them to use new controls over the content their ads appear next to. Those controls are now used by more than 1,600 brands, says a person familiar with the matter, who asked not to be identified because they were sharing internal information. Yaccarino has also asked third parties for plans to improve brand safety. At the same time, Musk has said that the impact of hate speech has decreased.

Musk’s argument “doesn’t hold water,” said Hood, who noted that according to the CCDH, both hate speech and engagement with it have increased. In the first three months of Musk’s tenure, the number of daily tweets containing slurs against Black Americans more than tripled, the organization said, basing its research on data from the social media analysis tool Brandwatch. From October to March, the number of tweets containing slurs referring to the LGBTQ community increased by 119 percent. Online hate often leads to real harm: according to the ADL, reports of harassment on Twitter increased by 6 percent this year.

“Musk has repeatedly said that hate speech has decreased on the platform, but based on the data studies we’ve done, we haven’t seen that,” said Kayla Gogarty, deputy director of research at Media Matters. “We’ve seen the opposite.”

Twitter’s approach to managing hate speech focuses on limiting how often people see it, not on the amount of such content itself, the company told Bloomberg. According to Twitter, impressions of hateful content are on average 30 percent lower than before Musk’s acquisition. The company also said that the word “groomer” is not classified as a slur on its own under its policies, but that it violates the hateful conduct policy when used alongside language targeting a protected group.

Twitter users have also reported seeing violent and sexual content on the platform. Video of a mass shooting at a Texas mall earlier this year circulated openly on Twitter for hours before the company took action; so did a graphic video of a cat being put in a blender.

More than 30% of US adults who used Twitter between March and May reported seeing content they considered harmful, according to a survey by the USC Marshall Neely Social Media Index. That share was higher than for competitors Facebook, TikTok, Instagram and Snapchat. Many users reported seeing tweets that condoned or glorified violence against marginalized groups, or explicit videos that were easily accessible to minors.

Earlier this year, researchers at Stanford’s Internet Observatory found that Twitter had failed to remove dozens of images of child sexual abuse. The team identified 128 Twitter accounts selling child sexual abuse material and 43 instances of known CSAM. “It is very surprising that any known CSAM appears publicly on major social media platforms,” said David Thiel, the report’s lead author and the observatory’s chief technologist. Twitter responded after being contacted by the researchers. According to the company, Twitter removed 525% more accounts for child sexual abuse material this year than a year ago.

Twitter has been slow to detect and remove harmful content since Musk fired or pushed out nearly 75% of Twitter’s staff, including most of the trust and safety team responsible for handling content reports. On average, only 28% of anti-Semitic tweets reported by the ADL between December and January were removed or sanctioned. The team found the tweets by taking a 1% sample of all messages available through Twitter’s API, or application programming interface. Twitter has since restricted reported tweets found to violate its policies, the company said.

“Since Elon Musk took over Twitter, we’ve seen the platform go from one of the best trust and security departments in the industry to one of the worst,” said Nadim Nashif, director of the Arab Center for the Advancement of Social Media.

During Musk’s tenure, content from extremist political groups and misinformation about national politics have also increased. According to ADL research, QAnon-related hashtags were up 91% year-over-year in May, with most of those tweets posted in the previous six months. In the first six months under Musk, nearly a quarter of the most popular tweets related to Covid-19 contained unproven or untested claims about vaccines, according to research by the donor-funded Media Matters. In November, Twitter stopped enforcing its rules against Covid-19 misinformation. The challenge with the “free speech, not reach” policy is that “there’s no way to verify what’s actually been removed,” said Yael Eisenstat, director of the ADL’s Center for Technology and Society. At the same time, Musk himself has engaged with extreme voices, replying to anti-Semitic conspiracy theories and anti-trans posts and amplifying those messages to his roughly 148 million followers.

Independently assessing the safety of the Twitter platform has become more difficult over time. In February, Twitter began charging for access to its application programming interface, or API, which third-party applications and researchers use to analyze tweets. In the past, researchers could access millions of tweets for free; now they are charged thousands of dollars for a fraction of that volume. To study online hate speech since the 2016 presidential election, NYU’s Center for Social Media and Politics analyzed more than 750 million tweets. Today, the university could not afford such research.

“Now it costs $42,000 a month to get just 10 million tweets,” said Joshua Tucker, co-director of the NYU center. Researchers at Stanford, Berkeley, CCDH and the ADL also said they could no longer afford to use Twitter data. The CCDH is a US and UK nonprofit funded by charities and donations whose goal is to protect human rights online. The ADL is a New York-based nonprofit, also funded by donations, that says it fights all forms of extreme hate.

To reassure the public and advertisers, Twitter says it is partnering with independent companies Sprinklr, DoubleVerify and Integral Ad Science to evaluate content on its platform. According to Twitter, the company’s brand safety checks are now more than 99% effective at placing ads next to safe content.

But the damage may have already been done. Advertisers have said they left Twitter over concerns about harmful content, including Musk’s own posts. Businesses are generally working with smaller marketing budgets and have more options for where to spend them among digital platforms. The competition for their ads will soon become even more intense; for example, Meta Platforms Inc. plans to eventually introduce advertising on its Twitter clone, Threads.

Twitter, which continues to lose money, is trying to build business models that can supplement advertising. The company’s premium subscription, Twitter Blue, which costs $8 a month, has seen little uptake. This month, Twitter began paying a share of ad revenue to some Twitter Blue subscribers based on how much engagement their tweets receive. The payouts rewarded accounts that interact heavily with Musk himself.
