The use of artificial intelligence (AI) for content moderation and filtering poses a serious threat to freedom of expression.

What needs to be done about the threat AI poses to free speech

The focus of news articles on artificial intelligence (AI) threats often revolves around the notion of killer robots and the potential loss of human jobs. However, one significant risk that receives minimal media coverage is the potential impact of these emerging technologies on freedom of expression.

Specifically, these technologies can undermine some of the most fundamental legal principles protecting free speech.

Every time a new communication technology sweeps through society, it disrupts the established balance between social stability and individual freedom.

This is what we are living through now. Social media has enabled new forms of community, surveillance, and publicity, contributing to increased political polarization, the rise of global populism, and an epidemic of online bullying and harassment.

In the midst of all this, freedom of speech has become a totemic issue in the culture wars, and its position is both strengthened and threatened by the social forces unleashed by social media platforms.

However, free speech debates tend to fixate on arguments over “cancel culture” and “woke” mindsets. This can obscure the impact of technology on the actual operation of free speech laws.

Specifically, AI enables governments and tech companies to censor expression more easily, at greater scale, and at greater speed. This is a serious question that I explore in my new book, The Future of Language.

The delicate balance of freedom of speech

Some of the most important protections for free speech in liberal democracies such as the UK and the US rest on procedural technicalities in how the law responds to the real-life actions of everyday citizens.

A central part of the current system rests on the fact that we, as autonomous individuals, have a unique ability to turn our thoughts into words and communicate them to others. This may seem like a rather insignificant point, but the way the law currently works is built on this simple assumption about human social behavior, and AI threatens to undermine it.

Free speech protections in many liberal societies oppose the use of “prior restraint”—that is, preventing speech before it is expressed.

For example, the government should not be able to prevent a newspaper from publishing a certain story, although it can prosecute the paper after publication if it believes the story violates some law. Prior restraint is already widespread in China, for example, where attitudes to regulating expression are very different.

This is significant because despite what tech libertarians like Elon Musk may claim, no society in the world allows absolute freedom of speech. There is always a balance to be found between protecting people from the real harm that language can cause (for example by defaming them) and safeguarding people’s right to express dissenting opinions and criticize those in power. Finding the right balance between these is one of society’s most challenging decisions.

Artificial intelligence and prior restraint

Because so much of our communication today is mediated by technology, it is now very easy to use AI to intervene before publication, at high speed and at massive scale. This creates conditions in which the fundamental human ability to turn ideas into speech can be compromised at the whim of a government (or a social media platform).

For example, the recent online safety law in the UK, and plans in the US and Europe to use “upload filtering” (algorithmic tools that prevent certain content from being uploaded) as a way to screen offensive or illegal messages, all encourage social media platforms to use AI to censor content at the source.

The rationale given for this is practical. With a huge amount of content being uploaded every minute, human moderation teams cannot keep track of everything. AI is a faster and much cheaper option.

But it is also automated, unable to bring real-world experience to its judgments, and its decisions are rarely subject to public scrutiny. As a result, AI-based filters often end up censoring content that is neither illegal nor offensive.
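To see why automated filtering over-blocks, consider a deliberately naive sketch of a pre-publication filter. This is a hypothetical illustration, not any platform's actual system: real filters use machine-learning classifiers rather than blocklists, but they exhibit the same failure mode of sweeping up legitimate speech along with the targeted content.

```python
# A deliberately naive "upload filter": block any post containing a
# blocklisted substring. All names and the blocklist are hypothetical.

BLOCKLIST = {"attack", "kill"}

def is_blocked(post: str) -> bool:
    """Return True if the post contains any blocklisted substring."""
    text = post.lower()
    return any(word in text for word in BLOCKLIST)

# The intended target is blocked...
print(is_blocked("I will attack you"))                 # True
# ...but so is legitimate speech that merely contains the substring:
print(is_blocked("New research on heart attack risk"))  # True (medical news)
print(is_blocked("Great skill on display tonight"))     # True ("kill" inside "skill")
```

The false positives here are crude, but more sophisticated classifiers make the same category of error, just less visibly, and without a human decision that can be appealed or publicly scrutinized.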

Freedom of speech as we understand it today is based on specific legal protection processes that have evolved over centuries. It is not an abstract idea, but is based on very specific social and legal practices.

Legislation that encourages content regulation through automation effectively dismisses these processes as mere technicalities. In doing so, it jeopardizes the entire institution of free speech.

Freedom of speech is an idea sustained by continuous discussion. There is no fixed formula for defining what should and should not be banned. For this reason, the line between acceptable and unacceptable speech must be drawn in the open, and it must remain possible to contest it.

There are signs that some governments are beginning to acknowledge this as they plan for the future of AI, but it must be central to any such plans.

Whatever role AI plays in helping to police online content, it must not limit our ability to argue with each other about the kind of society we are trying to create.
