Better understanding and management required for the risks posed by artificial intelligence
New research has cautioned that although artificial intelligence (AI) has the potential to bring about positive changes in societies, it also carries certain risks that require improved understanding and control. Joe Burton, a professor from Lancaster University in the UK, argues that AI and algorithms go beyond being simple tools employed by national security agencies to counteract harmful online actions.
In a recent research paper published in the journal Technology in Society, Burton argues that artificial intelligence and algorithms can also fuel polarization, radicalism and political violence, becoming a threat to national security in their own right.
“Artificial intelligence is often framed as a tool to combat violent extremism. Here’s the other side of the conversation,” Burton said.
The paper examines how AI has been framed as a security issue throughout its history and in media and popular culture, and explores contemporary examples of AI whose polarizing, radicalizing effects have contributed to political violence.
The study cites the classic film series The Terminator, which depicted a holocaust caused by an “advanced and malignant” artificial intelligence, as having done more than any other work to raise public awareness of AI and of the fear that machine consciousness could have devastating consequences for humanity, in this case nuclear war and a deliberate attempt to destroy the species.
“This lack of trust in machines, the fears associated with them, and their connection to biological, nuclear and genetic threats to humanity have contributed to the desire of governments and national security agencies to influence the development of the technology, to reduce risk and to exploit its positive potential,” Burton said.
Advanced drones, such as those used in the war in Ukraine, are now capable of full autonomy, Burton says, including functions such as target identification and recognition.
While there has been extensive and influential campaigning at the UN to ban “killer robots” and to keep humans involved in life-or-death decision-making, the integration of weaponized drones has, he says, continued apace.
In cyber security – the security of computers and computer networks – artificial intelligence is already widely deployed, with (dis)information and online psychological warfare among the most common areas of use, Burton said.
He said that during the pandemic, AI was seen as a positive force for tracking the virus, but it also raised concerns about privacy and human rights.
The paper examines AI technology itself, arguing that there are problems with its design, the data it is based on, how it is used, and its outcomes and impacts.
“Artificial intelligence certainly has the potential to change societies in a positive way, but it also carries risks that need to be better understood and managed,” Burton added.