Cybersecurity Official Warns AI Is Being Used for Hacking and Spreading Disinformation

According to Canada’s top cybersecurity official, hackers and propagandists are using artificial intelligence (AI) to develop harmful software, craft persuasive phishing emails, and spread false information online — a sign that cybercriminals have embraced the same wave of technology sweeping Silicon Valley.

Sami Khoury, head of the Canadian Centre for Cyber Security, said in an interview this week that his agency had seen AI used in “phishing emails or more targeted emails, malicious code (and) falsehood and disinformation”.

Khoury did not provide details or evidence, but his claim that cybercriminals were already using artificial intelligence adds an urgent note to concerns about the use of emerging technology by criminal actors.

In recent months, a number of cyber watchdog groups have released reports warning of hypothetical risks from artificial intelligence — specifically, rapidly evolving language processing programs known as large language models (LLMs), which utilize vast amounts of text to create convincing-sounding dialogue, documents and more.

In March, the European police organization Europol released a report saying that models like OpenAI’s ChatGPT have made it possible to “impersonate an organization or individual in a very realistic way with even a basic understanding of the English language.” In the same month, the UK’s National Cyber Security Centre noted in a blog post that there is a risk that criminals “may use LLMs to assist in cyber attacks beyond their current capabilities”.

Cybersecurity researchers have pointed to several potentially harmful use cases, and some say they are now starting to see AI-generated content in the wild. Last week, a former hacker said he had found an LLM trained on malicious material and asked it to draft a convincing attempt to trick someone into making a cash transfer.

The LLM responded with a three-paragraph email asking its target for help with an urgent invoice.

“I understand this may be short notice,” the LLM wrote, “but this payment is incredibly important and must be completed within the next 24 hours.”

Khoury said that while the use of AI to write malicious code was still in its infancy — “there’s still a way to go because it takes a lot to write a good exploit” — the concern was that AI models were evolving so quickly that it was difficult to get a handle on their malicious potential before they were released into the wild.

“Who knows what’s coming around the corner,” he said.
