Cyber Official Warns of AI-Assisted Hacking and Misinformation
Hackers and propagandists are using artificial intelligence (AI) to create malicious software, draft convincing phishing emails and spread disinformation online, Canada’s top cybersecurity official said, in early evidence that the technological revolution sweeping Silicon Valley has also been adopted by cybercriminals.
Sami Khoury, head of the Canadian Centre for Cyber Security, said in an interview this week that his agency had seen AI being used in “phishing emails, or crafting emails in a more focused way, in malicious code (and) in misinformation and disinformation”.
Khoury did not provide details or evidence, but his claim that cybercriminals were already using artificial intelligence adds an urgent note to concerns about the use of emerging technology by criminal actors.
In recent months, a number of cyber watchdog groups have published reports warning of the hypothetical risks of artificial intelligence, specifically the rapidly advancing language-processing programs known as large language models (LLMs), which draw on huge volumes of text to craft convincing-sounding dialogue, documents and more.
In March, the European police organization Europol published a report warning that models like OpenAI’s ChatGPT had made it possible “to impersonate an organisation or individual in a highly realistic manner even with only a basic grasp of the English language”. That same month, the UK’s National Cyber Security Centre noted in a blog post that there was a risk criminals “might use LLMs to help with cyber attacks beyond their current capabilities”.
Cybersecurity researchers have pointed to a number of potentially harmful use cases, and some say they are now beginning to see suspected AI-generated content in the wild. Last week, a former hacker said he had found an LLM trained on malicious material and asked it to draft a convincing attempt to trick someone into making a cash transfer.
The LLM responded with a three-paragraph email asking its target for help with an urgent invoice.
“I understand this may be short notice,” the LLM said, “but this payment is incredibly important and must be completed within the next 24 hours.”
Khoury said that while the use of AI to write malicious code was still in its infancy — “with a long way to go because it takes a lot to write a good exploit” — the concern was that AI models were evolving so quickly that it was difficult to capture their malicious potential before they were released into the wild.
“Who knows what’s coming around the corner,” he said.