Microsoft Claims Nation-State Hackers Are Using OpenAI’s ChatGPT to Enhance Cyber Operations
A recent report from Microsoft Corp. reveals that nation-state hackers are incorporating artificial intelligence into their cyberattacks. The study identified Russian, North Korean, Iranian, and Chinese-backed adversaries who were observed utilizing advanced language models, such as OpenAI’s ChatGPT, during the early stages of their hacking activities. Researchers discovered that these groups were leveraging AI technology to enhance their phishing emails, gather intelligence on vulnerabilities, and address technical problems they encountered.
It is the biggest indication yet that state-sponsored cyberespionage groups that have haunted companies and governments for years are improving their tactics based on publicly available technologies such as large language models. Security experts have warned that such developments would help hackers gather more intelligence, increase their credibility when trying to trick targets and break into victim networks more quickly. OpenAI announced on Wednesday that it had terminated accounts linked to state-sponsored hackers.
“Threat actors, like defenders, are looking to AI, including LLMs, to improve their productivity and leverage accessible platforms that can advance their goals and attack techniques,” Microsoft said in the report.
According to the company, it has not observed any significant attacks that employed LLM technology. Policy researchers had warned as early as January 2023 that hackers and other bad actors online would find ways to misuse emerging AI technology, including to help write malicious code or spread influence operations.
Microsoft has invested $13 billion in OpenAI, the buzzy startup behind ChatGPT.
Hacking groups that have used AI in their cyber operations include Forest Blizzard, which Microsoft says is linked to the Russian government. North Korea’s Velvet Chollima group, which has posed as NGOs to spy on victims, and China’s Charcoal Typhoon hackers, who focus mainly on Taiwan and Thailand, have also used such technology, Microsoft said. An Iranian group linked to the country’s Islamic Revolutionary Guard Corps has exploited LLMs to craft fraudulent emails, one used to lure prominent feminists and another masquerading as a message from an international development organization.
Microsoft’s findings come amid growing concern among experts and the public about the serious risks artificial intelligence poses to the world, including disinformation and job losses. In March 2023, more than a thousand people, including leaders of major technology companies, signed an open letter warning of the risks AI poses to society; the letter has since gathered more than 33,000 signatures.
Another suspected Russian hacking group, Midnight Blizzard, has previously compromised the emails of Microsoft executives and cybersecurity staff, the company said in January.