State-sponsored hackers are having a blast with LLMs — Microsoft and OpenAI warn new tactics could cause more damage than ever before


Hackers are increasingly turning to LLMs and AI tools to refine their tactics, techniques and procedures (TTP) in their campaigns, new reports have warned.

A new research paper released by Microsoft in collaboration with OpenAI has revealed how threat actors are using the latest technical innovations to keep defenders on their toes.

AI refines hackers' edge

State-backed hackers have been abusing LLMs' built-in language capabilities to refine their targeting of foreign adversaries and to appear more legitimate when conducting social engineering campaigns. This language processing allows them to establish seemingly legitimate professional relationships with their victims.

Microsoft also says it has observed hackers performing intelligence gathering by using LLMs to garner information about the industries their victims work in and the locations where they live, alongside learning more about their personal relationships.

In one example, Microsoft and OpenAI observed Forest Blizzard, a group linked to Russian GRU Unit 26165, using LLMs to gather highly specific information on how satellites operate and communicate. The group has also been observed using AI to refine its scripting abilities, most likely to automate or increase the efficiency of its technical operations.

North Korea-linked group Emerald Sleet has been observed using LLMs to learn how to exploit publicly reported critical software vulnerabilities, generate content for spearphishing campaigns, and identify organizations that gather information about North Korean nuclear and defense capabilities.

In all of these cases, Microsoft and OpenAI identified and disabled all the accounts used by these threat actors, with Microsoft stating, “AI technologies will continue to evolve and be studied by various threat actors. 

“Microsoft will continue to track threat actors and malicious activity misusing LLMs, and work with OpenAI and other partners to share intelligence, improve protections for customers and aid the broader security community.”


Benedict Collins
Senior Writer, Security

Benedict has been with TechRadar Pro for over two years, and has specialized in writing about cybersecurity, threat intelligence, and B2B security solutions. His coverage explores the critical areas of national security, including state-sponsored threat actors, APT groups, critical infrastructure, and social engineering.

Benedict holds an MA (Distinction) in Security, Intelligence, and Diplomacy from the Centre for Security and Intelligence Studies at the University of Buckingham, providing him with a strong academic foundation for his reporting on geopolitics, threat intelligence, and cyber-warfare.

Prior to his postgraduate studies, Benedict earned a BA in Politics with Journalism, providing him with the skills to translate complex political and security issues into comprehensible copy.