Microsoft has released a threat intelligence report revealing that hackers are increasingly using artificial intelligence in their operations, from reconnaissance to post-compromise activity, making large-scale cyberattacks more accessible.
The company described AI as a “force multiplier” that reduces technical barriers and accelerates execution, lowering the skill threshold needed to mount large-scale cyberattacks.
North Korean threat groups Jasper Sleet and Coral Sleet are using generative AI to power elaborate fake employment schemes targeting Western companies. Jasper Sleet actors use AI tools to generate culturally appropriate name lists, tailor fake resumes to specific job postings, and craft professional communications to sustain long-term employment once hired.
Microsoft observed Jasper Sleet using the AI application Faceswap to insert North Korean IT workers’ faces into stolen identity documents and create polished headshots for resumes. The group also deployed voice-altering technology during virtual interviews to disguise accents, enabling operatives to pose as Western candidates.
“Jasper Sleet employs AI throughout the entire attack process to secure employment, maintain employment, and exploit access on a large scale,” Microsoft stated. Coral Sleet used AI coding tools to generate and refine malware components, create fake company websites, provision remote infrastructure, and rapidly test payloads.
The group jailbroke large language models to generate malicious code that bypasses built-in safety controls, according to Microsoft. The company also noted early experimentation by threat actors with agentic AI, in which models support iterative decision-making and task execution, though this has not yet been observed at scale.
Other major tech companies have also reported on the use of AI by threat actors. Google reported in February that its Threat Intelligence Group observed threat actors using AI to gather information, create phishing campaigns, and develop malware.
Amazon documented a campaign in which a Russian-speaking hacker used generative AI services to breach more than 600 FortiGate firewalls across 55 countries in five weeks, demonstrating how AI enabled an attacker with limited skills to operate at a scale previously requiring a larger, more capable team.
Microsoft advised organizations to treat AI-powered IT worker schemes as insider risks and to focus on detecting abnormal credential use, hardening identity systems against phishing, and securing AI systems that may themselves become targets.