In Short:
– North Korea’s Kimsuky group used ChatGPT to create deepfake military IDs for a phishing campaign against defence organisations.
– Genians Security Center detected the attack in July 2025, raising concerns about AI misuse in national security.
North Korea’s Kimsuky hacking group has utilised ChatGPT to forge sophisticated deepfake South Korean military identification cards in a phishing campaign targeting defence organisations.
It marks a notable advancement in the regime’s cyber espionage tactics. South Korean cybersecurity firm Genians Security Center first detected the attack in July 2025.
Through metadata analysis, researchers confirmed that the fake IDs were generated using OpenAI’s GPT-4o model.
The hackers bypassed safeguards by requesting ID “mock-ups” rather than actual documents. Malicious emails impersonating official South Korean defence communications tricked recipients into downloading malware disguised as ID cards.
Exploitation Monitoring
North Korean actors have increasingly used AI for intelligence gathering and sanctions evasion. According to a recent Anthropic report, North Korean workers have exploited the Claude AI model to obtain fraudulent employment at technology firms.
Genians’ director highlighted that AI is instrumental in planning attacks, crafting malware, and impersonating recruiters, demonstrating its growing role in global intelligence operations.
The phishing campaign primarily targeted South Korean journalists and activists, and the sophistication of the deepfake technology raised serious concerns about AI misuse in national security contexts.
Genians urged the implementation of enhanced security measures to combat these AI-driven threats.