AI has not yet handed everyone a universal cyber superpower for weaponizing vulnerabilities, but it is already helping technically unremarkable North Korean actors run efficient malware campaigns at scale. In a Wednesday report, Expel said the state-linked group HexagonalRodent used AI tools from OpenAI, Cursor, and Anima to automate almost the entire intrusion chain: writing malicious code, standing up phishing infrastructure, and building fake company websites. The operation installed credential-stealing malware on more than 2,000 computers used by cryptocurrency, NFT, and Web3 developers, then used fake job interviews and coding tests to steal wallet credentials. Investigators estimate the group stole as much as $12 million over three months.

Marcus Hutchins said the key point is not sophistication but accessibility: the operators lacked core coding and infrastructure skills, and AI let them do work they previously could not. Their operational security was weak. Prompts used in ChatGPT and Cursor leaked, and an exposed wallet-tracking database allowed Expel to estimate losses. The malware was heavily commented in English and even littered with emoji, a pattern Hutchins described as consistent with large-language-model output rather than the habits of seasoned North Korean developers. Although its behavior was conventional enough to be blocked by endpoint detection and response tools in many organizations, HexagonalRodent targeted large numbers of individual victims who lacked such defenses. Expel estimated that up to 31 hackers were involved, and because AI removes much of the traditional development barrier, the group appears to keep adding low-skill operators.

HexagonalRodent is only one node in a much larger North Korean cyber machine, often described as a state-backed criminal syndicate that funds the regime's nuclear programs and evades sanctions. In recent years, North Korean groups have applied AI to resumes, websites, exploit testing, fake ID documents, and interview deepfakes, accelerating employment fraud and identity spoofing; at the same time, Research Center 227, reportedly established under the Reconnaissance General Bureau, is institutionalizing AI-focused offensive tooling. OpenAI and Anthropic both say they have suspended suspected North Korean accounts; Anthropic also observed hackers planning to use Claude to enhance similar malware strains and build skills tests laced with malicious code. OpenAI argues that AI mainly adds speed and scale rather than novel capability, while Hutchins contends the real threat is practical AI-enabled operations today, not speculative, fully autonomous cyberattacks in some imagined future.

2026-04-24 (Friday) · cc15161099c2694150b7f3e0d13ee7f68eda1db1