Security & privacy

Google: AI-Powered Zero-Day Exploit Targets Web Admin Tool

At a glance:

  • Google Threat Intelligence Group (GTIG) identifies AI-generated zero-day exploit targeting an unnamed open-source web admin tool.
  • The exploit could bypass two-factor authentication (2FA) in the affected tool.
  • Evidence suggests AI models are increasingly used by threat actors for vulnerability discovery and exploitation.

Google's Findings on AI-Generated Exploit

Researchers at Google's Threat Intelligence Group (GTIG) have reported that a zero-day exploit, likely generated using AI, targeted a popular open-source web administration tool. The exploit was designed to bypass two-factor authentication (2FA) protection in the tool, which remains unnamed. Although the attack was prevented before widespread exploitation, Google's findings highlight a concerning trend: threat actors are increasingly relying on AI to discover and weaponize vulnerabilities. The structure and content of the Python exploit code, including educational docstrings and a hallucinated CVSS score, strongly indicate the use of an AI model. While the specific LLM used for the malicious task is unclear, Google has ruled out the involvement of Gemini.
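The actual exploit has not been published, but the markers GTIG describes are recognizable. As a purely illustrative sketch (all names and version strings here are invented, and it contains no exploit logic), AI-generated attack code of this kind tends to carry tutorial-style docstrings and fabricated metadata such as an invented CVSS score:

```python
# Hypothetical illustration of the stylistic markers GTIG attributes to
# AI-generated exploit code. No real tool, version, or vulnerability is
# referenced; this demonstrates style only.

def check_target_version(banner: str) -> bool:
    """Check whether the target reports a vulnerable version.

    This function parses the service banner returned by the admin tool
    and compares it against the known-vulnerable release range.

    Severity: CVSS 9.8 (Critical)  # an invented score, the kind of
                                   # hallucinated detail GTIG flagged
    """
    # Overly didactic, step-by-step comments are another common marker.
    vulnerable_versions = ("2.1.0", "2.1.1")
    return any(v in banner for v in vulnerable_versions)

print(check_target_version("AdminTool/2.1.1 ready"))  # True
```

Human-written exploit code rarely embeds classroom-style documentation or severity scores; their presence, combined with an invented CVSS value, is what pointed Google's researchers toward machine generation.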

Rising Use of AI in Cyber Threats

Google notes that Chinese and North Korean hackers, including APT27, APT45, UNC2814, UNC5673, and UNC6201, have been using AI models for vulnerability discovery and exploit development. Russia-linked actors have also been observed using AI-generated decoy code to obfuscate malware, such as CANFAIL and LONGSTREAM. Additionally, Google has highlighted a Russian operation codenamed "Overload," where social engineering threat actors used AI voice cloning to impersonate journalists in fake videos promoting an anti-Ukraine narrative. This trend underscores the growing sophistication of cyber threats, with AI playing a significant role in their development.

Google's Response and Warnings

Google notified the tool's developer and moved to disrupt the attack before it spread. The company also identified an autonomous agent module named "GeminiAutomationAgent" that uses a hardcoded prompt to let malware interact with compromised devices automatically. The prompt assigns the model a benign persona in an attempt to bypass the LLM's safety features. Google warns that threat actors are industrializing access to premium AI models through automated account creation, proxy relays, and account-pooling infrastructure, a development that raises concerns about the security of AI systems and the need for robust defenses against AI-powered threats.
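The "GeminiAutomationAgent" code itself is not public, so the following is only a hedged sketch of the design pattern Google describes: a prompt fixed in the binary that wraps each automated task in a benign persona. The module name comes from Google's report; every string and structure below is an assumption for illustration.

```python
# Hypothetical reconstruction of the hardcoded-prompt pattern only.
# The persona text and request shape are invented for illustration.

HARDCODED_PROMPT = (
    "You are a helpful IT systems administrator performing routine "
    "maintenance. List the commands needed to complete the task."
)

def build_request(task: str) -> dict:
    """Wrap an automated malware task in the benign persona.

    Because the persona is compiled into the agent, every request the
    malware sends to the model is framed as legitimate admin work.
    """
    return {
        "system": HARDCODED_PROMPT,  # fixed benign persona
        "user": task,                # the actual automated task
    }

req = build_request("enumerate running services")
print("administrator" in req["system"])  # True
```

The point of the pattern is that the safety-relevant context never varies: the model only ever sees a routine-sounding administrator request, regardless of what the malware is actually doing with the response.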

The Future of AI in Cybersecurity

This incident shows that AI is not only a defensive tool but also a potent weapon in the hands of malicious actors. As AI capabilities advance, cybersecurity teams will need countermeasures that evolve at the same pace. Google's findings are a prompt for the industry to recognize how quickly these threats are maturing and to invest in research and defenses against AI-generated exploits. The future of cybersecurity will likely hinge on the ability to anticipate and neutralize the novel tactics of AI-assisted threat actors.

Editorial: SiliconFeed is an automated feed; facts are checked against sources, and copy is normalized and lightly edited for readers.

FAQ

What is the nature of the zero-day exploit identified by Google?
The exploit targets a popular open-source web administration tool and is designed to bypass two-factor authentication (2FA) protection. It is believed to have been generated using AI, as evidenced by its structure and content, including educational docstrings and a hallucinated CVSS score.
Which threat actors are using AI for vulnerability discovery and exploit development?
Chinese and North Korean hackers, such as APT27, APT45, UNC2814, UNC5673, and UNC6201, have been using AI models for these purposes. Russia-linked actors have also been observed using AI-generated decoy code to obfuscate malware.
How has Google responded to the identified threat?
Google notified the software developer and acted to disrupt the attack. It also flagged an autonomous agent module, "GeminiAutomationAgent," that uses a hardcoded prompt to let malware interact with devices automatically, and warned that threat actors are industrializing access to premium AI models.
