
Google says hackers abuse Gemini AI to empower their attacks

Google has revealed that state-sponsored hacking groups are exploiting its AI-powered Gemini assistant to enhance their operations. Google's Threat Intelligence Group (GTIG) observed that these threat actors employ Gemini more to boost productivity and conduct reconnaissance than to execute entirely new types of cyberattacks. The groups have used Gemini for a range of tasks, such as writing code, researching vulnerabilities, understanding unfamiliar technologies, and scoping potential targets, allowing them to streamline their preparation and conserve time and resources. Threat actors linked to Iran, China, North Korea, and Russia have been identified as significant abusers of the tool.

Iranian hackers have been particularly active, using Gemini for a broad range of activities, from reconnaissance on defense organizations to phishing campaign development. Chinese-backed threat actors, by contrast, focus on U.S. military and government entities, using Gemini for vulnerability research and for studying how to bypass security measures. North Korean groups have leveraged the tool across multiple stages of an attack, notably drafting fake employment documents to support their scheme of infiltrating Western companies under false identities. Russian hackers, though less engaged, use Gemini largely for script development and translation, possibly due to operational security concerns or a preference for domestic AI models.

Hackers attempting to exploit Gemini itself, whether through publicly known jailbreaks or by rephrasing prompts to bypass its security settings, have consistently failed, though such attempts underscore the ongoing challenge of securing AI platforms against misuse. The problem is not isolated to Gemini: OpenAI's ChatGPT has reportedly faced similar abuse, and numerous AI tools that lack robust guardrails remain worryingly easy to manipulate for malicious ends, as cybersecurity researchers have repeatedly demonstrated.
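For context on what "security settings" means at the API level, below is a minimal sketch using Google's public google-generativeai Python SDK. It shows the developer-tunable safety thresholds that sit alongside Google's non-configurable server-side policy filters (the layer jailbreak prompts target). The model name, prompt, and placeholder API key are illustrative assumptions, not details drawn from the GTIG report.

# Minimal sketch (illustrative only; details are assumptions, not from GTIG).
# The public Gemini SDK exposes per-category safety thresholds; a separate,
# non-configurable server-side policy layer is what jailbreak prompts try
# (and, per Google, consistently fail) to bypass.
import google.generativeai as genai
from google.generativeai.types import HarmCategory, HarmBlockThreshold

genai.configure(api_key="YOUR_API_KEY")  # placeholder credential

model = genai.GenerativeModel(
    "gemini-1.5-flash",  # assumed model name for illustration
    safety_settings={
        # Block content scored at or above "medium" risk in each category.
        HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE,
        HarmCategory.HARM_CATEGORY_HARASSMENT: HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE,
    },
)

response = model.generate_content("Explain how TLS certificate pinning works.")

# If the prompt tripped a safety filter, a block reason is set instead of text.
if response.prompt_feedback.block_reason:
    print("Blocked:", response.prompt_feedback.block_reason)
else:
    print(response.text)

Note that these developer settings only tighten or relax filtering within Google's allowed bounds; they cannot disable the underlying policy enforcement that the reported jailbreak attempts were aimed at.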

