
AI security blunders have cyber professionals scrambling

Cyber professionals are stretched thin by an explosion of AI security incidents. The rapid integration of generative AI into businesses has shifted it from novelty to critical utility, significantly reshaping the cybersecurity landscape. According to a new report by Palo Alto Networks, AI traffic grew by more than 890% in 2024 alone. The primary uses of the technology are writing assistance, conversational agents, and enterprise search, with applications like ChatGPT and Microsoft 365 Copilot leading the way.

This technological surge, however, has brought heightened security exposure with it. Data loss prevention (DLP) incidents tied to generative AI more than doubled in early 2025 and now account for 14% of all data security incidents in SaaS traffic. Organizations run an average of around 66 generative AI applications, 10% of which are flagged as high-risk because their use is unsanctioned and robust AI policies are absent. This lack of visibility, compounded by shadow AI activity, makes monitoring harder for security teams and opens the door to unauthorized data access and manipulation.

Moreover, the proliferation of plugins, AI agents, and manipulated models fuels security concerns, as each introduces a potential 'side door' for malicious exploitation. Regulatory pressure adds to the burden: non-compliance with emerging AI laws could result in severe penalties. To counter these threats, firms need a more disciplined approach, such as zero-trust frameworks and condition-based access to generative AI tools, to protect sensitive information.
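Condition-based access in a zero-trust model means denying generative AI requests by default and allowing them only when every condition holds. A minimal sketch of that gating logic is below; the app names, risk tiers, and policy rules are hypothetical illustrations, not drawn from the Palo Alto Networks report.

```python
# Illustrative condition-based access gate for generative AI tools.
# All app names and policy values here are hypothetical examples.

SANCTIONED_APPS = {"ChatGPT Enterprise", "Microsoft 365 Copilot"}
HIGH_RISK_APPS = {"UnvettedChatTool"}  # hypothetical shadow-AI app
ALLOWED_ROLES = {"analyst", "engineer"}
SENSITIVE_LABELS = {"confidential", "restricted"}

def allow_genai_request(app: str, user_role: str, data_label: str) -> bool:
    """Zero-trust style check: deny by default, allow only when the app
    is sanctioned, the data is not sensitive, and the role is approved."""
    if app in HIGH_RISK_APPS or app not in SANCTIONED_APPS:
        return False  # unsanctioned or flagged high-risk: block
    if data_label in SENSITIVE_LABELS:
        return False  # sensitive data never leaves via genAI tools
    return user_role in ALLOWED_ROLES  # explicit allow-list of roles

print(allow_genai_request("Microsoft 365 Copilot", "analyst", "public"))  # True
print(allow_genai_request("UnvettedChatTool", "analyst", "public"))       # False
```

The key design choice is that every branch fails closed: an app, role, or data label the policy does not explicitly recognize results in a denial rather than an allowance.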

The explosive growth of generative AI, while driving genuine innovation, also amplifies the risks of data leakage and compliance failure. Organizations must recognize this double-edged sword and preemptively adopt strategies to guard against its inadvertent vulnerabilities.
