
AI for security and the security of AI

Jack Chapman, SVP of Threat Intelligence at KnowBe4, speaking at the ITWeb Security Summit 2025.

Cyber criminals are now exploiting weaknesses in the AI embedded into some security technologies to breach organisations, leaving them more exposed than ever, while falsely believing they’re protected.

This is according to Jack Chapman, SVP of Threat Intelligence at KnowBe4, who was speaking at the ITWeb Security Summit in Sandton this week.

Chapman said while AI can be a force for good in cyber security, there are also myriad new risks associated with it.

“Among the positives are that AI is not reliant on static rules to enable dynamic security; it can find threats and trends that humans would have missed; it can free up resources for other objectives; and it enables us to be more proactive,” he said.

However, he noted that AI brings its own risks: the ‘garbage in, garbage out’ rule applies exponentially to AI, so companies must carry out regular testing to evaluate model drift and robustness. “The bad news for security professionals is we also have to become experts in yet another thing – AI,” he said.

Chapman cited research carried out over the past year, which indicated that 81.6% of phishing attacks use AI, a 53.5% year-on-year increase. Attackers are also using polymorphic techniques, with AI speeding up the process, he said.

“It’s all about speed from the criminal ecosystem point of view, and AI increases the speed and sophistication of attacks, as well as enabling ‘as a service’ business models in the cyber crime ecosystem,” he said. “Open source intelligence – OSINT – has been a game-changer for attackers, with OSINT-focused LLMs and fraudAI making the research stage more effective, and helping them automate more targeted attacks.”

Chapman highlighted how attackers are harnessing AI to improve deepfakes by analysing how people communicate in e-mails, and cloning their voices, videos and photographs. “Credibility is key, and attackers are using AI to improve the credibility of phishing attacks and drive personalisation,” he said.

He outlined how malicious actors are also using more mature AI technologies to auto-generate malicious links, code and attachments, and launch polymorphic attacks at scale.

“As the cyber crime ecosystem and malicious AI agents mature, we need to become AI experts and use AI against them. We must enable the human layer, the policy layer and the technology layer together in a more heuristic approach,” he said.

