Webinar • Brightalk: Akamai Security and Performance Insights

How Hackers Target AI: A Look Inside Today’s LLM ExploitsAgéndalo en tu calendario habitual ¡en tu horario!

Martes, 16 de septiembre de 2025, de 08.00 a 09.00 hs Horario de Ohio (US)
Webinar en inglés

AI models — especially LLMs — are now prime targets for a new generation of threats engineered to exploit their unpredictability. From prompt injection and AI jailbreaking to data exfiltration and model theft, attackers are rapidly evolving tactics to manipulate model behavior, access sensitive data, and disrupt business-critical AI systems. In this second episode of the Firewall for AI webinar series, we’ll break down how these attacks work, what makes them difficult to detect, and how defenders can get ahead of adversaries targeting AI and LLMs. You'll learn: - The anatomy of prompt injection attacks and how they bypass input filters - How AI jailbreaking leads to toxic output, policy evasion, and reputational risk - Real-world risks of AI-driven data leaks, model theft, and training set poisoning - Why AI-specific DoS attacks don’t look like traditional DDoS Whether you're securing customer-facing apps, internal copilots, or proprietary models, this session will arm you with the knowledge to spot — and stop — these threats early. Akamai is an approved ISC2 CPE Submitter Partner. Earn CPE Credits by watching our Webinar and providing us with your ISC2 Member ID number either in 'Question' or 'Rate this' section.

¿Le gustaría hacer webinars o eventos online con nosotros?
Sponsors
No hay sponsors para este webinar.


Cerrar