Episode 45 — AI Threats: Model Manipulation, Poisoning, and Prompt Injection
This episode introduces AI security threats at a Security+ level, focusing on how attackers may manipulate models, poison data, or use prompt injection to influence outputs. Model manipulation can involve attempts to change behavior, extract sensitive information, or cause unsafe responses. Poisoning can affect training data, reference data, retrieval sources, or business content used by an AI system. Prompt injection attempts to override instructions, redirect behavior, or make the system reveal or misuse information. For exam scenarios, students should understand the risk, the basic attack path, and the controls that may reduce exposure, such as input validation, data governance, access control, monitoring, output review, and separation between AI tools and sensitive systems. Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with. And dont forget Cyberauthor.me for the companion study guide and flash cards!