Episode 93 — AI in SecOps: Agentic AI, Chatbots, Predictive Analysis, AI-Augmented Baselines, and CI/CD (4.6)
In this episode, we look at how Artificial Intelligence (A I) can support security operations, including agentic A I, chatbots, predictive analysis, A I augmented baselines, and integrations with Continuous Integration and Continuous Delivery (C I C D) workflows. This is an important topic because security teams are dealing with more alerts, more systems, more cloud services, more identities, and more data than people can comfortably review by hand. A I can help sort, summarize, compare, recommend, and sometimes take action when the workflow is well designed. At the same time, A I can make mistakes, misunderstand context, expose sensitive information, or act too broadly if access is not controlled. The right way to think about A I in security operations is not as a replacement for people. It is a support layer that can make people faster and more consistent when it is governed carefully.
Before we continue, a quick note. This audio course is part of our companion study series. The first book is a detailed study guide that explains the exam and helps you prepare for it with confidence. The second is a Kindle-only eBook with one thousand flashcards you can use on your mobile device or Kindle for quick review. You can find both at Cyber Author dot me in the Bare Metal Study Guides series.
Security operations is the part of a security program that watches for problems, investigates activity, responds to alerts, manages findings, and helps the organization keep risk under control. In that setting, A I is valuable because it can help process large amounts of information quickly. A security analyst may need to review logs, tickets, vulnerability data, endpoint alerts, identity activity, cloud changes, and network events. Each source may be useful, but the combined volume can be overwhelming. A I tools can help summarize a long alert history, group similar events, suggest possible causes, or highlight details that deserve closer review. That support can save time, especially during repetitive analysis. The benefit is not that the tool has perfect judgment. The benefit is that it can reduce the amount of manual searching and organizing needed before a person makes a decision.
Agentic A I refers to A I systems that can pursue a goal by taking steps, using tools, making intermediate decisions, and adjusting based on results. In security operations, an agentic tool might gather evidence for an alert, check threat intelligence, search related logs, open a ticket, recommend a severity level, or start a predefined response workflow. This is different from a tool that only answers one question and stops. Agentic A I is more active. That makes it useful, but it also makes control more important. If an agent can use tools, then the organization must decide which tools it can use, what data it can access, what actions it can take, and where it must pause for human approval. The more action the A I can take, the more carefully its permissions and guardrails must be designed.
A practical way to understand agentic A I is to imagine a security alert that says a user signed in from an unusual location and then accessed sensitive files. A limited tool might summarize the alert. An agentic tool might do more. It could check the user’s normal sign-in history, look for recent password reset activity, review Multi-Factor Authentication (M F A) events, search for similar file access by the same account, check whether the device was managed, and prepare a short investigation summary. If the workflow allows it, the tool might also recommend revoking sessions or escalating the case. This can be helpful because the analyst receives a more complete starting point. The risk is that the tool might misread the situation or overreact if it is allowed to act without enough evidence or approval.
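To make the guardrail idea concrete, here is a minimal Python sketch of an agent loop. Every function name and data value in it is illustrative, not a real product's interface: the agent may only call tools on an allowlist, read-only tools run freely, and a high-impact action such as revoking sessions pauses until a person approves it.

```python
# A minimal sketch of an agentic investigation workflow with guardrails.
# Every function name and data value here is illustrative; real tools would
# query a SIEM, identity provider, or ticketing system instead of stubs.

def get_signin_history(user):
    # Stub: in practice, query the identity provider's sign-in logs.
    return ["2024-05-01 09:02 office", "2024-05-02 08:55 office"]

def get_mfa_events(user):
    # Stub: in practice, pull recent Multi-Factor Authentication events.
    return ["2024-05-02 08:55 mfa_success"]

def revoke_sessions(user):
    # High-impact action: reachable only after explicit human approval.
    print(f"[action] sessions revoked for {user}")

# The allowlist is the guardrail: the agent may only call tools named here,
# and each tool is labeled read-only or high-impact.
TOOL_ALLOWLIST = {
    "get_signin_history": (get_signin_history, "read_only"),
    "get_mfa_events": (get_mfa_events, "read_only"),
    "revoke_sessions": (revoke_sessions, "high_impact"),
}

def run_agent(user, planned_steps, approved_by_human=False):
    evidence = {}
    for step in planned_steps:
        if step not in TOOL_ALLOWLIST:
            print(f"[blocked] unknown tool: {step}")
            continue
        tool, impact = TOOL_ALLOWLIST[step]
        if impact == "high_impact" and not approved_by_human:
            # The pause point: the agent recommends, a person decides.
            print(f"[pending approval] {step} for {user}")
            continue
        evidence[step] = tool(user)
    return evidence

# The agent gathers read-only evidence freely, but session revocation waits
# for an analyst's approval.
print(run_agent("j.doe", ["get_signin_history", "get_mfa_events", "revoke_sessions"]))
```

The design choice worth noticing is that the approval gate lives in the workflow code, not in the model's instructions, so the agent cannot talk its way past it.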
Chatbots are another common way A I appears in security operations. A chatbot gives people a conversational interface for asking questions, searching documentation, summarizing alerts, or getting help with routine workflows. A security analyst might ask for a summary of recent activity tied to an account. A help desk worker might ask for the correct process to report suspected phishing. A manager might ask for a plain-language explanation of an incident ticket. Chatbots can make security information easier to reach because people do not have to remember where every procedure, dashboard, or query is located. They can also help new team members learn how to navigate internal processes. The value comes from making information easier to use, but the chatbot must be grounded in trusted sources and limited to information the user is allowed to see.
Chatbots need strong boundaries because a conversational tool can accidentally become a shortcut around access control. If a person is not allowed to view sensitive incident details in the ticketing system, the chatbot should not reveal those details just because the person asks nicely. If a user should only see information about their own department, the chatbot should not summarize organization-wide vulnerabilities. This means the chatbot needs to respect identity, role, permissions, and data classification. It should also avoid exposing secrets, credentials, personal information, investigation notes, or confidential business details to people who do not have a need to know. A chatbot that ignores access control can become a data leakage problem. The friendly interface does not reduce the need for authorization. It actually makes careful authorization more important because asking questions becomes easier.
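Here is a minimal sketch of how a chatbot can respect permissions. The roles, documents, and labels are invented for illustration. The important design choice is that access control is enforced at retrieval time, before any content reaches the language model, rather than hoping the model withholds what it has already seen.

```python
# A minimal sketch of authorization-aware retrieval for a security chatbot.
# Roles, documents, and labels are illustrative. The key idea: filter by the
# requester's permissions BEFORE any content reaches the language model.

DOCUMENTS = [
    {"id": 1, "text": "Phishing reporting procedure", "min_role": "employee"},
    {"id": 2, "text": "Open incident notes for case 4411", "min_role": "analyst"},
    {"id": 3, "text": "Org-wide vulnerability summary", "min_role": "ciso"},
]

ROLE_RANK = {"employee": 0, "analyst": 1, "ciso": 2}

def authorized_context(user_role, query):
    # Enforce access control at retrieval time, not inside the prompt.
    allowed = [d for d in DOCUMENTS
               if ROLE_RANK[user_role] >= ROLE_RANK[d["min_role"]]]
    # A real system would also rank the allowed documents by relevance.
    return [d["text"] for d in allowed]

# A help desk worker sees only general procedures; investigation notes and
# organization-wide findings never enter the model's context for that user.
print(authorized_context("employee", "how do I report phishing?"))
print(authorized_context("analyst", "summarize case 4411"))
```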
Predictive analysis uses data, patterns, and models to estimate what may happen next or where risk may increase. In security operations, predictive analysis might help identify systems more likely to be attacked, users more likely to be targeted by phishing, vulnerabilities more likely to be exploited, or alerts more likely to become real incidents. It may combine historical activity, asset importance, exposure, known threat behavior, and current conditions. Predictive analysis can help teams focus attention where it may matter most. This is valuable because security teams rarely have enough time to treat every issue as equally urgent. If a model can help rank risk more effectively, the team may fix the most dangerous problems sooner. The caution is that prediction is not certainty. It is a guide for prioritization, not proof that an event will occur.
Predictive analysis can also create problems when people trust it without understanding its limits. A model may be trained on old data that no longer reflects the current environment. It may miss new attacker behavior because the pattern has not appeared before. It may overemphasize events that were common in the past while underestimating a new risk. It may also reflect bias in the data, such as labeling certain departments as risky because they produce more logs, not because they are actually less secure. Security teams should treat predictive outputs as decision support. The model can help point attention, but people still need to ask whether the recommendation makes sense. Good use of predictive analysis includes feedback, testing, tuning, and review. The model should improve as the organization learns from real outcomes.
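The following sketch shows predictive prioritization used as decision support. The findings, features, and weights are made up for illustration; a real model would be trained and re-tuned on the organization's own historical outcomes. The point is that the output is a ranked review queue for people, not a verdict.

```python
# A minimal sketch of predictive prioritization as decision support.
# Features and weights are illustrative; a real model would be trained on
# the organization's own outcome data and adjusted as it learns.

FINDINGS = [
    {"id": "VULN-101", "internet_exposed": 1, "exploit_known": 1, "asset_critical": 0},
    {"id": "VULN-102", "internet_exposed": 0, "exploit_known": 0, "asset_critical": 1},
    {"id": "VULN-103", "internet_exposed": 1, "exploit_known": 1, "asset_critical": 1},
]

# Weights encode what has historically predicted real incidents here.
WEIGHTS = {"internet_exposed": 0.4, "exploit_known": 0.35, "asset_critical": 0.25}

def risk_score(finding):
    return sum(WEIGHTS[k] * finding[k] for k in WEIGHTS)

# Rank highest estimated risk first. The output is a queue for humans to
# review, not proof that any item will actually be exploited.
for f in sorted(FINDINGS, key=risk_score, reverse=True):
    print(f["id"], round(risk_score(f), 2))
```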
A I augmented baselines help security teams understand what normal activity looks like by using models to learn patterns across users, systems, applications, and networks. A baseline might include normal sign-in times, typical data transfer amounts, common administrative actions, usual application behavior, or expected traffic between systems. Traditional baselines can be simple, such as a fixed threshold for failed logins or a set limit on network traffic volume. A I augmented baselines can be more flexible because they may learn that normal varies by role, device, time, location, or business process. For example, a finance system may show different normal behavior at month end. A cloud deployment pipeline may create bursts of activity during release windows. A more intelligent baseline can reduce noise by recognizing expected variation while still noticing activity that truly stands out.
A I augmented baselines still need careful oversight because normal is not always safe. If an attacker remains in an environment long enough, malicious activity may start to appear normal to a system that only watches repeated patterns. If a department has weak practices, a baseline may learn those weak practices rather than challenge them. If a cloud environment is already misconfigured, baseline learning may treat risky exposure as expected behavior. That is why baselines need to be connected to policy, asset knowledge, and security judgment. The organization should know which activities are acceptable, not just which activities are common. A I can help identify unusual behavior, but it cannot decide every business rule on its own. Human review helps make sure the baseline supports security goals rather than simply preserving whatever already happens.
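A small sketch can show why a context-aware baseline beats a single fixed threshold. The numbers below are invented: the same transfer volume is normal during a month-end window and anomalous on an ordinary day, and anything without a learned baseline is surfaced for a person to review.

```python
# A minimal sketch of a context-aware baseline. All data is illustrative.
# Instead of one fixed threshold, the baseline learns "normal" per system
# and per context (here, month-end processing versus an ordinary day).
import statistics

# Historical megabytes transferred, keyed by (system, context).
HISTORY = {
    ("finance_app", "month_end"): [900, 950, 1020, 980],
    ("finance_app", "normal"): [110, 95, 120, 100],
}

def is_anomalous(system, context, observed_mb, z_threshold=3.0):
    baseline = HISTORY.get((system, context))
    if not baseline:
        return True  # No learned baseline: surface for human review.
    mean = statistics.mean(baseline)
    stdev = statistics.pstdev(baseline) or 1.0  # Avoid divide-by-zero.
    return abs(observed_mb - mean) / stdev > z_threshold

# A large month-end transfer is expected; the same volume on a normal day
# stands out. Flagged events still go to a person, and the baseline itself
# should be checked against policy, since "common" is not always "safe".
print(is_anomalous("finance_app", "month_end", 1000))  # False: expected burst
print(is_anomalous("finance_app", "normal", 1000))     # True: unusual
```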
C I C D integrations bring A I assisted security into software delivery workflows. C I C D is the process of continuously building, testing, and delivering application changes through a pipeline. Security can be built into that pipeline by checking code, dependencies, secrets, container images, infrastructure templates, and configuration before changes reach production. A I can support this by summarizing findings, explaining likely impact, grouping duplicate issues, helping developers understand security warnings, and prioritizing what needs attention first. This is useful because development teams often face many automated findings, and not all of them are equally urgent. A I can help translate technical results into clearer guidance, which may help teams fix issues earlier. Earlier fixes are usually less disruptive than emergency fixes after deployment.
A I in C I C D workflows must be controlled because software delivery affects real systems and users. If an A I tool recommends insecure code, hides a serious finding, approves a risky deployment, or exposes secrets while analyzing a pipeline, the organization may introduce new risk quickly. It is usually safer for A I to assist with explanation, prioritization, and evidence gathering than to make high-impact release decisions by itself. The organization should define when a human must approve, which findings can block a build, which exceptions are allowed, and how exceptions are documented. Access control matters here too. An A I tool connected to a pipeline may see source code, credentials, architecture details, vulnerability reports, and deployment information. That access should be limited, monitored, and justified.
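Here is a minimal sketch of that division of labor in a pipeline gate, with illustrative findings. The assistant's role is to rank and explain results for the developer, while the block-or-allow decision follows a fixed policy owned by people, and any exception requires an explicit, documented human approval.

```python
# A minimal sketch of an A I assisted quality gate in a C I C D pipeline.
# Finding data is illustrative. The assistant groups and explains findings;
# the block-or-allow decision follows a fixed, human-owned policy.

FINDINGS = [
    {"id": "SAST-1", "severity": "critical", "summary": "hardcoded secret"},
    {"id": "SAST-2", "severity": "low", "summary": "verbose error message"},
    {"id": "DEP-7", "severity": "high", "summary": "vulnerable dependency"},
]

BLOCKING_SEVERITIES = {"critical", "high"}  # Policy set by people, not the model.

def triage(findings):
    # Stand-in for the assistant's role: order and explain findings so the
    # developer sees the most urgent items first.
    ranked = sorted(findings, key=lambda f: f["severity"] != "critical")
    return [f'{f["id"]} ({f["severity"]}): {f["summary"]}' for f in ranked]

def gate(findings, human_exception_approved=False):
    # The release decision is deterministic policy plus human approval.
    blockers = [f for f in findings if f["severity"] in BLOCKING_SEVERITIES]
    if blockers and not human_exception_approved:
        return "BLOCKED", [b["id"] for b in blockers]
    return "ALLOWED", []

for line in triage(FINDINGS):
    print(line)
print(gate(FINDINGS))  # ('BLOCKED', ['SAST-1', 'DEP-7'])
```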
Productivity benefits are real when A I is used for the right tasks. It can summarize long tickets, draft incident timelines, classify alerts, suggest next investigative questions, translate technical logs into plain language, identify similar past incidents, and help route work to the right team. It can reduce the time spent on repetitive searching and formatting. It can help newer analysts understand what they are looking at and help experienced analysts move faster through routine cases. A I can also help reduce alert fatigue by grouping related events and highlighting context that might otherwise be missed. These benefits matter because security teams often work under time pressure. When A I removes some of the repetitive load, people can spend more time making judgment calls, coordinating response, and improving controls.
Human oversight is still required because A I does not truly understand responsibility, business impact, legal obligations, or organizational risk the way accountable people do. A tool may produce a confident summary that is wrong. It may miss a critical detail. It may recommend a response that interrupts business operations. It may combine unrelated events into a misleading story. It may also produce different answers when asked the same question in different ways. Oversight means people review important outputs, verify evidence, approve high-impact actions, and remain responsible for decisions. The organization should be especially careful when A I is connected to actions such as disabling accounts, isolating devices, deleting messages, changing firewall rules, modifying access, or approving deployments. Assistance is helpful. Unchecked authority can be dangerous.
Access control for A I tools should follow the same security principles used for other powerful systems. The tool should have the least privilege needed for its job. It should access only approved data sources. It should log what it reads, what it changes, and what actions it recommends or takes. Sensitive data should be protected, and the tool should not be allowed to freely move information from restricted environments into less controlled places. Users should only receive A I responses based on information they are authorized to view. Administrative features should be limited to trusted personnel. Integrations should use secure credentials, monitored service accounts, and clear ownership. A I does not deserve special trust just because it sounds helpful. It should be governed like any other system that can affect security decisions.
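To close the loop, here is a minimal sketch of least privilege and audit logging around an A I integration, with invented scope names. The service account behind the tool holds only read scopes, and every access attempt is logged whether it is allowed or denied, which gives the organization the monitored, justified access described above.

```python
# A minimal sketch of least-privilege access and audit logging for an A I
# integration. Scope names and data sources are illustrative.
import datetime

# The service account behind the A I tool is granted only these scopes.
GRANTED_SCOPES = {"read:alerts", "read:tickets"}  # No write or admin scopes.

AUDIT_LOG = []

def access_source(source, required_scope):
    entry = {
        "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "source": source,
        "scope": required_scope,
        "allowed": required_scope in GRANTED_SCOPES,
    }
    AUDIT_LOG.append(entry)  # Every read attempt is logged, allowed or not.
    if not entry["allowed"]:
        raise PermissionError(f"scope {required_scope} not granted for {source}")
    return f"data from {source}"

print(access_source("alert_queue", "read:alerts"))  # Permitted and logged.
try:
    access_source("hr_records", "read:hr")          # Denied and logged.
except PermissionError as e:
    print("denied:", e)
print(AUDIT_LOG)
```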
The main takeaway is that A I can make security operations faster, more consistent, and easier to manage, but only when it is used with clear limits. Agentic A I can gather evidence and coordinate steps, but it needs guardrails and approval for risky actions. Chatbots can make security information easier to access, but they must respect permissions and data sensitivity. Predictive analysis can help prioritize risk, but it should not be treated as certainty. A I augmented baselines can improve detection, but normal behavior must still be compared against policy and security expectations. C I C D integrations can help find and explain issues earlier in software delivery, but release decisions and access to code must be controlled. A I is most useful when it supports human judgment, not when it quietly replaces it.