Episode 49 — Serverless, Multicloud, and Infrastructure as Code
In this episode, we continue with security architecture by looking at serverless computing, multicloud environments, and Infrastructure as Code (I A C). These ideas can sound advanced at first, but the security questions are familiar ones. You are still asking where systems run, who controls them, how they are changed, and what could go wrong if access or configuration is weak. Serverless does not mean there are no servers. It means the cloud provider manages more of the underlying server layer for you. Multicloud means an organization uses services from more than one cloud provider. I A C means infrastructure is described in files and deployed through automation instead of being built only by hand. Each model can help an organization move faster, scale more easily, and repeat designs more consistently. Each one can also multiply mistakes if permissions, review, and monitoring do not keep up with the speed of change.
Before we continue, a quick note. This audio course is part of our companion study series. The first book is a detailed study guide that explains the exam and helps you prepare for it with confidence. The second is a Kindle-only eBook with one thousand flashcards you can use on your mobile device or Kindle for quick review. You can find both at Cyber Author dot me in the Bare Metal Study Guides series.
Serverless computing is a cloud model where the provider handles much of the underlying infrastructure, and the organization focuses more on the function, event, or service being delivered. Instead of managing a full server, the team may define a small piece of code that runs when something happens. That event might be a file upload, a message arriving, a scheduled task, a web request, or a change in a data store. The provider handles many details behind the scenes, such as provisioning, scaling, and maintaining the platform. This can be powerful because the organization does not have to manage every operating system, patch every server, or guess the exact amount of capacity needed in advance. The security tradeoff is that less direct server management does not mean less security responsibility. The organization still controls code, permissions, data access, event triggers, and how the serverless function interacts with other services.
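If you have never seen one, a serverless function is often just a small handler that the platform calls when an event arrives. Here is a minimal sketch in Python, assuming an AWS-Lambda-style handler reacting to a storage upload. The event shape and field names are illustrative, and every provider's format differs.

```python
import json

def handler(event, context):
    """Entry point the platform invokes when the trigger fires.

    'event' carries the trigger payload (here, details of an uploaded
    file); 'context' carries runtime metadata. The provider handles
    provisioning and scaling of the environment that runs this code.
    """
    records = event.get("Records") or [{}]  # shape depends on the trigger
    record = records[0]
    object_key = record.get("s3", {}).get("object", {}).get("key")
    print(json.dumps({"processing": object_key}))  # lands in provider logs
    return {"statusCode": 200, "body": "processed"}
```

Notice that the code itself says nothing about servers, patching, or capacity. Everything it is allowed to touch comes from the identity and permissions attached to it, which is why the next point matters so much.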
A common misunderstanding is that serverless removes the attack surface. It changes the attack surface, but it does not make it disappear. The attacker may not target a traditional server login or an exposed operating system service. Instead, the attacker may look for weak permissions, unsafe input handling, exposed event triggers, insecure Application Programming Interface (A P I) connections, secrets stored in the wrong place, or functions that can reach more data than they need. A serverless function may be small, but it can still have powerful access. If a function processes uploaded files and also has broad permission to read storage, write records, or call other services, a flaw in that function can become serious. You should think about serverless as many small doors rather than one large door. Each function needs a clear purpose, limited access, logging, and careful connection to the rest of the environment.
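To make the input-handling point concrete, here is a hedged sketch of the kind of narrow validation a file-processing function might perform before touching anything else. The allowed extensions and limits are invented for the example.

```python
ALLOWED_EXTENSIONS = {".csv", ".txt"}  # the function's narrow, stated purpose
MAX_NAME_LENGTH = 255

def validate_upload_key(object_key: str) -> str:
    """Reject anything outside the function's purpose before touching
    storage, records, or any downstream service."""
    if not object_key or len(object_key) > MAX_NAME_LENGTH:
        raise ValueError("missing or oversized object name")
    if ".." in object_key or object_key.startswith("/"):
        raise ValueError("suspicious path rejected")
    if not any(object_key.endswith(ext) for ext in ALLOWED_EXTENSIONS):
        raise ValueError("unexpected file type rejected")
    return object_key
```

Validation like this is one of the small doors: it does not secure the whole environment, but it keeps one function from becoming a path into everything the function can reach.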
Serverless scaling can also create security and cost concerns. Because the provider can run many copies of a function as demand increases, the environment may respond quickly to legitimate traffic. That is helpful during busy periods. It can also be risky if the trigger is abused. An attacker may send large numbers of events, causing many function executions, higher cost, noisy logs, or pressure on connected systems. A function that writes to a database, calls an external service, or processes files may create downstream effects if it runs thousands of times unexpectedly. Rate limits, input validation, monitoring, and clear event design help reduce this risk. You do not need to think of serverless as fragile. You do need to remember that automation and scaling can amplify both good behavior and bad behavior. When something runs automatically, you must understand what starts it and what it is allowed to do.
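Provider-side settings such as concurrency limits and throttling are the usual way to cap executions, but the underlying idea can be sketched in a few lines of Python. This token-bucket limiter is illustrative only; the rate and burst numbers are arbitrary.

```python
import time

class TokenBucket:
    """Simple rate limiter: refuse event bursts beyond an expected rate,
    so an abused trigger cannot cause unbounded executions downstream."""

    def __init__(self, rate_per_second: float, burst: int):
        self.rate = rate_per_second
        self.capacity = burst
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens for the time elapsed, capped at the burst size.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

limiter = TokenBucket(rate_per_second=5, burst=10)
if not limiter.allow():
    raise RuntimeError("rate limit exceeded; dropping event")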
Multicloud means an organization uses services from multiple cloud providers rather than relying on only one. This may happen for business, technical, cost, resilience, performance, regulatory, or acquisition reasons. One team may use one provider for analytics, another provider for productivity services, and another for application hosting. A company may acquire another company that already uses a different provider. Some organizations choose multicloud to avoid depending too heavily on a single vendor. Others use it because different providers offer different strengths. The security challenge is that every provider has its own identity model, network concepts, logging features, permission language, service names, and management tools. A control that seems familiar in one environment may work differently in another. Multicloud can offer flexibility, but it also increases the need for consistent governance. Without that consistency, each cloud can become its own isolated security problem.
Provider sprawl is one of the biggest multicloud risks. Sprawl happens when cloud accounts, services, subscriptions, projects, identities, storage locations, and network paths grow faster than the organization can manage them. People may create resources for a project, test a service, connect a tool, or start a new environment without fully documenting it. Over time, the organization may lose track of what exists, who owns it, what data it holds, and whether it is still needed. This is dangerous because unknown assets are hard to protect. An old storage location may still contain sensitive data. A forgotten account may still have active credentials. A test environment may be connected to production information. A cloud service may be exposed publicly because nobody remembers who created it. Multicloud makes sprawl easier because the inventory is spread across more than one provider and more than one control plane.
Misconfiguration is a major security concern in both serverless and multicloud environments. A misconfiguration is a setting that exposes something, weakens protection, or allows behavior the organization did not intend. In cloud environments, a single setting can make a storage location public, allow broad network access, grant excessive permissions, disable logging, or leave data unencrypted. The risk is not always caused by careless people. Cloud platforms are complex, and teams may move quickly under pressure. A person may misunderstand a provider option, copy an old pattern, or assume a default setting is safe. Multicloud adds complexity because safe defaults and risky defaults may differ by provider. Serverless adds complexity because each function, trigger, role, and connection may have its own settings. The control idea is to make secure configuration repeatable, reviewed, and monitored instead of relying on every person to remember every setting every time.
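One way to make secure configuration reviewed and monitored is to audit settings in code rather than by memory. The sketch below checks a hypothetical inventory of storage locations; the field names are invented, though real provider APIs expose similar information under different names.

```python
# Hypothetical inventory entries; real provider APIs report similar
# settings with provider-specific names.
buckets = [
    {"name": "reports", "public": False, "encrypted": True, "logging": True},
    {"name": "scratch", "public": True, "encrypted": False, "logging": False},
]

def audit(bucket: dict) -> list[str]:
    """Flag the classic misconfigurations named in this episode."""
    findings = []
    if bucket["public"]:
        findings.append("publicly accessible")
    if not bucket["encrypted"]:
        findings.append("data not encrypted at rest")
    if not bucket["logging"]:
        findings.append("access logging disabled")
    return findings

for b in buckets:
    for finding in audit(b):
        print(f"{b['name']}: {finding}")
```

A check like this can run on a schedule, so a risky setting is caught by the process instead of depending on whoever happens to look.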
Permissions deserve special attention because modern cloud environments are heavily identity-driven. Identity and Access Management (I A M) controls decide which users, services, functions, and automation processes can access which resources. In a serverless environment, a function usually runs with an assigned identity or role. That role may allow it to read a queue, write to storage, access secrets, or call another service. If the role is too broad, the function may become a path to data or actions far beyond its purpose. In a multicloud environment, permissions can become even more difficult to compare because each provider expresses roles and policies differently. Least privilege is the main principle. A person, service, or function should have only the access it needs. Broad administrator access may feel convenient, but convenience can become compromise when credentials are stolen or code is abused.
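As a concrete picture of least privilege, compare these two AWS-style policy documents, written here as Python dictionaries. The bucket name and actions are illustrative.

```python
# Too broad: any action, on any resource. Convenient, and dangerous.
too_broad = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow", "Action": "*", "Resource": "*"}],
}

# Least privilege: one action, one location, nothing more.
least_privilege = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject"],                  # only read objects
        "Resource": "arn:aws:s3:::upload-intake/*",  # only from one bucket
    }],
}
```

If the function holding the second policy is compromised, the attacker can read one bucket. If it holds the first, the attacker can do almost anything the account can do.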
Automation changes the security conversation because it can create, modify, and remove infrastructure very quickly. In older environments, a server might be built manually over time by several administrators. In modern cloud environments, a full set of networks, storage, compute resources, permissions, and security controls may be deployed automatically. That speed is useful, especially when teams need consistent environments for development, testing, recovery, or scaling. The risk is that an error in the automation can be repeated everywhere. If an automated deployment includes an open network rule, weak storage setting, or overly powerful role, every environment built from that pattern may inherit the same weakness. Automation does not make a design good by itself. It makes the design repeatable. That means secure review must happen before the automation is trusted, because repeated mistakes can spread faster than manual mistakes.
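A simple way to force review before automation is trusted is a gate in the deployment pipeline that rejects known-bad patterns before anything is built. This sketch assumes a made-up template format; real I A C tools have their own schemas and their own policy-checking tooling.

```python
# Hypothetical template format; real IaC tools differ.
template = {
    "security_group_rules": [
        {"port": 443, "source": "10.0.0.0/16"},
        {"port": 22, "source": "0.0.0.0/0"},  # open to the whole internet
    ]
}

def review_gate(tmpl: dict) -> None:
    """Refuse to deploy a pattern that would repeat a weakness in
    every environment built from it."""
    for rule in tmpl.get("security_group_rules", []):
        if rule["source"] == "0.0.0.0/0":
            raise SystemExit(f"blocked: port {rule['port']} open to 0.0.0.0/0")

review_gate(template)  # run in the pipeline before any environment is built
```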
Infrastructure as Code means the desired infrastructure is written in files that automation can read and apply. Instead of clicking through many screens to create networks, servers, storage, and permissions, teams describe the intended environment in a version-controlled form. This has real security advantages. The files can be reviewed before changes happen. Changes can be tracked over time. Teams can compare what was intended against what exists. Approved patterns can be reused. Disaster recovery can improve because the environment can be rebuilt from known definitions. I A C also supports consistency, which is important because inconsistent manual work often creates security gaps. The caution is that I A C files become sensitive operational assets. They may describe network paths, permissions, service names, data stores, and sometimes secrets if handled poorly. Protecting the code that builds infrastructure becomes part of protecting the infrastructure itself.
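To show the shape of the idea without tying it to any one tool, here is a toy declarative definition written as Python data, with a small loop that asks the platform to converge toward it. Real I A C formats and tooling differ; the resource names are invented.

```python
# A toy declarative definition: describe what should exist, and let
# automation create it. Because this is a reviewable file, the review
# happens before anything in the environment changes.
desired = {
    "network": {"name": "app-net", "cidr": "10.0.0.0/24"},
    "storage": {"name": "app-data", "encrypted": True},
    "function": {"name": "intake", "role": "intake-read-only"},
}

def apply(desired_state: dict, create_resource) -> None:
    """Walk the definition and ensure each piece exists as described."""
    for kind, spec in desired_state.items():
        create_resource(kind, spec)

apply(desired, lambda kind, spec: print(f"ensuring {kind}: {spec['name']}"))
```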
Version-controlled infrastructure means infrastructure definitions are stored in a system that tracks changes. This can help security because you can see who changed a file, when it changed, what changed, and sometimes why it changed. That history supports review, troubleshooting, auditing, and recovery. If a risky permission appears, teams may be able to trace when it was introduced. If an outage happens after an infrastructure change, the change history can help narrow the cause. Version control also supports peer review, where another person checks a change before it is applied. The risk is that version control can become a place where sensitive information leaks. If secrets, keys, tokens, passwords, or private configuration values are placed in infrastructure files, they may be exposed to anyone who can read the repository. Once a secret is stored in history, simply deleting it from the latest version may not be enough.
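One common safeguard is a pre-commit scan that fails if a staged file looks like it contains a secret, stopping the leak before it enters history. The sketch below checks a few well-known patterns; real scanners use far larger rule sets.

```python
import re
import sys

# A few well-known credential formats; illustrative, not exhaustive.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                       # AWS access key ID
    re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),  # private key block
    re.compile(r"(?i)(password|token|secret)\s*=\s*['\"][^'\"]+['\"]"),
]

def scan(path: str) -> bool:
    """Report any line that resembles an embedded secret."""
    with open(path, encoding="utf-8", errors="ignore") as handle:
        text = handle.read()
    hits = [p.pattern for p in SECRET_PATTERNS if p.search(text)]
    for pattern in hits:
        print(f"{path}: possible secret matching {pattern}")
    return bool(hits)

# Fail the commit if any staged file appears to contain a secret.
if any(scan(path) for path in sys.argv[1:]):
    sys.exit(1)
```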
I A C also changes how security teams think about drift. Drift happens when the real environment no longer matches the approved infrastructure definition. Someone may make a manual change during an emergency, a service may be updated outside the normal process, or an old exception may never be removed. Drift is risky because the code may say one thing while the actual environment does another. A firewall rule may be open even though the approved file says it should be closed. A permission may remain in place even after the role definition changed. A storage location may be created manually and never added to inventory. Detecting drift helps organizations keep architecture honest. The written design, the deployed environment, and the security policy should line up. When they do not, the mismatch becomes a clue that control has weakened.
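Drift detection at its core is a comparison between the approved definition and what the environment actually reports. Here is a minimal sketch using invented resource names and states.

```python
# The approved definition versus what the environment reports.
approved = {
    "firewall/443": "allow 10.0.0.0/16",
    "firewall/22": "deny all",
}
actual = {
    "firewall/443": "allow 10.0.0.0/16",
    "firewall/22": "allow 0.0.0.0/0",   # emergency change never reverted
    "bucket/tmp-data": "public-read",   # created manually, never recorded
}

def detect_drift(approved: dict, actual: dict) -> None:
    """Report every mismatch between intended and deployed state."""
    for key in approved.keys() | actual.keys():
        if key not in actual:
            print(f"missing: {key} is approved but not deployed")
        elif key not in approved:
            print(f"unmanaged: {key} exists but is in no definition")
        elif approved[key] != actual[key]:
            print(f"drift: {key} is '{actual[key]}', approved '{approved[key]}'")

detect_drift(approved, actual)
```

Each line of output is exactly the kind of clue the episode describes: the written design and the real environment disagree, and someone needs to find out why.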
Secrets management is closely tied to serverless, multicloud, and I A C. A secret is sensitive information used to prove identity or access, such as a password, token, key, certificate, or connection string. Automated deployments and serverless functions often need to access other services, which means they need some way to authenticate. The unsafe shortcut is placing secrets directly in code, configuration files, images, scripts, or version-controlled infrastructure. That can expose the secret to developers, logs, backups, repositories, or attackers who gain read access. A safer approach is to store secrets in a controlled secrets management service, limit who and what can retrieve them, rotate them when needed, and avoid printing them in logs. At a beginner level, the main idea is simple. Automation still needs identity, and any identity material used by automation must be protected as carefully as a human password.
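As an example of the safer approach, here is a sketch that fetches a secret at runtime instead of baking it into code or the repository. It assumes AWS Secrets Manager and the boto3 library; other providers offer equivalent services, and the secret name is illustrative.

```python
import boto3  # AWS SDK; assumes the runtime identity may read this secret

def get_database_password() -> str:
    """Fetch the secret at runtime under the function's own identity,
    instead of storing it in code, config files, or version control."""
    client = boto3.client("secretsmanager")
    response = client.get_secret_value(SecretId="prod/app/db-password")
    return response["SecretString"]  # never print or log this value
```

The function's identity still needs permission to read that one secret, which brings us back to least privilege: access to the secrets store should be as narrow as access to anything else.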
These architecture choices also affect monitoring and incident response. In a serverless environment, there may not be a traditional server for an analyst to inspect. The useful evidence may be function logs, event records, permission changes, data access logs, and provider activity trails. In a multicloud environment, logs may live in several platforms with different formats and retention settings. If teams do not plan ahead, they may discover during an incident that the needed records were never enabled, were stored too briefly, or are difficult to connect across providers. With I A C, the change history may become part of the investigation because it shows when infrastructure changed and who approved it. Good architecture includes visibility from the beginning. You cannot respond well to an incident if you cannot see what ran, what changed, what data was touched, and which identities were involved.
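Visibility starts with what the function itself emits. A structured, machine-readable record per action makes logs easier to connect across providers later; this sketch is illustrative, and the field names are invented.

```python
import json
import time

def log_event(action: str, identity: str, resource: str, outcome: str) -> None:
    """Emit one structured record per action so an analyst can later
    reconstruct what ran, as which identity, against which data."""
    print(json.dumps({
        "time": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "identity": identity,
        "action": action,
        "resource": resource,
        "outcome": outcome,
    }))

log_event("read", "role/intake-read-only", "bucket/upload-intake", "success")
```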
Serverless, multicloud, and I A C all promise speed and flexibility, but security has to travel at the same speed. Serverless shifts attention from server maintenance to functions, events, permissions, and data access. Multicloud expands options, but it can also create provider sprawl, inconsistent controls, and visibility gaps. I A C makes infrastructure repeatable and reviewable, but it also means infrastructure definitions must be protected, tested, and monitored for drift. Automated deployments can help enforce secure patterns, yet they can also repeat mistakes at scale. Permissions can make small services powerful, so least privilege is not optional. Version control can improve accountability, but it must not become a place where secrets are stored. As you think about modern architecture, keep asking what is automated, who can change it, what permissions it has, where the definitions live, and how you would know if the real environment no longer matches the secure design.