Episode 42 — Indicators of Compromise: Hashes, Domains, Timestamps, Log Manipulation, and Impossible Travel
In this episode, we look at indicators of compromise, which are clues that something may have gone wrong inside a system, account, network, or application. An indicator of compromise (I O C) is not always final proof by itself. It is more like a footprint, a broken lock, a strange receipt, or a door left open when it should have been closed. You use it to ask better questions and connect separate pieces of activity into a clearer picture. A suspicious domain, an unfamiliar file hash, a strange login time, a modified log, or an impossible travel alert may not tell the whole story alone. Together, those clues can show that an attacker gained access, ran a tool, touched data, tried to hide activity, or moved through an environment. Your goal at this level is to recognize what these indicators are, why analysts care about them, and how they help turn scattered evidence into a security investigation.
Before we continue, a quick note. This audio course is part of our companion study series. The first book is a detailed study guide that explains the exam and helps you prepare for it with confidence. The second is a Kindle-only eBook with one thousand flashcards you can use on your mobile device or Kindle for quick review. You can find both at Cyber Author dot me in the Bare Metal Study Guides series.
A hash is a fixed-length value created from data, such as a file, message, or piece of software. You can think of it as a kind of digital fingerprint, because the same file should produce the same hash when the same hashing method is used. If even a tiny part of the file changes, the hash should change too. That makes hashes useful when analysts want to identify known malware, verify whether a file was altered, or compare a suspicious file against threat intelligence. A hash does not tell you what a file does just by looking at it. It tells you whether that exact file matches something already known or whether it differs from a trusted copy. If a system contains a file whose hash matches a known malicious sample, that is a strong indicator. If a trusted file suddenly has a different hash, that may suggest tampering, corruption, or replacement.
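For readers following along in text, the fingerprint idea can be sketched in a few lines of Python using the standard library's hashlib. The "known bad" set here is a hypothetical stand-in for a real threat-intelligence feed, and the byte strings are illustrative files.

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Digital fingerprint: the same bytes always produce the same digest."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical threat-intelligence set of known-malicious hashes.
known_bad = {sha256_hex(b"dropper payload v2")}

trusted_copy = b"system utility build 1.0"
on_disk_copy = b"system utility build 1.O"  # a single character changed

# Even a tiny change produces a completely different hash.
tampered = sha256_hex(trusted_copy) != sha256_hex(on_disk_copy)
print("possible tampering:", tampered)           # possible tampering: True

# A match against known-bad intelligence is a strong indicator.
print("known malware:", sha256_hex(b"dropper payload v2") in known_bad)
```

Note that the comparison says nothing about what either file does; it only says whether the bytes match a known sample or differ from a trusted copy, exactly as described above.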
Internet Protocol (I P) addresses are another common indicator because they show where network traffic appears to come from or go to. An I P address may point to a workstation, server, cloud service, attacker-controlled system, proxy, or temporary infrastructure. Analysts look at I P addresses to understand communication patterns. A device reaching out to an unknown external address at unusual times may be worth checking. A login from a country where the user has never been may be suspicious. A server repeatedly connecting to a known command-and-control address could indicate malware communication. Still, I P addresses can be messy evidence. Attackers can use shared hosting, compromised systems, virtual private networks, proxies, and cloud services. That means an I P address should be treated as a useful clue, not a complete conclusion. The real value often comes from combining it with user activity, timestamps, process data, and network behavior.
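One simple triage step an analyst might script is separating internal from external destinations. This sketch uses Python's standard ipaddress module with the RFC 1918 private ranges as the "internal" address plan; the flow records and the external address are made up for illustration.

```python
import ipaddress

# RFC 1918 private ranges; a real deployment would use its own address plan.
INTERNAL_NETS = [ipaddress.ip_network(n)
                 for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def is_external(ip: str) -> bool:
    """True if the address falls outside every internal network."""
    addr = ipaddress.ip_address(ip)
    return not any(addr in net for net in INTERNAL_NETS)

# Hypothetical flow records: (source, destination)
flows = [("10.1.2.3", "10.1.2.4"),
         ("10.1.2.3", "203.0.113.77")]   # unknown external host

outbound = [dst for src, dst in flows if is_external(dst)]
print(outbound)  # ['203.0.113.77']
```

As the episode stresses, the flagged address is only a starting clue; it still needs to be combined with user, process, and timing context before any conclusion is drawn.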
Domains are human-readable names used to reach services, websites, and systems. A domain can be legitimate, suspicious, newly created, misspelled, or intentionally designed to imitate a trusted brand. Attackers often use domains for phishing pages, malware downloads, credential collection, command-and-control communication, and redirects. A suspicious domain may contain a small spelling change, extra words, strange punctuation, or a brand name placed in a misleading way. Analysts also care about when a domain was created, what I P addresses it resolves to, and whether it appears in known threat reports. Domain Name System (D N S) activity can be especially useful because systems often look up a domain before they connect to it. If many internal machines suddenly query an unfamiliar domain, that may point to a phishing campaign, malware infection, or unauthorized software. A domain does not have to look strange to be dangerous, but strange domain behavior deserves attention.
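The "small spelling change" pattern can be approximated in code with an edit-distance check against a list of trusted names. This is a minimal sketch, assuming a hypothetical brand list; production lookalike detection uses many more signals, such as domain age and resolution history.

```python
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

TRUSTED = ["example.com", "contoso.com"]  # illustrative brand list

def lookalike_of(domain):
    """Return the trusted domain this one imitates, if any."""
    for trusted in TRUSTED:
        if domain != trusted and edit_distance(domain, trusted) <= 1:
            return trusted
    return None

print(lookalike_of("examp1e.com"))  # example.com (digit 1 swapped for letter l)
```

A distance threshold of one catches single-character swaps; raising it catches more imitations but also more false positives, which mirrors the tuning trade-off analysts face.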
Malicious processes are running programs or background activities that may indicate an attacker has code executing on a system. A process can be a normal application, a system service, a script engine, or a background task. Analysts look for process names, parent and child relationships, command arguments, timing, and resource use. A familiar process name is not automatically safe, because attackers sometimes name malware after legitimate system tools to blend in. A normal tool can also be used for harmful purposes if an attacker uses it to download files, run scripts, collect credentials, or move laterally. One useful clue is whether the process behavior matches what the process normally does. A document viewer launching a scripting engine may be suspicious. A user workstation running a tool normally seen only on servers may be suspicious. A process is most meaningful when you can connect it to the file that started it, the account that ran it, and the network connections it opened.
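The parent-and-child relationship check can be sketched as a simple lookup. The suspicious pairs below are illustrative examples of the "document viewer launching a scripting engine" pattern, not an authoritative detection list; real detections are tuned to the environment.

```python
# Illustrative parent/child pairs that rarely make sense on a workstation.
SUSPICIOUS_PAIRS = {
    ("winword.exe", "powershell.exe"),  # document viewer spawning a script engine
    ("excel.exe", "cmd.exe"),
}

def flag_process_events(events):
    """events: list of (parent_name, child_name) tuples from process logs."""
    return [(p, c) for p, c in events
            if (p.lower(), c.lower()) in SUSPICIOUS_PAIRS]

# Hypothetical process-creation events.
events = [("explorer.exe", "winword.exe"),       # normal: user opened Word
          ("WINWORD.EXE", "powershell.exe")]     # suspicious: Word ran a script engine

print(flag_process_events(events))  # [('WINWORD.EXE', 'powershell.exe')]
```

Case-insensitive matching matters here because attackers sometimes vary capitalization, and Windows process names are not case sensitive.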
File system artifacts are traces left behind on storage. These may include files, folders, shortcuts, temporary files, recently opened items, prefetch records, scheduled tasks, startup entries, and configuration changes. They help analysts understand what existed on a system, what may have run, and what changed over time. An attacker may drop a tool into a temporary directory, create a hidden folder, place a script in a startup location, or modify a file so malware runs after reboot. Not every strange file is malicious, and not every attack leaves obvious files behind. Still, file system artifacts can help reconstruct a path. You may see that a downloaded file appeared first, then a script ran, then a new service was created, then a connection went out to an unfamiliar domain. Each artifact may seem small by itself, but together they help tell the story of what happened on the machine.
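A first-pass sweep for dropped tools and persistence entries often amounts to checking paths against watched locations. This sketch assumes a hypothetical list of path fragments; real triage tooling watches many more locations and also inspects file contents and signatures.

```python
# Illustrative locations where dropped tools and persistence entries often land.
WATCHED_FRAGMENTS = ("/tmp/", "\\temp\\", "\\startup\\")

def flag_paths(paths):
    """Return paths that fall under a watched location (case-insensitive)."""
    return [p for p in paths
            if any(frag in p.lower() for frag in WATCHED_FRAGMENTS)]

# Hypothetical file system artifacts recovered from a machine.
artifacts = [
    r"C:\Users\alice\Documents\report.docx",
    r"C:\Users\alice\AppData\Roaming\Microsoft\Windows"
    r"\Start Menu\Programs\Startup\update.vbs",   # runs at logon
    "/tmp/.hidden/scan.sh",                        # hidden tool in a temp directory
]
print(flag_paths(artifacts))
```

Each flagged path is just one artifact; as the episode notes, the value comes from lining these up with execution and network evidence to reconstruct the sequence.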
Timestamps matter because security investigations depend heavily on time. A timestamp can show when a file was created, modified, accessed, downloaded, executed, or deleted. Logs can show when a user signed in, when a process started, when a connection was made, or when a setting changed. Analysts use timestamps to build timelines. A clear timeline helps separate cause from coincidence. For example, a suspicious login may occur at two fifteen in the morning, followed by a password change, then a new mailbox rule, then a large data download. Those events become more meaningful because of their order. Timestamps also help analysts compare activity across different systems. If an endpoint event, firewall event, and identity event all line up, confidence increases. Time can be tricky because systems may use different time zones, clocks may drift, and logs may store time in different formats. Even with those challenges, timing remains one of the most valuable clues.
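Building a timeline across sources is mostly a matter of normalizing every timestamp to one time zone and sorting. This sketch uses hypothetical records with mixed UTC offsets to show why normalization matters: the phishing event recorded in a local zone actually comes first.

```python
from datetime import datetime, timezone

# Hypothetical events from different log sources, with mixed UTC offsets.
raw = [
    ("identity", "2024-05-01T02:15:00+00:00", "suspicious login"),
    ("email",    "2024-04-30T21:05:00-05:00", "phishing message opened"),
    ("identity", "2024-05-01T02:21:00+00:00", "new mailbox rule created"),
    ("network",  "2024-05-01T02:40:00+00:00", "large outbound transfer"),
]

def build_timeline(records):
    """Normalize every timestamp to UTC, then sort into one timeline."""
    events = [(datetime.fromisoformat(ts).astimezone(timezone.utc), src, msg)
              for src, ts, msg in records]
    return sorted(events)

for when, source, message in build_timeline(raw):
    print(when.isoformat(), source, message)
```

Once sorted, the order tells the story the episode describes: the phishing message is opened at 02:05 UTC, the login follows at 02:15, then the mailbox rule, then the download.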
Log manipulation is a serious indicator because attackers often try to remove or weaken the evidence that would reveal their actions. Logs record activity, and that makes them valuable to defenders. If logs are cleared, disabled, edited, rotated unexpectedly, or missing for a key period, analysts should pay close attention. A gap in logging may be innocent, such as a storage issue or misconfiguration, but it may also mean someone tried to hide access. Log manipulation can show up as missing authentication events, sudden changes in audit settings, stopped logging services, deleted event records, or inconsistent records between systems. One system may have no record of an action, while another system still shows related network traffic or identity activity. That mismatch can be useful. Attackers may control one host, but they often cannot erase every trace across every log source. When logs disagree in suspicious ways, the disagreement itself becomes evidence.
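A gap in logging can be detected mechanically by comparing consecutive entry times against an expected cadence. This is a minimal sketch with hypothetical log times; a real detector would account for each source's normal quiet periods.

```python
from datetime import datetime, timedelta

def find_gaps(timestamps, max_gap):
    """Return (start, end) pairs where consecutive entries are too far apart."""
    ts = sorted(timestamps)
    return [(a, b) for a, b in zip(ts, ts[1:]) if b - a > max_gap]

# Hypothetical authentication log: entries every few minutes, then silence.
log_times = [datetime(2024, 5, 1, 1, 0),
             datetime(2024, 5, 1, 1, 5),
             datetime(2024, 5, 1, 1, 9),
             datetime(2024, 5, 1, 3, 30)]  # over two hours of missing records

gaps = find_gaps(log_times, max_gap=timedelta(minutes=30))
print(gaps)  # one gap, from 01:09 to 03:30
```

The gap itself proves nothing, as the episode says; the next step is checking whether another source, such as network records, shows activity during the silent window.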
Excessive resource consumption can also be an indicator of compromise. A compromised system may suddenly use unusual amounts of Central Processing Unit (C P U) power, memory, disk input and output, network bandwidth, or storage space. This can happen when malware encrypts files, mines cryptocurrency, scans the network, compresses stolen data, runs unauthorized services, or participates in a larger attack. A single spike does not always mean compromise, because legitimate updates, backups, reports, and busy applications can also consume resources. The key is whether the resource use fits the system’s normal role and timing. A quiet workstation sending large amounts of outbound traffic overnight is different from a backup server moving data during its approved backup window. Excessive resource consumption is often a starting clue. It tells an analyst where to look more closely for processes, files, connections, accounts, and scheduled activity that explain the behavior.
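The "fits the system's normal role and timing" test can be approximated with a baseline comparison. This sketch flags samples that sit far above a historical baseline using a simple standard-deviation threshold; the traffic figures are hypothetical, and real monitoring uses richer baselines than a flat window.

```python
from statistics import mean, stdev

def anomalous_samples(baseline, samples, z_threshold=3.0):
    """Flag samples more than z_threshold standard deviations above baseline."""
    mu, sigma = mean(baseline), stdev(baseline)
    return [s for s in samples if (s - mu) / sigma > z_threshold]

# Hypothetical overnight outbound traffic in MB per hour for a quiet workstation.
baseline = [2.0, 3.1, 2.4, 2.8, 3.0, 2.2, 2.6, 2.9]
tonight  = [2.5, 2.7, 850.0, 760.0]  # sudden large transfers

print(anomalous_samples(baseline, tonight))  # [850.0, 760.0]
```

The same readings on a backup server during its approved window would have a very different baseline, which is why the comparison is always against the system's own normal behavior.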
Plaintext strings are readable pieces of text found inside files, memory, logs, scripts, network traffic, or command history. They can be useful because attackers and malware sometimes leave behind names, paths, commands, domains, I P addresses, user agents, error messages, passwords, tokens, or configuration values in readable form. Analysts may inspect plaintext strings to understand what a suspicious file might contact, what files it might create, or what commands it may try to run. Plaintext evidence can also reveal careless handling of sensitive data. If passwords, keys, or session tokens appear in logs or scripts, that creates risk even if no attacker has used them yet. Plaintext strings should be handled carefully, because seeing a word or domain inside a file does not automatically prove the file is malicious. It may be a reference, a decoy, or harmless text. The value comes from connecting those strings to actual behavior.
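Pulling readable strings out of binary data works the way the Unix strings utility does: find runs of printable characters above a minimum length. The blob below is a made-up example with an embedded domain and command fragment.

```python
import re

def extract_strings(data: bytes, min_len: int = 4):
    """Pull printable ASCII runs out of binary data, like the strings tool."""
    pattern = rb"[ -~]{%d,}" % min_len   # printable ASCII, min_len or longer
    return [m.group().decode("ascii") for m in re.finditer(pattern, data)]

# Hypothetical binary blob with readable fragments buried in it.
blob = b"\x00\x01\x02evil-updates.example\x7f\x00cmd /c whoami\x00ab\x01"
print(extract_strings(blob))  # ['evil-updates.example', 'cmd /c whoami']
```

Note the two-character run "ab" is dropped by the length filter; short runs are usually coincidental bytes rather than meaningful text, which is the code's version of the episode's warning that a readable word alone proves nothing.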
Account lockouts are identity-related indicators that can point to credential attacks, user mistakes, system misconfiguration, or automated processes using old passwords. When an account locks after too many failed attempts, the lockout is a defensive control doing its job. The pattern around the lockout matters. One user mistyping a password a few times is normal. Many accounts locking at the same time may suggest password spraying, where an attacker tries a small number of common passwords across many accounts. One account failing repeatedly from many sources may suggest brute force attempts or a misconfigured service. Lockouts tied to unusual locations, odd hours, or unfamiliar devices deserve extra attention. Analysts also look at whether successful logins occur after failures. A successful login after a long series of failures may mean the attacker guessed or obtained the password. Account lockouts become more useful when viewed alongside source addresses, timestamps, device details, and authentication methods.
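The spraying pattern, one source trying a few passwords against many accounts, is distinguishable in code from brute force, which hammers one account. This sketch uses hypothetical failure records and made-up thresholds; real detections tune these numbers to the environment.

```python
from collections import defaultdict

def spray_sources(failed_logins, min_accounts=5, max_tries_each=3):
    """Password spraying: one source, many accounts, few attempts per account."""
    per_source = defaultdict(lambda: defaultdict(int))
    for source, account in failed_logins:
        per_source[source][account] += 1
    return [src for src, accounts in per_source.items()
            if len(accounts) >= min_accounts
            and max(accounts.values()) <= max_tries_each]

# Hypothetical failures: one source sprays six accounts once each,
# another hammers a single account (brute force, not spraying).
failures = [("198.51.100.9", f"user{i}") for i in range(6)]
failures += [("203.0.113.50", "admin")] * 20

print(spray_sources(failures))  # ['198.51.100.9']
```

The brute-force source is deliberately not flagged here; it would be caught by a different rule, which is why analysts layer several lockout patterns rather than relying on one.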
Impossible travel is an indicator that appears when the same account seems to sign in from two locations that a person could not realistically travel between in the time available. For example, an account might authenticate from Dallas and then a short time later from another continent. This does not automatically prove compromise, because virtual private networks, proxies, mobile networks, cloud services, and location mapping errors can create confusing results. Still, impossible travel should be investigated because it may show that credentials are being used by someone other than the real user. Analysts look at the device, authentication method, application, time, location confidence, and user behavior. A login from a known managed device through a normal access path is different from a login from an unmanaged device in an unfamiliar region. Impossible travel is powerful because it connects identity, location, and time. It asks whether the account activity makes human sense.
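The "could a person realistically travel between these logins" question reduces to distance divided by time. This sketch computes great-circle distance with the haversine formula and flags speeds faster than a commercial flight; the coordinates, times, and 900 km/h threshold are illustrative assumptions.

```python
from datetime import datetime, timezone
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))  # Earth radius ~6371 km

def impossible_travel(login_a, login_b, max_speed_kmh=900):
    """Each login is (lat, lon, time). Flags implied speeds beyond a flight."""
    (lat1, lon1, t1), (lat2, lon2, t2) = login_a, login_b
    hours = abs((t2 - t1).total_seconds()) / 3600
    distance = haversine_km(lat1, lon1, lat2, lon2)
    if hours == 0:
        return distance > 0
    return distance / hours > max_speed_kmh

dallas = (32.78, -96.80, datetime(2024, 5, 1, 2, 15, tzinfo=timezone.utc))
london = (51.51, -0.13, datetime(2024, 5, 1, 3, 0, tzinfo=timezone.utc))
print(impossible_travel(dallas, london))  # True: thousands of km in 45 minutes
```

As the episode cautions, a True result is a trigger to investigate, not proof: a virtual private network exit point can make a legitimate login appear to come from another continent.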
Concurrent sessions are another useful identity clue. A concurrent session means the same account has more than one active session at the same time. That can be normal in many cases. You may be signed in on a laptop, phone, and browser session at once. Service accounts may also support multiple connections by design. The warning appears when concurrent sessions do not match expected behavior. One user account may be active from two distant places, two different device types, or two applications with very different risk levels. A session may remain active after a password change, or an attacker may use a stolen token instead of signing in normally. Analysts look at whether sessions were created through expected authentication flows, whether Multi-Factor Authentication (M F A) was used, whether the device is known, and whether the account actions line up with the user’s role. Session evidence can show that access continued even after the initial login.
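The concurrent-session check boils down to finding sessions for one account that overlap in time but differ in location. This is a minimal sketch with hypothetical session records; real systems also compare device identity, authentication flow, and token lineage.

```python
from datetime import datetime

def conflicting_sessions(sessions):
    """sessions: (start, end, location). Flag overlapping pairs from different places."""
    flagged = []
    ordered = sorted(sessions)
    for i, (s1, e1, loc1) in enumerate(ordered):
        for s2, e2, loc2 in ordered[i + 1:]:
            if s2 < e1 and loc1 != loc2:   # time overlap, different locations
                flagged.append((loc1, loc2))
    return flagged

# Hypothetical sessions for one account.
sessions = [
    (datetime(2024, 5, 1, 9, 0),   datetime(2024, 5, 1, 17, 0), "Dallas"),
    (datetime(2024, 5, 1, 10, 30), datetime(2024, 5, 1, 11, 0), "Dallas"),    # phone, fine
    (datetime(2024, 5, 1, 10, 45), datetime(2024, 5, 1, 12, 0), "overseas"),  # suspect
]
print(conflicting_sessions(sessions))
```

The laptop-plus-phone pair from the same location is deliberately not flagged, matching the episode's point that concurrency alone is normal; it is the mismatch in place that warrants a look.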
The real skill is connecting indicators instead of treating each clue as separate. A suspicious hash may identify a known malicious file, but the process list can show whether it actually ran. A domain lookup may show where the system tried to connect, but timestamps can show whether that happened right after a phishing email was opened. Account lockouts may suggest a password attack, while impossible travel may show that one attempt succeeded. Log manipulation may explain why the local machine seems quiet even though network records show activity. Excessive resource use may point analysts toward the exact time a malicious process began encrypting files or staging data. Each indicator adds context. Analysts build confidence by asking whether the clues support the same story. When several independent sources point in the same direction, the investigation becomes stronger and the response can become more focused.
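The corroboration idea, several independent sources pointing at the same host, can be sketched as a simple grouping. The alert records and the threshold of three indicator types are hypothetical; real correlation engines also weight indicators and constrain them to a time window.

```python
from collections import defaultdict

def corroborated_hosts(alerts, threshold=3):
    """Hosts where several distinct indicator types point the same way."""
    by_host = defaultdict(set)
    for host, indicator in alerts:
        by_host[host].add(indicator)
    return [host for host, kinds in by_host.items() if len(kinds) >= threshold]

# Hypothetical alerts: (host, indicator_type)
alerts = [
    ("ws-042", "known_bad_hash"),
    ("ws-042", "suspicious_domain_lookup"),
    ("ws-042", "impossible_travel"),
    ("ws-107", "account_lockout"),   # single clue, not yet corroborated
]

print(corroborated_hosts(alerts))  # ['ws-042']
```

Counting distinct indicator types rather than raw alert volume matters: one noisy detector firing fifty times is still a single line of evidence, while three different sources agreeing is the convergence the episode describes.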
Indicators of compromise help you move from vague concern to informed investigation. A hash can identify a suspicious file. An I P address or domain can reveal communication. A malicious process can show execution. File system artifacts can show what changed. Timestamps can place events in order. Log manipulation can suggest concealment. Excessive resource consumption can point to harmful activity. Plaintext strings can expose clues inside files or logs. Account lockouts, impossible travel, and concurrent sessions can reveal identity abuse. None of these clues should be handled carelessly or in isolation, because normal systems can produce strange-looking events too. The stronger approach is to combine indicators, compare sources, and ask whether the activity fits the user, device, application, and time. When you learn to read these signs together, compromise becomes less invisible. You may not know the whole story immediately, but you know where to look next.