
Best Practices for Threat Hunting in Large Networks

November 1, 2018 by Daniel Goldberg

When we think of modern threat hunting, we think of proactively looking for exceptional situations across the network. Rather than waiting for an incident to occur, threat hunters assume attackers are already inside the network and attempt to track them down. They make educated assumptions, such as “PowerShell remoting is used to compromise machines,” then write scripts to detect that activity, analyze the results and leave sensors in place to alert on future use of the technique.
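To make that concrete, here is a minimal sketch of what such a detection script might look like. It assumes a hypothetical JSON-lines event feed (the field names “type,” “image,” “dest_ip” and “dest_port” are illustrative, not from any particular product) and flags wsmprovhost.exe, the process that hosts incoming PowerShell remoting sessions, as well as outbound connections to the default WinRM ports 5985 and 5986:

```python
"""Sketch: flag likely PowerShell remoting activity in a process/network event feed.

Assumes a hypothetical JSON-lines export where each line is an event dict with
keys such as "type", "image", "dest_ip" and "dest_port" -- adapt to your own feed.
"""
import json
import sys

WINRM_PORTS = {5985, 5986}          # default WinRM HTTP/HTTPS ports
REMOTING_HOST = "wsmprovhost.exe"   # hosts incoming PowerShell remoting sessions

def suspicious(event):
    """Return a short description if the event looks like PowerShell remoting."""
    if event.get("type") == "process_create":
        if event.get("image", "").lower().endswith(REMOTING_HOST):
            return "PowerShell remoting host started: " + event.get("image", "")
    elif event.get("type") == "network_connect":
        if event.get("dest_port") in WINRM_PORTS:
            return "Outbound WinRM connection to {}:{}".format(
                event.get("dest_ip"), event.get("dest_port"))
    return None

if __name__ == "__main__":
    for line in sys.stdin:
        line = line.strip()
        if not line:
            continue
        hit = suspicious(json.loads(line))
        if hit:
            print(hit)
```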

Modern data centers are tangled webs, typically consisting of multiple generations of software architectures and leftovers from acquisitions. The lack of documentation, combined with the sheer amount of data available, makes effective threat hunting challenging. This, along with a “the show must go on” mentality in which security cannot impact operations, forces us to find scalable methodologies that work in real-world networks.


We start from a baseline, a “known good state,” then detect anomalies and classify them as either part of the environment or security incidents. This process keeps the workload manageable for defenders. By starting from a baseline, we detect deviations that might be indicators of attacker activity, while at the same time hardening existing systems and turning the baseline into a trusted base.

Visualizations Are Key

It’s hard to tell what’s really going on in any large network. Analyzing what assets exist and who communicates with whom is an open challenge, but threat hunters should build simple tools to give them partial answers. Free tools such as ss, sysmon and sysdig, combined with Graphviz, can help defenders build maps that track network activity.

The goal is to construct an accurate map of the network.

[Figure: network map visualization from GuardiCore Centra]
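As a starting point, a small script can already produce a useful map. The sketch below assumes the default column layout of “ss -tn” output (local address in the fourth column, peer address in the fifth) and turns a host’s established TCP connections into a Graphviz DOT file that the dot tool can render; run it on each host and merge the results:

```python
"""Sketch: turn established TCP connections into a Graphviz DOT graph.

Usage (assumes the default `ss -tn` column layout):
    ss -tn | python3 conn_map.py > conns.dot
    dot -Tpng conns.dot -o conns.png
"""
import sys

edges = set()
for line in sys.stdin:
    fields = line.split()
    if len(fields) < 5 or fields[0] == "State":
        continue  # skip the header line and anything unexpected
    local = fields[3].rsplit(":", 1)[0]  # strip the port
    peer = fields[4].rsplit(":", 1)[0]
    edges.add((local, peer))

print("digraph connections {")
for local, peer in sorted(edges):
    print('  "{}" -> "{}";'.format(local, peer))
print("}")
```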

Using maps, defenders can start analyzing what typical network traffic looks like and set up alerts with different tools to detect abnormal traffic. Communication from a standalone Linux server to the local AD server? Alert. A web server behind a load balancer starts communicating directly with Internet users? Alert.
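A minimal sketch of such rules, assuming flow records that have already been labelled with asset roles from your inventory (the field names “src_role,” “dst_role” and “dst_is_internet” are hypothetical), might look like this:

```python
"""Sketch: rule-based alerting over labelled flow records.

Each flow is a dict with hypothetical keys ("src_role", "dst_role",
"dst_is_internet") that would come from your asset inventory and network map.
"""

RULES = [
    ("standalone Linux server talking to the AD server",
     lambda f: f["src_role"] == "standalone_linux" and f["dst_role"] == "ad_server"),
    ("load-balanced web server talking directly to the Internet",
     lambda f: f["src_role"] == "web_backend" and f["dst_is_internet"]),
]

def check(flow):
    """Return the names of all rules the flow violates."""
    return [name for name, predicate in RULES if predicate(flow)]

if __name__ == "__main__":
    sample_flows = [
        {"src_role": "standalone_linux", "dst_role": "ad_server", "dst_is_internet": False},
        {"src_role": "web_backend", "dst_role": "client", "dst_is_internet": True},
        {"src_role": "app_server", "dst_role": "db_server", "dst_is_internet": False},
    ]
    for flow in sample_flows:
        for alert in check(flow):
            print("ALERT: {}: {}".format(alert, flow))
```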

These alerts are not necessarily indicators of attackers! In fact, many will be undocumented behavior or IT misconfigurations. They are still worth investigating: fixing these dormant issues removes hiding places that attackers frequently abuse within the network. Over time, threat hunters working in a specific network build up a collection of alerts that rarely trigger and indicate truly suspicious activity.

To be effective, threat hunters need an accurate feed of what happens inside machines and across the network. An accurate map is just part of that visibility puzzle. The laundry list of running images, their hashes, domain lookups and so forth is a crucial component for building up sensors. This feed is frequently built using the same tools mentioned before, such as sysmon, which can efficiently log events to centralized servers.
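For illustration, here is a small sketch of processing such a feed. It assumes Sysmon process-creation events (Event ID 1) have already been shipped as JSON lines by a log collector; the field names below follow Sysmon’s own schema, but the overall layout depends on the shipper and should be adapted. It builds the list of observed images and flags any image seen with more than one hash:

```python
"""Sketch: list observed images and their hashes from a process-creation feed
(e.g., Sysmon Event ID 1 shipped as JSON lines by a log collector).

The "EventID", "Image" and "Hashes" names follow Sysmon's schema; the
JSON-lines layout itself depends on whatever shipper you use.
"""
import json
import sys
from collections import defaultdict

seen = defaultdict(set)  # image path -> set of hash strings observed

for line in sys.stdin:
    line = line.strip()
    if not line:
        continue
    event = json.loads(line)
    if str(event.get("EventID")) != "1":   # 1 = process creation in Sysmon
        continue
    image = event.get("Image", "")
    hashes = event.get("Hashes", "")       # e.g. "SHA256=...,MD5=..."
    seen[image].add(hashes)

for image, hash_set in sorted(seen.items()):
    # An image that shows up with more than one hash deserves a closer look.
    flag = "  <-- multiple hashes observed" if len(hash_set) > 1 else ""
    print("{}: {} hash value(s){}".format(image, len(hash_set), flag))
```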

However, once this feed is set up, a new problem emerges: handling the deluge of data that comes through the feed is more challenging than collecting it. Rather than sifting through a firehose of data for a single trace, threat hunters must apply their years of security and IT experience to narrow the search.

Don’t Ignore Your Experience

Threat hunters recognize that applications aren’t replaced on a daily basis, new servers don’t just appear, and organizations don’t suddenly start communicating with new third-party services. Threat hunters define the current state of affairs as valid and look for deviations. Examples of deviations could be a binary executing from a temporary folder (potential attacker), a server communicating with a cloud service it typically doesn’t (possible data exfiltration) or traffic to an IP address that isn’t matched with a DNS query.
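The last example, traffic to an IP address with no matching DNS query, is straightforward to check once both logs are centralized. A minimal sketch, assuming one resolved IP per line in a DNS answer log and one destination IP per line in a connection log (the file names are illustrative):

```python
"""Sketch: flag outbound connections to IPs that never appeared in a DNS answer,
a possible sign of hard-coded command-and-control addresses.

Assumes two plain-text inputs you already collect (names are illustrative):
    dns_answers.txt   one resolved IP per line
    connections.txt   one destination IP per line
"""

def load_ips(path):
    with open(path) as fh:
        return {line.strip() for line in fh if line.strip()}

resolved = load_ips("dns_answers.txt")
contacted = load_ips("connections.txt")

for ip in sorted(contacted - resolved):
    print("ALERT: connection to {} with no matching DNS answer".format(ip))
```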

Monitoring artifacts that rarely change is an additional method for hunting threats. Networks frequently rely on a handful of configuration files that control DNS resolution, database access, user management and other key systems, and these files are rarely changed. That makes them classic locations to monitor for attacker activity, whether by locking down access to the files or watching for modifications. File Integrity Monitoring (FIM) enables threat hunters to define baselines and recognize deviations.
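A bare-bones sketch of the FIM idea, hashing a few illustrative configuration files, recording a baseline on the first run and reporting drift on later runs (real deployments would use dedicated FIM tooling, but the principle is the same):

```python
"""Sketch of file integrity monitoring: hash a handful of critical configuration
files, store a baseline on the first run, and report drift on later runs.

The watched file list and baseline path are illustrative.
"""
import hashlib
import json
import os

WATCHED = ["/etc/resolv.conf", "/etc/hosts", "/etc/passwd", "/etc/sudoers"]
BASELINE = "fim_baseline.json"

def snapshot():
    digests = {}
    for path in WATCHED:
        try:
            with open(path, "rb") as fh:
                digests[path] = hashlib.sha256(fh.read()).hexdigest()
        except OSError:
            pass  # missing or unreadable (run as root to cover files like /etc/sudoers)
    return digests

if not os.path.exists(BASELINE):
    with open(BASELINE, "w") as fh:
        json.dump(snapshot(), fh, indent=2)
    print("Baseline recorded in " + BASELINE)
else:
    with open(BASELINE) as fh:
        baseline = json.load(fh)
    for path, digest in snapshot().items():
        if baseline.get(path) != digest:
            print("ALERT: {} changed since the baseline was taken".format(path))
```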

Trust, but Verify

Over time, organizations build up a large set of sensors and alerts. However, many of these sensors go untested outside the occasional pentest. Threat hunters should also make sure existing security software actually works: sensors and systems should be routinely tested, for example with breach and attack simulation tools such as the Infection Monkey.
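One lightweight way to exercise a single sensor is a “canary” test that deliberately reproduces a behavior your alerts should catch. The sketch below simply copies a benign binary into a temporary folder and runs it; it assumes you then confirm, manually or through your monitoring pipeline, that the “binary executed from a temp folder” alert actually fired. Tools such as the Infection Monkey automate far broader simulations:

```python
"""Sketch of a sensor self-test: execute a benign binary from a temporary folder,
then verify that the corresponding alert fired in your monitoring pipeline.

This exercises only one detection; breach and attack simulation tools cover far more.
"""
import shutil
import subprocess
import sys
import tempfile
from pathlib import Path

# Use a harmless, universally available binary as the canary payload.
source = shutil.which("whoami") or "/bin/true"

with tempfile.TemporaryDirectory() as tmp:
    canary = Path(tmp) / ("canary_test" + Path(source).suffix)
    shutil.copy(source, canary)
    canary.chmod(0o755)
    subprocess.run([str(canary)], check=False)
    print("Canary executed from {}; verify that the expected alert appeared "
          "in your monitoring pipeline.".format(canary), file=sys.stderr)
```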

The overall goal of threat hunting is to make attackers stand out from the noise. There is no silver bullet that works for every network, but these methods let threat hunters work efficiently in complex environments. By visualizing network traffic, processing machine-activity feeds for deviations, testing existing systems and monitoring for changes to sensitive files, threat hunters can find suspicious activity inside complex networks.


Sources

ss(8) – Linux man page, die.net

Sysmon v8.0, Microsoft

Sysdig

Graphviz

Infection Monkey, GitHub

Daniel Goldberg

Daniel is a security research expert at GuardiCore, where he is responsible for tracking the latest security intelligence, including detailed analysis of hackers' methodologies, for use in implementing advanced countermeasures into GuardiCore products and services. Daniel has more than seven years of cyber security research experience. Prior to GuardiCore, he served as a captain in the Israel Defense Forces (IDF).