
Sysmon & Security Onion, Part 3: Enterprise Security Monitoring

This is part three of a series of posts that contain key excerpts of my paper, Using Sysmon to Enrich Security Onion’s Host-Level Capabilities.

As NSM practitioners become blind to the majority of the traffic entering and exiting their networks, and as they require quality indicators in both the network and host space, detection and response strategies need to shift to include more than just network-centric data. Hosts on the network can be an extremely rich repository of data that can be extracted and used in detection and response alongside NSM data. In essence, this is applying the NSM mindset to host-level data, a concept David Bianco has coined “Enterprise Security Monitoring” (ESM). (Bianco, Enterprise Security Monitoring, 2013) ESM integrates intelligence-driven CND principles; notably, it aims to locate indicators relevant to where an intrusion sits in relation to the kill chain. Because these indicators span both the network and the host, defenders need access to both categories of data.

Though many tools can generate both NSM and host data, the confounding issues typically revolve around how to efficiently collect the data and present it in a way that makes it usable for alerting, analysis, and decision-making. This is where Security Onion brings it all together.

References

Bianco, D. (2013, September 14). Enterprise Security Monitoring. Retrieved February 12, 2015, from speakerdeck.com: https://speakerdeck.com/davidjbianco/enterprise-security-monitoring


Sysmon & Security Onion, Part 2: Rise of Intelligence-Driven Computer Network Defense

This is part two of a series of posts that contain key excerpts of my paper, Using Sysmon to Enrich Security Onion’s Host-Level Capabilities.

Unfortunately, it is not just encrypted traffic that harries NSM practitioners – the persistence of advanced adversaries continues unabated. This has given rise to intelligence-driven CND, which is a threat-centric risk management strategy. (Hutchins, Cloppert, & Amin) Simply put, as the defender gathers intelligence about intrusions and the adversary behind them, the defender is able to use this information in future detection cycles against the adversary. Indicators are a key part of this intelligence. From the formative paper, Intelligence-Driven Computer Network Defense Informed by Analysis of Adversary Campaigns and Intrusion Kill Chains: “By completely understanding an intrusion, and leveraging intelligence on these tools and infrastructure, defenders force an adversary to change every phase of their intrusion in order to successfully achieve their goals in subsequent intrusions. In this way, network defenders use the persistence of adversaries’ intrusions against them to achieve a level of resilience.” (Hutchins, Cloppert, & Amin)

A crucial part of this methodology is the ability to gather quality indicators. Quality indicators are extractable (“Can I find this indicator in my data?”), purposeful (“To what use will I put this indicator?”), and actionable (“If I find this indicator in my data, can I do something with that information?”). (Bianco, Enterprise Security Monitoring, 2013) Without these quality indicators, defenders will not be able to efficiently detect further intrusions by the same adversary. Various forms of indicators have differing values. Consider David Bianco’s Pyramid of Pain:

[Figure: David Bianco’s Pyramid of Pain]

It can be seen that Hash Values and IP Addresses sit at the bottom of the pyramid. Though these types of indicators can be useful, they are very easy for the adversary to cycle through, so the probability of seeing the same indicator reused across multiple campaigns is much lower than for the tools the adversary uses (which sit much higher on the pyramid). The key point is that as the defender builds a detection strategy around higher-quality indicators, the adversary is forced to change their Tactics, Techniques, and Procedures (TTPs), which is very costly in terms of time and resources. This does not negate the fact that the lower indicator types are still useful.

Though there are different types of indicators (Atomic, Computed, and Behavioral), it is clear that the defender must have indicators spanning both the network and host level, as an adversary carries out operations in both spaces. (Hutchins, Cloppert, & Amin)

References

Hutchins, E. M., Cloppert, M. J., & Amin, R. M. (n.d.). Intelligence-Driven Computer Network Defense Informed by Analysis of Adversary Campaigns and Intrusion Kill Chains. Retrieved February 12, 2015, from lockheedmartin.com: http://www.lockheedmartin.com/content/dam/lockheed/data/corporate/documents/LM-White-Paper-Intel-Driven-Defense.pdf

Bianco, D. (2013, September 14). Enterprise Security Monitoring. Retrieved February 12, 2015, from speakerdeck.com: https://speakerdeck.com/davidjbianco/enterprise-security-monitoring

Bianco, D. (2014, January 17). The Pyramid of Pain. Retrieved from Enterprise Detection and Response: http://detect-respond.blogspot.com/2013/03/the-pyramid-of-pain.html


Sysmon & Security Onion: Monitoring Key Windows Processes for Anomalies

One of the sections of my recently released paper works through how to use Sysmon logs to monitor key Windows processes for anomalous behavior. One issue that I wanted to make clear: the current OSSEC ruleset on Github is based on this document that I maintain: http://DefensiveDepth.com/Windows-Processes, which itself is based on a few different sources, namely the SANS Know Abnormal… Find Evil poster and Know your Windows Processes or Die Trying, as well as my own experience. Any feedback that you can give to tweak the documentation and/or rules would be much appreciated.

From the paper, Using Sysmon to Enrich Security Onion’s Host-Level Capabilities:

To maintain persistence, both targeted and opportunistic threats use certain techniques to blend into the background of a busy system. One of the primary ways of doing this is by emulating and/or abusing legitimate Windows processes. For instance, malware may be named svhost.exe so as to pass for svchost.exe, a legitimate process. Another example would be the Poweliks class of malware, which hollows out a legitimate process and runs its malicious threads from there. In fact, in the case of Poweliks, there is no binary downloaded to the system itself, as it runs entirely in memory.
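The name-squatting technique (svhost.exe versus svchost.exe) can be caught with a simple fuzzy comparison against known-good process names. Below is a minimal Python sketch of that idea; the `lookalike` helper and the short process list are illustrative assumptions, not part of the actual ruleset discussed in the paper.

```python
import difflib

# Known-good names of common Windows system processes (illustrative subset).
LEGITIMATE = {"svchost.exe", "lsass.exe", "services.exe", "csrss.exe", "explorer.exe"}

def lookalike(name, cutoff=0.85):
    """Return the legitimate process a name appears to imitate, or None.

    Flags names that are close to, but not exactly, a known-good name,
    e.g. the classic svhost.exe / svchost.exe substitution.
    """
    name = name.lower()
    if name in LEGITIMATE:
        return None  # exact match: the name itself is not suspicious
    matches = difflib.get_close_matches(name, LEGITIMATE, n=1, cutoff=cutoff)
    return matches[0] if matches else None

print(lookalike("svhost.exe"))   # svchost.exe
print(lookalike("notepad.exe"))  # None
```

In practice this kind of check would run over the Image field of Sysmon process-creation events; the cutoff value would need tuning to keep false positives down.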

Using the host data generated by Sysmon, detection of these techniques can become commonplace. The crux of the idea is that it is well known how critical Windows processes normally behave. Let us take a closer look at this detection strategy. The current iteration of Poweliks hollows a legitimate Windows process, dllhost.exe, to perform its malicious tasks. (Harrell, 2014) When the author ran a copy of Poweliks on a system with Sysmon installed, the following pertinent data was generated:

         Image: C:\Windows\syswow64\dllhost.exe
         CommandLine: C:\Windows\syswow64\dllhost.exe

         ParentImage: C:\Windows\syswow64\windowspowershell\v1.0\powershell.exe
         ParentCommandLine: “C:\Windows\syswow64\windowspowershell\v1.0\powershell.exe” iex $env:a
Typically, dllhost.exe’s parent process would be svchost.exe, and at runtime dllhost.exe would be passed the following parameter: /Processid:{}. As can be seen, the dllhost.exe started by Poweliks falls well outside the norm and would have set off alerts.
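That baseline can be checked mechanically against each Sysmon process-creation event. The sketch below assumes the event has already been decoded into a dict of the fields shown above; the `check_dllhost` function and its field names are illustrative, not the actual OSSEC rules.

```python
import ntpath  # handles Windows-style backslash paths on any OS

def check_dllhost(event):
    """Flag dllhost.exe process-creation events that deviate from the norm.

    `event` is a dict of Sysmon Event ID 1 fields (Image, CommandLine,
    ParentImage) as produced by a log decoder.
    """
    if ntpath.basename(event["Image"]).lower() != "dllhost.exe":
        return []  # not a dllhost.exe launch; nothing to check
    findings = []
    # A legitimate dllhost.exe is normally spawned by svchost.exe ...
    if ntpath.basename(event["ParentImage"]).lower() != "svchost.exe":
        findings.append("unexpected parent: " + event["ParentImage"])
    # ... and is passed a /Processid:{GUID} argument at runtime.
    if "/processid:" not in event["CommandLine"].lower():
        findings.append("missing /Processid argument")
    return findings

# The Poweliks-spawned dllhost.exe from the log excerpt above:
suspicious = check_dllhost({
    "Image": r"C:\Windows\syswow64\dllhost.exe",
    "CommandLine": r"C:\Windows\syswow64\dllhost.exe",
    "ParentImage": r"C:\Windows\syswow64\windowspowershell\v1.0\powershell.exe",
})
print(suspicious)  # both checks fire: wrong parent, no /Processid argument
```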

Based on this concept, I wrote more than ten OSSEC rules that cover normal behavior for a number of critical Windows processes. These rules can be found on Github. Keep in mind that the rules were written against the corresponding OSSEC decoder for Sysmon logs, so they may need to be edited if used outside of that particular context. When writing the rules, there were a number of attributes that could be used to alert on abnormal behavior: image location, user context, parent process image, and finally, how many instances should be running on the system. For simplicity, the ruleset was designed to alert on one abnormal attribute. The most immutable attribute would seem to be the parent image, which is why the ruleset only looks at the parent image for abnormalities. Within this attribute, two abnormalities are checked for. The first is whether the parent process image is known-good; for example, the parent image of svchost.exe should only ever be C:\Windows\System32\services.exe. The second is that a couple of processes, lsm.exe and lsass.exe, should never spawn a child process, so a few rules look for these particular images as the parent process image and alert if found.
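The two checks above generalize to a small lookup table: expected parents per child process, plus a set of processes that should never be a parent. A minimal Python sketch of that logic follows; the table entries beyond svchost.exe/services.exe (e.g. wininit.exe as parent of services.exe and lsass.exe) are illustrative assumptions, and the real detection lives in the OSSEC rules, not this code.

```python
import ntpath  # handles Windows-style backslash paths on any OS

# Expected parent image per child process, and processes that should never
# spawn children. An illustrative subset, not the full ruleset on Github.
EXPECTED_PARENT = {
    "svchost.exe": {"services.exe"},
    "services.exe": {"wininit.exe"},
    "lsass.exe": {"wininit.exe"},
}
NEVER_A_PARENT = {"lsm.exe", "lsass.exe"}

def parent_anomaly(image, parent_image):
    """Return a description of a parent-image anomaly, or None if normal."""
    child = ntpath.basename(image).lower()
    parent = ntpath.basename(parent_image).lower()
    # Abnormality 2: some processes should never spawn anything at all.
    if parent in NEVER_A_PARENT:
        return f"{parent} should never spawn a child process ({child})"
    # Abnormality 1: the child has a known-good parent it must come from.
    expected = EXPECTED_PARENT.get(child)
    if expected is not None and parent not in expected:
        return f"{child} spawned by {parent}, expected one of {sorted(expected)}"
    return None

print(parent_anomaly(r"C:\Windows\System32\svchost.exe",
                     r"C:\Windows\System32\services.exe"))  # None
print(parent_anomaly(r"C:\Windows\System32\svchost.exe",
                     r"C:\Users\victim\evil.exe"))
```

Alerting on a single, hard-to-fake attribute like this keeps the rules simple while still forcing the adversary to mimic legitimate process ancestry.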

References

SANS. (n.d.). Know Abnormal… Find Evil. Retrieved February 12, 2015, from sans.org: http://digital-forensics.sans.org/media/poster_2014_find_evil.pdf

Harrell, C. (2014, December 17). Prefetch File Meet Process Hollowing. Retrieved from Journey Into Incident Response: http://journeyintoir.blogspot.com/2014/12/prefetch-file-meet-process-hollowing_17.html


Sysmon & Security Onion, Part 1: Rise of the Encrypted Web

This is part one of a series of posts that contain key excerpts of my paper, Using Sysmon to Enrich Security Onion’s Host-Level Capabilities.

In the eleven years since Richard Bejtlich wrote his seminal book on Network Security Monitoring, practitioners have seen a number of issues that expose the limitations of network-centric monitoring. The rise of encrypted-by-default web traffic, which blinds defenders to most NSM data types, is one of those issues.

The collection of NSM data typically happens through a TAP or SPAN at a strategic chokepoint in the network. If the network data between the client and server is encrypted, a number of types of NSM data will be useless to the analyst: full content, extracted content, and certain types of alerts. With the revelations of the past few years that a number of governments around the world have been intercepting their citizens’ unencrypted communications, there has been significant interest in encrypting most, if not all, of the web traffic around the world. In 2014, CloudFlare, which hosts a content delivery network (CDN) and security services for two million websites, enabled free SSL for all of their customers. They stated, “Having cutting-edge encryption may not seem important to a small blog, but it is critical to advancing the encrypted-by-default future of the Internet. Every byte, however seemingly mundane, that flows encrypted across the Internet makes it more difficult for those who wish to intercept, throttle, or censor the web.” (Prince, 2014)

In a recent study, The Cost of the “S” in HTTPS, measurements from twenty-five thousand residential ADSL customers showed HTTPS accounting for 80% of upload traffic, compared to 45.7% in 2012. (Naylor, et al.) This trend is expected to continue for the foreseeable future.

This increase in encryption will typically be seen in north-south traffic rather than east-west traffic, which means NSM sensors deployed to monitor internal traffic may not be as readily affected. However, sensors deployed at network egress points will certainly be affected unless some type of mitigation is put in place. One such mitigation is proxying the SSL traffic so that the network data can be read, though this solution is limited in practice by performance, privacy, and liability concerns.

References

Prince, M. (2014, September 29). Introducing Universal SSL. Retrieved February 12, 2015, from Cloudflare.com: https://blog.cloudflare.com/introducing-universal-ssl/

Naylor, D., Finamore, A., Leontiadis, I., Grunenberger, Y., Mellia, M., Munafò, M., . . . Steenkiste, P. (n.d.). The Cost of the “S” in HTTPS. Retrieved February 12, 2015, from cs.cmu.edu: http://www.cs.cmu.edu/~dnaylor/CostOfTheS.pdf
