
To SIEM or Not to SIEM? Part II

The Great Correlate Debate

SIEMs have been pitched in the past as “correlation engines” whose special algorithms can take in volumes of logs and filter everything down to just the good stuff. In its most basic form, correlation is a mathematical, statistical, or logical relationship between a set of different events. Correlation is incredibly important, and it is a very powerful method for confirming the details of a security incident. Correlation also helps shake out circumstantial evidence, which is completely fair to use in the incident response game. Noticing one alarm from one host can certainly be compelling evidence, but in many cases it’s not sufficient. Let’s say my web proxy logs indicate a host on the network was a possible victim of a drive-by download attack. The SIEM could notify the analyst team that this issue occurred, but what do we really know at this point? That some host may have downloaded a complete file from a bad host – that’s it. We don’t know if the file has been unpacked or executed, and we have no idea if the threat is still relevant. If the antivirus deleted or otherwise quarantined the file, do we still have anything to worry about? If the proxy blocked the file from downloading, what does that mean for this incident?

This is the problem that correlation can solve. If, after the malware file was downloaded, we see port scanning behavior, large outbound netflow to unusual servers, repeated connections to PHP scripts hosted in sketchy places, or other suspicious activity from the same host, we can create an incident for the host based on those additional details. The order is important as well. Since most attacks follow the same pattern (bait, redirect, exploit, additional malware delivery, check-in), we tie these steps together with security alarms and timestamps. If we see the events happening in the proper order, we can be far more confident that an incident has occurred.
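To make that idea concrete, here is a minimal sketch of order-aware correlation. The stage labels, alarm fields, and 30-minute window are illustrative assumptions rather than actual playbook logic; the point is simply to group alarms by host and confirm that the expected stages occur in order, within a bounded time window.

```python
# Minimal sketch of order-aware correlation. Stage names, alarm fields, and
# the 30-minute window are illustrative assumptions, not a product feature.
from collections import defaultdict
from datetime import datetime, timedelta

# Each alarm is assumed to carry a host, a stage label, and a timestamp.
ALARMS = [
    {"host": "10.1.2.3", "stage": "malware_download", "ts": "2013-10-21 09:02:11"},
    {"host": "10.1.2.3", "stage": "port_scan",        "ts": "2013-10-21 09:05:40"},
    {"host": "10.1.2.3", "stage": "callback",         "ts": "2013-10-21 09:07:02"},
]

PATTERN = ["malware_download", "port_scan", "callback"]  # expected order of stages
WINDOW = timedelta(minutes=30)

def correlate(alarms, pattern, window):
    """Return hosts whose alarms contain the pattern stages, in order, within the window."""
    by_host = defaultdict(list)
    for a in alarms:
        ts = datetime.strptime(a["ts"], "%Y-%m-%d %H:%M:%S")
        by_host[a["host"]].append((ts, a["stage"]))

    incidents = []
    for host, events in by_host.items():
        events.sort()                          # order events by timestamp
        idx, first_ts = 0, None
        for ts, stage in events:
            if stage != pattern[idx]:
                continue                       # alarms outside the pattern are ignored
            first_ts = first_ts or ts
            if ts - first_ts > window:         # stages too far apart: start over
                idx, first_ts = 0, None
                continue
            idx += 1
            if idx == len(pattern):
                incidents.append(host)
                break
    return incidents

print(correlate(ALARMS, PATTERN, WINDOW))      # ['10.1.2.3']
```

In a real deployment the stage labels would come from whatever alarm taxonomy the log sources provide; alarms that match no stage are simply skipped rather than breaking the chain.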

 


To SIEM or Not to SIEM? Part I

Security information and event management systems (SIEM, or sometimes SEIM) are intended to be the glue between an organization’s various security tools. Security and other event log sources export their alarms to a remote collection system like a SIEM, or display them locally for direct access and processing. It’s up to the SIEM to collect, sort, process, prioritize, store, and report the alarms to the analyst. That last piece is the key to an effective SIEM deployment, and of course the most challenging part. In the intro to this blog series I mentioned how we intend to describe our development of a new incident response playbook. A big first step in modernizing our playbook was a technology overhaul, moving from an outdated and inflexible platform to a modern and highly efficient one. In this two-part post, I’ll describe the pros and cons of running a SIEM, and most importantly provide details on why we believe a log management system is the superior choice.

Deploying a SIEM is a project. You can’t just rack a new box of packet-eating hardware and expect it to work. It’s important to understand and develop all the proper deployment planning steps: scope, business requirements, and engineering specifications are all factors in determining the success of the SIEM project. Event and alarm volume, the disk space it consumes, and the retention requirements must all be understood. There’s also the issue of how to reliably retrieve remote logs from a diverse group of networked devices without compatibility issues. You must be able to answer questions like:
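Chief among those questions is capacity: how much disk will the system consume over its required retention period? A back-of-the-envelope sketch of that arithmetic appears below; the event rate, event size, retention period, and compression ratio are all illustrative assumptions to be replaced with measurements from your own environment.

```python
# Back-of-the-envelope storage sizing for a SIEM or log management system.
# Every input below is an illustrative assumption; substitute measured values.
events_per_second = 5000        # sustained average across all log sources
avg_event_bytes   = 500         # average size of one stored event
retention_days    = 90          # how long events must remain searchable
compression_ratio = 0.3         # assume stored data compresses to ~30% of raw

raw_per_day_gb  = events_per_second * avg_event_bytes * 86400 / 1e9
stored_total_gb = raw_per_day_gb * retention_days * compression_ratio

print(f"Raw log volume per day: {raw_per_day_gb:,.0f} GB")                 # ~216 GB
print(f"Stored for {retention_days} days: {stored_total_gb:,.0f} GB")      # ~5,832 GB
```

Even rough numbers like these make it clear early on whether the planned hardware, licensing model, and retention requirement can coexist.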


Security Is Pervasive in the Cisco Blog Community

As we pass the halfway point of National Cyber Security Awareness Month (NCSAM), I wanted to call attention to some of our colleagues over on the Cisco Government Blog. Patrick Finn and Peter Romness have been busy this month espousing the need for security, and we thought it would be beneficial to point our readers to their thoughts on security published on the Cisco Government Blog.


Defensive Security: The 95/5 Approach

Many organizations make the error of thinking that basic defensive software is sufficient to protect critical data and infrastructure. In reality, for government and enterprise organizations to keep their data protected from increasingly advanced cyber threats, comprehensive defensive security approaches are critical. And even with advanced, comprehensive solutions, there are still risks.

No organization is ever going to be able to protect 100 percent of its assets 100 percent of the time, which is why I work on the 95/5 principle. No matter how many security solutions are deployed, if attackers are determined enough, they will find a hole. Humans make mistakes, and without fail, attackers will take advantage of them.

With comprehensive security approaches, we can regularly block at least 95 percent of threats—but there is always going to be a margin of error—the other 5 percent. A proactive, continuous approach can help ensure the vast majority of offensive moves are rejected.


Getting a Handle on Your Data

When your incident response team gets access to a new log data source, chances are that the events may not only contain an entirely different type of data, but may also be formatted differently than any log data source you already have. Having a data collection and organization standard will ease management and analysis of the data later on. Event attributes must be normalized to a standard format so events from disparate sources have meaning when viewed homogeneously. In addition to normalization, log events must be parsed into fields and labeled in a consistent way across data sources. Ensuring that log data is organized properly is a minimum requirement for efficient log analysis. Without digestible and flexible components, it’s extremely difficult to comprehend a log message. If you have ever paged through screen after screen of log data with no filter, you know what I’m talking about.
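To illustrate the parsing and labeling step, here is a minimal sketch that maps two hypothetical raw log formats onto one shared set of field names. The formats, regular expressions, and labels are assumptions made for the example, not a description of any particular product’s logs or of our actual schema.

```python
# Minimal sketch: parse two differently formatted sources into one labeled schema.
# The raw formats and field names below are hypothetical examples.
import re

# Hypothetical proxy log: "2013-10-21T09:02:11Z 10.1.2.3 GET http://bad.example/x.exe 200"
PROXY_RE = re.compile(r"(?P<ts>\S+) (?P<src_ip>\S+) (?P<method>\S+) (?P<url>\S+) (?P<status>\d+)")

# Hypothetical firewall log: "ts=2013-10-21 09:02:13 src=10.1.2.3 dst=203.0.113.9 action=deny"
FW_RE = re.compile(r"ts=(?P<ts>[\d\- :]+) src=(?P<src_ip>\S+) dst=(?P<dst_ip>\S+) action=(?P<action>\S+)")

PARSERS = {"proxy": PROXY_RE, "firewall": FW_RE}

def parse(line, source):
    """Parse a raw line into a dict whose field labels are shared across sources."""
    m = PARSERS[source].match(line)
    if not m:
        return None                  # unparsed lines should be flagged, not silently dropped
    event = m.groupdict()
    event["source"] = source         # keep provenance so analysts can find the raw record
    return event

print(parse("2013-10-21T09:02:11Z 10.1.2.3 GET http://bad.example/x.exe 200", "proxy"))
print(parse("ts=2013-10-21 09:02:13 src=10.1.2.3 dst=203.0.113.9 action=deny", "firewall"))
```

Because both parsers emit the same label for the same concept (`src_ip`, `ts`), a single filter such as `src_ip == "10.1.2.3"` works across every source without the analyst having to remember each vendor’s naming.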

Normalization

Data normalization is the process of transforming a log event into its canonical form, that is, the accepted standard representation of the data required by the organization consuming the data. If the same data can be represented in multiple formats, each possible iteration of the data can be considered a member of an equivalence class. To allow proper sorting, searching, and correlation, all data in the equivalence class must be formatted identically.

As an example, let’s consider timestamps. The C function strftime and its approximately 40 format specifiers give an indication of the potential number of ways a date and time can be represented. The lack of an internationally recognized standard timestamp format, combined with the fact that most programming libraries have adopted strftime’s conversion specifications, means that application developers are free to define timestamps as they see fit. Consuming data that includes timestamps requires recognizing the different formats and normalizing them to an organization’s adopted standard format. Other data contained in logs that may require normalization includes MAC addresses, phone numbers, alarm types, IP addresses, and DNS names. These are examples of equivalence classes, where the same data may be represented by different applications in different formats. In the case of an IP address or a DNS name, the CSIRT may find it beneficial not to normalize the data in-place, but rather to create an additional field, the labels of which are standardized across all data sources where possible.
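As a concrete sketch of the timestamp case, the code below tries a handful of candidate strftime-style formats and writes the canonical value into an added field rather than overwriting the original. The candidate formats, the assumed UTC time zone, and the ISO 8601 canonical form are illustrative choices, not an adopted standard.

```python
# Minimal sketch of timestamp normalization into an added, standardized field.
# Candidate formats and the canonical form (ISO 8601, UTC) are assumptions.
from datetime import datetime, timezone

CANDIDATE_FORMATS = [
    "%Y-%m-%dT%H:%M:%SZ",        # 2013-10-21T09:02:11Z
    "%b %d %H:%M:%S",            # Oct 21 09:02:11  (classic syslog, no year)
    "%m/%d/%Y %I:%M:%S %p",      # 10/21/2013 09:02:11 AM
]

def normalize_timestamp(raw, default_year=2013):
    """Return the canonical ISO 8601 UTC string, or None if no known format matches."""
    for fmt in CANDIDATE_FORMATS:
        try:
            dt = datetime.strptime(raw, fmt)
        except ValueError:
            continue
        if dt.year == 1900:                      # syslog timestamps carry no year
            dt = dt.replace(year=default_year)
        return dt.replace(tzinfo=timezone.utc).isoformat()
    return None

# Normalize into a new field instead of overwriting the original value,
# mirroring the add-a-field approach described above for IP addresses and DNS names.
event = {"timestamp": "Oct 21 09:02:11", "msg": "example"}
event["timestamp_utc"] = normalize_timestamp(event["timestamp"])
print(event["timestamp_utc"])                    # 2013-10-21T09:02:11+00:00
```

Whatever canonical form is chosen, applying it at collection time means every later sort, search, and correlation can treat time as a single equivalence class instead of re-parsing vendor formats on the fly.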
