
Operational Security Intelligence

Security intelligence, threat intelligence, cyber threat intelligence, or “intel” for short, is a popular topic these days in the infosec world. It seems everyone has a feed of “bad” IP addresses and hostnames they want to sell you, or share. This is an encouraging trend in that it indicates the security industry is attempting to work together to defend against known and upcoming threats. Many services, like Team Cymru, ShadowServer, ThreatExpert, Clean MX, and Malware Domain List, offer lists of known command and control servers, dangerous URIs, or lists of hosts in your ASN that have been checking in with known malicious hosts. This is essentially outsourced or assisted incident detection. You can leverage these feeds to learn what problems you already have on your network, and to prepare for future incidents. This can be very helpful, especially for organizations with no computer security incident response team (CSIRT) or with an under-resourced security or IT operations group.
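As a concrete sketch of that assisted detection, the snippet below matches outbound connections from a proxy log against a feed of known-bad IP addresses. The file names, CSV columns, and feed layout are assumptions for illustration, not any particular provider's format.

```python
# Minimal sketch: check outbound connections against a feed of known-bad
# IP addresses. File names and field names are illustrative assumptions.
import csv

# Load the feed: assume one IP address per line, '#' starting a comment.
with open("bad_ips.txt") as feed:
    bad_ips = {line.strip() for line in feed
               if line.strip() and not line.startswith("#")}

# Scan the proxy log: assume CSV rows with 'client_ip' and 'dest_ip'.
with open("proxy_log.csv") as log:
    for row in csv.DictReader(log):
        if row["dest_ip"] in bad_ips:
            # A hit is a lead for incident response, not proof of
            # compromise; the host contacted a listed address.
            print(f"lead: {row['client_ip']} -> {row['dest_ip']}")
```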

There are also commercial feeds, which range anywhere from basic notifications to full-blown managed security solutions. Government agencies and industry-specific organizations also provide feeds targeted toward specific actors and threats. Many security information and event management systems (SIEMs) offer built-in feed subscriptions available only on their platform. The field of threat intelligence services is an ever-growing one, offering options from open source and free to commercial and classified. Full disclosure: Cisco is also in the threat intelligence business.

However, the intent of this article is not to convince you that one feed is better than another, or to help you select the right feed for your organization. There are too many factors to consider, and the primary intention of this post is to make you ask yourself, “I have a threat intelligence feed, now what?”


Using a "Playbook" Model to Organize Your Information Security Monitoring Strategy

CSIRT, I have a project for you. We have a big network and we're definitely getting hacked constantly. Your group needs to develop and implement security monitoring to get our malware and hacking problem under control.

 

If you've been a security engineer for more than a few years, no doubt you've received a directive similar to this. If you're anything like me, your mind probably races a mile a minute thinking of all of the cool detection techniques you're going to develop and all of the awesome things you're going to find.

I know, I'll take the set of all hosts in our web proxy logs doing periodic POSTs and intersect that with…

STOP!

 

You shouldn't leap before you look into a project like this.


To SIEM or Not to SIEM? Part II

The Great Correlate Debate

SIEMs have been pitched in the past as "correlation engines" whose special algorithms can take in volumes of logs and filter everything down to just the good stuff. In its most basic form, correlation is a mathematical, statistical, or logical relationship between a set of different events. Correlation is incredibly important, and it is a very powerful method for confirming details of a security incident. Correlation helps shake out circumstantial evidence, which is completely fair to use in the incident response game. A single alarm from a single host can certainly be compelling evidence, but in many cases it's not sufficient. Let's say my web proxy logs indicate a host on the network was a possible victim of a drive-by download attack. The SIEM could notify the analyst team that this issue occurred, but what do we really know at this point? That some host may have downloaded a complete file from a bad host, and that's it. We don't know whether the file has been unpacked, executed, etc., and we have no idea whether the threat is still relevant. If the antivirus deleted or otherwise quarantined the file, do we still have anything to worry about? If the proxy blocked the file from downloading, what does that mean for this incident?

This is the problem that correlation can solve. If, after the malware file downloaded, we see port scanning behavior, large outbound netflow to unusual servers, repeated connections to PHP scripts hosted in sketchy places, or other suspicious activity from the same host, we can create an incident for the host based on those additional details. The order is important as well. Since most attacks follow the same pattern (bait, redirect, exploit, additional malware delivery, check-in), we can tie these steps together with security alarms and timestamps. If we see the events happening in the proper order, we can be assured an incident has occurred.
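Here is a minimal sketch of that ordered correlation: an incident fires only when a single host trips alarms matching the chain in sequence, inside a time window. The stage names, event fields, and one-hour window are all illustrative assumptions.

```python
# Sketch: ordered, per-host correlation of security alarms.
from datetime import datetime, timedelta

CHAIN = ["exploit", "malware_download", "checkin"]  # expected attack order
WINDOW = timedelta(hours=1)

def correlate(events):
    """events: iterable of dicts with 'host', 'stage', and 'time' keys."""
    incidents = []
    by_host = {}
    for ev in sorted(events, key=lambda e: e["time"]):
        by_host.setdefault(ev["host"], []).append(ev)
    for host, evs in by_host.items():
        idx, start = 0, None              # position in CHAIN, chain start
        for ev in evs:
            if start and ev["time"] - start > WINDOW:
                idx, start = 0, None      # chain went stale; start over
            if ev["stage"] == CHAIN[idx]:
                start = start or ev["time"]
                idx += 1
                if idx == len(CHAIN):     # every stage seen, in order
                    incidents.append(host)
                    break
    return incidents

t0 = datetime(2013, 6, 1, 12, 0)
print(correlate([
    {"host": "10.1.2.3", "stage": "exploit",          "time": t0},
    {"host": "10.1.2.3", "stage": "malware_download", "time": t0 + timedelta(minutes=2)},
    {"host": "10.1.2.3", "stage": "checkin",          "time": t0 + timedelta(minutes=9)},
]))  # -> ['10.1.2.3']
```

A lone check-in alarm never fires on its own here; only the full sequence, in order and within the window, creates an incident.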

 



To SIEM or Not to SIEM? Part I

Security information and event management systems (SIEM, or sometimes SEIM) are intended to be the glue between an organization's various security tools. Security and other event log sources export their alarms to a remote collection system like a SIEM, or display them locally for direct access and processing. It's up to the SIEM to collect, sort, process, prioritize, store, and report the alarms to the analyst. This last piece is the key to an effective SIEM deployment, and of course the most challenging part. In the intro to this blog series I mentioned that we intend to describe our development of a new incident response playbook. A big first step in modernizing our playbook was a technology overhaul, from an outdated and inflexible technology to a modern and highly efficient one. In this two-part post, I'll describe the pros and cons of running a SIEM and, most importantly, provide details on why we believe a log management system is the superior choice.

Deploying a SIEM is a project. You can't just rack a new box of packet-eating hardware and expect it to work. It's important to understand and develop all the proper deployment planning steps: scope, business requirements, and engineering specifications are all factors in determining the success of the SIEM project. Event and alarm volume, in terms of disk usage and retention requirements, must be understood. There's also the issue of how to reliably retrieve remote logs from a diverse group of networked devices without compatibility issues. You must be able to answer questions like:
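One of those questions is sheer storage volume. The back-of-the-envelope sketch below shows the shape of the sizing calculation; every input number is an illustrative assumption, not a recommendation or a measurement.

```python
# Rough sizing arithmetic for log collection. All inputs are assumed.
events_per_second = 5_000    # aggregate EPS across all log sources
avg_event_bytes   = 500      # average size of one stored event
retention_days    = 90       # how long events must stay searchable

daily_bytes = events_per_second * avg_event_bytes * 86_400
total_bytes = daily_bytes * retention_days

print(f"daily ingest: {daily_bytes / 1e9:,.1f} GB")    # 216.0 GB
print(f"retention   : {total_bytes / 1e12:,.2f} TB")   # 19.44 TB
```

Change any one of those assumptions and the answer, and possibly the hardware budget, changes with it.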


Getting a Handle on Your Data

When your incident response team gets access to a new log data source, chances are that the events not only contain an entirely different type of data, but are also formatted differently from any log data source you already have. Having a data collection and organization standard will ease management and analysis of the data later on. Event attributes must be normalized to a standard format so that events from disparate sources have meaning when viewed homogeneously. In addition to normalization, log events must be parsed into fields and labeled in a consistent way across data sources. Ensuring that log data is organized properly is a minimum requirement for efficient log analysis. Without digestible and flexible components, it's extremely difficult to comprehend a log message. If you have ever paged through screen after screen of log data with no filter, you know what I'm talking about.
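As a sketch of that consistent parsing and labeling, the snippet below maps two hypothetical, differently formatted sources onto one shared field schema so a single filter can span both. All formats and field names are invented for illustration.

```python
# Sketch: two parsers, one field schema. Formats are hypothetical.
import re

def parse_proxy(line):
    # assumed format: "1370088000.123 10.1.2.3 GET http://example.com/ 200"
    ts, src, method, url, status = line.split()
    return {"ts": ts, "src_ip": src, "method": method,
            "url": url, "status": status, "source": "proxy"}

def parse_firewall(line):
    # assumed format: "Jun 01 12:00:00 fw1 DENY src=10.1.2.3 dst=203.0.113.9"
    m = re.search(r"(\w+) src=(\S+) dst=(\S+)", line)
    return {"action": m.group(1), "src_ip": m.group(2),
            "dst_ip": m.group(3), "source": "firewall"}

# Both parsers emit the same 'src_ip' label, so one filter covers both.
events = [
    parse_proxy("1370088000.123 10.1.2.3 GET http://example.com/ 200"),
    parse_firewall("Jun 01 12:00:00 fw1 DENY src=10.1.2.3 dst=203.0.113.9"),
]
hits = [e for e in events if e["src_ip"] == "10.1.2.3"]  # matches both
```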

Normalization

Data normalization is the process of transforming a log event into its canonical form, that is, the accepted standard representation of the data required by the organization consuming the data. If the same data can be represented in multiple formats, each possible iteration of the data can be considered a member of an equivalence class. To allow proper sorting, searching, and correlation, all data in the equivalence class must be formatted identically.

As an example, let’s consider timestamps. The C function strftime and its approximately 40 format specifiers give an indication of the number of ways a date and time can be represented. The lack of an internationally recognized standard timestamp format, combined with the fact that most programming libraries have adopted strftime’s conversion specifications, means that application developers are free to define timestamps as they see fit. Consuming data that includes timestamps therefore requires recognizing the different formats and normalizing them to an organization’s adopted standard. Other data contained in logs that may require normalization includes MAC addresses, phone numbers, alarm types, IP addresses, and DNS names. These are examples of equivalence classes, where the same data may be represented by different applications in different formats. In the case of an IP address or a DNS name, the CSIRT may find it beneficial not to normalize the data in place, but rather to create an additional field, whose label is standardized across all data sources where possible.
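As a sketch of both techniques, and assuming the organization has adopted UTC ISO 8601 as its canonical timestamp form (the choice of standard is an assumption here; the approach is the point), the snippet below collapses a few common formats into one and adds a normalized companion field for DNS names.

```python
# Sketch: collapse one equivalence class (timestamps) to a canonical form.
from datetime import datetime, timezone

KNOWN_FORMATS = [
    "%d/%b/%Y:%H:%M:%S %z",   # Apache access log style
    "%Y-%m-%dT%H:%M:%S%z",    # ISO 8601 with an offset
    "%m/%d/%Y %I:%M:%S %p",   # US-style 12-hour clock, no zone
]

def normalize_timestamp(raw):
    for fmt in KNOWN_FORMATS:
        try:
            dt = datetime.strptime(raw, fmt)
        except ValueError:
            continue
        if dt.tzinfo is not None:
            dt = dt.astimezone(timezone.utc)  # convert offsets to UTC
        # naive times are assumed to already be UTC in this sketch
        return dt.strftime("%Y-%m-%dT%H:%M:%SZ")
    raise ValueError(f"unrecognized timestamp format: {raw!r}")

def add_normalized_name(event):
    # The additional-field approach: leave 'dns_name' untouched and add
    # a standardized companion field for sorting and correlation.
    event["dns_name_norm"] = event["dns_name"].rstrip(".").lower()
    return event
```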

