
To SIEM or Not to SIEM? Part I

Security information and event management systems (SIEM, or sometimes SEIM) are intended to be the glue between an organization’s various security tools. Security and other event log sources export their alarms to a remote collection system like a SIEM, or display them locally for direct access and processing. It’s up to the SIEM to collect, sort, process, prioritize, store, and report the alarms to the analyst. That last piece is the key to an effective SIEM deployment, and of course the most challenging part. In the intro to this blog series I mentioned how we intend to describe our development of a new incident response playbook. A big first step in modernizing our playbook was a technology overhaul, moving from an outdated and inflexible platform to a modern and highly efficient one. In this two-part post, I’ll describe the pros and cons of running a SIEM and, most importantly, explain why we believe a log management system is the superior choice.

Deploying a SIEM is a project. You can’t just rack a new box of packet-eating hardware and expect it to work. It’s important to understand and develop all the proper deployment planning steps. Things like scope, business requirements, and engineering specifications are all factors in determining the success of the SIEM project. Event and alarm volume in terms of disk usage, as well as retention requirements, must be understood. There’s also the issue of how to reliably retrieve remote logs from a diverse group of networked devices without compatibility issues. You must be able to answer these kinds of questions before the project begins.
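As a rough illustration of the capacity-planning math, here is a minimal sketch in Python. The event rate, average event size, retention window, and indexing overhead are all assumed values for illustration, not measurements from any deployment:

```python
# Back-of-the-envelope SIEM storage estimate.
# All numbers below are illustrative assumptions, not measurements.

EVENTS_PER_SECOND = 5_000   # assumed aggregate event rate across log sources
AVG_EVENT_BYTES = 500       # assumed average size of a stored event
RETENTION_DAYS = 90         # assumed retention requirement
INDEX_OVERHEAD = 1.3        # assumed indexing/metadata overhead factor

def retention_storage_gb(eps, avg_bytes, days, overhead):
    """Estimate on-disk storage (GB) needed to retain `days` of events."""
    daily_bytes = eps * avg_bytes * 86_400
    return daily_bytes * days * overhead / 1024**3

if __name__ == "__main__":
    gb = retention_storage_gb(EVENTS_PER_SECOND, AVG_EVENT_BYTES,
                              RETENTION_DAYS, INDEX_OVERHEAD)
    print(f"Estimated storage for {RETENTION_DAYS} days: {gb:,.0f} GB")
```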


Getting a Handle on Your Data

When your incident response team gets access to a new log data source, chances are the events not only contain an entirely different type of data, but are also formatted differently from any log data source you already have. Having a data collection and organization standard will ease management and analysis of the data later on. Event attributes must be normalized to a standard format so events from disparate sources have meaning when viewed homogeneously. In addition to normalization, log events must be parsed into fields and labeled in a consistent way across data sources. Ensuring that log data is organized properly is a minimum requirement for efficient log analysis. Without digestible and flexible components, it’s extremely difficult to comprehend a log message. If you have ever paged through screen after screen of log data with no filter, you know what I’m talking about.
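To make the idea of consistent parsing and labeling concrete, here is a minimal sketch in Python. The two raw log formats and the field names (src_ip, user, action, and so on) are hypothetical examples, not a prescribed schema:

```python
import re

# Two hypothetical raw log formats from different sources; the field labels
# chosen here (ts, src_ip, user, action, ...) are illustrative only.
FIREWALL_RE = re.compile(
    r"(?P<ts>\S+ \S+) fw01 DENY src=(?P<src_ip>\S+) dst=(?P<dst_ip>\S+)")
APP_RE = re.compile(
    r"(?P<ts>\S+) app=(?P<app>\S+) user=(?P<user>\S+) action=(?P<action>\S+)")

def parse(line):
    """Parse a raw line into a dict of consistently labeled fields."""
    for pattern in (FIREWALL_RE, APP_RE):
        match = pattern.match(line)
        if match:
            return match.groupdict()
    return {"raw": line}  # keep unparsed lines rather than silently dropping them

print(parse("2013-10-01 12:00:01 fw01 DENY src=10.1.2.3 dst=192.0.2.7"))
print(parse("2013-10-01T12:00:02Z app=vpn user=jsmith action=login"))
```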

Normalization

Data normalization is the process of transforming a log event into its canonical form, that is, the accepted standard representation of the data required by the organization consuming the data. If the same data can be represented in multiple formats, each possible iteration of the data can be considered a member of an equivalence class. To allow proper sorting, searching, and correlation, all data in the equivalence class must be formatted identically.

As an example, let’s consider timestamps. The C function strftime and its approximately 40 format specifiers give an indication of the number of ways a date and time can be represented. The lack of a single, universally enforced timestamp format, combined with the fact that most programming libraries have adopted strftime’s conversion specifications, means that application developers are free to define timestamps as they see fit. Consuming data that includes timestamps requires recognizing the different formats and normalizing them to an organization’s adopted standard format. Other data contained in logs that may require normalization includes MAC addresses, phone numbers, alarm types, IP addresses, and DNS names. These are examples of equivalence classes, where the same data may be represented by different applications in different formats. In the case of an IP address or a DNS name, the CSIRT may find it beneficial not to normalize the data in place, but rather to create additional fields whose labels are standardized across all data sources where possible.
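As a sketch of what timestamp normalization can look like in practice, the Python snippet below converts a few common strftime-style formats to UTC ISO 8601. The list of recognized formats, the default year for syslog-style timestamps, and the choice of ISO 8601 as the canonical form are illustrative assumptions, not requirements:

```python
from datetime import datetime, timezone

# A few strftime patterns seen in the wild; real deployments handle many more.
KNOWN_FORMATS = [
    "%Y-%m-%dT%H:%M:%S%z",     # 2013-10-01T12:00:01+0000
    "%b %d %H:%M:%S",          # Oct  1 12:00:01  (classic syslog, no year)
    "%d/%b/%Y:%H:%M:%S %z",    # 01/Oct/2013:12:00:01 +0000 (web access log)
]

def normalize_timestamp(raw, default_year=2013):
    """Return the canonical UTC ISO 8601 form of `raw`, or None if unrecognized."""
    for fmt in KNOWN_FORMATS:
        try:
            dt = datetime.strptime(raw, fmt)
        except ValueError:
            continue
        if dt.year == 1900:                  # syslog format carries no year
            dt = dt.replace(year=default_year)
        if dt.tzinfo is None:                # assume UTC when zone is absent
            dt = dt.replace(tzinfo=timezone.utc)
        return dt.astimezone(timezone.utc).isoformat()
    return None

print(normalize_timestamp("01/Oct/2013:12:00:01 +0000"))  # 2013-10-01T12:00:01+00:00
```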



A Culture of Transparency

Many Cisco customers with an interest in product security are aware of our security advisories and other publications issued by our Product Security Incident Response Team (PSIRT). That awareness is probably more acute than usual following the recent Cisco IOS Software Security Advisory Bundled Publication on September 25. But many may not be aware of why, when, and how Cisco airs its “dirty laundry.”

Our primary reason for disclosing vulnerabilities is to ensure customers are able to accurately assess, mitigate, and remediate the risk our vulnerabilities may pose to the security of their networks.

In order to deliver on that promise, Cisco has made some fundamental and formative decisions that we’ve carried forward since our first security advisory in June 1995.



Making Boring Logs Interesting

In the last week alone, two investigations I have been involved with have come to a standstill due to the lack of attribution logging data. One was halted by the lack of user activity logging within an application, the other by a lack of network-based activity logs. Convincing the asset owners of the need for logging after the fact was easy. But ideally, this type of data would be collected before it’s needed for an investigation. Understanding what data is critical to log, engaging with the asset owners to ensure logs contain meaningful information, and preparing log data for consumption by a security monitoring organization are ultimately responsibilities of the security monitoring organization itself. Perhaps in a utopian world, asset owners would engage an InfoSec team proactively and say, “I have a new host/app. Where should I send log data that contains attributable information about user behavior, useful to you for security monitoring?” Absent that ideal, what follows is a primer on logs as they relate to attribution in the context of security event monitoring.
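As a sketch of what an attributable application log event might capture (who, from where, when, what, and the outcome), here is a minimal Python example. The field names and logger setup are illustrative choices, not a prescribed standard:

```python
import json
import logging
from datetime import datetime, timezone

# Emit application events with the attribution fields an incident responder
# typically needs. Field names here are illustrative, not an industry standard.
log = logging.getLogger("app.audit")
logging.basicConfig(level=logging.INFO, format="%(message)s")

def audit(user, src_ip, action, target, outcome):
    event = {
        "ts": datetime.now(timezone.utc).isoformat(),  # when
        "user": user,                                  # who
        "src_ip": src_ip,                              # from where
        "action": action,                              # what
        "target": target,                              # against what
        "outcome": outcome,                            # result
    }
    log.info(json.dumps(event))

audit("jsmith", "10.1.2.3", "login", "payroll-app", "success")
```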


Big Security—Mining Mountains of Log Data to Find Bad Stuff

Your network, servers, and a horde of laptops have been hacked. You might suspect it, or you might think it’s not possible, but it’s happened already. What’s your next move?

The dilemma of the “next move” is that you can only discover an attack either as it’s happening, or after it’s already happened. In most cases, it’s the latter, which justifies the need for a computer security incident response team (CSIRT). Brandon Enright, Matthew Valites, and I, along with many other security professionals, constitute Cisco’s CSIRT. We’re the team that gets called in to investigate security incidents for Cisco. We help architect monitoring solutions and strategies and enable the rest of our team to discover security incidents as soon as possible. We are responsible for monitoring the network and responding to incidents discovered internally by our systems or reported to us externally via csirt-notify@cisco.com.

Securing and monitoring a giant multinational high-speed network can be quite a challenge. Volume and diversity, not complexity, are our primary enemies when it comes to incident response. We index close to a terabyte of log data per day across Cisco, along with processing billions of NetFlow records, millions of intrusion detection alarms, and millions of host security log records. This doesn’t even include the much larger data store of authentication and authorization data for thousands of people. Naturally, like all large corporations, Cisco is targeted by dedicated attackers, hacking collectives, hacktivists, and typical malware/crimeware. Combine these threats with internally sourced security issues, and we’ve got plenty of work cut out for us.
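To give a flavor of what triage at that volume can look like, here is a toy sketch that surfaces the noisiest sources of high-severity IDS alarms from a hypothetical CSV export. The file name, column names, and severity scale are assumptions for illustration; a real pipeline would query an indexed log store rather than a flat file:

```python
from collections import Counter
import csv

# Stream a (hypothetical) CSV export of IDS alarms and surface the source
# addresses generating the most high-severity alerts.
def top_talkers(path, min_severity=4, top_n=10):
    counts = Counter()
    with open(path, newline="") as fh:
        for row in csv.DictReader(fh):          # expects src_ip, severity columns
            if int(row["severity"]) >= min_severity:
                counts[row["src_ip"]] += 1
    return counts.most_common(top_n)

if __name__ == "__main__":
    for src, hits in top_talkers("ids_alarms.csv"):
        print(f"{src:15} {hits}")
```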

