Security and NetFlow: The time is now!

For those of you who have been around the networking world for a while, NetFlow is far from a new technology. Cisco developed NetFlow years ago, and it has since become the industry standard for generating and collecting IP traffic information. NetFlow quickly found a home in network management, providing valuable telemetry for overall network performance and management. Nine versions later, NetFlow is growing in popularity not solely because of its value to network management, but because it has become a critical component of security operations. Over the past 12 months I have encountered more and more large enterprises that view NetFlow as one of their top tools for combating advanced threats within their perimeters.

The dynamic nature of the cyber threat landscape and the growing sophistication and customization of attacks are forcing organizations to monitor their internal networks at a new level. IP flow monitoring (NetFlow), coupled with security-focused NetFlow collectors like Lancope’s StealthWatch, is helping organizations quickly identify questionable activity and anomalous behavior. The value that NetFlow provides is unsampled accounting of all network activity on an IP flow enabled interface. I bring up “unsampled” because of its importance from a security perspective. While flow sampling is a valid method for network management use cases, sampling for the sake of security leaves too much in question. An analogy would be having two different people listen to the same song: one person hears the song in its entirety, unsampled, while the other only hears 30-second snippets. Even if neither is musically inclined, the person who heard the song in its entirety will be able to hum or sing it back more accurately, and will be better able to recognize it during radio airplay, than the person who only heard the 30-second snippets. The same holds true when leveraging IP flow information to detect malicious or anomalous traffic. Some malicious code sends only a single packet back to a master node, and that packet would most likely be missed in a sampling scenario.
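
To make the sampling point concrete, here is a small, hypothetical Python simulation (not Cisco code; the addresses and the sampling rate are invented) showing how packet sampling can miss a single-packet callback:

```python
import random

def sampled_view(packets, n):
    """Keep roughly 1 out of every n packets (random sampling)."""
    return [p for p in packets if random.randrange(n) == 0]

# Synthetic traffic: a chatty web flow plus one single-packet
# callback from an infected host to its master node.
traffic = [("10.1.1.5", "198.51.100.7", "web")] * 10_000
traffic.append(("10.1.1.9", "203.0.113.66", "beacon"))
random.shuffle(traffic)

views = {"unsampled": traffic, "1:512 sampled": sampled_view(traffic, 512)}
for label, view in views.items():
    saw_beacon = any(p[2] == "beacon" for p in view)
    print(f"{label}: kept {len(view)} packets, beacon visible: {saw_beacon}")
```

The unsampled view always records the beacon; the sampled view misses it most of the time, and that gap is exactly what an attacker can hide in.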

Further increasing the value of IP flow monitoring is Cisco’s recent release of Flexible NetFlow (FnF). FnF introduces two new concepts to flow monitoring: the use of templates, and an expanded range of packet information that can be collected, including the ability to look more deeply inside a packet. This allows greater granularity in what is monitored, and makes it possible to send different sets of information to different collectors. You can search for Flexible NetFlow on Cisco’s main website for more technical details.
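
To give a feel for the template concept, here is a rough, self-contained Python sketch that decodes a single NetFlow v9 template FlowSet, the export mechanism that FnF builds on (field layout per RFC 3954; the template contents below are invented for illustration):

```python
import struct

# A small subset of NetFlow v9 field types (RFC 3954).
FIELD_NAMES = {1: "IN_BYTES", 2: "IN_PKTS", 4: "PROTOCOL", 7: "L4_SRC_PORT",
               8: "IPV4_SRC_ADDR", 11: "L4_DST_PORT", 12: "IPV4_DST_ADDR"}

def parse_template_flowset(data):
    """Decode one template FlowSet: the template tells the collector which
    fields, in what order and size, the subsequent data records carry."""
    flowset_id, length = struct.unpack_from("!HH", data, 0)
    assert flowset_id == 0, "FlowSet ID 0 marks a template FlowSet"
    template_id, field_count = struct.unpack_from("!HH", data, 4)
    fields, offset = [], 8
    for _ in range(field_count):
        ftype, flen = struct.unpack_from("!HH", data, offset)
        fields.append((FIELD_NAMES.get(ftype, f"type_{ftype}"), flen))
        offset += 4
    return template_id, fields

# Hand-built example template: the classic 5-tuple plus a byte counter.
payload = struct.pack("!HHHH", 0, 8 + 6 * 4, 256, 6)
for ftype, flen in [(8, 4), (12, 4), (7, 2), (11, 2), (4, 1), (1, 4)]:
    payload += struct.pack("!HH", ftype, flen)

template_id, fields = parse_template_flowset(payload)
print(f"template {template_id}: {fields}")
```

Because the record layout travels with the export, a collector can handle whatever mix of fields an administrator chooses to monitor, which is the flexibility FnF adds.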

Are you using NetFlow for security operations? I welcome any feedback, good or bad, on your experiences and on the value that IP flow information provides for detecting threats in this ever-changing landscape.

World IPv6 Day results positive

The much-anticipated World IPv6 Day is now behind us. Almost 400 vendors came together on June 8, 2011, enabling IPv6 for their content and services for 24 hours. Cisco was one of them. The goal of the test was to demonstrate the viability, and surface the potential caveats, of a large-scale IPv6 deployment in the real world, as IPv6 has been steadily gaining traction and interest due to the gradual exhaustion of IPv4 addresses.

Internally, Cisco, like most participating organizations, was preparing for the 24 hours to go smoothly for its own IPv6-served content. At the same time, considering the large deployment of Cisco devices throughout networks everywhere, precautions were taken to address any issues that might arise during the dry run. Fortunately, activities concluded successfully with no major issues, showing that an IPv6 future could be closer than initially thought.

Many reports on the results, statistics and lessons learned during the testing have already been written, with more to come. Among those, we would like to stress a few key points from Cisco Distinguished and Support Engineers Carlos Pignataro, Salman Asadullah, Phil Remaker and Andrew Yourtchenko, who were all engaged in the project, and which give a general feel for how the day went:

  • Vendor coordination proved possible, showing that even competitors can work together when it comes to a common goal that benefits everyone.
  • There were no support cases related to the World IPv6 Day activities, which indicated a good level of both IPv6 preparedness and product readiness.
  • IPv6 adoption could happen smoothly, avoiding major technical issues when done methodically.
  • AAAA DNS records, which are used for IPv6, do not automatically “break” the Internet, as was often argued. There are certain challenges with providing an IPv6-enabled DNS infrastructure, but these can be addressed.
  • User experience feedback was positive, though it was based on an IPv6-only approach. In a dual-stack environment, user experience can deteriorate depending on IPv6 and/or IPv4 performance, and solutions that track both IPv6 and IPv4 performance can help. As the transition will be taking place for years to come, dual-stacked environments will be the way to go, and mechanisms like Happy Eyeballs can make the experience more transparent for users. The Chrome browser already implements a similar fall-back mechanism, which has had documented benefits for some of its users. (A simplified sketch of this fall-back idea follows this list.)
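
A drastically simplified Python sketch of that fall-back idea, assuming nothing beyond the standard library (the real Happy Eyeballs algorithm races IPv6 and IPv4 connection attempts in parallel, and the timeout here is an arbitrary illustration):

```python
import socket

def connect_dual_stack(host, port, v6_timeout=0.3):
    """Prefer IPv6, but fall back to IPv4 if it fails or stalls."""
    # getaddrinfo returns AAAA-derived (AF_INET6) and A-derived
    # (AF_INET) addresses for the same name.
    infos = socket.getaddrinfo(host, port, proto=socket.IPPROTO_TCP)
    v6 = [ai for ai in infos if ai[0] == socket.AF_INET6]
    v4 = [ai for ai in infos if ai[0] == socket.AF_INET]
    for timeout, candidates in ((v6_timeout, v6), (None, v4)):
        for family, socktype, proto, _, sockaddr in candidates:
            try:
                return socket.create_connection(sockaddr[:2], timeout=timeout)
            except OSError:
                continue  # this path is broken; try the next address
    raise OSError(f"no usable IPv6 or IPv4 path to {host}")

# Prefers IPv6 where it works and degrades gracefully where it doesn't.
conn = connect_dual_stack("www.example.com", 80)
print("connected over", "IPv6" if conn.family == socket.AF_INET6 else "IPv4")
conn.close()
```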

In conclusion, it is important to note that the successful World IPv6 Day exercise showed that the transition to IPv6 will probably not be nearly as scary as many originally thought. Careful, gradual adoption is easier than was once believed, and it is already happening. Here at Cisco, product concerns, improvements and caveats are being worked on aggressively, and we expect the developments ahead to be positive ones.

Even Security Administrators Deserve a Break – Part 2 of 2

In my last post on this topic, I highlighted just how true the words “Work is no longer a place you go, but what you do” really are. We now have the ability to work anytime, anywhere, using any device. As easy as this has made the lives of workers all over the world, it’s made the lives of security administrators immensely difficult. Providing secure access to the corporate network in a borderless world, while still somehow keeping out the bad stuff, has caused traditional security policies to become increasingly difficult to configure, manage, and troubleshoot – the source of inordinate amounts of pain for security administrators.

That’s why Cisco has introduced identity-based firewall security as a new capability of the ASA platform. As the first installment of what will soon become full context-aware security, identity-based firewall security enables security administrators to use the plain-language names of users and groups in policy definitions. Rather than authoring and managing a growing list of IP addresses to cover every possible location, device, or protocol that may be required for secure access to the network, administrators can simply grant access to “Jeff.” Regardless of where I am or what I’m using for access, I’m still Jeff… so in the simplest case, my administrator can literally write one policy to provide “Jeff” access to the corporate network, rather than managing six different IP addresses for all the instantiations of Jeff.
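
Purely to illustrate the policy-count difference, here is a conceptual Python model (this is not ASA syntax or a real Cisco API; the names and addresses are invented):

```python
# Old model: one entry per IP address "Jeff" might show up from.
ip_based_policy = {
    "192.0.2.11",    # desk machine
    "192.0.2.54",    # laptop on wireless
    "198.51.100.3",  # VPN address pool
    "198.51.100.77", # home office
    "203.0.113.20",  # lab workstation
    "203.0.113.41",  # tablet
}

# Identity model: one entry keyed on who the user is.
identity_policy = {"jeff": "permit corporate-network"}

def allow_by_ip(source_ip):
    return source_ip in ip_based_policy

def allow_by_identity(user):
    # The firewall learns user-to-address mappings from the identity
    # infrastructure (e.g., directory logins), so policy is written
    # against the name, not against wherever the user happens to be.
    return identity_policy.get(user) == "permit corporate-network"

print(allow_by_ip("192.0.2.54"))  # True only for addresses someone added
print(allow_by_identity("jeff"))  # True wherever Jeff connects from
```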

Cisco Partner Invite Reminder: Warehouse Management with Intermec

As part of the Manufacturing Impact Series, here’s a reminder not to miss the Cisco and Intermec Mobile Warehouse Management Webinar set for Thursday 23rd June, 2011. It’s essential viewing and listening if you’re a Cisco resale or systems integration partner, or a partner looking to build a manufacturing practice and provide solutions that address manufacturing industry customers’ care-abouts.

To be competitive, warehouse managers must deliver a high level of performance while reducing costs. Learn how the Cisco and Intermec Mobile Warehouse Management Solution brings the benefits of mobility to industrial environments, helping warehouse managers stay connected with their mobile workforce, increase asset visibility across warehouse operations, provide access to information at the point of work, and deliver intelligence to mobile workers. This is a solution webcast in the “Manufacturing Impact” partner enablement series.

The speakers from Intermec will be Dan Albaum, Senior Director of Marketing, and Bruce Stubbs, Director of Industry Marketing. Jeff Rodawald, Partner Relationship Executive, will be the speaker from Cisco, with me, Peter Granger, as a panelist. It should be a great event, with lots of folks already registered. If you’d like to register, click the link:

Click here to register for the Cisco and Intermec Mobile Warehouse Management Webinar

Date: Thursday 23rd June, 2011; Time: 11:00 am to 12:00 pm Eastern Time (8:00 am Pacific); Place: Online.

Criticism Abounds, but Cloud Computing Is Here to Stay

This blog was originally published on: http://blogs.forbes.com/tomgillis/2011/05/24/criticism-abounds-but-cloud-computing-is-here-to-stay/

Wow! Lots of outrage over the colossal cloud computing outage at Amazon! With big sites such as Reddit, Foursquare, and Heroku taken down by the issues with Amazon Web Services (AWS), there’s a brouhaha brewing about a black eye on Amazon—and the entire cloud computing industry.

“The biggest impact from the outage may be to the cloud itself,” said Rob Enderle, an analyst with the Enderle Group, in ComputerWorld. “What will take a hit is the image of this technology as being one you can depend on, and that image was critically damaged today…If the outage continues for long, it could set back growth of this service by years and permanently kill efforts by many to use this service in the future.”

So the cloud might be a little beat up, but is cloud computing dead? Not even close.

Cloud computing is here to stay, not only because the model is more efficient and more cost effective than the traditional IT infrastructure, but because it delivers on the promise of specialization—a value that gives companies an edge and consumers a better product.

What’s AT&T Got to Do With It?

Remember the days when AT&T was the only phone company around, and their phone was the only one you could buy? First it was rotary, and then it was push-button. AT&T made every single part of the phone. It made the screws that held the phone together. The whole machine was incredibly durable, but it was also heavy, clunky, and incredibly inefficient—not to mention expensive.

It didn’t stay that way, however. Boom! Deregulation hit the industry and the price of a phone went from a hundred dollars to a hundred pennies. Everything changed, and today we see the result: throwaway phones. Now phones are ubiquitous, they’re incredibly inexpensive, and they can do more than ever before.

IT infrastructure is moving down the same path. Until now, every company has built its own expertise into its proprietary IT systems. Every company has been (metaphorically speaking) fabricating its own screws, making its own hammers, and toiling over its own infrastructure. There’s been massive duplication of efforts, and the approach is filled with gross inefficiencies.

Now that’s all changing with cloud computing. It has gained rapid adoption exactly because it recognizes the inefficiencies and complications of traditional IT infrastructure, which is built on large, complex systems that require specialized skill sets to implement and deploy. The most interesting form of cloud computing is Infrastructure as a Service, or IaaS. Instead of tilting up the servers and fabricating the screws yourself, you look to a specialist—a large service provider with a deeper level of expertise, greater economies of scale, and the ability to provide the infrastructure on which you can run your apps. Another upshot: by removing a massive noncore task from the organizational to-do list, a new wave of efficiencies and innovation can be unleashed. (Pretty soon, traditional security will look no different from that rotary phone I saw on eBay for $9.99: a charmingly clunky reminder of a long-gone era.)

Build a Plan, Don’t Pray for Perfection

Cloud computing—or anything in computing—is not perfect. Data centers, whether they are public or private, go down. Outages happen in-house as well as to the industry’s leading cloud-hosting providers.

What Amazon’s outage truly demonstrates is just how hard this job is. It’s not an argument against AWS or the cloud industry; it’s a reminder that we need to have specialists handle this complex technology. Specialists can, and will, run into problems, but their ability to respond will be better than the ability of a soap company or a car maker or a media empire to respond. As the Heroku team, one of the sites crippled by the outage, put it: “Amazon employs some of the best infrastructure engineers in the world: if they can’t make it work, then probably no one can.”

What we must all recognize is that we need solutions to better insulate companies against inevitable outages. The question we should be asking is not how can we trust the cloud, but rather how can we make enterprise applications more robust? What should the failover plan look like? (Because things fail.)

The answer is portability. We must have the ability to move apps from one infrastructure to another so that if one bursts, the whole world doesn’t come to a screeching halt. That’s Internet 101: instead of just one web server, have two web servers in different locations and roll the load between them. Contingency plans that included two data centers from different providers and multiple availability zones kept sites such as Bizo, a business-audience marketing platform, running during the Amazon outage. By similarly designing its systems to take potential failures into account, Netflix was largely unaffected.
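
As a minimal sketch of that idea, assuming two hypothetical replicas (a real deployment would add DNS failover, health checks, or a global load balancer):

```python
import urllib.request

# Two copies of the same app in different locations or providers
# (hypothetical hostnames).
REPLICAS = ["https://app-east.example.com", "https://app-west.example.net"]

def fetch_with_failover(path="/", timeout=2.0):
    """Try each replica in turn, so one provider's outage
    doesn't take the whole service down."""
    last_error = None
    for base in REPLICAS:
        try:
            with urllib.request.urlopen(base + path, timeout=timeout) as resp:
                return resp.read()
        except OSError as err:
            last_error = err  # this replica is unreachable; roll to the next
    raise RuntimeError(f"all replicas failed; last error: {last_error}")

page = fetch_with_failover("/")  # served by whichever replica is healthy
```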

The current tools available for virtual data centers don’t provide good portability or rollover capability from private to public data centers. Technology vendors need to address how to move a data center workload from one cloud computing provider to another, so they can provide the resiliency and efficiency needed to deal with the occasional bad hair day. With that investment, we’ll all come out looking a lot better.
