I’ve had some recent discussions with colleagues in the armed forces regarding cyber security, and they consider “cyber” to be the fourth warfighting domain alongside land, air, and sea. They describe cyber as having its own terrain, made up of computing resources. As I thought this concept through, I saw a striking resemblance between the network and air warfare. To elaborate, I must first set the context around the concept of air supremacy.
There are many variations on the definition of air supremacy, but for the purpose of this blog let’s use NATO’s: “the degree of air superiority wherein the opposing air force is incapable of effective interference.” There are two key words in that definition, “degree” and “effective.” Before achieving supremacy, a force must first move from parity, through superiority, and finally to supremacy. Air parity is the lowest degree, in which a force can control the skies above friendly units; in other words, it prevents opposing air assets from overwhelming its land, air, and sea units.
Sometimes it is interesting to take a look at darknet data and see what you come across. If you are not familiar with the term “darknet,” I am using the definition common in the service provider community: a darknet is a block of address space that contains no real hosts. That means no client workstations to initiate conversations with servers on the Internet, and no advertised services within those ranges, such as a web server, a DNS server, or a database server. There is really no reason to see any traffic destined for addresses in those ranges. From a network point of view, it should be as desolate and deserted as the town of Pripyat in Ukraine, inside the evacuation zone created by the 1986 Chernobyl disaster. In practice, however, you do see traffic to those address ranges, which is what makes that traffic somewhat interesting. Traffic destined for those ranges could be the result of malware attempting to locate machines to infect, part of a research project, or something as simple as a misconfiguration or a typographical error. One example of traffic resulting from a typo would be pinging a host and typing in the wrong address. However, it would be hard to believe that all of the traffic seen in a darknet is the result of a mistake.
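Because a darknet contains no real hosts by definition, classifying traffic against it is just a prefix-membership test. Here is a minimal sketch in Python using the standard `ipaddress` module; the prefixes are illustrative (RFC 5737 documentation ranges standing in for whatever unused space you advertise), not real darknet allocations.

```python
import ipaddress

# Illustrative darknet prefixes -- RFC 5737 documentation ranges,
# standing in for whatever unused space your organization advertises.
DARKNET_PREFIXES = [
    ipaddress.ip_network("192.0.2.0/24"),
    ipaddress.ip_network("198.51.100.0/24"),
]

def is_darknet(dst):
    """Return True if a destination address falls inside a darknet range."""
    addr = ipaddress.ip_address(dst)
    return any(addr in net for net in DARKNET_PREFIXES)

# Any hit is interesting: no real hosts live there, so the traffic is
# malware scanning, a research probe, a misconfiguration, or a typo.
print(is_darknet("192.0.2.77"))   # True  - a probe into empty space
print(is_darknet("203.0.113.5"))  # False - an ordinary destination
```

In practice you would run every flow's destination address through a check like this and flag the hits for a closer look.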
Setting up a darknet does not have to be hard. If your organization has address space that is not being used, all you need to do is advertise a route for those addresses and leave them unused. In our case, we have advertised several ranges, and we collect NetFlow data for the traffic destined to them from a nearby Cisco router. That NetFlow data is exported to a collector, such as nfcapd, where it is aggregated for further analysis.
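As a rough sketch of what that router side might look like with classic NetFlow on IOS (addresses, the interface name, and the export port here are placeholders, not our actual configuration):

```
! Null-route the otherwise-unused range so it attracts and drops traffic
ip route 192.0.2.0 255.255.255.0 Null0
!
interface GigabitEthernet0/1
 ip flow ingress                    ! account for flows arriving here
!
ip flow-export version 9
ip flow-export destination 198.51.100.10 9995   ! the NetFlow collector
```

On the collector side, something like `nfcapd -D -l /var/netflow -p 9995` listens on the export port and writes the flow data to disk for later analysis.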
For those of you who have been around the networking world for a while, NetFlow is far from a new technology. Cisco developed NetFlow years ago, and it has become the industry standard for generating and collecting IP traffic information. NetFlow quickly found a home within network management, providing valuable telemetry for overall network performance. Nine versions later, NetFlow is growing in popularity not solely for its value to network management but as a critical component of security operations. Over the past 12 months I have encountered more and more large enterprises that view NetFlow as one of their top tools for combating advanced threats within their perimeters.
The dynamic nature of the cyber threat landscape, and the growing sophistication and customization of attacks, are requiring organizations to monitor their internal networks at a new level. IP flow monitoring (NetFlow), coupled with security-focused NetFlow collectors like Lancope’s StealthWatch, is helping organizations quickly identify questionable activity and anomalous behavior. The value that NetFlow provides is an unsampled accounting of all network activity on a flow-enabled interface. I bring up “unsampled” because of its importance from a security perspective. While flow sampling is a valid method for network management use cases, sampling for the sake of security leaves too much in question. An analogy would be having two different people listen to the same song. One person hears the song in its entirety, unsampled, and the other hears only 30-second snippets. While neither may be musically inclined, the person who listened to the whole song would be able to hum or sing it back more accurately than the person who heard only snippets. Likewise, the ability to identify that song during radio airplay would favor the person who heard it in its entirety. The same holds true for IP flow information when leveraging it to detect malicious or anomalous traffic: some malicious code will send only a single packet back to a master node, which would most likely be missed in a sampling scenario.
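The song analogy maps directly onto flows. Here is a toy simulation (hypothetical addresses, deterministic 1-in-100 packet sampling) showing how a single-packet beacon disappears from the sampled view while a chatty bulk transfer survives:

```python
# Toy illustration: deterministic 1-in-100 packet sampling vs. full
# (unsampled) accounting. All addresses are hypothetical.
SAMPLE_RATE = 100

# One (src, dst) tuple per packet: a noisy bulk transfer, plus a single
# lone packet from an infected host back to its master node.
packets = [("10.0.0.5", "203.0.113.9")] * 500        # bulk transfer
packets.insert(250, ("10.0.0.99", "198.51.100.66"))  # lone C2 beacon

def flows(pkts, rate=1):
    """Aggregate packets into {flow: packet_count}, keeping 1 in `rate`."""
    counts = {}
    for i, flow in enumerate(pkts):
        if i % rate == 0:
            counts[flow] = counts.get(flow, 0) + 1
    return counts

full = flows(packets)                # unsampled: every packet accounted for
sampled = flows(packets, SAMPLE_RATE)

beacon = ("10.0.0.99", "198.51.100.66")
print(beacon in full)     # True  - the beacon is visible unsampled
print(beacon in sampled)  # False - sampling skipped the one packet
```

The bulk transfer shows up either way; the single-packet beacon, which is often the flow a security analyst most needs to see, exists only in the unsampled record.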
Further increasing the value of IP flow monitoring is Cisco’s recent release of Flexible NetFlow (FnF). FnF introduces two new concepts to flow monitoring: the use of templates, and an expanded range of packet information that can be collected, including the ability to look more deeply inside a packet. This allows greater granularity in the information being monitored, as well as the option of exporting different sets of information to different collectors. You can search for Flexible NetFlow on Cisco’s main website for more technical details.
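To give a flavor of the template concept, an FnF configuration on IOS ties together a flow record (what to match and collect), an exporter (where to send it), and a monitor applied to an interface. The sketch below uses placeholder names and addresses:

```
flow record RECORD-SEC
 match ipv4 source address
 match ipv4 destination address
 match transport source-port
 match transport destination-port
 collect counter bytes
 collect counter packets
!
flow exporter EXPORT-SEC
 destination 198.51.100.10
 transport udp 9995
!
flow monitor MON-SEC
 record RECORD-SEC
 exporter EXPORT-SEC
!
interface GigabitEthernet0/1
 ip flow monitor MON-SEC input
```

Because records and exporters are defined independently, you could define a second record with different fields and send it to a different collector, which is exactly the per-audience flexibility described above.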
Are you using NetFlow for security operations? I welcome any feedback, good or bad, regarding your experience and your opinion of the value IP flow information provides for detecting threats in this ever-changing landscape.
With an ever-growing mobile and distributed workforce, application developers are being tasked with building applications that can also be accessed remotely by this global workforce. Developers, often with only a basic understanding of networking, assume the network has no boundaries and that applications will perform optimally regardless of the mode of access. At the same time, cloud computing is enabling applications to be consolidated into centralized, virtualized data centers, further increasing the distance between applications and the people who access them. Network architects are likewise being challenged to adapt current network designs to this application deployment and delivery model. Available bandwidth is being taxed as an ever-growing application portfolio competes for network resources, all while users expect a satisfying experience across the network without boundaries. This application delivery model also demands better visibility and control, WAN optimization, and the agility to rapidly deploy and manage enterprise applications.
The Cisco Application Velocity solution addresses the challenges associated with delivering and consuming enterprise applications over the network without boundaries. It is one of the five services in Cisco’s Borderless Network Architecture and is composed of innovative Cisco technologies that help IT professionals meet or exceed business SLAs, maximize the user experience, optimize resource utilization, and increase reliability while meeting user expectations.
Welcome to the show notes for TechWiseTV 78: Borderless Networks: Optimizing Application Velocity. Have you seen the show yet? It goes live at 10 AM PST on November 11. All the talk about “cloud” and “virtual this and that,” from your servers to your desktops… it’s the renaissance we have all been told about, it seems. What is the most important “make or break” reality ALL of us have to live with? Three areas: (1) user experience, (2) resource utilization, (3) application reliability.