As digital transformation sweeps across the world, there is a growing need for more effective logging and data recording for incident response. In today’s IT world, your agency’s Computer Incident Response Team (CIRT) must be able to quickly determine the source and scope of an attack on its network in order to mitigate it effectively. To build that capability, most administrators assemble an audit trail of information collected from network traffic using either NetFlow or packet capture (PCAP). In reality, the best solution is to leverage both to your advantage.
It is important to realize that effective incident response is all about size. For example, if you are collecting only PCAP, then you may have too much data covering too short a time. Using PCAP to find out who one machine was connected with on a busy segment of the network is, at best, a lengthy query and, at worst, computationally impractical once TCP reconstruction is involved.
With NetFlow, the same question is a fast query over a lengthy forensic record, because the space that holds hours of PCAP can hold 2-3 months of NetFlow records. With full PCAP and NetFlow, it’s definitely an “and,” not an “or,” proposition. So the best approach for organizations is to start with NetFlow (given the ease of collection and queries) and then complement it with PCAP as resources allow.
Here’s a good example: imagine you have a time window covered by both NetFlow and PCAP. First, you would use NetFlow to learn what and where to query, then use those results to filter the capture down to specific hosts, ports and times, ending up with something that can realistically be returned. By comparison, if you take a week of PCAP on a busy enterprise point of presence, even with the best-of-breed commercial full-packet solutions, the query “show me everything the computer gavin.reid-machine did on the network” will never return. To get something back from PCAP, you must carefully define your query, and the more specific the better, such as “on June 12th between 10:15-10:30, over port 80, show me what gavin.reid-machine did on the network.” That type of query returns usable data.
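To make that workflow concrete, here is a hedged sketch using common open-source tools. It assumes an nfdump-based flow collector; the flow directory, capture file name, date and the IP address standing in for gavin.reid-machine are all hypothetical:

```
# Step 1 (NetFlow): scope the incident with a fast query over the flow record.
# What did gavin.reid-machine (hypothetical IP 10.1.2.3) touch over port 80
# in the window of interest?
nfdump -R /var/flows -t 2017/06/12.10:15:00-2017/06/12.10:30:00 \
  'ip 10.1.2.3 and port 80'

# Step 2 (PCAP): with the hosts, port and time window known, carve just the
# matching packets out of the full capture for deep-dive analysis.
tcpdump -r busy-segment.pcap -w gavin-http.pcap 'host 10.1.2.3 and tcp port 80'
```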
In essence, with PCAP you need a precise, focused query to get an optimal return, while NetFlow enables you to find the “what and where” to query with. Plus, with flow data, you can easily and quickly query for everything gavin.reid-machine did on the network, and do so across a much longer period of time. So don’t be fooled into thinking your organization needs only one of the two. The reality is you need both, since they support and feed off each other. And as digital transformation continues to push rapid change in IT, it is even more critical that NetFlow and PCAP, working together, become a significant piece of your CIRT’s detection arsenal.
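That broad, flow-first question stays cheap even over a multi-month record. A minimal sketch, again assuming an nfdump collector and a hypothetical IP for gavin.reid-machine:

```
# Top destinations gavin.reid-machine (hypothetical IP 10.1.2.3) talked to,
# ordered by bytes, over roughly two months of stored flow records.
nfdump -R /var/flows -t 2017/04/13.00:00:00-2017/06/12.23:59:59 \
  -s dstip/bytes 'src ip 10.1.2.3'
```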
Agreed. Great post.
Reading through your article, I get the sense that I may need two products: a flow tool to give me the “know what and where to query” part, and PCAP for detailed analysis.
There is a middle ground here with tools that capture metadata from network packets. An example would be a tool that can report on the flow data going between a client and a file server, but that also provides the drilldown to see the names of the files accessed.
The two-tool approach can sometimes scare people off, as it adds complexity and cost.
Good points. I would offer up that it is again an AND, not an OR. Sometimes the metadata is all you need, and other times you may need the whole thing; an application that your parser can’t decode or understand would be an example. The plus in all of this is that tool capabilities are converging. As a consumer, to help with complexity, I would seek out tool sets that use or benefit from a common, platform-based approach focused on interoperability.
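To make the metadata middle ground concrete: even open tooling can pull protocol-level detail out of a capture without keeping every byte. A hedged sketch, where the capture file is hypothetical and tshark with its SMB2 fields stands in as one example of a metadata-capable parser:

```
# File names requested over SMB2, extracted as metadata from a capture
# of client-to-file-server traffic, alongside the requesting client IP.
tshark -r fileserver.pcap -Y 'smb2.filename' -T fields \
  -e ip.src -e smb2.filename
```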
Nice write-up on the importance of NetFlow and PCAP.
PCAP captures the full information; with a Cisco NAM module, you may be able to store gigabytes of it for some period of time.
Sending PCAP data across the MPLS network is like sending the information twice, so I would not recommend it. Just run tcpdump when you need it.
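For example, a targeted, self-limiting capture at the remote site (the interface, host and rotation settings below are placeholders):

```
# Capture only the suspect host's traffic locally; -G/-W rotate through
# twelve five-minute files so a forgotten capture cannot fill the disk.
tcpdump -i eth0 -G 300 -W 12 -w /tmp/incident-%H%M.pcap 'host 10.1.2.3'
```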
Like the linkage here between flow data and digital transformation. It would be nice to see some examples of how this linkage drives business outcomes.
Also, given that most transformations leverage the public cloud, how would you do this across clouds, i.e., private and public? A user view or a workload view, for example?
Like the ideas; there would easily be enough material for a whole new blog. On the cloud side, we are pulling together and testing plans for how to best use cloud security services like Amazon’s CloudTrail, and how to consume the flows generated by these cloud services as they begin to support flow generation.
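As a very early sketch of what that could look like on AWS, here is one way to feed cloud traffic into the same flow-first workflow. VPC Flow Logs is just one example of a cloud flow source, and the VPC ID, IAM role and log group below are placeholders:

```
# Enable AWS VPC Flow Logs, delivering flow records to CloudWatch Logs.
aws ec2 create-flow-logs \
  --resource-type VPC --resource-ids vpc-0123456789abcdef0 \
  --traffic-type ALL \
  --log-group-name cirt-vpc-flows \
  --deliver-logs-permission-arn arn:aws:iam::123456789012:role/flow-logs-role
```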