Until the introduction of flow technologies like NetFlow and the IPFIX standard, companies relied largely on two technologies. The first was SNMP, which allowed customers to trend performance metrics over long periods of time: interface utilization, interface errors, CPU, memory and much more. The problem with SNMP, however, is that it couldn’t reveal who or what was causing the traffic, making it nearly useless for isolating network performance problems and investigating security issues. RMON, an extension that was incorporated into SNMP, attempted to address this but failed for several reasons.
The second technology companies relied on is packet analysis. For over 30 years now, it has provided the greatest visibility into network traffic, but it is nearly impossible to scale across the enterprise. Although the details are excellent when investigating malware incursions, packet capture probes have to be deployed in targeted locations, which can be cumbersome and costly, and their maintenance demands further limit large-scale deployments. As a result, it is usually impossible to gain enterprise-wide visibility using packet analyzers by themselves. We need packet capture, but it needs something to complement it. Hence, NetFlow was born.
Today, flow collection delivers the most important details offered by SNMP while providing over 90% of the visibility most IT professionals used to turn to packet analysis for. Adoption has grown, and all major router and firewall vendors now support NetFlow or IPFIX. Because of this, security administrators can leverage flows to baseline normal behaviors and trigger alerts on suspicious events that are often indicative of today’s unwanted infiltrations.
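To make the baselining idea concrete, here is a minimal Python sketch that flags a host whose traffic volume jumps well above its own history. The byte counts, interval length, and the mean-plus-k-standard-deviations threshold are all illustrative assumptions; real flow analytics systems use far richer models.

```python
from statistics import mean, stdev

def exceeds_baseline(history, current, k=3.0):
    """Return True if `current` exceeds this host's historical baseline.

    history: byte counts from previous collection intervals for one host.
    The baseline is the mean plus k standard deviations of that history
    (a simplistic stand-in for a real behavioral model).
    """
    if len(history) < 2:
        return False  # not enough data to establish a baseline
    return current > mean(history) + k * stdev(history)

# Hypothetical bytes-per-hour history for one workstation
history = [10_000, 12_000, 11_000, 9_500, 10_500]
print(exceeds_baseline(history, 11_500))   # normal fluctuation
print(exceeds_baseline(history, 900_000))  # sudden spike worth an alert
```

In practice the trigger would feed an alerting pipeline rather than a print statement, and the baseline would account for time of day, protocol mix, and peer hosts.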
In the Cisco 2016 Annual Security Report I read “Tools that rely on payload visibility, such as full packet capture, are becoming less effective. Running Cisco NetFlow and other metadata-based analyses is now essential.” See page 30.
When searching for a specific host in large-scale networks, distributed flow collection systems can pore through massive amounts of flow data collected from remote areas of the world and serve up exact matches in seconds. You just can’t do this with SNMP, and usually not with packet analysis tools. This is why flow collection is the incident response technology of choice when chasing down congestion problems related to network anomalies or when performing initial investigations into malware traffic patterns. Neither SNMP nor packet analysis is going to disappear, but their usefulness is being slightly displaced by NetFlow and IPFIX.
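As a toy illustration of that kind of host search, the sketch below filters flow records for any flow touching a given IP. The record fields and data are made up for the example; production collectors index terabytes of records across many sites to answer the same question.

```python
# Hypothetical flow records; "src"/"dst"/"bytes" are illustrative field
# names, not a vendor schema.
flows = [
    {"src": "10.1.1.4",     "dst": "192.0.2.80",   "bytes": 4_200},
    {"src": "10.1.1.9",     "dst": "198.51.100.7", "bytes": 880},
    {"src": "198.51.100.7", "dst": "10.1.1.4",     "bytes": 15_000},
]

def flows_for_host(records, host):
    """Return every flow where the host appears as source or destination."""
    return [r for r in records if host in (r["src"], r["dst"])]

print(len(flows_for_host(flows, "10.1.1.4")))  # flows involving that host
```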
Since 2010, several vendors have introduced flow exports with details on round-trip time, URLs, packet loss, TCP window size, jitter, codec and more. These new metrics are starting to rival those traditionally provided by packet analysis. In fact, Cisco routers can even capture packets and export them in flow datagrams! Flow technology continues to innovate; however, there is still a strong market and need for packet capture. Although flows contain great detail, there are plenty of troubleshooting scenarios where we simply need access to all of the packets. In nearly every environment we go into, packet capture is already an important part of the customer’s tool chest, and this will not change in the near future.
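To give a feel for what these export datagrams look like on the wire, here is a minimal sketch that parses the fixed 24-byte NetFlow v5 header with Python’s struct module. The sample bytes are fabricated for the demo, and a real collector would go on to parse the 48-byte flow records that follow the header (and, for v9/IPFIX, the templates that describe them).

```python
import struct

# NetFlow v5 header layout: version, record count, router uptime (ms),
# export timestamp (seconds + nanoseconds), flow sequence number,
# engine type, engine ID, and sampling interval -- network byte order.
V5_HEADER = struct.Struct("!HHIIIIBBH")  # 24 bytes

def parse_v5_header(datagram):
    """Parse the 24-byte NetFlow v5 header from the start of a datagram."""
    (version, count, sys_uptime, unix_secs, unix_nsecs,
     flow_seq, engine_type, engine_id, sampling) = V5_HEADER.unpack_from(datagram)
    return {"version": version, "count": count,
            "unix_secs": unix_secs, "flow_sequence": flow_seq}

# Build a synthetic header for demonstration (not captured traffic)
sample = V5_HEADER.pack(5, 2, 123_456, 1_700_000_000, 0, 42, 0, 0, 0)
print(parse_v5_header(sample))
```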
Gartner stated a few years ago that flow analysis should be done 80% of the time and that packet capture with probes should be done 20% of the time. Source.
The challenge for customers seeking a flow collector is to select a scalable NetFlow collection system that will meet their current and growing need to save more flow data for longer periods of time. Archiving combined with rich reporting and filtering is also paramount. A thorough evaluation of competing systems is critical to ensure that IT selects the ideal solution.