Big data is popping up in all things IT these days.  It touches business applications across verticals, from government to healthcare to media, and, no surprise, it can have a significant impact on network performance as well.  More and more companies are collecting huge volumes of data (terabytes or even petabytes) from their networks every day, and those volumes are only expected to grow.  IT teams need to figure out not only how to store big data, but also how to recall and analyze it so that it can be put to good use.

Big Data on the Network

According to a recent Enterprise Management Associates (EMA) survey, 45% of companies with big data servers experienced an increased network traffic load due to big data collection, and 46% experienced an increased traffic load from backing up big data repositories.  This is where network traffic analysis comes in: it becomes more and more important to have a reliable way to monitor the network, including application performance, latency, and bandwidth usage, so that big data collection doesn't bring the network to a halt.

Furthermore, EMA found that 71% of companies were already using big data environments to collect and analyze IT infrastructure monitoring data, and another 18% were planning to implement them within the next 12 months.  Because Gartner's definition of big data implies collecting everything rather than sampling, this suggests that more and more companies are putting greater emphasis on total visibility into the network.

Total visibility is vital for both network monitoring and network security.  It removes the guesswork inherent in sampling network traffic—if you can see everything, you are more likely to spot problems.
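
To put numbers on that guesswork, consider a quick back-of-the-envelope calculation in Python.  The 1-in-1,000 sampling rate and the 20-packet flow are illustrative assumptions, not figures from the EMA survey:

    # Odds that packet sampling misses a short flow entirely.
    # Assumptions (illustrative): 1-in-1000 sampling, a 20-packet flow
    # such as a brief port scan or a malware beacon.
    sample_rate = 1 / 1000
    packets_in_flow = 20

    # Probability that none of the flow's packets get sampled
    p_missed = (1 - sample_rate) ** packets_in_flow
    print(f"Chance the flow is never seen: {p_missed:.1%}")  # ~98.0%

In other words, a sampled view would miss that short flow about 98 times out of 100, while a full flow-record view accounts for it every time.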

Using NetFlow with Big Data

Many companies collect flow data, such as NetFlow or IPFIX, to monitor all traffic passing through the network.  Flow data is a great choice for this: it is comparatively lightweight and rich with useful information, such as user activity, application performance, and threat detection patterns.  Furthermore, flow data can capture every conversation that happens on your network, providing total accountability.
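
To give a sense of how lightweight flow collection is, here is a minimal NetFlow v5 collector sketch in Python.  This is a teaching example only, not how Scrutinizer works internally; the UDP port (2055) and the v5-only parsing are assumptions, and a real collector would also handle NetFlow v9/IPFIX templates, sampling metadata, and long-term storage:

    import socket
    import struct

    # 24-byte NetFlow v5 header and 48-byte flow record layouts
    HEADER = struct.Struct("!HHIIIIBBH")
    RECORD = struct.Struct("!4s4s4sHHIIIIHHBBBBHHBBH")

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 2055))   # assumed export port

    while True:
        data, (exporter, _) = sock.recvfrom(65535)
        version, count, *_ = HEADER.unpack_from(data, 0)
        if version != 5:
            continue               # this sketch parses v5 only
        for i in range(count):
            rec = RECORD.unpack_from(data, HEADER.size + i * RECORD.size)
            src, dst = socket.inet_ntoa(rec[0]), socket.inet_ntoa(rec[1])
            pkts, octets = rec[5], rec[6]
            sport, dport, proto = rec[9], rec[10], rec[13]
            print(f"{exporter}: {src}:{sport} -> {dst}:{dport} "
                  f"proto={proto} pkts={pkts} bytes={octets}")

Each 48-byte record summarizes an entire conversation, which is why flow data stays small enough to keep for the long term while still accounting for every connection.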

In some cases, NetFlow itself is stored in big data environments, and companies use traffic analysis software to retrieve and analyze the data.  Plixer’s Scrutinizer Incident Response System delivers in four key areas:

  • Duration of storage
  • Speed of data retrieval
  • Granularity of displayed information
  • Advanced data analytics

Scrutinizer can store all flow records for decades, yet retrieve information within seconds and display it in 1-minute intervals.  At its core is Flow Analytics, which enables IT teams to identify threatening communication patterns by maintaining baselines of end-system behaviors.  Scrutinizer can also monitor cloud services, helping IT teams troubleshoot performance issues and plan capacity.
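
To make the idea of behavioral baselining concrete, here is a simplified sketch of the general technique in Python.  This is not Plixer's actual algorithm; the rolling per-host window, the flows-per-minute metric, and the 3-sigma threshold are all illustrative assumptions:

    from collections import defaultdict, deque
    from statistics import mean, stdev

    WINDOW = 60  # minutes of history kept per end system (assumed)
    history = defaultdict(lambda: deque(maxlen=WINDOW))

    def check_host(host, flows_this_minute):
        """Flag a host whose flow count jumps >3 sigma above its baseline."""
        past = history[host]
        alarm = False
        if len(past) >= 10:  # wait for enough history before alarming
            mu, sigma = mean(past), stdev(past)
            if sigma > 0 and flows_this_minute > mu + 3 * sigma:
                alarm = True  # e.g., sudden scanning or data exfiltration
        past.append(flows_this_minute)
        return alarm

    # Usage: feed one count per host per minute from your flow collector
    if check_host("10.0.0.42", 5000):
        print("10.0.0.42 is generating far more flows than its baseline")

The broader point holds regardless of the specific math: because flow records account for every conversation, a baseline built on them can flag behavior changes that sampled data would smooth over.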

To start evaluating Scrutinizer, download the free trial or contact us for a demo.

Alienor

Alienor is a technical writer at Plixer. She especially enjoys writing about the latest infosec news and creating guides and tips that readers can use to keep their information safe. When she’s not writing, Alienor spends her time cooking Japanese cuisine, watching movies, and playing Monster Hunter.
