Last month, we were faced with another attack. It's funny, isn't it? The only thing we are assured of these days is that an attack will happen. To be truthful, last month's log4j vulnerability was a bit more of an issue than most. I was talking about the attack with one of our larger European customers, and they supported a position I have held since I started with Plixer over 15 years ago: in a critical situation like the log4j attack, leveraging contextual metadata is the most efficient way to contain the incident. Here are a few reasons why metadata is the better choice for your visibility needs.

Reason 1—It’s about visibility 

Our world has changed in the past two years! Whether it's on-prem, hybrid, or in the cloud, the lines that define a company's network have gone from black and white to a squiggly grey mess. No matter what the incident is, having visibility into all of your network's conversations is key. In today's world, you have two options: you can insert probes everywhere you need to see traffic, or you can grab metadata, like NetFlow or IPFIX, from your existing networking infrastructure. As you might have guessed, many of the people I work with go with the latter. Utilizing metadata instead of a packet capture solution is much easier, far more scalable, more cost-effective, and provides most, if not all, of the conversation metrics you need. That is why metadata has become the preferred data source for today's Network Detection and Response (NDR) engines.
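To make the idea concrete, here is a minimal sketch of what "conversation metrics from metadata" looks like once flow records have been decoded. The record layout and values are hypothetical, loosely modeled on NetFlow/IPFIX fields; a real exporter emits binary templates that a collector such as Scrutinizer decodes for you.

```python
from collections import Counter

# Hypothetical flow records with NetFlow/IPFIX-style fields.
# A real collector would decode these from the exporter's wire format.
flows = [
    {"src": "10.0.0.5", "dst": "203.0.113.9", "dport": 443, "bytes": 120_000},
    {"src": "10.0.0.5", "dst": "203.0.113.9", "dport": 443, "bytes": 80_000},
    {"src": "10.0.0.7", "dst": "198.51.100.2", "dport": 53, "bytes": 300},
]

def top_talkers(flows):
    """Aggregate bytes per (src, dst, dport) conversation, busiest first."""
    totals = Counter()
    for f in flows:
        totals[(f["src"], f["dst"], f["dport"])] += f["bytes"]
    return totals.most_common()

for convo, total in top_talkers(flows):
    print(convo, total)
```

Even this toy aggregation shows why metadata scales: a conversation summary is a handful of fields per flow, not a full packet capture.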

“Once you’ve identified a compromise on your network, the next step is to contain and remediate the threat. To do that, you need your NDR solution to integrate with your existing response toolset. This ensures that your NDR solution works seamlessly with your existing playbooks, rather than forcing you to rethink them.”

Jeff Lindholm, CEO of Plixer
Containing a cyberattack: How NDR strengthens your response


Reason 2—It’s about history

With the log4j incident, we had the luxury of knowing about the compromise pretty quickly, but most of the time that isn’t the case. The truth is, many of today’s infections breach the network undetected and sit dormant until called on to act. Here is where metadata becomes one of your most valuable assets.

As I mentioned before, at its core, metadata gives you the information you need to resolve a network incident, but it’s also important to consider the collector itself, specifically the scale and performance of its data retention engine. You need to start asking questions like: how much of this data can be saved, and for how long? How easy is it to drill down into this data over a long timeframe? How fast can you get the answers you need?

When you need to put on your detective hat and start investigating the assets involved in an incident over the last 30, 60, or 90 days, do you want it to take days or minutes? Many of today’s exporters support metrics that extend well past the traditional IP accounting that technologies like NetFlow were built on. A strong collector needs to support these elements while still maintaining the performance I mentioned earlier. Lastly, if the collector can inject other telemetry, like usernames or DNS information, into the flow stream, you give your team an edge in today’s cyberwarfare.
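That injection step can be pictured as a simple join. This sketch is purely illustrative, with made-up DNS and identity caches standing in for whatever telemetry sources feed your collector:

```python
# Hypothetical enrichment caches built from other telemetry
# (passive DNS, authentication logs, etc.).
dns_cache = {"203.0.113.9": "updates.example.com"}
user_by_ip = {"10.0.0.5": "jdoe"}

def enrich(flow):
    """Attach a destination hostname and a source username to a flow record."""
    out = dict(flow)
    out["dst_name"] = dns_cache.get(flow["dst"], "unknown")
    out["user"] = user_by_ip.get(flow["src"], "unknown")
    return out

flow = {"src": "10.0.0.5", "dst": "203.0.113.9", "dport": 443}
print(enrich(flow))
```

The payoff during an investigation is that the flow record already answers "who" and "where," instead of forcing you to pivot into a second tool for every IP.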

Reason 3—It’s about time to resolution

The idea behind Time to Resolution is simple: it’s a metric teams use to measure how long it takes from the moment an incident starts to the moment it is resolved. It’s not rocket science, and it might not be something you and your team are directly measured on, but it does matter. Maybe it’s ingrained in your internal or external Terms of Service. Maybe your boss or manager just looks at you funny when the poo hits the fan, or maybe you have an ego as big as mine and want to be a rockstar. Having the right data at the right time helps you resolve those incidents faster and reach rockstar status!
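If you want to start tracking the metric, the math really is that simple. The timestamps below are invented for illustration:

```python
from datetime import datetime

# Hypothetical incident timeline: when it started vs. when it was closed.
start = datetime(2022, 1, 4, 9, 30)
resolved = datetime(2022, 1, 4, 11, 45)

ttr = resolved - start  # Time to Resolution as a timedelta
print(ttr)  # 2:15:00
```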

Ask yourself: how would you deal with an incident today? How would you find out who, on your vast network, was talking to a suspicious host, a group of IPs, or on a specific port? How are you going to deal with zero-day threats?
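With flow metadata on hand, that hunt reduces to filtering your history against a set of indicators. This sketch assumes collected flow records and made-up indicators; port 1389 is hedged here as a port commonly seen in published log4j proof-of-concept LDAP callbacks, not a definitive signature:

```python
from ipaddress import ip_address, ip_network

# Hypothetical indicators of compromise.
SUSPECT_NET = ip_network("198.51.100.0/24")
SUSPECT_PORT = 1389  # often used in log4j PoC LDAP callbacks

def matches(flow):
    """True if the flow touched the suspect network or the suspect port."""
    return (ip_address(flow["dst"]) in SUSPECT_NET
            or flow["dport"] == SUSPECT_PORT)

# Stand-in for 30/60/90 days of retained flow records.
history = [
    {"src": "10.0.0.5", "dst": "198.51.100.7", "dport": 443},
    {"src": "10.0.0.9", "dst": "192.0.2.4", "dport": 1389},
    {"src": "10.0.0.3", "dst": "192.0.2.8", "dport": 443},
]

hits = [f for f in history if matches(f)]
print(hits)
```

The same pattern scales from a list comprehension to a collector-side query: the indicators change per incident, but the question ("who talked to this?") stays the same.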

As threats become more and more prevalent, granular data that quickly tells you what was going on and who was involved becomes a necessity. The problem is that gaining this level of visibility tends to be a heavy lift and, in many cases, isn’t scalable. That is why a majority of companies are leveraging metadata to better deal with today’s zero-day attacks. Adding context to an incident, without having to click through multiple applications to find it, saves time and money and improves the ROI on all of your tools. Are you looking for conversation-rich visibility along with the flexibility to integrate that data into your current environment? Why not evaluate Scrutinizer?

James Dougherty

I have worn many hats in my professional life. Support engineer, developer, network admin, and manager are all points on my resume, but the one common thread through all of these jobs is that I enjoy working with people; that is what I do here at Plixer. I make sure that everyone understands our product and can get the most out of it. It's just simple 'no bull' support!

Let me know if you have any questions, I would be happy to help.

- Jimmy D
