
5 Lessons Learned at RSA 2018

bobn

Last week I traveled with the Plixer team to San Francisco for the RSA 2018 Conference. The show was bigger than ever, with nearly 50,000 security professionals in attendance. Presumably, attendees were there to learn about the latest trends in cybersecurity, hear from thought leaders on the current and future state of security, and wander through two massive halls of vendors to see what was new. Like last year, I’ll discuss my experience at the show, share insights gained, and offer my perspective on what the coming year will bring.

1. It’s a Never-Ending Game of Cat and Mouse


There is a constant game of cat and mouse being played in the world of cybersecurity, and the stakes are extremely high. Just when the good guys think they have the upper hand, cybercriminals find new attack vectors to exploit. At a time when many technology shows are declining, RSA continues to grow year over year. As the global economy becomes more digital, attack surfaces expand and the bad guys find more and more ways to monetize their unscrupulous behavior.

2. Threat Intelligence Data Is Coming from Three Main Vectors

Threat intelligence platforms are gathering data from three main places:

Endpoints – More and more technology solutions are putting agent software on mobile devices to monitor and report on their status. Mobile devices typically sit outside the traditional “security perimeter” of corporations, placing them in a higher-risk device category. In addition, they are often bring-your-own-device (BYOD) endpoints, running a variety of applications that fall outside of the business’s control. Organizations are becoming more willing to place agent software on devices to reduce their risks.

Packet Inspection from network wires – Packet capture technologies are becoming more widely deployed, and packet capture functionality is finding its way into more products. Due to the high cost of storing packet data, many solutions offer a rolling packet capture capability. The platform maintains data for a given amount of time (e.g., 8 hours) and then overwrites the data on the disk. In these scenarios, organizations must become aware of a breach within this time window if they want to have the data they need. I believe that the industry needs to move to an orchestration model, where packet capture can be triggered through the detection of an event via an external method (network traffic analytics, SIEM, IPS, etc.). This reduces the cost of maintaining historical data, while allowing for the real-time capture of data relevant to a specific breach.
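
To make that orchestration idea concrete, here is a minimal sketch (in Python, assuming tcpdump is installed and a hypothetical alert payload containing a suspect IP) of kicking off a short, targeted capture when an external detector raises an event:

```python
import subprocess
from datetime import datetime, timezone

CAPTURE_DIR = "/var/captures"    # hypothetical storage location
CAPTURE_SECONDS = 300            # keep only ~5 minutes of event-relevant traffic

def start_capture_for_alert(alert: dict) -> subprocess.Popen:
    """Kick off a targeted, time-bounded packet capture for one alert.

    `alert` is a hypothetical event from an external detector
    (traffic analytics, SIEM, IPS) that includes the suspect host IP.
    """
    suspect_ip = alert["src_ip"]
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%S")
    pcap_path = f"{CAPTURE_DIR}/{suspect_ip}_{stamp}.pcap"

    # tcpdump is assumed to be installed; -G/-W stop the capture after
    # CAPTURE_SECONDS so only traffic relevant to the event is stored.
    cmd = [
        "tcpdump", "-i", "eth0",
        "-G", str(CAPTURE_SECONDS), "-W", "1",
        "-w", pcap_path,
        "host", suspect_ip,          # capture filter scoped to the suspect
    ]
    return subprocess.Popen(cmd)

# Example: an alert pushed by a hypothetical detection pipeline
if __name__ == "__main__":
    start_capture_for_alert({"src_ip": "10.0.0.42", "rule": "beaconing-detected"})
```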

Metadata sent from physical infrastructure – Network infrastructure devices (switches, routers, wireless controllers, firewalls, and sensors) are natively capable of sending summarized details on all the traffic they see to collectors. This data can be exported as NetFlow, sFlow, IPFIX, JSON, or other formats. Failing to collect and report on this data from the existing infrastructure is an oversight that leaves organizations blind to critical information that already exists.
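
As a simple illustration of what collecting that metadata involves, here is a minimal sketch of a UDP listener that decodes NetFlow v5 export headers. Port 2055 is a common but not universal export port, and a production collector would also decode the flow records that follow the header and support IPFIX and sFlow:

```python
import socket
import struct

# NetFlow v5 header layout (24 bytes): version, record count, sysUptime,
# unix seconds, unix nanoseconds, flow sequence, engine type/id, sampling.
NFV5_HEADER = struct.Struct("!HHIIIIBBH")

def run_collector(bind_addr="0.0.0.0", port=2055):
    """Minimal UDP listener that prints NetFlow v5 export headers."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((bind_addr, port))
    while True:
        datagram, (exporter, _) = sock.recvfrom(65535)
        if len(datagram) < NFV5_HEADER.size:
            continue  # ignore runt packets
        version, count, uptime, secs, _, seq, *_ = NFV5_HEADER.unpack_from(datagram)
        if version != 5:
            continue  # this sketch only handles NetFlow v5
        print(f"{exporter}: v{version}, {count} flow records, seq={seq}")

if __name__ == "__main__":
    run_collector()
```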

3. There Are Four Main Methods of Threat Hunting

Lists – Threat feeds from third parties are available as subscriptions; vendors incorporate them into their products. These feeds are used to evaluate traffic and alert on any “hits.” Lists are valuable for known attacks, but are generally not very effective at identifying zero-day attacks.
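
A minimal sketch of list-based evaluation might look like the following, where the indicator set and flow record fields are hypothetical placeholders for a real vendor feed and flow export:

```python
# Minimal sketch of list-based threat hunting: flag any flow whose peer
# appears in a subscribed indicator feed. The feed contents and flow
# record fields here are hypothetical placeholders.
BLOCKLIST = {"203.0.113.7", "198.51.100.99"}   # loaded from a vendor feed

def check_flow(flow: dict) -> bool:
    """Return True if either endpoint of the flow is a known-bad indicator."""
    return flow["src_ip"] in BLOCKLIST or flow["dst_ip"] in BLOCKLIST

flows = [
    {"src_ip": "10.0.0.5", "dst_ip": "203.0.113.7", "dst_port": 443},
    {"src_ip": "10.0.0.8", "dst_ip": "93.184.216.34", "dst_port": 80},
]
hits = [f for f in flows if check_flow(f)]
print(f"{len(hits)} flow(s) matched the threat feed")
```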

Machine Learning (ML) / Artificial Intelligence (AI) – ML/AI is heralded as the best thing since sliced bread. I believe it does provide tremendous potential value; however, many organizations appear to be overselling what it can do in practical terms. For machine learning to work, it must have a baseline of what is considered normal. Network traffic, as a whole, simply cannot be baselined. There are too many moving parts and variables. Every network is a snowflake, supporting unique devices, applications, and physical locations. I believe the short-term value of ML/AI is in a narrower approach. Application and protocol behavior is something that can and should be baselined so that behavior deviations can be identified. In addition, applying machine learning to alarms and alerts can do a lot to reduce the number of false positives that SecOps teams waste time running down.
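
A narrow baseline of this kind can be as simple as tracking per-interval measurements for one application and flagging large deviations. The sketch below uses a basic standard-deviation test purely for illustration, with hypothetical traffic counts; real products use far richer models:

```python
from statistics import mean, stdev

def is_deviation(history, observed, sigmas=3.0):
    """Flag a sample that deviates from an application's learned baseline.

    `history` holds prior per-interval measurements for one application or
    protocol (e.g., bytes per 5-minute bucket); the 3-sigma cutoff is
    illustrative only.
    """
    if len(history) < 10:
        return False                      # not enough data to baseline yet
    mu, sigma = mean(history), stdev(history)
    return abs(observed - mu) > sigmas * max(sigma, 1.0)

# Hypothetical per-interval DNS byte counts for one host
dns_bytes = [5200, 4800, 5100, 4900, 5300, 5000, 4950, 5150, 5050, 4980]
print(is_deviation(dns_bytes, 48000))     # True: worth investigating
print(is_deviation(dns_bytes, 5100))      # False: within the baseline
```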


Signatures – Signatures are a time-tested and effective mechanism to identify known attacks. Although they provide little protection against zero-day attacks, they remain valuable because hackers regularly recycle older attacks to look for organizations that remain vulnerable.
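
Conceptually, signature matching boils down to testing traffic against a library of known-bad patterns. The sketch below uses illustrative patterns only; real IDS signatures also constrain ports, direction, and byte offsets:

```python
import re

# Tiny illustration of signature matching: each signature is a compiled
# pattern tied to a rule name. These patterns are examples, not real rules.
SIGNATURES = {
    "legacy-smbv1-traffic": re.compile(rb"\xffSMB"),  # SMBv1 protocol marker
    "php-webshell-upload": re.compile(rb"(?i)<\?php.*(eval|base64_decode)\("),
}

def match_signatures(payload: bytes):
    """Return the names of all signatures that match a packet payload."""
    return [name for name, pattern in SIGNATURES.items() if pattern.search(payload)]

print(match_signatures(b"<?php eval(base64_decode($_POST['x'])); ?>"))
```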

Thresholds – Normal patterns of device, protocol, and application behavior can be understood, and thresholds aligned to that normal behavior can be set. If a device, protocol, or application suddenly begins to demonstrate activity that exceeds the threshold, alarms and alerts can be sent to the SecOps team for further investigation. Thresholds are a good mechanism for identifying zero-day attacks, because those attacks consistently change device, protocol, or application behavior.
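
In code, threshold checking is little more than comparing each interval's measurement against a limit derived from normal behavior. The limits and observations below are hypothetical:

```python
# Sketch of threshold-based alerting: per-device/protocol limits are set
# from observed normal behavior, then each interval is compared against them.
THRESHOLDS = {
    ("10.0.0.5", "dns"): 200,     # max DNS queries per minute
    ("10.0.0.5", "smtp"): 50,     # max outbound SMTP connections per minute
}

def check_thresholds(observations: dict):
    """Return alert messages for any observation exceeding its threshold."""
    alerts = []
    for key, observed in observations.items():
        limit = THRESHOLDS.get(key)
        if limit is not None and observed > limit:
            alerts.append(f"{key[0]} exceeded {key[1]} threshold: {observed} > {limit}")
    return alerts

print(check_thresholds({("10.0.0.5", "dns"): 4500, ("10.0.0.5", "smtp"): 12}))
```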

4. Evolving Licensing Models

When it comes to how companies purchase security technology, the industry is evolving quickly. For a very long time, technology (either software or hardware) has been purchased under a perpetual license through a capital expense (CapEx) model. As I wandered the show floor at RSA, I struggled to find any vendors still selling their technology under a perpetual model. Today there are two primary purchasing models. The first is an on-premises (or private cloud) deployment consumed as an annual subscription. The other is to consume the technology as software-as-a-service (SaaS) in the public cloud, which moves the expense from CapEx to OpEx.

5. The Need for Correlation Between Vulnerability Assessment and Traffic Analytics

Patch velocity is out of control. There were over 20,000 vulnerabilities disclosed in 2017. If you divide 20,000 by 260 (the number of working days in a year), you get approximately 77 new vulnerabilities per workday, which is simply an unmanageable number. This is one of the primary reasons why events like WannaCry successfully breached organizations long after Microsoft had provided a patch for the SMBv1 vulnerability and the Shadow Brokers had leaked the exploit. Furthermore, applying a patch often requires downtime, which is not a trivial thing for businesses that operate on a 24×7 basis. IT teams must regularly wait for upgrade windows to apply patches. This means that the business is taking a measured risk with a known vulnerability, hoping that it will not be exploited before the patch is applied. During these windows, organizations should be using network traffic analytics to monitor the traffic to and from the vulnerable servers so they know if they’re attacked.
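
The correlation itself can be straightforward: feed the list of known-unpatched hosts from the vulnerability scanner into the traffic analytics layer and surface every flow that touches them. A minimal sketch, with hypothetical hosts, CVEs, and flow fields:

```python
# Sketch: correlate vulnerability-scan results with flow data so that traffic
# touching still-unpatched hosts is surfaced for review. Hosts, CVEs, and
# flow fields are hypothetical placeholders.
UNPATCHED = {
    "10.1.2.15": "CVE-2017-0144 (SMBv1)",   # awaiting the next upgrade window
    "10.1.2.30": "CVE-2017-5638 (Struts)",
}

def flag_flows_to_vulnerable_hosts(flows):
    """Yield (flow, reason) pairs for traffic involving unpatched servers."""
    for flow in flows:
        for ip in (flow["src_ip"], flow["dst_ip"]):
            if ip in UNPATCHED:
                yield flow, f"{ip} is unpatched: {UNPATCHED[ip]}"

flows = [
    {"src_ip": "172.16.0.9", "dst_ip": "10.1.2.15", "dst_port": 445},
    {"src_ip": "10.1.2.40", "dst_ip": "8.8.8.8", "dst_port": 53},
]
for flow, reason in flag_flows_to_vulnerable_hosts(flows):
    print(f"review flow to port {flow['dst_port']}: {reason}")
```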


I had a great time at RSA, and walking the floor only made me more steadfast in my opinion that network traffic analytics platforms like Scrutinizer are necessary for correlating data across your entire ecosystem and are critical to the security strategy of every organization. Scrutinizer supports fast and efficient incident response for inevitable breaches. The solution allows you to gain visibility into cloud applications, security events, and network traffic. It delivers actionable data to guide you from the detection of network and security events all the way to root-cause analysis and mitigation. Network and security incidents are inevitable. When they occur, Plixer is there to help you quickly return to normal and minimize business disruption.

If you haven’t tried Scrutinizer yet, you can download an evaluation copy to see how it will better prepare your organization for inevitable security breaches.