While investigating our SD-WAN value proposition with customers, I worked with one client who runs Cisco IWAN across 250 branches and uses Scrutinizer to monitor it all. I learned that they required the following SD-WAN performance reports.
Despite the hundreds of billions spent by companies over the last several years, malware continues to infect our networks, sabotage our systems, and steal our intellectual property. Even with repeated failures, the investment continues to climb.
Correlating NetFlow with RADIUS usernames to improve contextual security awareness is something we have done for several vendors, including Cisco ISE, Microsoft Network Policy Server, Forescout's CounterACT, and others. Even with all of these supported, we still get approached with yet another RADIUS system the customer wants us to pull usernames from, and of course they want those usernames correlated with the IP addresses found in the NetFlow and IPFIX we collect.
The good news is that we can do it. The other good news is that it's relatively easy to do, but it does require some work. The caveat is that each integration has to be assessed on a case-by-case basis. Most of the time, free solutions like FreeRADIUS and OpenRADIUS and commercial solutions like Cisco Prime will log the data to a file, which is a big help.
In the logs we have reviewed, we've noticed that most use a unique format. Some are one line of quoted CSV; others span multiple lines. Still others, like Cisco Prime, don't include all the details we need unless the log is set to trace mode. The kind of log we prefer to work with is called the "accounting log," which needs to contain the following data in order for us to add username support more broadly:
- The client’s IP Address, FRAMED-IP-ADDRESS: Typically, the IP address of the dial-in host is not communicated to the RADIUS server until after successful user authentication. Communicating the device IP address to the server in the RADIUS access request allows other applications to take advantage of that information.
- User-Name: This is the username making the request, regardless of whether authentication passes or fails. If we want to narrow it down, we could export a flow only when authentication passes, via the Accounting-Response event, which should be in the log.
- The log must contain both the request and response records
How It Will Work
We use software that provides a modified tail mode to watch an accounting log. It has the intelligence to understand variations in log format (e.g., single-line vs. multiline) and then extracts the data. This work has to be done on a case-by-case basis in order to add support for a customer's unique RADIUS solution.
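As a rough illustration of that tail-and-extract step, here is a minimal Python sketch. The attribute names follow standard RADIUS accounting (User-Name, Framed-IP-Address), but the single-line record format shown is hypothetical; a multiline or vendor-specific log would need its own parser, which is exactly why each format is assessed case by case.

```python
import re
import time

# Hypothetical single-line accounting record format; real RADIUS logs
# vary by vendor, which is why each integration is assessed individually.
RECORD_RE = re.compile(
    r'User-Name\s*=\s*"(?P<user>[^"]+)"'
    r'.*Framed-IP-Address\s*=\s*"(?P<ip>[^"]+)"'
)

def parse_record(line):
    """Extract a (username, ip) pair from one accounting line, or None."""
    match = RECORD_RE.search(line)
    return (match.group("user"), match.group("ip")) if match else None

def tail_accounting_log(path, poll_interval=1.0):
    """Follow a growing accounting log (like `tail -f`) and yield pairs."""
    with open(path, "r") as log:
        log.seek(0, 2)  # jump to the end; only newly appended records matter
        while True:
            line = log.readline()
            if not line:
                time.sleep(poll_interval)
                continue
            pair = parse_record(line)
            if pair:
                yield pair
```

Each yielded pair can then be correlated against the IP addresses seen in collected NetFlow and IPFIX records.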
Questions We Get Asked
Q: Can Scrutinizer integrate with our RADIUS server?
A: Most likely. Can we have a copy of your RADIUS accounting log?
Q: Can Scrutinizer support the syslogs our RADIUS server can send?
A: Most likely. Can we get a sample of these syslogs?
Q: Our solution sends data as Microsoft event logs. Can Scrutinizer support it?
A: Yes, our current release recognizes those event logs.
If you don't need all of the technical detail, the bottom line is this: if you are ready to start correlating NetFlow with RADIUS usernames, we can help. Reach out to our team, and be ready to share some sample data if you're asking us to support something we haven't seen before.
It's pretty safe to say that most users are well aware that companies like Google, Facebook, LinkedIn, and hundreds of others are harvesting data from their customers' end-user devices. What many aren't aware of is that you don't even need to be visiting their websites or actively using their services for them to be constantly streaming data from your Internet-connected device.
When many of us think about malware, words like ransomware and keyloggers immediately come to mind. Although these types of contagions can certainly be disruptive, an even bigger concern is the advanced persistent threat (APT). These types of insurgencies are not looking for a quick-buck turnaround. In contrast to mom-and-pop malware, the APT's goal is generally to get in, set up camp, and spread by moving laterally within the organization. Rather than making the host suffer for a one-time event, the APT is in it for the long haul.
Advanced Persistent Threats
The reason the APT wants to stay inside an organization indefinitely is to perform reconnaissance for the command-and-control servers. It might search for local files containing certain names. It might log keystrokes, read emails, and look for intellectual property that could hold value for someone on the black market. Generally, an APT isn't interested in disrupting business as usual; rather, it wants to compromise the intellectual property that makes the company valuable.
In order to find the desired information, the infection needs to spread to other machines that can assist in the overall information-gathering effort. To do this, the malware may take advantage of mapped drives or reach out to other machines the local host commonly connects to, which generally requires login credentials.
In the Verizon “2017 Data Breach Investigations Report” it was reported that 81% of hacking-related breaches leveraged either stolen and/or weak passwords.
Tracking Malware Lateral Movement
With a very high volume of lateral movements requiring authentication credentials, it became obvious to us that we needed to monitor for authentications that appear out of the norm. As a result, we started maintaining a baseline of every username in the company along with the machines it authenticates to. Before triggering on changes in what employees were authenticating to, we decided to allow for moderate changes over time. This led to a baseline structure that can evolve as behaviors change; for variances much larger than the allowed thresholds, however, we can trigger events that lead to alarms and notifications.
Below is an example of the hosts that a single username has authenticated to:
Building the above functionality into our flow collector was a logical progression for our Flow Analytics behavior monitoring system. Since we already integrate with Cisco ISE, Microsoft Active Directory, CounterACT, LDAP, RADIUS, and others to gather username-to-IP-address pairs, keeping track of who is authenticating to what over time was a relatively simple value-add. It also brings significant value to our customer base.
Building a Behavior Baseline
By learning over a period of days or weeks who is authenticating to what on a fairly regular basis, we can start to recognize authentication behaviors that appear irregular or fall beyond a threshold of tolerance, which of course triggers events. Once we have the data, we can start discovering what could be malware movement within the company.
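The learn-then-alert approach described above can be sketched in a few lines of Python. The learning window and the decision to absorb new hosts into the baseline after flagging them are illustrative assumptions, not the product's actual thresholds:

```python
from collections import defaultdict

class AuthBaseline:
    """Track which hosts each username authenticates to, and flag
    authentications that fall outside the learned baseline once the
    learning period ends. Thresholds here are illustrative only."""

    def __init__(self, learning_events=3):
        self.learning_events = learning_events
        self.seen = defaultdict(set)    # username -> set of host IPs
        self.counts = defaultdict(int)  # username -> events observed

    def observe(self, username, host_ip):
        """Record one authentication; return True if it looks anomalous."""
        self.counts[username] += 1
        known = self.seen[username]
        # During the learning period, accept everything into the baseline.
        if self.counts[username] <= self.learning_events:
            known.add(host_ip)
            return False
        if host_ip in known:
            return False
        # A new host after learning gets flagged, but is still absorbed
        # so that legitimate behavior changes evolve the baseline.
        known.add(host_ip)
        return True
```

A flagged observation would then feed the alarm and notification pipeline rather than blocking anything outright.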
Start Discovering Malware Movement
With the lion's share of the most insidious infections using stolen credentials for lateral movement, it seems obvious that corporations need to move toward some sort of behavior monitoring on authentication name and IP address pairs. Reach out to our team to learn more about this progressive strategy for uncovering how the credential abuse behind 81% of hacking-related breaches spreads on internal networks.
If your company has a couple of SIEMs or more than one NetFlow collector, you could probably benefit from a UDP packet forwarding system. Here's why: many syslog and flow exporting devices can only export to one or two destinations, and when you have hundreds of exporters that need to be updated to send to a second device, it can be a tedious, error-prone process even with automated scripts. Not to mention, some hardware can only send syslogs or flows to one location.
A UDP packet forwarding appliance sits in front of a SIEM or legacy flow collector. In some cases, it assumes the IP address of the SIEM or flow collector, and the SIEM is given a new IP address. When the appliance acting as the UDP forwarder receives the syslog and flow packets, it forwards them on by modifying the destination IP address while leaving the source IP address unchanged. This means the SIEM and legacy flow collector believe they are receiving the UDP packets directly from the source. A UDP forwarder can also replicate the UDP datagrams and forward a single UDP stream to multiple destinations, as explained in the video below.
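To make the replication idea concrete, here is a minimal Python sketch that listens on one UDP port and re-sends each datagram to several destinations. The destination addresses are placeholders. Note one important difference from a real appliance: a plain socket rewrites the source IP to this host's address, while commercial forwarders preserve the original exporter's source IP, which requires raw sockets and elevated privileges.

```python
import socket

# Placeholder collector addresses (documentation-range IPs).
DESTINATIONS = [("192.0.2.10", 2055), ("192.0.2.11", 2055)]

def replicate(listen_port=2055, destinations=DESTINATIONS, max_packets=None):
    """Receive UDP datagrams and re-send each one to every destination.
    max_packets limits the loop (None = run forever)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", listen_port))
    forwarded = 0
    while max_packets is None or forwarded < max_packets:
        data, source = sock.recvfrom(65535)  # one flow/syslog datagram
        for dest in destinations:
            sock.sendto(data, dest)          # source IP becomes this host's
        forwarded += 1
    sock.close()
```

Even this naive version shows why the appliance helps: the exporters keep a single destination, and fan-out happens in one place.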
A UDP forwarding appliance provides several benefits when it is placed locally to the SIEM and flow collection systems.
- Reduces the amount of traffic on the network, especially over the WAN
- Reduces the load on routers and switches as they only have to send UDP messages to one location
- Lessens the configuration work load when hundreds or thousands of routers suddenly need to send NetFlow, sFlow, IPFIX or syslogs to a different IP address
- Eases the burden trying to reconfigure hardware from different vendors and helps reduce the likelihood of mistakes
- Provides management station redundancy by sending logs to multiple destinations simultaneously
- Allows both network and security administrators to receive the same log messages while maintaining separate systems.
There are several solutions on the market that act as a UDP director for forwarding UDP packets.
However, the best commercial solutions provide the following additional features:
- Detect when the destination hosts are offline and stop forwarding traffic
- Maintain counters that allow admins to identify top UDP datagram producers
- Allow the configuration of policies that will accept UDP from entire subnets and send it to the correct destinations
- Provide fault tolerance and redundancy in case of a failure
If you need to duplicate UDP datagrams, try the Flow Replicator. It is ideal for UDP packet forwarding.
More than ever before, the applications installed on our handheld and laptop devices are sending data off to the cloud, which means the volume of traffic leaving the company is growing at a faster rate. The impetus behind this is application developers rushing to collect big data from their users, which can be mined for behavior patterns. The end-user characteristics uncovered are then used to sell and market additional services. All of this gathering causes an increase in traffic that can stress the infrastructure in many ways, including the people supporting it. At times it can even make it more difficult to use Network Traffic Analytics to find the spots that would normally receive immediate attention. Proactive measures become difficult when the log and flow consumption systems are receiving a fire hose of data.
Malware Traffic Resembles Normal Traffic Patterns
Adding to the complexity of finding blind spots is the behavior mimicking that malware developers write into their contagions. In order to evade the best detection methods, exploits often exhibit communication patterns that are nearly identical to those of business and social media applications. Vendors like McAfee use DNS tunneling to extract data out of companies. Microsoft uses proprietary encryption to upload data to the cloud from Windows 10 computers. The miscreants behind infections study and learn these tactics from trusted vendors, then ensure that their exploits exhibit similar traits when removing information from our devices. This can include sending the data to an AWS- or Akamai-hosted domain. For these reasons, no security appliance can protect an organization from all infection variants. It simply isn't possible.
As noted in SC Magazine, Kevin Beaumont, a prolific cyber-security commentator on Twitter, pointed out that a vendor's website changed from saying "The NHS is totally protected with Sophos" to "Sophos understands the security needs of the NHS."
Since the onslaught of infections will keep coming and statistically many won’t be stopped, security teams have to monitor for behaviors that provide potential indicators of compromise. At the same time, they also have to prepare for the aftermath of an inevitable data theft.
Network Traffic Analytics
Because communication behaviors are constantly evolving and being copied by insurgents, network traffic analytics needs to be configured to monitor for odd patterns that fall outside of preapproved characteristics. IPFIX and NetFlow collection systems provide the best way to ingest big data while simultaneously pattern matching on connections that are not normal. Rogue connections to unapproved NTP, FTP, SSH, SMTP, and DNS servers are often a great way to uncover machines exhibiting the telltale signs of a problem. Odd ping communications or long-lived flows to unrecognizable sites can be additional indicators.
By accumulating events and establishing an overall score per end system, suspicious hosts that generally merit further investigation rise to the top. Only flow data can provide the enterprise-wide visibility needed to track 100% of the communications on the network. When malware is discovered after a theft, both NetFlow and IPFIX provide the postmortem forensic details needed to paint a full picture of what happened.
Flows Provide 100% Accountability
If the destination host in flow data is found to be hosted by AWS or Akamai, IPFIX exports from vendors like Cisco and Gigamon can include the Fully Qualified Domain Name (FQDN) of the targeted host. Ultimately, this additional context provides the network forensics needed to streamline efforts and determine how the malware got in, where the data went, how often it uploaded, and from where. It is all available through the all-seeing eyes of the flow collection system.
Only by probing the data from all corners of the network can we both uncover strange connection patterns and prepare for the next inevitable infection. NetFlow and IPFIX are the single best resource for making sure that the company is prepared for the next forensic investigation. Check out the Forensic Investigation Kit.
Before I get into what a NetFlow analyzer is, let's go back and understand a bit of history regarding network traffic analysis. Almost since the first LANs and WANs were set up, network and business managers alike have wanted to know who was using the network and what the top applications were on the connections. Packet analyzers emerged almost immediately to provide some of the insight needed, but historical detail was lacking, and the cost of maintaining them everywhere they were needed on the network was, and still is, prohibitive.
Despite continued improvements in malware prevention, the success rate of infections still outpaces the industry's best detection methods. This is true even though signature matching picks up many types of viruses; the nastiest contagions still penetrate our defenses. What is long overdue is the practice of better behavior monitoring, and today I want to focus on user authentication monitoring.
The volume of traffic on our networks has exploded in the last year. More than ever before, we are seeing every kind of computer and handheld device make Internet connections to send information out of the company. IoT devices are some of the worst offenders of this privacy taking. We can't call it theft, because we all agreed to it in the End User License Agreement (EULA) that we didn't read when we installed the application. What are they taking, and why?
Companies like Microsoft, McAfee, Apple, Plantronics, and hundreds of others are often taking details from our devices such as the contacts we have stored. This includes full names, email addresses, phone numbers, and anything else they can find useful. In some cases they grab your calendar, your pictures, and details about each telephone call you make. Other vendors scan your device to see what other applications you have installed, possibly taking the data those applications collect as well. Still others grab details on the websites you visit and what you click on, and could even be downloading your keystrokes.
The big data demand to collect all of this information is a new way to push profits in a world where software by itself is diminishing in value. The ability to learn a person’s behavior is worth potentially even more than the software being used by the end user. If these companies can mine that and learn a person’s behaviors, they can sell the ability to reach an individual buyer with targeted advertising. For example, perhaps a company learns that a person is a runner—that they like to compete in races and that they prefer Nike shoes. Since they have their calendar and see that they have a race coming up, Adidas might be particularly interested in reaching that consumer with an ad for new running sneakers.
If a company has invested in hundreds of Plantronics headsets, and the collected employee data reveals that the organization is running an antiquated CRM for customer management, this information could potentially be sold to Salesforce, whose business development team might then target the organization to make a sale. This rush to collect information from customers is not only exposing details about our lives, it is creating massive amounts of network traffic.
The volume of Internet uploads from a single application can be as frequent as every minute or more. This increase in traffic volume is putting additional overhead on firewalls and routers. In some cases, certain types of Deep Packet Inspection (DPI) can’t be turned on due to the sheer volume of traffic. This can be unfortunate because DPI is used to look inside encrypted traffic to verify the safety of connections. It is also creating more logs for the SIEM and more NetFlow or IPFIX for the Network Traffic Analytics system. Ultimately, the IT operational costs are going up due to big data collection. What can we do?
In some cases, vendors allow the data collection to be turned off. However, beware of upgrades that turn it back on, or that remove the off switch entirely, as happened with Plantronics. Blocking the domains these devices upload to may not work either, as they can fall back to second and third alternative domains; worse, the software could stop working if the phone-home isn't allowed to occur.
The best recourse is testing the software the business depends on and using network traffic analytics to confirm its Internet-related behavior. Asking the vendors what information they are taking is another good practice. Unfortunately, end users that are allowed to put personal devices on the corporate network will still cause a significant increase in traffic volumes. Due to the diversity of applications and their widespread deployment, investigation into every application simply isn’t feasible.
One solution is to prevent employees from installing applications on any device that gets on the corporate network. Company-owned laptops and mobile phones would fall into this category. Without some sort of throttling in place, 2017 will easily prove to be the biggest year yet for network traffic volumes, and the years that follow will continue to be record breakers. Companies need to be ready to bear the cost of supporting it. Network Traffic Analysis solutions will at least allow IT teams to investigate problems as they arise.