Is cloud computing right for your business? Will it save your company money? Gartner Inc. predicts that by 2012, 80 percent of Fortune 1000 enterprises will pay for some cloud computing service, and 30 percent will pay for cloud computing infrastructure. Sooner or later your company will consider cloud services for one or more of the business applications it depends on. Ask yourself:

  • What is the yearly cost to operate the data center for the application(s)?
  • Can the cloud properly service your mission-critical application(s)?
  • Can you measure the service if it is outsourced to the cloud?
  • How will you troubleshoot connection performance issues?
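The first question above is ultimately arithmetic. A minimal sketch of a yearly break-even comparison follows; every figure and cost bucket in it is an illustrative placeholder, not real pricing:

```python
# Hedged sketch: rough yearly break-even between an in-house data center
# and a cloud subscription. All numbers are illustrative placeholders.

def yearly_datacenter_cost(hardware_amortized, power_cooling,
                           admin_hours, hourly_rate):
    """Sum the major recurring cost buckets for an in-house deployment."""
    return hardware_amortized + power_cooling + admin_hours * hourly_rate

def yearly_cloud_cost(monthly_subscription, egress_gb_per_month,
                      egress_rate_per_gb):
    """Subscription plus bandwidth charges -- the line items cloud bills add."""
    return 12 * (monthly_subscription + egress_gb_per_month * egress_rate_per_gb)

dc = yearly_datacenter_cost(hardware_amortized=12_000, power_cooling=4_000,
                            admin_hours=300, hourly_rate=45)
cloud = yearly_cloud_cost(monthly_subscription=1_500,
                          egress_gb_per_month=500, egress_rate_per_gb=0.10)
print(f"data center: ${dc:,.0f}/yr   cloud: ${cloud:,.0f}/yr   "
      f"difference: ${abs(dc - cloud):,.0f}")
```

Plug in your own numbers; the point is to compare all of the recurring buckets, not just the hardware line.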

As economic forces close in, the savings offered by this value-added service will start to make sense for many applications, but probably not all of them. “The cloud is not going to replace the data center. The two will continue to coexist. So from a management standpoint, it will become the challenge for IT to oversee both the physical network infrastructure and the infrastructure in the cloud.”
Tracy Corbo


The data center is not going away; however, with the introduction of the cloud, the tools used to troubleshoot the network need to evolve. The internal helpdesk and network support team will still be contacted first when application performance becomes an issue. And even as cloud computing saves us money, troubleshooting performance issues may become more difficult. Why?

In cloud computing, where the application is physically located is considered a moot point as long as performance is good. Why worry about performance issues, especially if you have service level agreements? Ask yourself this question: if a user has a performance issue, how will you know whether the problem is in the cloud, within the local LAN, or with the remote user’s Internet connection? You will need to figure it out, because service providers typically don’t raise their hand every time a performance issue appears, nor will they help you resolve a problem beyond their network. Local network admins will need to keep service providers on their toes. How will they do it?

Cloud computing could take network troubleshooting to a higher tier and require greater traffic analysis expertise. Here’s why:

  1. Encryption of the data packets is expected and could limit the amount of insight gained through a packet analyzer.
  2. It is nearly impossible to place a packet analyzer in a position where it will collect all the data.
  3. Latency will be tough to measure, cloud service providers may point fingers when outages occur, and Cisco’s IP SLA technology won’t help much because the bottleneck in the cloud can’t be identified.
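On point 3, one thing you can still measure from your own vantage point is the time to complete a TCP handshake toward the service, similar in spirit to an IP SLA tcp-connect probe. A minimal sketch:

```python
# Hedged sketch: estimate round-trip time toward a service by timing the
# TCP three-way handshake. Running this from inside the LAN and from a
# remote site helps show on which side of the cloud the delay sits.
import socket
import time

def tcp_connect_rtt_ms(host: str, port: int, timeout: float = 3.0) -> float:
    """Return the time in milliseconds to establish a TCP connection."""
    start = time.perf_counter()
    # create_connection performs the full handshake before returning
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.perf_counter() - start) * 1000.0
```

This tells you the network path is slow, but not *where* it is slow, which is exactly the gap the NetFlow approach below tries to fill.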

The solution: NetFlow.  I’ve run across two products that can help with this:

  1. The Altor NetFlow agent exports NetFlow stats and detailed information about network traffic from virtualized servers such as VMware ESX. “It improves performance, stability and security by a factor of 10,” says Matt Reidy, director of IT operations at Snag¬ [source]. With the Altor virtual firewall, Reidy’s team can also see, for the first time, what traffic is flowing between which virtual machines, including protocols and data volume. “That’s a challenge in the virtual cloud space – traditional products won’t capture that,” he says. “We’re able to tighten our security more because we can see what’s flowing and write rules based around what we see versus what we think is going on.” The problem is that the Altor solution only addresses issues 1 and 2 above.
  2. The nProbe NetFlow agent can be installed on servers and, like the Altor agent, addresses issues 1 and 2. It also addresses issue 3 by providing client, server and application latency details via NetFlow. By expanding flow monitoring deeper into the actual packets, nProbe™ can provide greater detail on database performance, latency, e-mail and URL activity. nProbe collects the data and transfers it to the NetFlow collector via NetFlow v9 or IPFIX for reporting and archiving. Customers can use a NetFlow analyzer like Scrutinizer to drill down on conversations to determine client round-trip time and server processing latency. If the communication involves HTTP, the complete URL is provided, as well as the ability to click and actually view the page the client accessed. No other NetFlow probe today can do this; however, I think this is the beginning of a trend. Imagine performing cloud service monitoring with something that is effective at detecting latency issues, all with NetFlow or, in this case, IPFIX.
Cloud Service Monitoring with NetFlow

Utilizing NetFlow agents on servers hosting business applications within the cloud allows LAN administrators to monitor performance issues. For example, any transaction that sees excessive latency could start to raise flags, and excessive flags lead to alerts. Of course, latency can mean a lot of things. Ultimately, client and server latency monitoring tells us which end is causing the slowdown. If the problem is the server, is it really the server’s fault? It could be the latency of the application or, more to the point, a specific transaction or URL that caused the slowdown. NetFlow from nProbe is evolving to solve these issues.
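The “which end is slow” decision above can be sketched in a few lines. The latency field names below are illustrative stand-ins for whatever per-flow client, server and application latency values your probe exports, not nProbe’s actual element names:

```python
# Hedged sketch: given per-flow latency measurements (field names are
# hypothetical), decide which side of the conversation to blame.

def classify_slowdown(client_nw_ms: float, server_nw_ms: float,
                      application_ms: float, threshold_ms: float = 100.0) -> str:
    """Return the most likely culprit for a slow transaction."""
    worst = max(client_nw_ms, server_nw_ms, application_ms)
    if worst < threshold_ms:
        return "ok"
    if worst == application_ms:
        return "application"     # handshake was fast, the app itself was slow
    if worst == server_nw_ms:
        return "server network"  # congestion or distance on the cloud side
    return "client network"      # the user's LAN or Internet connection

print(classify_slowdown(12.0, 8.0, 450.0))
```

Separating network latency from application latency is exactly what keeps the server from taking the blame for a slow transaction or URL.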

Once the new nProbe is released, customers may want to consider deploying it on servers participating in a cloud solution. Cloud service providers can benefit from collecting this type of latency information on every unique connection. When a user calls with an excessive latency issue, the NetFlow reporting tool can be used to display details on the connection.

Ease into cloud services. Some suggest trying less critical applications first that will still see an ROI. And when reviewing contracts, look for detailed SLA guarantees, the repercussions of not meeting the SLA, and loopholes. How are “acts of God” addressed?

NetFlow is an important part of the troubleshooting process in cloud computing. Make sure the service provider is willing to send you the flows for your mission-critical application. Help get them familiar with the command: ip route-cache flow. They will be grateful!



Michael is one of the Co-founders and the former product manager for Scrutinizer. He enjoys many outdoor winter sports and often takes videos when he is snowmobiling, ice fishing or sledding with his kids. Cold weather and lots of snow make the best winters as far as he is concerned. Prior to starting Somix and Plixer, Mike worked in technical support at Cabletron Systems, acquired his Novell CNE and then moved to the training department for a few years. While in training he finished his Masters in Computer Information Systems from Southern New Hampshire University and then left technical training to pursue a new skill set in Professional Services. In 1998 he left the 'Tron' to start Somix which later became Plixer.

