Cloud service adoption rates show no signs of slowing down. Gartner recently projected that the global cloud application services market would grow 21.7 percent over the course of 2016, reaching $38.9 billion. IBM predicts that in 2017 public cloud will become the primary delivery vehicle for most cloud adoption, and that this will create struggles for traditional IT organizations. In this blog, I will discuss traditional methods used for wide area network (WAN) capacity planning. I will then consider the impact of public, private, and hybrid cloud adoption on the traditional WAN. I will review SD-WAN’s ability to meet the WAN optimization needs of a cloud-first model and, finally, I will share my thoughts on the impact SD-WAN will have on a business’s ability to visualize and manage traffic flow.
Traditional Capacity Planning Methods
When managing traditional WAN capacity, hope is not a strategy. You need a methodology to determine how much bandwidth is enough. An article I recently read did a nice job of outlining traditional best-practice approaches to WAN design and capacity planning. You must have a clear understanding of your current application usage, and you need to collaborate with business operations to consider the impact of yet-to-be-deployed applications. You also need the flexibility to adapt to ever-evolving network and application deployment strategies.
Existing applications – Every environment is unique, with its own set of business and productivity applications. It is important for administrators to understand each application’s throughput, bandwidth, and latency requirements. Over time, application usage tends to settle into a “normal” pattern. Administrators should baseline normal traffic patterns and monitor them for deviations. Consistent pattern changes can be an indicator of the need to adjust WAN capacity.
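To make the baselining idea concrete, here is a minimal sketch of flagging deviations from a traffic baseline. The sample values, the hourly sampling interval, and the three-sigma threshold are all illustrative assumptions, not prescriptions; real monitoring tools use far richer models.

```python
from statistics import mean, stdev

def baseline(samples):
    """Derive a simple baseline (mean, standard deviation) from
    historical per-application throughput samples (Mbps)."""
    return mean(samples), stdev(samples)

def deviates(sample, mu, sigma, k=3.0):
    """Flag a sample more than k standard deviations from the
    baseline mean -- a possible capacity-planning signal."""
    return abs(sample - mu) > k * sigma

# Hypothetical hourly throughput samples (Mbps) for one application.
history = [42, 45, 44, 43, 47, 41, 46, 44, 45, 43]
mu, sigma = baseline(history)
print(deviates(44, mu, sigma))   # within the normal pattern
print(deviates(90, mu, sigma))   # a consistent run of these warrants a look
```

A single flagged sample is noise; as the text notes, it is *consistent* pattern changes over time that should trigger a capacity review.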
Future application deployments – Close alignment between business operations and IT is imperative when considering the deployment of new business critical applications. WAN links deliver significantly less bandwidth than the local area network (LAN). When not properly sized, they can quickly become a problem for latency-sensitive applications like voice and video.
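As a back-of-the-envelope example of sizing a WAN link for a latency-sensitive application, here is a rough calculation for concurrent G.711 voice calls. The packetization interval, header overhead, and headroom fraction are stated assumptions; they ignore link-layer overhead and should be adjusted for your codec and transport.

```python
# Assumptions: 20 ms packetization (50 packets/sec), 160-byte G.711
# voice payload, 40 bytes of RTP (12) + UDP (8) + IPv4 (20) headers.
PAYLOAD_BYTES = 160
HEADER_BYTES = 40
PACKETS_PER_SEC = 50

def kbps_per_call():
    """Layer-3 bandwidth for one voice call, in kbps."""
    return (PAYLOAD_BYTES + HEADER_BYTES) * PACKETS_PER_SEC * 8 / 1000

def wan_bandwidth_kbps(concurrent_calls, headroom=0.25):
    """Bandwidth for the voice load plus a headroom fraction
    reserved for other traffic classes."""
    return concurrent_calls * kbps_per_call() * (1 + headroom)

print(kbps_per_call())          # 80.0 kbps per call at layer 3
print(wan_bandwidth_kbps(30))   # 3000.0 kbps for 30 concurrent calls
```

Collaborating with business operations on the expected number of concurrent users is what turns this arithmetic from a guess into a plan.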
Centralized vs. decentralized – During the 20 years I’ve been in networking, I’ve seen the industry continually vacillate between centralized and decentralized computing and application architectures. For example, centralized mainframes gave way to highly distributed servers deployed close to the users. Then servers running client/server applications were physically collapsed into the data center. Next, virtual desktop deployments brought applications from the desktop into the data center, sending only keystrokes and screen updates across the network. Client/server applications are now the norm once again, but are often deployed as virtual instances in private, public, and hybrid clouds. These market transitions, among many others, have significantly affected the WAN and what the business has needed from it.
Public, Private, and Hybrid Cloud Impact on the WAN
Virtualized public, private, and hybrid application deployments are the latest assault on those responsible for WAN design and planning. No longer is the physical location of an application static. Applications can now dynamically move across any rack, any row, any data center, and any public cloud. Gone forever are the days of having a predictable baseline of traffic patterns around which capacity planning is possible. Workload mobility means traffic loads across any link can suddenly, and unpredictably, spike.
Businesses choose cloud for speed, agility, elasticity, and cost reduction benefits. Static LAN and WAN network infrastructure inhibits these business benefits, which is a real problem. Luckily, there is a new game-changing technology called software-defined WAN (SD-WAN) that shows tremendous promise. Although WAN design will still require a human element, many of the traditional WAN capacity planning processes are automated with an SD-WAN deployment.
SD-WAN Provides Capacity Agility and Elasticity for Remote Locations
Cloud providers like Amazon, Google, and Microsoft all offer organizations the flexibility to quickly spin up and deploy new applications, as well as dynamically “burst” on-premises capacity into the cloud. Retail organizations, for example, don’t want to incur the expense of building their own infrastructure to accommodate the load of Black Friday. They want to build what they need for the other 364 days of the year (or 365 in a leap year), and burst capacity as a short-term, cost-effective solution to meet the needs of a single day. SD-WAN delivers the same type of agility and flexibility to use capacity across wide area links. Both high- and low-cost WAN links can be used to build a meshed topology connecting remote locations with the central office. SD-WAN provides the ability to pool these WAN resources in real time based on the needs of the applications. At any given time, some WAN capacity lies dormant. Through a software-defined approach, this dormant capacity can be leveraged to ensure the elastic needs of the applications are met.
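A toy sketch of that pooling idea: place each new flow on whichever pooled link, high-cost MPLS or low-cost broadband, currently has the most spare capacity. The link names, capacities, and greedy placement policy are hypothetical simplifications; production SD-WAN controllers weigh cost, latency, loss, and policy, and can split flows across paths.

```python
# Hypothetical pool of WAN links at a remote site (Mbps).
links = {
    "mpls":       {"capacity_mbps": 50,  "used_mbps": 40},
    "broadband1": {"capacity_mbps": 100, "used_mbps": 20},
    "broadband2": {"capacity_mbps": 100, "used_mbps": 85},
}

def place_flow(demand_mbps):
    """Greedy placement: pick the link with the most headroom.

    Returns the chosen link name, or None if no single link can
    absorb the flow (a real SD-WAN might split it across links).
    """
    headrooms = {name: l["capacity_mbps"] - l["used_mbps"]
                 for name, l in links.items()}
    name, headroom = max(headrooms.items(), key=lambda kv: kv[1])
    if headroom < demand_mbps:
        return None
    links[name]["used_mbps"] += demand_mbps
    return name

print(place_flow(30))   # lands on broadband1, the link with 80 Mbps spare
```

The point is not the algorithm but the abstraction: the application asks for capacity, and software decides in real time which physical link supplies it.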
Managing and Monitoring Application Traffic across SD-WAN
SD-WAN’s dynamic pooling of capacity across WAN links delivers elasticity, saves money, and ensures a good user experience. To accomplish this, however, it completely changes the forwarding and filtering paradigms of traditional WAN traffic. When you abstract the hardware from the control plane, WAN administrators gain the advantage of a software-defined approach, but lose the visibility, monitoring, and troubleshooting capabilities to which they are accustomed. To regain WAN traffic visibility, organizations need to implement data collection systems capable of consuming SD-WAN NetFlow and metadata exports. To get an idea of how this can work, check out this blog on Cisco’s IWAN NetFlow support.
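To illustrate what a data collection system does with those exports, here is a minimal sketch of aggregating flow records, as a collector might receive after decoding NetFlow or IPFIX, into per-port byte counts. The record fields and values are hypothetical simplifications of real flow exports.

```python
from collections import defaultdict

# Hypothetical decoded flow-export records from an SD-WAN edge device.
flows = [
    {"dst_port": 443,  "bytes": 1_200_000},  # HTTPS
    {"dst_port": 443,  "bytes": 800_000},
    {"dst_port": 5060, "bytes": 40_000},     # SIP signaling
]

def bytes_by_port(records):
    """Aggregate bytes per destination port -- a crude stand-in
    for per-application traffic accounting."""
    totals = defaultdict(int)
    for r in records:
        totals[r["dst_port"]] += r["bytes"]
    return dict(totals)

print(bytes_by_port(flows))   # {443: 2000000, 5060: 40000}
```

Real collectors map ports and metadata to named applications and roll the totals up over time, which is exactly the visibility an abstracted, software-defined forwarding plane would otherwise take away.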