Could it be true? The Internet is so full that brownouts will start occurring within 2 years?!

Where is the Internet congested?
Well, the decentralized nature of the ‘Net means that it will probably never be completely full. However, one of the links between your PC and the server you need to connect to may be congested. Don’t be too alarmed; contention like this is already occurring today on the periphery. Most of us have experienced a web page behind a congested connection that locks up the browser. What some experts are saying is that the Internet is getting too full on trunk connections, and that brownouts are likely to occur more and more in the coming years.

How can Scrutinizer help?
If your company’s Internet connection is suffering from congestion, use the top ‘Hosts’ tab in Scrutinizer to view which hosts are creating traffic at the highest rate on a given interface.

Internet Getting Too Full

Using Cisco NetFlow, IPFIX, J-Flow, or NetStream, Scrutinizer can tell you what the traffic rate is for the top machines on a given interface, for a specific timeframe. Using this information, rate limiting can be put in place for specific hosts or subnets.
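The idea behind a flow-based top-hosts report is simple aggregation: sum the bytes per source host on one interface and sort. A minimal sketch (the record layout and values here are hypothetical, not Scrutinizer's actual data model):

```python
from collections import defaultdict

# Hypothetical simplified flow records, as a NetFlow/IPFIX collector
# might store them: (source IP, interface index, bytes transferred).
flows = [
    ("10.1.1.5",  2, 1_500_000),
    ("10.1.1.9",  2,   400_000),
    ("10.1.1.5",  2, 2_200_000),
    ("10.1.2.14", 3,   900_000),
]

def top_hosts(flows, interface, n=10):
    """Sum bytes per source host on one interface, highest first."""
    totals = defaultdict(int)
    for host, ifindex, nbytes in flows:
        if ifindex == interface:
            totals[host] += nbytes
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:n]

print(top_hosts(flows, interface=2))
# → [('10.1.1.5', 3700000), ('10.1.1.9', 400000)]
```

The hosts at the top of this list are the candidates for the rate limiting described above.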

What will ISPs do to fix the looming congestion problem?
Internet Service Providers dealing with the large traffic volumes created by applications such as BitTorrent and YouTube are scrambling to develop a business model that will address this consumer demand. Service providers implementing bandwidth restrictions may use the following tactics:

1. Restrict bandwidth for accounts with an increased level of traffic.
2. Restrict bandwidth at certain intervals during the day.
3. Restrict BitTorrent bandwidth.
4. Don’t allow seeding.
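Per-account or per-application restrictions like these are typically enforced with a token-bucket policer: traffic is allowed at a sustained rate, with a bounded burst, and anything beyond that is dropped or delayed. A minimal sketch of the mechanism (all names and numbers here are illustrative, not any vendor's implementation):

```python
class TokenBucket:
    """Token-bucket policer: `rate` bytes/sec sustained,
    bursts of up to `capacity` bytes allowed."""

    def __init__(self, rate, capacity):
        self.rate = rate            # refill rate, bytes per second
        self.capacity = capacity    # maximum burst size, bytes
        self.tokens = capacity      # start with a full bucket
        self.last = 0.0             # timestamp of the last check

    def allow(self, nbytes, now):
        # Refill tokens for the elapsed time, capped at the bucket size.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True             # packet conforms; forward it
        return False                # over the limit; drop or delay it

bucket = TokenBucket(rate=1000, capacity=2000)
print(bucket.allow(2000, now=0.0))  # → True  (burst allowed)
print(bucket.allow(500, now=0.0))   # → False (bucket empty)
print(bucket.allow(500, now=1.0))   # → True  (1000 bytes refilled)
```

Tactic 2 above amounts to swapping in a lower `rate` during peak hours; tactic 3 applies a bucket like this only to flows classified as BitTorrent.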

How do service providers know what to restrict?
Knowing what data to restrict can be tricky. Arris Group Inc. (Nasdaq: ARRS) and Camiant Inc. are among the vendors that have developed new bandwidth management systems. Because many applications use the same ports, application recognition is no simple process (e.g. Skype ‘VoIP’ traffic can look like BitTorrent ‘data’). My guess is that accurate application awareness is a highly dynamic problem necessitating frequent updates. Why? Developers of non-time-sensitive applications are likely to study the traffic patterns of prioritized, time-sensitive applications and make their own traffic mimic them.
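The port-overlap problem is easy to see with a naive port-to-application table, the simplest possible classifier (the table below is illustrative only):

```python
# A naive port-based classifier. P2P clients can run on any port,
# and many deliberately pick 80 or 443 to blend in with web traffic,
# which is exactly why this approach breaks down.
PORT_MAP = {80: "HTTP", 443: "HTTPS", 6881: "BitTorrent"}

def classify(port):
    return PORT_MAP.get(port, "unknown")

# A BitTorrent client configured to use port 443 is misclassified:
print(classify(443))    # → "HTTPS", even if the payload is BitTorrent
print(classify(6881))   # → "BitTorrent"
print(classify(12345))  # → "unknown"
```

Real application-awareness systems inspect payload signatures and behavioral patterns instead, which is why they need the frequent updates mentioned above.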

The U.S. government is getting involved

The FCC plans to roll out a new Internet traffic management system that delays only some kinds of content during moments of congestion. When the network is occasionally congested, this new technology automatically ensures that all time-sensitive Internet traffic, such as web pages, voice calls, streaming video, and gaming, moves without delay. Less time-sensitive traffic, such as file uploads, peer-to-peer transfers, and Usenet newsgroups, may be delayed momentarily, but only while the local network is congested.

Cox Communications, Inc. is apparently the first company to try out the new system. It has published a list of what it considers “time-sensitive” data.

Long-term solution
Some feel that the single largest cause of congestion on today’s networks is the connectionless nature of communications. Although some would argue that TCP is connection-oriented, I’m talking about more than a TCP ‘flag’ handshake. With all IP communications, the forwarding logic is still based primarily on the destination address, with little to no regard for the source sending the traffic. Why was this done?

We have a great communications architecture called the telephone system that we could have modeled data networking after, but it was largely dismissed. Why? Data networking initially wasn’t taken seriously, and my guess is that the founders wanted the plug-and-play architecture that we currently enjoy. I’m sure it’s this same architecture that has led to the phenomenal growth of the Internet. Perhaps now we can go back with new hardware and give users a CIR (committed information rate), as well as a possible burst rate, while keeping in mind things like quality of service and the source of the communication. More connection-oriented communications would also reduce some of the security risks.

I’ll post another blog on how connection-oriented communications could help alleviate congestion on data networks….

My follow-up blog two years later: OMG, the Internet isn’t overloaded yet!



Michael is one of the Co-founders and the former product manager for Scrutinizer. He enjoys many outdoor winter sports and often takes videos when he is snowmobiling, ice fishing or sledding with his kids. Cold weather and lots of snow make the best winters as far as he is concerned. Prior to starting Somix and Plixer, Mike worked in technical support at Cabletron Systems, acquired his Novell CNE and then moved to the training department for a few years. While in training he finished his Masters in Computer Information Systems from Southern New Hampshire University and then left technical training to pursue a new skill set in Professional Services. In 1998 he left the 'Tron' to start Somix which later became Plixer.

