Avoiding bottlenecks entirely in any large network is impossible, and it isn't always necessary or desirable to do so. One of the main efficiencies of scale in a large network is the ability to oversubscribe the Core links. Oversubscribing means deliberately aggregating more network segments onto a link than it could support if they all transmitted at full capacity simultaneously. The designer is betting that these segments will not all burst to their full capacity at once. This issue was discussed in Chapter 3 in Section 3.4.2.1.
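The idea can be shown with a small calculation. This is an illustrative sketch only; the port counts and link speeds below are hypothetical figures, not taken from the text:

```python
# Hypothetical example: 48 access ports at 100 Mbps each feed a single
# 1 Gbps Core uplink. The oversubscription ratio is the total access
# bandwidth divided by the uplink capacity.
access_links_mbps = [100] * 48   # 48 segments at 100 Mbps
core_uplink_mbps = 1000          # one 1 Gbps Core uplink

ratio = sum(access_links_mbps) / core_uplink_mbps
print(f"Oversubscription ratio: {ratio:.1f}:1")  # prints "Oversubscription ratio: 4.8:1"
```

As long as the segments never burst simultaneously to more than about a fifth of their combined capacity, this design works; if they routinely do, the uplink becomes a bottleneck.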
Oversubscribing by itself is not a problem. The network has a problem only when it cannot support the actual traffic flow. This condition is called congestion, and it results in increased latency and jitter if the application is lucky enough that the network can queue the packets. If it is not so lucky, the network has to drop packets.
A little bit of congestion is not a bad thing, provided it is handled gracefully. However, systematic congestion in which one or more network links cannot support typical traffic volumes is a serious issue. The network can handle intermittent congestion using the various QoS mechanisms discussed later in this chapter. For systematic congestion, however, the designer usually has to modify the network design to reduce the bottleneck.
By intermittent congestion, I mean congestion that never lasts very long. It is not uncommon for a link to fill up with traffic for short periods of time. This is particularly true when bursty applications use the network.
QoS mechanisms can readily handle short bursts of traffic. They can even handle longer periods of congestion when it is caused by low-priority applications such as file transfers. However, when a high volume of interactive traffic causes the congestion, it is usually considered a systematic problem. In general, QoS mechanisms are less expensive to implement than a redesign of the network, so it is usually best to try them first.
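One common QoS mechanism of this kind is a strict-priority queue, in which interactive traffic is always served ahead of bulk traffic. The following is a minimal sketch of the idea, not a description of any particular router's implementation; the priority values and packet names are illustrative:

```python
import heapq

queue = []
seq = 0  # arrival counter used as a tie-breaker within a priority level

def enqueue(priority, packet):
    """Queue a packet; lower priority numbers are served first."""
    global seq
    heapq.heappush(queue, (priority, seq, packet))
    seq += 1

def dequeue():
    """Serve the highest-priority packet, FIFO within a priority level."""
    return heapq.heappop(queue)[2]

enqueue(1, "file-transfer-1")  # bulk traffic, low priority
enqueue(0, "voice-1")          # interactive traffic, high priority
enqueue(1, "file-transfer-2")
enqueue(0, "voice-2")

# The interactive packets drain first even though they arrived later.
print([dequeue() for _ in range(4)])
# ['voice-1', 'voice-2', 'file-transfer-1', 'file-transfer-2']
```

Note the limitation the text implies: if the high-priority interactive traffic alone exceeds the link capacity, no amount of queueing discipline can help, and the design itself must change.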
Another common method for handling intermittent congestion is using a Random Early Detection (RED) system on the router with the bottleneck. The RED algorithm deliberately drops some packets before the link is 100% congested. When the load rises above a certain predetermined threshold, the router begins to drop a few packets randomly in an attempt to coax the applications into backing off slightly. In this way, RED tries to avoid congestion before it becomes critical.
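The drop decision can be sketched as follows. This assumes the classic linear drop-probability curve between two queue-depth thresholds; the threshold values and maximum probability are illustrative choices, and real implementations also smooth the queue depth with a moving average:

```python
import random

MIN_TH = 20   # below this average queue depth, never drop
MAX_TH = 80   # at or above this depth, drop every arriving packet
MAX_P = 0.10  # drop probability as the average approaches MAX_TH

def red_drop(avg_queue_depth):
    """Return True if the arriving packet should be dropped."""
    if avg_queue_depth < MIN_TH:
        return False
    if avg_queue_depth >= MAX_TH:
        return True
    # Between the thresholds, drop probability rises linearly from 0 to MAX_P.
    p = MAX_P * (avg_queue_depth - MIN_TH) / (MAX_TH - MIN_TH)
    return random.random() < p

print(red_drop(10))   # False: the queue is comfortably short
print(red_drop(100))  # True: the queue is past the hard threshold
```

Because the drops are random and begin gently, well-behaved applications see occasional losses and reduce their sending rates before the queue overflows.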
However, it is important to be careful with RED because not all applications and protocols respond well to it. It works very well with TCP applications, but with UDP applications, as well as AppleTalk and IPX, RED does not achieve the desired results. These protocols cannot back off their sending rates in response to dropped packets, so dropping their packets simply loses data without relieving the congestion.
There are essentially two different ways to handle a systematic and persistent congestion problem at a network bottleneck. You can either increase the bandwidth at the bottleneck point, or you can reroute the traffic so it doesn't all go through the same point.
Sometimes a bottleneck appears because redundant paths in the network go unused, forcing all of the traffic through a few congested links. Examining, and if necessary adjusting, the link costs in the dynamic routing protocol can often alleviate this problem.
In many protocols, such as OSPF, it is possible to specify the same cost for several different paths. This specification invokes equal-cost multipath routing. In some cases you may find that, despite equal costs, some of these paths are not used. This may be because the routers are configured to use only a small number (usually two) of these equal-cost paths simultaneously. Many routers offer the ability to increase this number. However, it is important to watch the router CPU and memory load if the number is increased because maintaining the additional paths may cause an additional strain on the device.
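The interaction between equal-cost paths and the maximum-paths limit can be sketched as follows. The addresses and the limit of two are illustrative assumptions, and the hash function stands in for whatever per-flow hashing a real router uses:

```python
import hashlib

# Four next hops all advertise the same cost, but the router installs
# only MAX_PATHS of them; the rest sit idle despite their equal cost.
equal_cost_next_hops = ["10.0.1.1", "10.0.2.1", "10.0.3.1", "10.0.4.1"]
MAX_PATHS = 2  # many routers default to a small limit like this

installed = equal_cost_next_hops[:MAX_PATHS]

def pick_next_hop(src, dst):
    """Hash the flow so all packets of one flow take the same path."""
    key = f"{src}-{dst}".encode()
    digest = int(hashlib.md5(key).hexdigest(), 16)
    return installed[digest % len(installed)]

print(pick_next_hop("192.168.1.10", "172.16.5.20"))
```

Raising MAX_PATHS to four would spread the flows across all of the equal-cost routes, at the cost of the extra router CPU and memory the text warns about.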
Ultimately, if a subtle rerouting cannot alleviate the bottleneck, it will be necessary to increase the bandwidth on the congested links. Doing so is not always easy. If the link already uses the fastest available technology of its type, then you have to do something else.
Other options are usually available in these situations. You might migrate to a different high-speed link technology, such as ATM or 10 Gigabit Ethernet. Or you may be able to multiplex several fast links together to make one higher-speed aggregate link. If even this is not possible, then it is probably best to configure new redundant paths through the network to share the load.