Efficiency means using the smallest amount of resources to accomplish the desired result. All other things being equal, heavily used network paths should have as few hops as possible.
Efficiency is a natural outcome of hierarchical network design. Building a network in a tree-like structure so that every leaf is no more than three hops away from the central core means that the greatest distance between any two leaves is six hops. Conversely, if the network has a relatively ad hoc structure, then the only effective upper limit on the number of hops between any two end points is the number of devices in the network.
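The arithmetic behind this claim can be sketched in a few lines of Python (the function name here is purely illustrative): in a strict hierarchy, the worst-case path between two leaves climbs up to the core and back down the other side, so it is twice the leaf-to-core depth.

```python
def max_leaf_to_leaf_hops(depth: int) -> int:
    """Worst-case hop count between two leaves in a strict hierarchy.

    Each leaf is at most `depth` hops from the core, so the longest
    leaf-to-leaf path goes up through the core and back down.
    """
    return 2 * depth

print(max_leaf_to_leaf_hops(3))  # three hops to the core -> six hops total
```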
Keeping hop counts low has several advantages. First, all of the routing protocols discussed in Chapter 6 and Chapter 7 (for IP and IPX, respectively) converge faster when hop counts are low. Faster convergence generally means a more stable network; more specifically, the network recovers more quickly after a failure. If a path is available to route traffic around the failure point, it can be found quickly and put into use.
The other advantages all have to do with the delivery of packets through the network. Every extra hop represents at least one additional queue and at least one additional link. Each additional queue introduces a random amount of latency. If the queue is relatively full, the packet may spend extra time waiting to be processed. If the queue is relatively short, on the other hand, it may be processed immediately. And if the network is extremely busy, the packet may simply be dropped rather than kept until its data is no longer relevant.
This queuing delay means that every additional hop increases the net latency of the path. Latency is something that should be kept as low as possible in any network. Because this extra latency is random, the more hops that exist, the more variation there is in the latency.
This variation in latency is called jitter. Jitter is not a problem for bulk data transfers, but for real-time applications such as audio or video it is disastrous. These applications require that the time to deliver a packet from one point to another be as predictable as possible; otherwise, the application will suffer from noticeable gaps, which appear as audible pops or skips and as frozen or jumping video images.
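A quick Monte Carlo sketch illustrates both effects. It models each hop's queuing delay as an exponentially distributed random wait (a common rough assumption for queue wait times, not a measurement of any real device) and shows that both the mean latency and its spread, the jitter, grow with the hop count.

```python
import random
import statistics

def path_latency(hops: int, mean_queue_ms: float = 2.0) -> float:
    # Each hop contributes an independent random queuing delay.
    return sum(random.expovariate(1.0 / mean_queue_ms) for _ in range(hops))

random.seed(1)
for hops in (2, 6, 12):
    samples = [path_latency(hops) for _ in range(5000)]
    print(f"{hops:2d} hops: mean latency {statistics.mean(samples):5.1f} ms, "
          f"jitter (stdev) {statistics.stdev(samples):4.1f} ms")
```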
Finally, there is the problem of what happens to a packet that passes through a highly congested device in the network. The device can do two things with a new packet entering its buffers. It can either put it into a queue to be forwarded at some future time, or it can decide that the queues are already too full and simply drop the packet. Clearly, the more hops in the path, the greater the probability of hitting one that is highly congested. Thus, a higher hop count means a greater chance of dropped packets.
This rule holds even if the network is rarely congested: a relatively short, random burst of data can temporarily exhaust the queues on a device at any time. Furthermore, the more hops there are in the path, the greater the probability of hitting a link that generates CRC errors. In most LAN media, the probability of CRC errors is relatively low, but this low probability is multiplied by the number of links in the path. Thus, the more hops there are, the higher the probability that the packet will become corrupted and have to be dropped.
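To make the compounding concrete: if each hop independently drops or corrupts a packet with some small probability p, the chance of losing the packet somewhere along an n-hop path is 1 - (1 - p)^n, which is roughly n times p when p is small. A minimal sketch, using an illustrative per-hop error rate of 0.1%:

```python
def path_loss_probability(p_per_hop: float, hops: int) -> float:
    # Probability the packet is lost somewhere along the path,
    # assuming each hop drops or corrupts it independently.
    return 1.0 - (1.0 - p_per_hop) ** hops

for hops in (1, 3, 6, 12):
    print(f"{hops:2d} hops: "
          f"{path_loss_probability(0.001, hops):.4%} chance of loss")
```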