# Bandwidth


Bandwidth, in practical terms, is differently defined for analog channels (e.g., the sound range of a stereo system) and for digital channels used in computer networks.

For analog channels, bandwidth is the difference between two cut-off frequencies, measured in hertz: the lower cut-off frequency subtracted from the upper. The concept is used in radio, electronics, and signal processing.

Claude Shannon, the founder of information theory, spoke of channel capacity rather than the more familiar bandwidth. The distinction is useful: bandwidth is commonly specified as some number of bits per second, but those bits may carry overhead not available to the user of the channel.

As a quick definition, for a communications technique that encodes single bits, the maximum theoretical bandwidth is the reciprocal of the bit time. A technique whose bits are one microsecond long, for example, would have a maximum possible bandwidth of one megabit per second.

Channel rate also determines one component of latency: transmission delay, which is derived from the bit time, the reciprocal of the bit rate. A 1 megabit per second channel imposes a 1 microsecond transmission delay on each bit.
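The reciprocal relationships above can be sketched in a few lines of Python; the specific rates are illustrative:

```python
def max_bandwidth_bps(bit_time_us: float) -> float:
    """Maximum theoretical bandwidth is the reciprocal of the bit time."""
    return 1e6 / bit_time_us  # 1e6 microseconds per second

def transmission_delay_us(frame_bits: int, rate_bps: float) -> float:
    """Transmission delay: the time to clock every bit of a frame onto the line."""
    return frame_bits * 1e6 / rate_bps

# One-microsecond bits give a 1 Mbps channel...
print(max_bandwidth_bps(1.0))         # 1000000.0
# ...and that channel imposes 1 microsecond of delay per bit.
print(transmission_delay_us(1, 1_000_000))  # 1.0
```

The same function shows why frame length matters later in the article: a 12,000-bit frame on the same channel takes 12,000 microseconds to serialize.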

In the absence of noise (i.e., errors), Shannon defined channel capacity as C/H, where C is the maximum number of bits per second that can be carried on the channel and H is the rate of the information source. A given channel might need to reserve every fifth bit for overhead used by the channel itself, so the maximum usable capacity would be 0.8×C.[1] In noisy channels, H may also contain overhead used for error control.[2] If, for example, 10 percent of the bits are devoted to error correction (i.e., to reconstructing the data in the presence of errors), that overhead reduces the maximum information-carrying capacity by a further factor of 0.9.
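The combined effect of these two overhead factors can be computed directly; the 20% channel overhead and 10% error-correction fractions are the ones assumed in the text:

```python
def usable_capacity_bps(c_bps: float, channel_overhead: float,
                        fec_overhead: float) -> float:
    """Capacity left for user information after channel overhead and
    forward error correction each claim their fraction of the bits."""
    after_channel = c_bps * (1 - channel_overhead)  # e.g. 0.8 x C
    return after_channel * (1 - fec_overhead)       # e.g. 0.9 x 0.8 x C

# On a 1 Mbps channel: every fifth bit reserved (20%), plus 10% FEC,
# leaves 720,000 bps of user capacity.
print(usable_capacity_bps(1_000_000, 0.2, 0.1))
```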

It is rare, however, that a channel is useful if it only sends bits, rather than more complex symbols or messages.[3] To take an extremely simple example, an old-style teletype encodes letters into 5-bit signals. The theoretical channel capacity, in this case, is C/5 characters per second. With noise and the 10% error correction assumed above, the practical capacity becomes 0.9×(C/5).
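The teletype example can be computed the same way; the 1 Mbps channel rate here is an illustrative assumption:

```python
def symbols_per_second(c_bps: float, bits_per_symbol: int,
                       fec_overhead: float = 0.0) -> float:
    """Capacity in symbols per second, e.g. 5-bit teletype characters."""
    return c_bps * (1 - fec_overhead) / bits_per_symbol

# Noise-free: C/5 characters per second.
print(symbols_per_second(1_000_000, 5))
# With 10% of the bits consumed by error correction.
print(symbols_per_second(1_000_000, 5, 0.1))
```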

In practical measurements of digital communications bandwidth, bits per second is the most common unit, but it can be much more useful to speak of error-free frames, packets, end-to-end segments, or application messages per second. In Shannon's terms, these are all symbols. As will be seen, such a number alone is not meaningful when there are additional variables, such as frame length when the protocol allows variable-length frames. When frames have a constant number of overhead bytes, frames with a longer data field have less relative overhead than shorter frames.

With real protocols, however, the maximum theoretical bandwidth may have to be reduced by a variety of overhead factors. Remember, however, that the universal answer to any network question is "it depends". Some protocols actually make the full theoretical bandwidth available to the entity using the protocol but, internally, run faster than the official maximum rate to leave room for overhead functions.

In using "layer" in this article, the term is used for convenience and not to be OSI-compliant. A key concept is that a layer has a user and a provider. For example, the data link service layer has, normally, the network layer as a user and the physical layer as a provider.

Even in this case, there will be real-world variation. On LANs and some other media, the data link layer is sublayered into medium access control (MAC) and logical link control (LLC). Logical link control can be a user of medium access control; in the examples here, however, the overhead of LLC is trivial, and the discussion works best by treating the network service as the user of the data link.

Unfortunately, when dealing with real rather than theoretical transmission systems, the "maximum theoretical rate" is quoted in different ways. If one considers the rate of the physical protocol as the rate available to the function above it, such as a data link entity, that rate may be the theoretical maximum or something less. Almost any modern physical protocol has some internal overhead; protocol specifications vary as to whether that overhead is visible to the user of the physical protocol. Sometimes the bit time is accurately stated, but the theoretical maximum is not reachable because some of the bits are required for overhead in the transmission system. With other technologies, the real bit time is shorter than the one used to compute the available bit rate, because the specification writers wanted the user to see the achievable maximum that the physical layer could deliver to the service using it. Let us examine two examples.

### DS1: some bandwidth reserved for overhead

The first widely used digital transmission format inside telephone networks is called DS1, part of the plesiochronous digital hierarchy (PDH). For many people, DS1 and T1 are interchangeable terms, although T1 carrier was the first specific implementation. The DS1 rate of 1.544 Mbps is made up of 24 DS0 digitized voice subchannels of 64 kbps each, which add up to 1.536 Mbps. For every 24 DS0 channels, which can be combined in various ways to reach 1.536 Mbps, an additional 8 kbps is reserved for synchronization and framing.
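The DS1 arithmetic checks out directly; 64 kbps is the standard rate for one digitized voice channel (8,000 samples per second at 8 bits each):

```python
DS0_RATE_BPS = 64_000    # one digitized voice channel: 8,000 samples/s x 8 bits
DS0_CHANNELS = 24
FRAMING_BPS = 8_000      # reserved for synchronization and framing

payload_bps = DS0_CHANNELS * DS0_RATE_BPS
print(payload_bps)                 # 1536000: the 24 DS0 subchannels
print(payload_bps + FRAMING_BPS)   # 1544000: the full DS1 rate
```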

### FDDI: overhead hidden from the physical layer user

With the obsolescent Fiber Distributed Data Interface (FDDI), the rate normally quoted is 100 Mbps, and that rate is indeed fully available to the function that uses the FDDI stream. In actuality, however, the signal on the FDDI medium uses 4B/5B encoding: every four bits of data are sent as a five-bit code group, with the extra code bit used for synchronization. So, while the FDDI physical service makes 100 Mbps available to the function that uses it, the real line rate is 125 Mbps.
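Under 4B/5B encoding, which sends five code bits on the medium for every four data bits, the line rate follows directly from the quoted data rate:

```python
DATA_RATE_MBPS = 100
CODE_BITS = 5   # bits actually placed on the medium per code group
DATA_BITS = 4   # user data bits carried in each code group

line_rate_mbps = DATA_RATE_MBPS * CODE_BITS / DATA_BITS
print(line_rate_mbps)  # 125.0
```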

When one moves up to data link, which, in turn, has its user entity, the bandwidth has several components:

• The maximum frame rate for the frame size in use; when testing LAN connections, this should be the listed theoretical maximum rate for that frame size on the medium[4]
• The number of bits in each frame that are available for data from the user entity

The product of the maximum number of frames and the bits available per frame would seem a reasonable estimate of bandwidth at the frame level, but this assumes all frames are of equal length. Since the overhead header and trailer are the same regardless of the payload length, long frames are more bandwidth-efficient than short frames when the framing protocol allows variable lengths. Ethernet, for example, allows payloads from 46 bytes (in a 64-byte minimum frame) up to 1500 bytes. Bandwidth figures, therefore, are not very meaningful if the test frame length is not known.
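The efficiency difference can be sketched as follows, using the 46-byte minimum Ethernet payload (the 64-byte figure often quoted is the minimum total frame, not the minimum data field) and 26 bytes of per-frame overhead:

```python
OVERHEAD_BYTES = 26  # preamble + addresses + type field + frame check sequence

def frame_efficiency(payload_bytes: int) -> float:
    """Fraction of transmitted bytes that carry user data."""
    return payload_bytes / (payload_bytes + OVERHEAD_BYTES)

print(frame_efficiency(1500))  # ~0.983: maximum frames waste very little
print(frame_efficiency(46))    # ~0.639: minimum frames lose over a third
```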

In the real world, however, a number of factors reduce the bandwidth below the theoretical maximum. While a nominal 10-megabit Ethernet/IEEE 802.3 stream does not really use 100-nanosecond bits on the line, assume that it does. Reductions immediately appear due to overhead transmissions and protocol-enforced quiet times. In reality, the stream is not a sequence of bits but a sequence of frames. Each frame has 64 bits of hardware-synchronization preamble, 96 bits of addressing, 16 bits that identify aspects of the payload, a 32-bit error-checking field, and up to 12,000 bits of data.

Ignoring additional complications of the protocol, out of every 1526 bytes the system can transmit 1500 data bytes. The 26 overhead bytes are relatively minor on a frame with a 1500-byte data field, but a very appreciable overhead on minimum-length frames with 46-byte data fields.
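Adding the minimum inter-frame gap, one of the "protocol-enforced quiet times" mentioned above, yields the classic maximum frame rates for 10 Mbps Ethernet; the byte counts below restate the frame fields from the text:

```python
RATE_BPS = 10_000_000
PREAMBLE_BYTES = 8   # 64-bit preamble, including start-of-frame delimiter
HEADER_BYTES = 14    # two 6-byte addresses plus 2-byte type/length field
FCS_BYTES = 4        # 32-bit frame check sequence
IFG_BYTES = 12       # 96-bit minimum inter-frame gap

def max_frames_per_second(payload_bytes: int) -> float:
    """Upper bound on frame rate once every on-the-wire byte is counted."""
    wire_bytes = (PREAMBLE_BYTES + HEADER_BYTES + FCS_BYTES
                  + payload_bytes + IFG_BYTES)
    return RATE_BPS / (wire_bytes * 8)

print(int(max_frames_per_second(46)))    # 14880 frames/s at minimum size
print(int(max_frames_per_second(1500)))  # 812 frames/s at maximum size
```

These two figures, 14,880 and 812 frames per second, are the well-known theoretical maxima used when benchmarking 10 Mbps Ethernet devices.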

## Errors and bandwidth at data link

The simpler case of error handling is with LAN MAC frames. If a frame arrives with an incorrect error-checking field, it is discarded rather than delivered to the MAC user. As long as the assumptions are stated, bandwidth can be quoted either as frames per second or as error-free frames per second, again specifying the frame length.
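The error-free frame rate is then just the raw frame rate scaled by the fraction of frames that survive; the 1% frame error rate below is purely illustrative:

```python
def error_free_frame_rate(frames_per_second: float,
                          frame_error_rate: float) -> float:
    """Frames actually delivered to the MAC user, after errored
    frames are discarded at the receiver."""
    return frames_per_second * (1 - frame_error_rate)

# A 14,880 frames/s stream with a 1% frame error rate delivers
# roughly 14,731 usable frames per second.
print(error_free_frame_rate(14_880, 0.01))
```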

## References

1. Shannon CE, Weaver W (1948), The Mathematical Theory of Communication, p. 17
2. This article deals only with forward error correction, which is meaningful in one-way communications. Error correction based on acknowledgement and retransmission is beyond the scope of this article.
3. Shannon & Weaver, p. 37
4. Bradner S, McQuaid J (March 1999), Benchmarking Methodology for Network Interconnect Devices, RFC 2544, Section 20, p. 12