On Distributed Communications Series

IV. Priority, Precedence, and Overload

III. The Distributed Adaptive Message Block Network

In order that a military communications network die gracefully, it must overload gracefully. The proposed broadband distributed network described in this series has several orders of magnitude greater communications capability than any existing military network. Although overloads would not normally be expected in such a network, we must still plan for the eventuality.

In the Distributed Adaptive Message Block Network, all traffic from each originating circuit is chopped into small blocks of data called Message Blocks. Each Message Block is "rubber stamped" with the symbols of the end destination, the originating station, and some other housekeeping data. These Message Blocks are transmitted from Switching Node to Switching Node, eventually reaching the desired end station by a reasonably efficient path.
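
To make this concrete, the sketch below models a Message Block as a simple data structure. The field names and the 128-byte slice size are illustrative assumptions for exposition, not a format specified in this series; they indicate only the kind of addressing and housekeeping data with which each block is stamped.

    # A minimal sketch of a Message Block; field names and sizes are
    # assumptions for illustration, not the format defined in this series.
    from dataclasses import dataclass

    @dataclass
    class MessageBlock:
        destination: int   # symbol of the end destination station
        origin: int        # symbol of the originating station
        serial: int        # housekeeping: order for reassembly at the end station
        payload: bytes     # one small slice of the originating circuit's traffic

    def chop(traffic: bytes, origin: int, destination: int, size: int = 128):
        """Chop one originating circuit's traffic into stamped Message Blocks."""
        return [MessageBlock(destination, origin, n, traffic[i:i + size])
                for n, i in enumerate(range(0, len(traffic), size))]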

In order to handle real-time digital voice transmission, it was necessary to limit the amount of in-transit storage at each node to minimize differential-path delay times. Because of this restriction on storage capacity, consideration was limited to routing doctrines that use little storage at the Switching Nodes. While such doctrines were expected to be less efficient in line utilization than doctrines that allow a backlog of many bits at each switching center, simulation has shown that little circuit-utilization performance is lost: most of the high line utilization of store-and-forward can be had with very little in-transit storage. By limiting the amount of storage, the store-and-forward system can be made to exhibit one of the most useful features of a line-switching system; i.e., avoidance of store-and-forward oscillation by immediate confirmation of receipt of each message. This is accomplished while retaining most of the store-and-forward system's advantages: greater alternate-routing capability and equitable sharing of a single channel by a plurality of users.

Hot-Potato Routing

In the distributed network routing scheme under consideration, the policy is that if the preferred path is busy, the in-transit Message Block is not stored, but is instead sent out over a less efficient, but non-busy, link. This rapid passing along of messages without delay, even if a secondary route must be chosen, is called a "hot-potato" routing doctrine. (Each node tries to get rid of its messages as if they were "hot potatoes" and the node were not wearing gloves.)
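
A minimal sketch of this doctrine follows, under the assumption that each node holds a table ranking the outgoing links best-first for every destination (the dynamically-updated routing table described below); the names are illustrative.

    # Hot-potato sketch: never hold a block waiting for the preferred link;
    # take the best link that is free right now.
    def pick_output_link(ranked_links, busy):
        """ranked_links: output links ordered best-first for this destination.
        busy: the set of links currently transmitting."""
        for link in ranked_links:
            if link not in busy:
                return link    # best non-busy link, even if a second choice
        return None            # every link busy: the overload case choking guards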

With such a doctrine, only enough storage at each node need be provided to permit retransmitting messages if an acknowledgment of correct receipt is not received from the adjacent station within a prescribed time interval. Message storage capacity is modest.[1]
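
That storage can be sketched as a small table of blocks awaiting acknowledgment, each entry discarded when correct receipt is confirmed or retransmitted when its timer expires. The timeout value and the bookkeeping below are illustrative assumptions.

    # Sketch of the only in-transit storage the doctrine needs: blocks held
    # until the adjacent node confirms correct receipt. Values illustrative.
    ACK_TIMEOUT = 0.003   # sec; roughly a 150-mi round trip plus node handling

    pending = {}          # (link, serial) -> (block, time_sent)

    def on_send(link, block, now):
        pending[(link, block.serial)] = (block, now)

    def on_ack(link, serial):
        pending.pop((link, serial), None)     # receipt confirmed; free the buffer

    def on_timer(now, resend):
        for key, (block, sent) in list(pending.items()):
            if now - sent > ACK_TIMEOUT:
                resend(block)                 # try again, perhaps on another link
                pending[key] = (block, now)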

The Line-Switching Illusion

Although the broadband distributed network is described as a store-and-forward system (which it is, as far as internal network operation is concerned), the fast switching time of the hot-potato doctrine presents the illusion of a short-delay virtual circuit between two users. While it is anticipated that this mechanism will eliminate the "open servo loop" store-and-forward delay problem discussed above, this is only part of the problem.

Figure 1 portrays several Message Blocks arriving at a distributed network Switching Node. Each node contains a routing decision mechanism. Messages are shown to arrive simultaneously from N different lines. Three of the messages--those coming in on Lines 1, 3, and "N"--are all to be delivered to Station A. Line 2 carries a message directed to Station D, while Lines 4 and 5 carry messages which are to terminate at Stations E and F, respectively.

A dynamically-updated routing table stored at each node indicates the best route for each Message Block to take to reach its directed end terminal station. In the example, three separate messages are received simultaneously at the node, each wishing to take the "best" path toward A. The shortest route is via Line 2, but all three messages cannot be sent out simultaneously, so two of the messages must travel over alternate paths. Although we could have built a system that stored the currently undeliverable messages until the line to A cleared, we have chosen to limit ourselves to a system which sends the "overflow" messages over less efficient paths, effectively preventing a common type of overload (see ODC-II, -III). When two messages seek the same preferred line, a random choice is used to select which message is sent out over the best path. In other words, if three signals are all to be directed towards A, it is not felt that it makes much difference whether the first-choice, second-choice, or third-choice path is taken, since only a few extra nodes must be traversed before the end station is reached. Simulation has shown that this use of secondary paths in lieu of storage is a surprisingly effective doctrine, enabling transmission of about 30-50 per cent of the peak theoretical capacity possible in a store-and-forward system having infinite storage and infinite delays.
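
The random tie-break might be sketched as follows; the shape of the routing table is an assumption for illustration. Shuffling the simultaneous arrivals and serving them in that order is equivalent to choosing the winner of each contested line at random.

    import random

    # Sketch: assign simultaneously arriving blocks to output lines.
    # routing_table[destination] is assumed to list output lines best-first.
    def assign_lines(blocks, routing_table, busy):
        random.shuffle(blocks)        # random choice among contending blocks
        assignments = {}
        for block in blocks:
            for line in routing_table[block.destination]:
                if line not in busy:
                    busy.add(line)    # each block takes its best free path
                    assignments[block.serial] = line
                    break
        return assignments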

Avoiding Network Overloading--Choking

"Choking" is a policy automatically executed by the computer logic at each Switching Node to cut off new local traffic to a network approaching overload. A sample of such a choking doctrine is that shown in Figs. 2-4 and described below.

Each node, as shown in Fig. 2, has N input lines feeding traffic into the node and N output lines carrying traffic away from the node. In the example, N = 8.

  1. There is an input line and an output line for each remote station.
  2. I_i = input data rate for the i'th line (i = 1,2,...,8).
  3. O_i = output data rate for the i'th line (i = 1,2,...,8).
  4. Compute B = O_1 + O_2 + ... + O_8, the total volume of traffic leaving the node.
  5. Compute L = f(B), where L is the volume of locally-generated traffic that may be allowed to enter the network (a sketch of one such computation follows the list).
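
Steps 4 and 5 might be realized as below. Taking B as the sum of the output rates and f as a hard threshold are assumptions chosen to match the five-out-of-eight detector described later in this section.

    # Sketch of steps 4 and 5 of the choking doctrine; the threshold form
    # of f(B) is an assumption matching the detector of Figs. 2-4.
    LINE_CAPACITY = 1_500_000                 # bits/sec per line, from Fig. 3

    def allowed_local_rate(O, threshold_lines=5):
        B = sum(O)                            # step 4: total outgoing traffic
        if B >= threshold_lines * LINE_CAPACITY:
            return 0                          # choke: admit no new local traffic
        return LINE_CAPACITY                  # step 5: full local input allowed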

If all or almost all of the output lines are busy, the local node avoids feeding new traffic into the network until there is open network output capacity. Implementation of this simple intermittent-connection choking doctrine in the illustration presents a varying network input capacity to the tributary communications center feeding Line 1, proportional to what the network can comfortably carry. Network simulation described in ODC-II and -III confirms that a simple policy that chokes off input signals in a binary manner is sufficient to protect any station or combination of stations from overloading the network while allowing each station to retain a high input rate.

Figure 3 represents a variable volume of traffic passing through a sample node. Each of the eight input lines is assumed to carry a peak capacity of 1,500,000 bits/sec, with the following sample choking doctrine imposed: if the total traffic in the network approaches a volume greater than a preset value, new input is prevented from entering the network. Traffic already in the network is given preference over newly-entering traffic. Thus, once traffic has entered the network, the user is almost guaranteed delivery, with reasonable certainty that his own traffic will not cause a local bottleneck at another switching center and bog down in transit.

In Figs. 2 and 3 the number of lines in simultaneous use is measured to provide a rapid estimate of those intermittent periods when new traffic can safely be entered into the network. The measuring detector might be a simple majority weighting element responding, in this example, to a threshold of five out of eight. If the traffic volume is below the threshold, the full input traffic capacity at the link is allowed. But when the nodal threshold is exceeded, the input is cut off in a binary manner, as shown in Fig. 4. The local user sees these very short on-off periods smoothed out, with the network input capacity appearing as a slowly varying allowable data rate; i.e., the maximum input rate that can be fed into the network without causing an overload.
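
That smoothing can be illustrated by averaging the binary on-off decisions over a short window of samples; the averaging method below is an illustrative assumption.

    # Sketch: rapid on-off choking, averaged over time, appears to the local
    # user as a slowly varying allowable input rate.
    def smoothed_rate(busy_counts, threshold=5, capacity=1_500_000):
        """busy_counts: samples of how many of the 8 output lines are busy."""
        admitted = [capacity if n < threshold else 0 for n in busy_counts]
        return sum(admitted) / len(admitted)  # average allowable data rate

    # e.g., samples [3, 6, 4, 7, 2]: input is admitted on 3 of 5 samples,
    # so the apparent input capacity is 60 per cent of one line's rate.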

The next question we shall consider is how to allocate this total locally-allowable data rate among a large set of local users, or "subscribers."

The Monetary Analogy

Each local communications center in a communications network can feed only up to a certain number of bits per second into the network without overload. This total variable volume may be thought of as a communications resource, or "currency," measured in bits rather than dollars. This currency can theoretically be spent in any manner desired. At present, in military networks we generally allow each user to demand as much of the limited resource of communications capability as he alone desires, limited only by the precedence of his traffic. During times of crisis, military commanders will try to spend more communications currency than exists, and there will be an effect identical to inflation. Messages once labeled "Deferred" will be stamped "Operational Immediate" and jammed back into the input hopper. It is analogous to the inflationary bidding of competing buyers for scarce goods. We temporarily delude ourselves into thinking that we are buying more capability by inflating the precedence indicator. It is only when we are forced to face up to reality by a currency devaluation that we appreciate what has happened. The "Minimize" doctrine, described in Appendix B, can be seen to be analogous to a currency devaluation. In designing a priority handling system, we should never permit ourselves to believe that we have more (or less) usable communications capability than we really have. This implies the need for network-status control feedback loops.


[1] For example, if eight separate full-duplex, 1.5-megabit/sec, 150-mi links feed a single Switching Node, only about 32,000 bits of storage are required at each node. Present estimates are that it will take on the order of one millisecond for a 1024-bit Message Block to be read into a node, for a decision to be made as to the best direction in which to route the Message Block, and for the Message Block to be started on its way to an adjacent node. Thus, if a Message Block has to traverse 30 such nodes in going across the U.S., only about 30 milliseconds will be required to pass through the Switching Nodes (plus another 30 milliseconds transit time through 4000 mi of transmission lines at 133,000 mi/sec). Thus, we anticipate that it may take only about 60 milliseconds to transmit data across the U.S. This is less than the delay encountered in forming digital voice into Message Blocks and unpacking the Message Blocks into audible voice.

This subject is discussed in detail in ODC-VII, -VIII. (ODC is an abbreviation of the series title, On Distributed Communications; the number following refers to the particular volume within the series. A list of all items in the series is found on p. 61.)
