
NETWORK PERFORMANCE

The performance of the network was gauged using standard network tools, the primary one being the freeware ttcp utility. This utility measures the bandwidth of a network connection via memory-to-memory host data transfers, producing measurements that are independent of disk speeds. The resulting figure depends on the processor speeds of the end-point host computers, the intrinsic speed of the underlying network fabric, and the efficiency of the lower-level protocols, i.e., the amount of packaging overhead they introduce.
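
As an illustration, the test invocations resembled the following (exact option names vary slightly among ttcp versions, so these command lines should be read as representative rather than as verbatim transcripts of our test scripts): a receiver is started in memory-sink mode on one host, and a transmitter then streams a fixed number of buffers to it from memory.

\begin{verbatim}
  # On the receiving host: -r receive, -s sink the data in memory
  # (do not write it to disk), -l use 65536-byte buffers.
  ttcp -r -s -l 65536

  # On the transmitting host: -t transmit, -s source the data from
  # memory, -n number of buffers to send; add -u for a UDP test.
  ttcp -t -s -l 65536 -n 2048 remote-host
\end{verbatim}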


 
Figure 7: Bandwidth test results between Keck Observatory and the Caltech campus in Pasadena, California, over the ACTS satellite network. Measurements of UDP and TCP transfers of a fixed-length data stream are shown. TCP exhibits a remarkable dependence on the bit error rate, as is demonstrated by measurements before and after the conversion of microwave antennae to fiber optic cable in Hawaii. Also evidenced is the need to select an adequate TCP window size for satellite networks. Because of limitations in the SunOS kernel, we have been limited to a TCP window size of 1 Mbyte, approximately one-third of the preferred value for this network.  
\begin{figure}
 \begin{center}
 \epsfxsize=4in
 \epsfbox{figures/tests.eps}
 \end{center}
\end{figure}

The issue of end-point host processor speed was recognized at the beginning of the project, and we obtained the fastest machines then available that were also compatible with the Keck Observatory control software. These SPARCstation 20/51 workstations were additionally equipped with 1 Mbyte of level-2 processor cache to increase network throughput.

The second issue of importance in assessing the performance of the network concerns the intrinsic speed of the network fabric itself. As has been discussed, the California ATM network between Caltech and JPL was configured to run at speeds up to OC-3 (155 Mbit/sec). We confirmed this figure through simple tests between our Caltech end-point and a JPL Cray system at the HDR site: with little tuning we were able to measure effective TCP and UDP bandwidths in excess of 85 Mbit/sec. Similarly, the ACTS satellite connection between California and Hawaii was configured to run at OC-3 speeds. (Although ACTS is capable of OC-12 [622 Mbit/sec] communication, the steerable antenna that reaches Hawaii supports only OC-3 speeds.) Again, this speed was measured using our end-point host and the JPL Cray, with ACTS placed in a ``bent pipe'' configuration to connect the two. In contrast, the Hawaii ATM network was configured to run at only DS-3 (45 Mbit/sec) speeds. Although the network was originally intended to run at this speed only while the microwave antennae were in use on the Big Island of Hawaii, a lack of OC-3 interface cards for GTE Hawaiian Telephone's ATM switches prevented us from increasing the speed of the Hawaii network during the later stages of the experiment. This limitation set the maximum speed of our network at 45 Mbit/sec (DS-3).

Finally, since our performance measurements are computed from transmission speeds of actual user data, the results also reflect the amount of packaging overhead in the lower-level protocols. In the case of TCP packets, this overhead includes the TCP and IP headers (20 bytes each), an ATM CRC (8 bytes), and an ATM header in each cell (5 bytes). (See Figure 5.) We addressed this issue by adjusting a number of TCP and IP parameters to minimize the fractional overhead. First, we used a large TCP packet size of 65536 bytes for all testing. Unfortunately, this may give slightly skewed results, as it is difficult to control the packet size used by the end-point systems in non-testing situations: the systems' network drivers adjust the TCP packet size dynamically in an attempt to optimize throughput. This parameter is not critical, however, as TCP packets are broken into smaller segments for transmission. The second modification was to raise the Maximum Segment Size (MSS) of these segments to 1500 bytes, approximately a factor of 3 above the value normally used and the highest value that routers can safely be assumed to handle. Thus, each 1480 bytes of user data is accompanied by a 20-byte TCP header for that segment. Finally, the Maximum Transmission Unit (MTU) for IP was increased to 9180 bytes. In the case of TCP, any value in excess of the MSS (plus 20 bytes of IP header) is sufficient to ensure that each TCP segment is transmitted within a single IP packet. In the case of UDP, this value limits the quantity of UDP data that may be transmitted in a single IP packet to 9160 bytes.
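
For concreteness, the sketch below shows how such parameters might be requested through the BSD sockets interface. It is illustrative only: our actual tuning was performed through ttcp options and kernel and interface configuration, the behavior of TCP_MAXSEG is system-dependent (most stacks allow the value only to be lowered), and the 9180-byte interface MTU is set at the system level rather than per socket.

\begin{verbatim}
#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>

int main(void)
{
    int s, mss;

    s = socket(AF_INET, SOCK_STREAM, 0);
    if (s < 0) {
        perror("socket");
        return 1;
    }

    /* Request a 1500-byte maximum segment size, matching the value
       quoted in the text.  Most TCP stacks treat this as an upper
       bound and will only lower, never raise, the negotiated MSS.  */
    mss = 1500;
    if (setsockopt(s, IPPROTO_TCP, TCP_MAXSEG, &mss, sizeof(mss)) < 0)
        perror("TCP_MAXSEG");

    /* The 9180-byte IP MTU is an interface-level setting (e.g. via
       ifconfig) and is not adjusted per socket.                    */

    /* ... connect() to the peer, then write() 65536-byte buffers;
       the kernel segments each write down to the MSS ...           */

    close(s);
    return 0;
}
\end{verbatim}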

Given these values, we may then calculate the TCP/IP data transmission efficiency for our network. The number of 53-byte ATM cells required to transmit a single TCP segment is given by:

\setcounter{equation}{1}
\begin{eqnarray}
 n & = & \left\lceil (\mbox{data bytes} + \mbox{TCP header} + \mbox{IP header} + \mbox{ATM CRC}) / 48 \right\rceil \\
   & = & \left\lceil (1480 + 20 + 20 + 8) / 48 \right\rceil \\
   & = & 32 \hbox{ cells.}
\end{eqnarray}

Therefore, the efficiency (ratio of data bytes to transmitted bytes) is:

\setcounter{equation}{4}
\begin{eqnarray}
 \epsilon & = & 1480 / (32 \mbox{ cells} \times 53 \mbox{ bytes/cell}) \\
          & = & 87\%
\end{eqnarray}

This implies a maximum data throughput for our network of 87% of DS-3, or approximately 39 Mbit/sec.
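
The same arithmetic can be restated as a short program; the fragment below simply re-derives equations (2) through (6) and the 39 Mbit/sec figure, and is included only as a convenience for experimenting with other segment sizes.

\begin{verbatim}
#include <stdio.h>

int main(void)
{
    int data   = 1480;             /* user bytes per TCP segment      */
    int tcp    = 20, ip = 20;      /* TCP and IP header bytes         */
    int crc    = 8;                /* ATM CRC bytes                   */
    int cell   = 53, payload = 48; /* ATM cell size and payload bytes */
    int total  = data + tcp + ip + crc;
    int cells  = (total + payload - 1) / payload;    /* round up      */
    double eff = (double) data / (cells * cell);

    printf("%d cells, efficiency %.0f%%, throughput %.1f Mbit/sec\n",
           cells, 100.0 * eff, 45.0 * eff);          /* 45 = DS-3     */
    return 0;
}
\end{verbatim}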

The most complex part of optimizing the throughput of our network involved the TCP-LFN extensions to the SunOS kernel. As mentioned previously, we employed a special package from Sun Consulting to augment the SunOS kernel with extended TCP windows and the other capabilities outlined in RFC 1323. Unfortunately, a number of limitations of SunOS 4.1.4 conspire to prevent one from obtaining extremely large window sizes, regardless of the TCP-LFN software. In our case, the compiled-in kernel limit of 2 Mbytes of mbuf memory (the kernel buffers that hold IP packets) turned out to be the major constraint, limiting our window size to no more than 1 Mbyte. This is approximately one-third of the optimal value indicated in Figure \ref{fig:lfn}. As a result, our final tuned network delivered a maximum TCP/IP performance of approximately 15 Mbit/sec, about one-third of the 39 Mbit/sec expected data throughput (Figure 7).
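
The preferred window size itself follows from the bandwidth-delay product of the link. Assuming a round-trip time of roughly 0.6 sec for the geostationary satellite path (an approximate figure used here for illustration, not a measured one), the window needed to keep the link full, denoted $W_{\rm opt}$ here, is

\begin{displaymath}
 W_{\rm opt} \simeq B \times \mbox{RTT} \simeq 39 \mbox{ Mbit/sec}
 \times 0.6 \mbox{ sec} \approx 2.9 \mbox{ Mbyte},
\end{displaymath}

roughly three times the 1 Mbyte window permitted by the kernel, consistent with the factor-of-three shortfall in throughput.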

Although perhaps disappointing in a relative sense, this bandwidth is far in excess of T1 speed (1.544 Mbit/sec) and allows an 8 Mbyte image to be transferred in approximately 5 seconds. As a further comparison, this bandwidth exceeds by 50% that available on the local-area Ethernet network at the Keck Telescope itself.
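
The image-transfer estimate is simple arithmetic on the measured bandwidth:

\begin{displaymath}
 t \simeq 8 \mbox{ Mbyte} \times 8 \mbox{ bits/byte} \; / \;
 15 \mbox{ Mbit/sec} \approx 4.3 \mbox{ sec},
\end{displaymath}

consistent with the approximately 5 seconds quoted above.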



 
Patrick Shopbell
3/17/1998