Published March 2007
Book Section - Chapter (Open Access)

Packet Loss Burstiness: Measurements and Implications for Distributed Applications

Abstract

Many modern massively distributed systems deploy thousands of nodes that cooperate on a computation task, and network congestion inevitably occurs in these systems. Most applications rely on congestion control protocols such as TCP to protect the system from congestion collapse, and most TCP congestion control algorithms use packet loss as the signal to detect congestion. In this paper, we study the packet loss process at sub-round-trip-time (sub-RTT) timescales and its impact on loss-based congestion control algorithms. Our study suggests that packet loss at sub-RTT timescales is very bursty. This burstiness has two effects. First, it leads to complicated interactions between different loss-based algorithms. Second, it makes the latency of data transfers under TCP hard to predict. Our results suggest that the design of a distributed system has to take the nature of the packet loss process seriously and carefully select the congestion control algorithms best suited to the distributed computation environment.
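To make the second effect concrete, below is a minimal Python sketch (not from the paper; the helper name aimd_transfer_time, the loss probabilities, and the transfer size are all illustrative assumptions). It models a TCP-Reno-like AIMD sender that backs off at most once per RTT, and compares a scenario where losses are spread evenly across rounds against one where the same per-packet loss rate is concentrated into a few bursty rounds.

import random

def aimd_transfer_time(total_pkts, loss_round_prob, seed):
    """Rounds (RTTs) needed to deliver total_pkts with an AIMD window
    that halves at most once per lossy round, mimicking TCP Reno's
    one-multiplicative-decrease-per-RTT behavior. Hypothetical toy
    model: a lossy round delivers nothing, a clean round delivers the
    whole window."""
    rng = random.Random(seed)
    w, sent, rounds = 1.0, 0, 0
    while sent < total_pkts:
        rounds += 1
        if rng.random() < loss_round_prob:
            w = max(1.0, w / 2)   # one congestion event in this RTT
        else:
            sent += int(w)        # whole window delivered
            w += 1.0              # additive increase per RTT
    return rounds

# Assumed scenario: with evenly spread loss, ~10% of rounds contain a
# loss; with bursty loss at the same per-packet loss rate, the losses
# cluster so only ~2% of rounds contain a congestion event.
uniform = [aimd_transfer_time(10_000, 0.10, s) for s in range(20)]
bursty = [aimd_transfer_time(10_000, 0.02, s) for s in range(20)]
print("spread-out loss: mean", sum(uniform) / len(uniform), "RTTs")
print("bursty loss:     mean", sum(bursty) / len(bursty), "RTTs")
print("bursty loss:     range", min(bursty), "-", max(bursty), "RTTs")

In this toy model the bursty case triggers far fewer back-offs for the same per-packet loss rate, and its run-to-run range of completion times illustrates why transfer latency under loss-based congestion control is hard to predict when losses cluster within an RTT.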

Additional Information

© 2007 IEEE.

Attached Files

Published - 04228140.pdf (212.0 kB)
md5:1a9a18c4461214187dd2b11b09c012b6

Additional Details

Created: August 19, 2023
Modified: October 25, 2023