Abstract
Cloud networks use virtual connections to link virtual machines distributed across cloud sites. They are increasingly deployed because software-based provisioning is flexible and avoids the cost of building physical network infrastructure. However, their extensive virtualization makes it unclear how well the established practices of conventional networks translate to them. We study throughput measurements over a Google Cloud network and over a matching hardware-emulated conventional network, which provide production and exploratory conditions, respectively. The measurements span connections representing local, cross-continental, and around-the-Earth distances. We study the effects of parallel flows, congestion control algorithms, and retransmissions on the network throughput profile, expressed as a function of RTT. We compare the throughput profile of the Google Cloud network with those of the emulated network under various loss conditions, including conditions too disruptive or expensive to reproduce in the former. Our analysis, based on the concave-convex shape and utilization-concavity coefficients of the throughput profiles, indicates an overall agreement in performance between the two networks, thereby justifying the use of conventional network emulations to analyze cloud networks. In terms of practical use, our study establishes that BBR and BBRv2 alpha TCP achieve higher throughput than loss-based congestion control algorithms under most network configurations, especially under losses at large RTT.