Bandwidth, latency and geographical distance in load testing

Because we usually talk about latency in tiny numbers (e.g. 20 milliseconds), it is easy to overlook just how big an effect latency can have on the effective bandwidth between geographically distant locations. While running some recent tests to measure the available bandwidth from our cloud engines, I accidentally ran a test between a load engine and a server that were more than 2600 miles apart. Knowing that our server and engine should both have delivered better results, it took me a few minutes to realize that one mistaken click (choosing where to start the load engine) had drastically affected my test results.

Bandwidth and latency are factors in a complex relationship that determines how quickly users will see a page in their browser (along with protocol, queuing, and processing delays, etc.). Network bandwidth is the net bit rate, channel capacity, or maximum throughput of a logical or physical communication path in a digital communication system. Network latency is the shortest possible round-trip time of a request and corresponding response along that path.
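For a single connection, these two quantities combine into a simple ceiling: throughput cannot exceed the amount of data in flight (the window) divided by the round-trip time. The sketch below illustrates that bound; the 256 KB window and the RTT values are illustrative assumptions, not measurements from the tests described here.

```python
# Bandwidth-delay sketch: the maximum throughput a single connection can
# reach is bounded by (window size / round-trip time). The window size and
# RTT values below are illustrative assumptions only.

def max_throughput_mbps(window_bytes: int, rtt_ms: float) -> float:
    """Upper bound on single-connection throughput, in megabits per second."""
    return (window_bytes * 8) / (rtt_ms / 1000) / 1_000_000

# A 256 KB window at various round-trip times:
for rtt_ms in (1, 10, 40, 80):
    print(f"RTT {rtt_ms:3d} ms -> ~{max_throughput_mbps(256 * 1024, rtt_ms):8.1f} Mbps")
```

With the same window, quadrupling the round-trip time cuts the per-connection ceiling to a quarter, which is why latency matters so much more than most people expect.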

One way to understand the two is to picture a freeway: bandwidth is determined by how many lanes the freeway has, and latency is set by the speed limit on the road and the amount of traffic – which both limit how fast a vehicle can make a trip and return. Let's say we have a freeway with a single lane and we are moving all our possessions from one house to another at opposite ends of the freeway. This will take many car trips. Ignoring the time to load and unload, the total move time will be the number of trips multiplied by the total round-trip time (latency). We can improve that by roping our friends into helping with the move – allowing us to send many cars at once. But there is a limit to this – after a certain point, adding more cars actually slows down the rate at which our possessions get moved due to the traffic congestion. We can add more lanes (bandwidth), but with enough traffic, even a 6-lane highway will be limited by the latency. For those who want to pursue the analogy further: switching to trucks, which can carry more cargo in a trip, is analogous to increasing the message or packet size. Much has been written on this topic for readers who want to dig deeper.
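To make the analogy concrete, here is a toy model with hypothetical numbers: moving a fixed number of boxes with some number of cars traveling in parallel, each carrying a few boxes per round trip. Adding cars (bandwidth) or switching to trucks (bigger packets) cuts the number of round trips, but every trip still pays the full round-trip time.

```python
# Toy model of the freeway analogy. All numbers are illustrative, and
# loading/unloading time and congestion effects are ignored.
import math

def move_time_minutes(items: int, cars: int, per_car: int, round_trip_min: float) -> float:
    """Total move time: number of round trips multiplied by the round-trip time."""
    trips = math.ceil(items / (cars * per_car))
    return trips * round_trip_min

print(move_time_minutes(items=120, cars=1, per_car=4, round_trip_min=30))   # one car        -> 900.0
print(move_time_minutes(items=120, cars=5, per_car=4, round_trip_min=30))   # more "lanes"   -> 180.0
print(move_time_minutes(items=120, cars=5, per_car=12, round_trip_min=30))  # bigger "packets" -> 60.0
```

No matter how much parallelism or capacity we add, the last trip still takes one full round trip, so latency sets a floor that extra bandwidth cannot remove.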

A key point to take away is that geographical distance can greatly decrease the effective bandwidth between two points due to the increased latency. I ran a test this morning to determine the effective bandwidth between two of our load-generating engines and two servers. The load engines and servers were located on the east and west coasts (one of each in each location). In both cases, my tests achieved effective network throughput near the server's theoretical maximum when the load generator was close rather than far. For example, the test peaked at 95 Mbps when both the engine and the server were in data centers that were physically close in northern Virginia (<10 miles apart). Move the server to San Francisco (2600 miles away) and the effective throughput dropped to 50 Mbps. Reversing the test produced a mirror image of the results. While this is relatively predictable, the size of the drop in effective available bandwidth surprises many of us.
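As a back-of-the-envelope check, distance alone puts a floor under the round-trip time: signals in fiber travel at roughly two-thirds the speed of light in a vacuum, so 2600 miles costs on the order of 40 ms per round trip before any routing, queuing or protocol overhead. Combined with the same window/RTT bound as the earlier sketch (again assuming an illustrative 256 KB window), that floor lands in the same ballpark as the drop described above; the close agreement is illustrative only, not an analysis of the actual test setup.

```python
# Back-of-the-envelope estimate. The fiber propagation speed and 256 KB
# window are assumptions used to show the order of magnitude; real paths
# add routing hops, queuing and protocol overhead on top of this floor.

SPEED_IN_FIBER_KM_PER_S = 200_000  # roughly two-thirds of the speed of light

def min_rtt_ms(distance_miles: float) -> float:
    """Lower bound on round-trip time imposed by propagation delay alone."""
    km = distance_miles * 1.609
    return 2 * km / SPEED_IN_FIBER_KM_PER_S * 1000

def ceiling_mbps(window_bytes: int, rtt_ms: float) -> float:
    """Same window/RTT bound as the earlier sketch."""
    return (window_bytes * 8) / (rtt_ms / 1000) / 1_000_000

for miles in (10, 2600):
    rtt = min_rtt_ms(miles)
    print(f"{miles:5d} miles: RTT >= {rtt:5.1f} ms, "
          f"~{ceiling_mbps(256 * 1024, rtt):9.1f} Mbps per connection")
```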

Another key take-away: be sure you understand the limitations of your test infrastructure before you start testing your application!

Chris Merrill, Chief Engineer
