Measuring Network Throughput

2017-12-18 11:24

It happens in the course of my work that I need to understand the bandwidth between a pair of VMs in disparate datacenters or regions. There are a number of tools available that provide accurate results (iperf, I’m looking at you), but often these tools are unavailable on the VMs I am working with. For various reasons I do not reliably have the ability to run yum or apt-get to install the “correct” tools. When it comes to that, I simply generate a file of a known size and move it from one system to another using whatever protocol is appropriate for the test.
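Once the transfer finishes, the throughput is just file size divided by elapsed time. A small helper does the arithmetic (a sketch; `throughput_mbps` is my own name, not a standard tool):

```shell
# Hypothetical helper: given bytes transferred and elapsed seconds,
# print throughput in megabits per second.
throughput_mbps() {
  bytes="$1"; seconds="$2"
  # bits = bytes * 8; Mb/s = bits / seconds / 1,000,000
  awk -v b="$bytes" -v s="$seconds" 'BEGIN { printf "%.2f\n", (b * 8) / s / 1000000 }'
}

# Example: a 3 GB (3221225472-byte) file that took 120 seconds to copy
throughput_mbps 3221225472 120   # → 214.75
```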

Generate a file of known size

https://www.skorks.com/2010/03/how-to-quickly-generate-a-large-file-on-the-command-line-with-linux/

If I need a file quickly, I can generate it from /dev/zero, but the issue is that the file will be filled entirely with zeros. If the tool or protocol I am using performs any data compression, this file will not produce accurate results.

dd if=/dev/zero of=<OUTPUT_FILE> bs=<BLOCK_SIZE> count=<BLOCK_COUNT>
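A quick way to see the problem (a sketch; the /tmp paths are arbitrary): a 1 MB file of zeros collapses to roughly a kilobyte under gzip, so any compressing transport would report wildly inflated throughput.

```shell
# Generate 1 MB of zeros, compress it, and compare sizes.
# (On BSD/OS X, dd wants bs=1m instead of bs=1M.)
dd if=/dev/zero of=/tmp/zeros bs=1M count=1 2>/dev/null
gzip -k /tmp/zeros                 # keeps /tmp/zeros, writes /tmp/zeros.gz
orig=$(wc -c < /tmp/zeros)
comp=$(wc -c < /tmp/zeros.gz)
echo "original: $orig bytes, compressed: $comp bytes"
rm -f /tmp/zeros /tmp/zeros.gz
```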

When I need a file that contains random-ish data and thus is difficult to compress, I will use /dev/urandom to seed the file. I don’t use /dev/random because /dev/random will block, waiting for input from the keyboard, mouse and other parts of the computer to gather enough random bits to populate the file. Since I am regularly using files in excess of 10GB, this could take a while. Sometimes I even generate multi-terabyte files, which take long enough using /dev/urandom as it is.

dd if=/dev/urandom of=<OUTPUT_FILE> bs=<BLOCK_SIZE> count=<BLOCK_COUNT>
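Running the same compression check on a urandom-seeded file (again a sketch) shows the difference: gzip cannot shrink random data, so a transfer of this file reflects real bandwidth.

```shell
# Generate 1 MB of random data and try to compress it.
dd if=/dev/urandom of=/tmp/rand bs=1M count=1 2>/dev/null
gzip -k /tmp/rand                  # random data actually grows slightly
rorig=$(wc -c < /tmp/rand)
rcomp=$(wc -c < /tmp/rand.gz)
echo "original: $rorig bytes, compressed: $rcomp bytes"
rm -f /tmp/rand /tmp/rand.gz
```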

So here is the breakdown of the two commands above: if= is the input source, of= is the output file, bs= is the size of each block, and count= is the number of blocks to write. The final file size is bs × count.
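A smaller block size with a larger count reaches the same total, which helps when a huge bs= value is slow or unsupported. The arithmetic is just shell math (a sketch; the 10 GiB target and 64 MiB block size are arbitrary):

```shell
# Target a 10 GiB file using 64 MiB blocks: count = target / bs.
target=$((10 * 1024 * 1024 * 1024))   # 10 GiB in bytes
bs=$((64 * 1024 * 1024))              # 64 MiB in bytes
count=$((target / bs))
echo "dd if=/dev/urandom of=my_test_file bs=64M count=$count"
# → dd if=/dev/urandom of=my_test_file bs=64M count=160
```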

Example:

To produce a 3 GB file in…

Linux: dd if=/dev/urandom of=~/my_test_file bs=1G count=3

BSD/OS X: dd if=/dev/urandom of=~/my_test_file bs=1g count=3
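Before starting a long transfer, it is worth confirming the file actually came out at the expected size (a sketch; the 4 MiB size and /tmp path are just for illustration):

```shell
# Generate a small 4 MiB test file and check its byte count.
dd if=/dev/urandom of=/tmp/size_check bs=1M count=4 2>/dev/null
size=$(wc -c < /tmp/size_check)    # wc -c is portable across Linux and BSD
echo "$size"                       # 4 * 1048576 = 4194304 bytes
rm -f /tmp/size_check
```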