It happens in the course of doing my work that I need to understand the bandwidth between a pair of VMs in disparate datacenters or regions. There are a number of tools available that provide accurate results (iperf, I’m looking at you), but often I find that these tools are unavailable on the VMs that I am working with. For various reasons I do not reliably have the ability to run yum or apt-get and install the “correct” tools. When it comes to that, I simply generate a file of a known size and move it from one system to another using whatever protocol is necessary to perform the test.
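For example, assuming a test file already exists and a hypothetical host user@remote-vm is the far end, the measurement is just a timed copy:

# throughput ≈ file size / elapsed time reported by time
time scp ~/my_test_file user@remote-vm:/tmp/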
If I need a file quickly, I can generate it from /dev/zero, but the issue with this is that the file will be filled with zeros. If the tool or protocol I am using performs any data compression, this file will not produce accurate results.
dd if=/dev/zero of=<OUTPUT_FILE> bs=<BLOCK_SIZE> count=<BLOCK_COUNT>
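A quick way to see the problem for yourself (the sizes here are just illustrative): pipe each source through gzip and count the output bytes.

# ~100 MB of zeros gzips down to roughly 100 KB
dd if=/dev/zero bs=1M count=100 2>/dev/null | gzip -c | wc -c
# random data barely compresses at all; the output stays close to 100 MB
dd if=/dev/urandom bs=1M count=100 2>/dev/null | gzip -c | wc -c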
When I need a file that contains random-ish data and thus is difficult to compress, I will use /dev/urandom to seed the file. I don’t use /dev/random because it will block, waiting for input from the keyboard, mouse and other parts of the computer to create enough random bits to populate the file. Since I am regularly using files in excess of 10GB, this could take a while. Sometimes I even generate multi-terabyte files, which take forever using /dev/urandom as it is.
dd if=/dev/urandom of=<OUTPUT_FILE> bs=<BLOCK_SIZE> count=<BLOCK_COUNT>
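If you want a feel for how long a big file will take up front, time a smaller run first and extrapolate (the 1 GiB size here is just an example):

# a 10 GB file will take roughly ten times the elapsed time reported here
time dd if=/dev/urandom of=~/my_test_file bs=1M count=1024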
So here is the breakdown of the two commands above; a worked size example follows the list.
dd – copies its input to its output (stdin to stdout by default)
if – read from an input file instead of stdin
of – write to an output file instead of stdout
bs – sets both the input and output block size (you could specify these separately with ibs and obs)
count – specifies how many input blocks (of size bs) to copy
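The resulting file size is simply bs × count, so you can trade block size against block count. For instance, this sketch produces the same 3 GiB as the examples below without asking dd to buffer a full gigabyte at once:

# 3072 blocks of 1 MiB = 3 GiB
dd if=/dev/urandom of=~/my_test_file bs=1M count=3072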
To produce a 3 GB file in a single invocation on Linux, where GNU dd takes an uppercase size suffix:
dd if=/dev/urandom of=~/my_test_file bs=1G count=3
And on OS X, whose BSD dd expects a lowercase suffix:
dd if=/dev/urandom of=~/my_test_file bs=1g count=3
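One caveat with the one-gigabyte-block form: on Linux, a single read from /dev/urandom can return fewer bytes than requested (historically capped at 32 MiB per read), and plain dd counts a short read as a full block, so the file can come out smaller than bs × count. GNU dd’s iflag=fullblock tells it to keep reading until each block is actually full:

# keep re-reading until each 1G block is complete, so the file really is 3 GB
dd if=/dev/urandom of=~/my_test_file bs=1G count=3 iflag=fullblock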