I also need to regularly test throughput for many sites that are connected to two data centers via IPsec tunnels built on commodity internet circuits. I ended up rolling my own solution using nuttcp, which exists for both Windows and Linux. I initiate the tests from Windows, because I have at least one Windows workstation at each site, and use Linux for the server-side piece in each data center.
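For anyone who wants to replicate this, the basic shape is just a persistent nuttcp server in each data center and a timed client run from the site workstation. A minimal sketch, with placeholder hostnames and durations:

```
# In each data center (Linux), run the server/receiver side:
nuttcp -S

# From the site workstation (Windows), run a 30-second upload test
# against the data-center endpoint:
nuttcp -T30 dc1.example.com

# Add -r to test the reverse (download) direction:
nuttcp -r -T30 dc1.example.com
```

The `-S` server can sit in the background indefinitely, which is what makes both the scheduled and the on-demand tests easy to drive.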
I have an automated process that kicks off weekly tests for all sites, and support staff can initiate on-demand tests when they need to. Overall nuttcp works well, but there are some downsides:
The nuttcp payload is a highly compressible pattern, so network infrastructure that does compression will produce bogus results. Fortunately none of my IPsec infrastructure compresses traffic, so this isn't a problem for me, but it would be nice to be able to supply a large file of pre-generated random data to test with.
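Generating such a file is easy enough; the harder part is getting nuttcp to send it. A sketch of what I have in mind (the `-s` stdin-sourcing flag is something I believe some nuttcp builds support, so check your version's docs before relying on it):

```shell
# Generate 100 MB of incompressible random data to use as a test payload.
# Random bytes from /dev/urandom defeat any compression in the path,
# unlike nuttcp's default fill pattern.
dd if=/dev/urandom of=/tmp/testpattern.bin bs=1M count=100

# Assumed usage, NOT verified on my builds: some nuttcp versions have a
# -s flag that sources data from stdin instead of the built-in pattern.
# Server:  nuttcp -S
# Client:  nuttcp -s < /tmp/testpattern.bin dc1.example.com
```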
I have not been able to get multi-socket bandwidth tests to run correctly, so everything is constrained to a single TCP socket. nuttcp has options for using multiple TCP sockets and for testing via UDP, but when I've experimented with them the results get very strange and in many cases are completely wrong.
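For reference, the options I've had trouble with are the stream-count and UDP ones. Flag names per the nuttcp documentation; hostnames, counts, and rates below are placeholders:

```
# Four parallel TCP streams (-N); this is where I see odd results:
nuttcp -N4 -T30 dc1.example.com

# UDP test (-u) at a fixed rate (-R; 'm' suffix means Mbps):
nuttcp -u -R50m -T30 dc1.example.com
```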
On high-latency links (like 4G cell modems) the tests sometimes take a long time to complete, around ten minutes when the normal time is about two. In these cases, network traces of the traffic show a weird pattern: the receiver slowly builds up to its maximum TCP window size, and then the test finishes. It's not that big of a deal to me, so I haven't really dug into it.
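One thing I could try on those links is pinning the window explicitly instead of relying on the OS's autotuning, since nuttcp's `-w` option sets the socket buffer/window size. A hedged sketch, sized from a rough bandwidth-delay-product estimate rather than anything I've validated:

```
# Request a 4 MB window to cover a high bandwidth-delay product, e.g.
# 50 Mbps at 600 ms RTT needs roughly 3.75 MB in flight:
nuttcp -w4m -T30 dc1.example.com
```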