This article provides examples of using two common workload simulators, DiskSpd and fio, to benchmark disk performance and simulate Veeam Backup & Replication disk I/O:
DiskSpd is a command-line tool commonly used in Windows-based environments for storage subsystem testing and performance validation.
Note: While DiskSpd is available for Linux, fio is included in this article because it is easier to install (DiskSpd for Linux must be compiled, whereas fio binaries are available for most Linux distros through their package manager) and more commonly used.
diskspd [options] [target]
Any command that contains the -w switch must not target an actual backup file, as that would overwrite the backup file contents. You should only target a backup file when performing the listed restore performance tests.
Compatible Targets:
- A regular file (e.g., D:\testfile.dat)
- A partition (e.g., D:)
- A physical disk (e.g., #2)
You can specify multiple targets, allowing you to simulate several jobs running at the same time.
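For example, a single run can target two files to approximate two concurrent jobs (an illustrative sketch; the file names and sizes are assumptions, so substitute paths on the repository volume you are testing):

```shell
# Illustrative only: approximate two concurrent backup jobs writing
# to the same repository volume by listing two targets.
diskspd.exe -c25G -b512K -w100 -Sh -d600 D:\testfile1.dat D:\testfile2.dat
```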
-b specifies the size of a read or write operation.
For Veeam, this size depends on the job settings. The "Local" storage optimization setting is selected by default, corresponding to a 1MB block size in backups. However, every data block is compressed before being written to the backup file, reducing the size. It is safe to assume that blocks compress down to half the size, so in most cases, picking a 512KB block size is a reasonable estimate.
If the job is using a different setting, WAN (256KB), LAN (512KB), or Local+ (4MB), change the -b value accordingly to 128KB, 256KB, or 2MB, respectively. If the Decompress option is enabled, do not halve the values.
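As a quick reference, the halving rule above can be tabulated as a small helper (a sketch; the function name is an assumption, and the mapping simply restates the values given in the text):

```shell
# Map a storage optimization setting to a suggested benchmark block size.
# Backup block sizes: WAN=256KB, LAN=512KB, Local=1MB (default), Local+=4MB.
# The suggested values are halved to account for compression; if the
# Decompress option is enabled, use the full backup block size instead.
suggest_bs() {
  case "$1" in
    WAN)      echo 128K ;;
    LAN)      echo 256K ;;
    Local)    echo 512K ;;
    "Local+") echo 2M ;;
    *)        echo "unknown setting" >&2; return 1 ;;
  esac
}

suggest_bs Local   # prints 512K
```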
-c specifies the file size you need to create for testing. We recommend using sizes equivalent to restore points. If the files are too small, they may be cached by hardware, thus yielding incorrect results.
-d specifies the duration of the test. By default, it does 5 seconds of warm-up (statistics are not collected), then 10 seconds of the test. This is OK for a short test, but for more conclusive results, run the test for at least 10 minutes (-d600).
-Sh disables Windows and hardware caching.
This flag should always be set. Veeam agents explicitly disable caching for I/O operations for improved reliability, even though this results in lower speed. For example, Windows Explorer uses the Cache Manager and, in a straightforward copy-paste test, will achieve greater speeds than Veeam due to cached reads and lazy writes. That is why using Explorer is never a valid test.
Fio (Flexible I/O tester) is an open-source workload simulator commonly used in Linux-based environments.
fio [options] [jobfile]
Fio can be configured using preconfigured jobfiles that tell it what to do. This article does not use jobfiles; instead, all options are passed on the command line to simplify the presentation and demonstrate the parity of settings with the DiskSpd commands.
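For reference, the command-line options map directly onto jobfile syntax. A hypothetical jobfile equivalent of the active full write test shown later might look like this (illustrative only; the file name and target path are assumptions):

```ini
; full-write.fio - jobfile equivalent of the active full write test
[full-write-test]
filename=/tmp/testfile.dat
size=25G
bs=512k
rw=write
ioengine=libaio
direct=1
time_based
runtime=600
```

Such a file would be run with `fio full-write.fio`.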
Any command that contains the 'rw=write' or 'rw=randrw' parameters must not target an actual backup file, as that would overwrite the backup file contents. You should only target a backup file when performing the listed restore performance tests.
For a list of compatible targets, please reference fio documentation.
--bs specifies the size of a read or write operation.
For Veeam, this size depends on the job settings. The "Local" storage optimization setting is selected by default, corresponding to a 1MB block size in backups. However, every data block is compressed (except when compression is disabled at the job level or the repository level) before it is written to the backup file, reducing the size. It is safe to assume that blocks compress to half the size, so in most cases, picking a 512KB block size is a reasonable estimate.
If the job is using a different setting, WAN (256KB), LAN (512KB), or Local+ (4MB), change the --bs value accordingly to 128KB, 256KB, or 2MB, respectively. If the Decompress option is enabled, do not halve the values.
--size specifies the file size you need to create for testing. We recommend using sizes equivalent to restore points. If the files are too small, they may be cached by hardware, thus yielding incorrect results.
--time_based specifies that the test should be performed until the specified runtime expires.
--runtime specifies the duration of the test. For more conclusive results, run the test for at least 10 minutes (--runtime=600).
--direct when set to =1 disables I/O buffering.
This flag should always be set. VeeamAgents explicitly disable caching for I/O operations for improved reliability.
Each section below provides a command example of how to simulate performance for equivalent Veeam operations.
Please keep in mind that, as with all synthetic benchmarks, real-world results may differ.
NEVER target a restore point with a write speed test.
Doing so would overwrite the restore point and destroy backup data.
Commands in this article that use the following parameters perform write operations:
- DiskSpd: -w
- fio: rw=write or rw=randrw
Only target a backup file when performing the listed restore performance tests; those examples explicitly demonstrate targeting a backup file.
This test simulates the sequential I/O generated when creating an Active Full or Forward Incremental restore point.
diskspd.exe -c25G -b512K -w100 -Sh -d600 D:\testfile.dat
fio --name=full-write-test --filename=/tmp/testfile.dat --size=25G --bs=512k --rw=write --ioengine=libaio --direct=1 --time_based --runtime=600s
This test simulates the I/O that occurs when creating a Synthetic Full or when a Forever Forward Incremental Merge occurs. Both of these operations within Veeam Backup & Replication involve two files wherein one is being read from while the other is being written.
diskspd.exe -c100G -b512K -w50 -r4K -Sh -d600 D:\testfile.dat
fio --name=merge-test --filename=/tmp/testfile.dat --size=100G --bs=512k --rw=randrw --rwmixwrite=50 --direct=1 --ioengine=libaio --iodepth=4 --runtime=600 --time_based
After completing the test, add the read and write speeds from the results and divide the sum by 2. For every processed block, Veeam must perform two I/O operations, so the effective speed is half the combined throughput.
To estimate the expected time (in seconds) to complete a synthetic operation, divide the size of the full backup file (in MB) by the effective speed (in MB/s).
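As a worked example of the estimate above (the measured speeds and file size here are hypothetical; substitute your own benchmark results):

```shell
# Hypothetical measured results from the merge simulation, in MB/s.
read_speed=120
write_speed=80

# Effective speed is half the combined throughput, because every
# processed block requires one read and one write.
effective=$(( (read_speed + write_speed) / 2 ))

# Estimated time to process a 100 GB full backup file (102400 MB).
size_mb=102400
seconds=$(( size_mb / effective ))

echo "Effective speed: ${effective} MB/s"
echo "Estimated merge time: ${seconds} seconds"
```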
This test simulates the I/O that occurs during an Incremental run of a backup job that uses Reverse Incremental.
diskspd.exe -c100G -b512K -w67 -r4K -Sh -d600 D:\testfile.dat
fio --name=reverse-inc-test --filename=/tmp/testfile.dat --size=100G --bs=512k --rw=randrw --rwmixwrite=67 --direct=1 --ioengine=libaio --iodepth=4 --runtime=600 --time_based
The read performance of a restore point during a restore task can vary depending on the fragmentation level of the blocks being read within the restore point. The two tests below can provide a lower-bound and upper-bound of the expected restore read speed. In theory, the actual restore speed should fall somewhere in between.
One factor contributing to fragmentation is the use of Forever Forward Incremental or Reverse Incremental. These methods can cause the Full restore point to become fragmented over time as blocks are added or replaced. To reduce fragmentation caused by these retention methods, consider using the 'Defragment and compact full backup file' option.
This test performs a random read to simulate a restore operation as if the blocks being read are fragmented within the restore point.
diskspd.exe -b512K -r4K -Sh -d600 \\nas\share\VeeamBackups\Job\Job2014-01-23T012345.vbk
fio --name=frag-read-test --filename=/VeeamBackups/JobName/VMname.vbk --bs=512k --rw=randread --ioengine=libaio --direct=1 --time_based --runtime=600s
This test performs a sequential read to simulate a restore operation as if the blocks being read are not fragmented within the restore point.
diskspd.exe -b512K -Sh -d600 \\nas\share\VeeamBackups\Job\Job2014-01-23T012345.vbk
fio --name=seq-read-test --filename=/VeeamBackups/JobName/VMname.vbk --bs=512k --rw=read --ioengine=libaio --direct=1 --time_based --runtime=600
This additional test is strictly for a Windows-based VMware Backup Proxy that will be engaged in backing up VMs using the Direct SAN transport mode. This test can be used with offline disks and will not overwrite data.
diskspd.exe -Sh -d600 #X
Where X is the number of the disk that you see in Disk Management.
This test will not overwrite data; it is safe and works with offline disks. It simulates and measures the maximum possible read speed in Direct SAN or Hot-Add modes; however, it does not take any VDDK overhead into account.
Note: The target specified must be in quotes if the command is executed from a PowerShell prompt. (e.g., diskspd.exe -Sh -d600 "#2")