Object storage vendors all have configuration considerations that must be accounted for when designing Veeam backup repositories. Some of these considerations can be limiting, while others simply help dictate architecture decisions. In this post, I discuss on-premises vendor considerations and how Veeam job settings can make an impact.

To accommodate vendor-specific limitations of on-premises Object Storage devices, it is sometimes necessary to adjust Veeam's storage optimization settings. Every on-premises Object Storage vendor may express these limitations differently, but it usually boils down to the ability to handle large amounts of object metadata (and therefore large numbers of objects) per bucket and/or device.

In our experience, using larger storage optimization settings (job block size) with on-premises Object Storage devices typically works best. The bigger the storage optimization setting, the bigger the average object size, which means fewer objects to keep track of (and less object metadata).

Important: check the storage vendor's recommendations regularly for current best practices.

Veeam's default storage optimization setting is "Local Target", which uses a 1MB block size, but this can be adjusted as described below (see the user guide and the v11a release notes).

Local target – extra-large blocks

Adjusting the Veeam storage optimization settings will have an impact on an Object Storage based repository in terms of resulting object size, throughput utilization, PUT operations and processing time. In addition, it is important to grasp the impact larger storage optimization settings have on the size of backup increments.

In a previous post, I offered some basic math to estimate the number of PUT operations, average object size, monthly PUTs, objects per second, and so on. We will re-use some of that math here to explain the impact of changing the storage optimization settings (job block size) before reviewing some testing results.

To estimate the number of PUT operations for a given amount of source data, we divide that capacity by the Veeam job block size (storage optimization):

Number of PUTs ≈ Source data size ÷ Job block size

This formula shows that, for the same amount of source data, the resulting number of PUT operations should decrease with a bigger storage optimization setting (job block size). If you consider a conservative estimate of 2:1 data reduction through Veeam compression and dedupe, the expected average object size will be about half of the storage optimization setting. For the same amount of backup data, a larger object size should therefore reduce the overall object count and the number of PUT operations. Fewer objects to track means less object metadata, which suits on-premises Object Storage vendors well.

From a processing time perspective, storing the same amount of source data with bigger objects requires fewer PUT operations and therefore should complete faster than storing it with smaller objects. This also means that your throughput requirements should increase with the object size.

The table below shows the relationship between job block size, object count and PUTs per MB (object size, number of objects, PUTs per MB, increment size).

So why not just set larger block sizes when offloading to an Object Repository? Well, as documented, larger storage optimization settings will typically result in larger increments.

To test the assertions above, I created 5 backup jobs of the same source VMs with different storage optimization settings (block sizes). Change rate was simulated by copying a consistent variety of file types to the VMs. Each backup job targeted a different Scale Out Backup Repository. I measured offload time, API requests per second, offload throughput and backup size, and collated the results in the graph and table below.

As expected, with larger storage optimization settings, the processing time and the number of PUTs decrease while throughput increases. Even with my limited test conditions (very small data set, few VMs, abnormally low change rate, few restore points), I indeed observed that with larger storage optimization settings, the backup size of the same source data increases.

Note: While it was not a huge increase, it was notable enough to realize that with a large-scale production workload and a longer retention, the impact on the performance tier's storage consumption could be quite significant.

Why are some objects bigger than the expected size?