There is a famous quote attributed to Mark Twain, “The reports of my death have been greatly exaggerated”, and despite claims made by all-flash disk array manufacturers, this quote still holds true for 10k SAS drives when comparing only cost per gigabyte. SSD drives have been, and still are, more costly per gigabyte than their spinning ancestors, the 10k SAS drives.
This is the second part of this series; the first part can be found here.
- The starting point is very similar to the Scale Up sizing exercise
- Collect performance data from the existing system in order to estimate the growth rate
- Select over how many years the solution should be amortized
- Typically three to five years
- With Scale Out you don’t initially size or buy the solution for the whole three to five years, but it is good to know your upper limit, especially if there are limits on Scale Out cluster size
- Based on the collected data, plot the required scaling over the years
- You might also want to plot different scenarios at different growth rates
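The growth-rate projection described above can be sketched in a few lines of Python. All the numbers here are hypothetical placeholders, not data from any real system:

```python
# Sketch: project required capacity over an amortization period
# at a few growth-rate scenarios (all numbers are hypothetical).

def project(baseline, annual_growth, years):
    """Return the projected capacity demand for each year, year 0 included."""
    return [baseline * (1 + annual_growth) ** y for y in range(years + 1)]

baseline_iops = 50_000               # measured on the existing system (hypothetical)
for growth in (0.20, 0.35, 0.50):    # scenario growth rates to compare
    demand = project(baseline_iops, growth, years=5)
    print(f"{growth:.0%} growth: {[round(d) for d in demand]}")
```

Plotting these scenarios side by side makes it easy to see when each one would hit the cluster's upper size limit.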
In this two-part series I will study different ways to scale resources in data centers and how the chosen model impacts costs.
In virtualized data centers, performance is one of the most common factors limiting scaling.
The compute layer has a Scale Out model for performance:
- If you need more compute power, just add servers to the shared resource pool
- Balance the load by live-migrating VMs to the new servers
- Keep using a single management point
- No additional silos
- Nearly linear scaling
- Problem solved
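As a rough illustration of what "nearly linear" means in practice, here is a small sketch. The per-server capacity and the 95% efficiency factor are made-up numbers for illustration, not vendor figures:

```python
# Sketch: "nearly linear" scaling means each added server contributes
# slightly less than one full server's worth of usable capacity,
# e.g. due to cluster overhead. All numbers are hypothetical.

def effective_capacity(servers, per_server, efficiency=0.95):
    """Usable compute capacity of a scale-out pool with per-node overhead."""
    return servers * per_server * efficiency

# Doubling the node count roughly doubles the usable capacity:
for n in (4, 8, 16):
    print(n, "servers ->", effective_capacity(n, per_server=100))
```

A flat efficiency factor is a simplification; in a real cluster the overhead curve depends on the platform, but the point stands that capacity grows close to linearly with node count.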
This is a follow-up to my previous post, which can be found here.
Disclaimer: the following tests are done just to show how easy it is to do bogus performance testing or showcase false performance numbers, and to demonstrate the Nutanix analytics capabilities that catch these unrealistic results. This does not represent, in any shape or form, the normal performance of Nutanix. Nor does it imply that Nutanix uses these techniques when publishing performance numbers. This is NOT a true or realistic benchmark and should NOT be interpreted as one.
Once again I am withholding some key information about the configuration and workload characteristics. I’ve chosen to do this to comply with the Nutanix EULA. This is also done to prevent anyone from copying the tests, running them on a competing product, and then claiming that my box is faster than yours. Since this is a bogus test, doing that would be silly, but you never know, there are plenty of crazy people to go around 🙂
Since I’ve now written a few posts about NetApp, it is time to switch gears. While I am still quite a noob with Nutanix, I’d like to share something about Nutanix as well.
I received a demo unit from Nutanix a while ago. One way to get familiar with a product is to put some load onto it and see what happens.
Because I am going to show some performance figures and the Nutanix EULA forbids publishing benchmarking results, I am not going to disclose the configuration of the Nutanix box. This way the performance figures are just numbers, not benchmarking results, and hopefully I am not breaching the EULA. Furthermore, without disclosing all the workload parameters and the configuration of the box, metrics such as “IOPS” and “latency” are just numbers without relevance and should not be used in any comparisons with other products.
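One way to see why an IOPS figure is meaningless without the workload parameters behind it is Little's Law, which ties throughput to the number of outstanding I/Os and per-I/O latency. The sketch below uses hypothetical numbers, not measurements from the demo unit:

```python
# Sketch: Little's Law for storage — average IOPS implied by the
# number of outstanding I/Os (queue depth) and per-I/O latency.
# All numbers are hypothetical, not measurements from any real box.

def iops(outstanding_io, latency_ms):
    """Average IOPS = concurrency / latency (latency given in milliseconds)."""
    return outstanding_io * 1000.0 / latency_ms

# The same system can "do" wildly different IOPS depending on queue depth:
print(iops(outstanding_io=1, latency_ms=1.0))    # 1000.0  (shallow queue)
print(iops(outstanding_io=64, latency_ms=2.0))   # 32000.0 (deep queue, higher latency)
```

Without knowing the queue depth, block size, and read/write mix, a headline IOPS number tells you almost nothing, which is exactly why withholding those parameters turns the figures into "just numbers".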