Variants of that article have been doing the rounds for a few years, from many different vendors and for as many different reasons.
Usually it's linked to competitive guarantees based on data reduction ratios and an attempt to undermine the competition's numbers. Only a few vendors still rely on thin provisioning etc. to reach their stated numbers, but some do play fast and loose with other measurements. If you don't trust the numbers, simply compare the front-end host data to the back-end stored data.
As the white paper linked above explains, compaction is a measurement that includes thin provisioning and, by inference, zero detect. Dedupe and compression ratios don't include these, as the resulting ratios would start off artificially high and then degrade rapidly as volumes fill. That said, you might still be interested in thin efficiency if you aren't 100% flash and so can't take advantage of those features.
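To see why a thin-provisioning-based ratio starts artificially high and degrades, here's a rough sketch (my own illustration, not vendor tooling — the function name and numbers are made up):

```python
# Illustrative only: a thin volume's compaction ratio collapses toward 1:1
# as the host actually writes data into the exported capacity.

def compaction_ratio(virtual_size_gb, logical_used_gb):
    """Virtual (exported) size divided by logical space actually consumed."""
    return virtual_size_gb / logical_used_gb

virtual = 1000  # a 1 TB thin volume
for used in (10, 100, 500, 1000):
    print(f"{used} GB written -> compaction {compaction_ratio(virtual, used):.0f}:1")
```

A freshly created, nearly empty thin volume reports 100:1; the same volume fully written reports 1:1 — which is exactly why mixing that into a dedupe or compression ratio would be misleading.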
Quote:
• The compaction ratio is how much logical storage space a volume consumes compared to its virtual size and applies to all thin volume types.
• The dedup ratio is how much storage space is being saved by deduplication on deduplicated or deduplicated-compressed volumes.
• The compression ratio is how much storage space is being saved by compression on compressed or deduplicated-compressed volumes.
• The data reduction ratio is how much storage space is being saved by the combination of both deduplication and compression.
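Putting those four definitions together with some hypothetical numbers (the figures and variable names below are illustrative, not actual array counters):

```python
# Hypothetical capacity figures for one deduplicated-compressed thin volume.
virtual_size = 2048   # GB exported to the host
logical_used = 512    # GB consumed before any reduction
after_dedupe = 256    # GB remaining after deduplication
after_compress = 128  # GB actually stored after compression as well

compaction = virtual_size / logical_used        # 4:1, thin provisioning only
dedupe = logical_used / after_dedupe            # 2:1
compression = after_dedupe / after_compress     # 2:1
data_reduction = logical_used / after_compress  # 4:1 = dedupe x compression
```

Note that data reduction is the product of the dedupe and compression ratios, while compaction is measured against the virtual size and sits outside that calculation.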
Similar to MammaGutt's example - it's an extreme case but illustrates the point:
Say you have two identical volumes in the same CPG. Neither contains intra-volume data that can be deduped, so each volume individually would report a 1:1 ratio. However, inter-volume dedupe occurs because the two contain identical data, so the CPG dedupe ratio reports 2:1 — you really are saving 50% across both volumes, just not within either one. Delete one of the volumes and you're back to 1:1 on both measurements. Add in a few more partial copies, overwrite some of the data, etc., and the picture gets even more confusing. Load factoring attempted to represent these savings at the volume level, but it could lead to wide variations based on what was happening on the array at any given time.
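The two-volume case can be modelled with a toy block-hash count (purely my own sketch to make the arithmetic concrete — real arrays dedupe fixed-size pages within a CPG, not Python lists):

```python
# Toy model: dedupe ratio = blocks written / unique blocks stored.

vol_a = ["blk%03d" % i for i in range(100)]  # 100 unique blocks, no internal dupes
vol_b = list(vol_a)                          # an identical second volume

def dedupe_ratio(written_blocks):
    return len(written_blocks) / len(set(written_blocks))

print(dedupe_ratio(vol_a))          # 1.0 -> each volume alone reports 1:1
print(dedupe_ratio(vol_a + vol_b))  # 2.0 -> the CPG across both reports 2:1
print(dedupe_ratio(vol_a))          # 1.0 -> delete vol_b and you're back to 1:1
```

The savings are real at the CPG level, but there's no non-arbitrary way to attribute them to one volume or the other — which is the root of the load-factoring confusion.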
Unless you understood the inter-volume dependencies in detail (which is impossible to track outside a carefully controlled test environment), the numbers were open to interpretation. That's why it was removed: load factoring each volume meant the estimated space per volume could change drastically, which caused a lot of confusion around space reporting because the volume and CPG figures often didn't tie up.
Edited for clarity - first draft was from my phone