Doing the math, if you insist
> The problem will be - do the math - 10,000 times bigger.
If there's any form of error correction or avoidance - RAID-like duplication or Reed-Solomon or whatever - then that math doesn't hold.
Let's see: say the newer one has a 1/10 chance per cell per month of failing. With a simple duplication - two independent copies, data lost only if both fail - that's roughly a 1/100 chance. The older version, whose per-cell rate is 10,000 times smaller (10^-5), similarly ends up at 10^-5 * 10^-5 = 10^-10 once duplicated, rather than the 10^-6 you'd get by scaling 1/100 down by the quoted factor - in other words, 10,000 times smaller than the quotation suggests.
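If you want to check the arithmetic, here's a quick Python sketch using the same made-up numbers (1/10 per cell per month for the newer cell, 10^-5 for the older one) and assuming the two copies fail independently:

```python
# Illustrative per-cell monthly failure probabilities (assumptions for the
# example, not measured rates).
p_new_cell = 1e-1   # newer, less reliable cell
p_old_cell = 1e-5   # older cell, 10,000x more reliable per cell

# With simple duplication, data is lost only if both independent copies fail,
# so the effective probability is the square of the per-cell probability.
p_new_dup = p_new_cell ** 2      # ~1e-2
p_old_dup = p_old_cell ** 2      # ~1e-10

linear_guess = p_new_dup / 1e4   # what linear scaling would predict: 1e-6
print(p_new_dup, p_old_dup, linear_guess / p_old_dup)  # last value: ~1e4
```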
Obviously straight duplication isn't usually what's used, but any other self-correcting design has the same exponential, rather than linear, dependence on the underlying error probabilities.
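To see the general version of the point, here's a sketch with a hypothetical code spread over n cells that survives as long as at most t of them fail; n = 12 and t = 2 are made-up parameters, not any real product's ECC. The data-loss probability is a binomial tail whose leading term goes like p^(t+1), so it falls off far faster than linearly as the per-cell rate p drops:

```python
from math import comb

def loss_probability(p: float, n: int, t: int) -> float:
    """Probability that more than t of n independent cells fail,
    each with per-cell failure probability p."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(t + 1, n + 1))

# Hypothetical code: 12 cells, tolerates up to 2 failures (assumed parameters).
for p in (1e-1, 1e-3, 1e-5):
    print(p, loss_probability(p, n=12, t=2))
```

Dropping the per-cell rate by a factor of 100 drops the loss probability by roughly 100^3 here, which is the whole point.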