I'm not sure any quantum algorithms developed so far use floating-point; at this stage, quantum computation is all about integers and set theory. When they say "accuracy" I think they mean correctness, not floating-point precision — correctness as in "not broken". On top of that, quantum computers always have some probabilistic chance of giving the wrong answer, and that probability is also a kind of accuracy (a level of confidence).
Detecting brokenness seems easy: run a bunch of NP problems and verify the answers (which can be done in P) on a classical computer. But the failure rate you measure is a mixture of brokenness and the algorithm's intrinsic probabilistic error, so you then have to figure out what proportion is brokenness.
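As a toy illustration of that verify-in-P loop, here is a minimal sketch using factoring (where checking a claimed answer is just multiplication). The `mock_quantum_factoring` function and its `p_correct` noise model are entirely hypothetical stand-ins for the real hardware, not anything from an actual quantum stack:

```python
import random

def verify_factors(n, factors):
    """Polynomial-time classical check of a claimed factorization."""
    prod = 1
    for f in factors:
        prod *= f
    return prod == n and all(1 < f < n for f in factors)

def mock_quantum_factoring(n, p_correct, correct_factors):
    """Hypothetical stand-in for the quantum computer: returns the right
    factors with probability p_correct, garbage otherwise."""
    if random.random() < p_correct:
        return correct_factors
    return [random.randrange(2, n), random.randrange(2, n)]

random.seed(0)
runs = [mock_quantum_factoring(15, 0.8, [3, 5]) for _ in range(10_000)]
observed = sum(verify_factors(15, f) for f in runs) / len(runs)
# 'observed' lumps together hardware brokenness and the algorithm's
# intrinsic failure probability; separating the two needs a predicted
# baseline for what the success rate *should* be.
```

The point of the sketch is the last comment: the empirical pass rate alone can't tell you which part of the failure is a broken machine and which part is expected probabilistic error.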
So I believe the issue is that they would need to simulate the quantum computer on a classical computer to calculate what the probabilistic accuracy should be, then cross-check that against the empirical figures to see what the brokenness level is. But a 300-qubit computer is far too large to simulate.
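To see why 300 qubits is out of reach, a full (brute-force) state-vector simulation has to store 2^n complex amplitudes. A quick back-of-the-envelope calculation, assuming 16 bytes per amplitude (complex128):

```python
def statevector_bytes(n_qubits, bytes_per_amplitude=16):
    """Memory for a brute-force state-vector simulation:
    2**n complex amplitudes at 16 bytes each (complex128)."""
    return (2 ** n_qubits) * bytes_per_amplitude

# ~45 qubits already needs about half a petabyte;
# 300 qubits needs 2**304 bytes, which dwarfs the estimated
# number of atoms in the observable universe (~10**80).
for n in (30, 45, 300):
    print(n, statevector_bytes(n))
```

Clever simulation techniques (tensor networks, exploiting circuit structure) can do better than this worst case, but for a general 300-qubit state the exponential wall stands.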