Indeed, I'm not happy with 'research' that relies on a cheap, off-the-shelf tool to make the judgements. That's not research, that's arithmetic.
"... describe their analysis of 28 Java open-source projects, which included 4.7m code quality issues in 36,000 pull requests."
So the code was bad because some tool said it was? Did the unit tests run? Other tests? Were further PRs required to fix actual issues introduced by the study's PRs? Did they manually check a small subset, say 1/200th, to validate their criteria? Shallow research reaches shallow findings.