
Article: Most brain science papers are neurotrash: Official

A group of academics from Oxford, Stanford, Virginia and Bristol universities have looked at a range of subfields of neuroscience and concluded that most of the results are statistically worthless. The researchers found that most structural and volumetric MRI studies are very small and have minimal power to detect differences …

COMMENTS

Silver badge

Didn't we already know that most fMRI studies are rubbish?

Just google the phrase 'fMRI dead salmon' and you'll see that the statistical significance of most such studies is woeful. It's actually quite worrying that so many of these get published in the first place.

5
1
Stop

Re: Didn't we already know that most fMRI studies are rubbish?

Indeed, before it disappeared behind a paywall, the Science News story on it was my recommended first stop for anyone believing a study that rested on fMRI.

0
0
Meh

And who didn't know this?

Not exactly limited to neuroscience.

0
0

Re: And who didn't know this?

Indeed. Just yesterday, in fact, I attended a lecture on genomics in which the researcher was making the same depressing claim: the vast bulk of the information published in the field is rubbish, unreproducible and just as likely to be the result of statistical error or of problems with the alignment of sequenced bits of DNA as to actually show any meaningful results.

In one case, he talked about a report on genetic variations in mtDNA that was published in Nature and whose results neatly fell within what you'd expect to see if your alignment of sequenced strands was off.

1
0
Silver badge

Re: And who didn't know this?

I think the editors of Nature said that 75% of what they print turns out to be wrong.

0
0
Silver badge

OK everybody

Back to Phrenology...

6
0
Yag

Lies, damn lies and statistics?

I'll *again* advocate reading Darrell Huff's "How to Lie with Statistics"...

1
0
Silver badge

Re: Lies, damn lies and statistics?

Listening to the "More or Less" podcast is also a good way to learn about the dubious use of statistics.

0
0
Boffin

What if...

If you put a neuroscience researcher in an fMRI machine and showed them pictures of their own brain working, would they find the part of the brain responsible for doing neuroscience?

Or would it induce some weird kind of video feedback loop like in the early Dr Who title sequences?

https://www.youtube.com/watch?v=C8Xm3EA3_XE

2
0

Bad journalism.

Badly written article with an irresponsible headline.

Try painting with brushes a little narrower than... a very wide thing.

1
4
Bronze badge

Dyslexia

It's amazing how often the dyslexia industry relies on this stuff. And the reading = teach phonics industry.

But especially the dyslexia = teach phonics industry (which is most of it).

1
0
Bronze badge

p < 0.05 ???

"Research that produces novel results, statistically significant results (that is, typically p < 0.05)"

This is the 2*sigma significance level, right? This is what, in my rather unscientific observations, people in medical professions seem to consider "statistically significant". I am a physicist by education, and I jump every time I read a medical paper. I don't do it often, mind you, but *all* I have read presented 2*sigma level results as significant. A few had footnotes describing a process whereby a committee approves the methodology, including sample sizes, designed to expect a 2*sigma level result (larger studies are presumably too expensive/long/whatever - and are deemed unnecessary).

Sorry, a 5% chance of getting the expected result by chance, while assuming a normal (i.e., dropping off very quickly indeed) error distribution, is NOT statistically significant. Even the 3*sigma level (p < 0.0027) isn't. Try a few orders of magnitude better (in terms of p-value) for real science.
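
If you want to check that sigma-to-p correspondence yourself, here's a quick sketch in Python (using scipy; the particular sigma values are just illustrative):

    # Sanity-check the sigma-to-p correspondence described above.
    from scipy.stats import norm

    for sigma in (1.96, 2.0, 3.0, 5.0):
        # Two-tailed p-value: the chance of landing at least |sigma|
        # standard deviations from the mean under a normal error model.
        p = 2 * norm.sf(sigma)
        print(f"{sigma:>4} sigma -> two-tailed p = {p:.2e}")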

Many years ago I taught physics lab at a university. The physics was not very sophisticated, but it taught students to gather and analyse data. Just about every science or engineering student passed through it. Medical students were conspicuous by their absence - I suppose they were too busy with other things to learn. I guess those students now write scientific papers.

Some neuroscientists are presumably not physicians. They should know better, then, than to stick to 2*sigma.

2
1
Stop

Re: p < 0.05 ???

Stop with the stats fit and have a little think about what you are saying...

If I wanna do an experiment with, say, mice, and in each treatment group there are 6 VERY expensive genetically modified animals that I have carefully age, sex and <insert other controllable variable> matched to reduce variability, AFTER spending years generating that fragile/genetically complex mutated mouse line, and I get a significant (P<0.05, or even <0.0027) difference in an already super expensive experiment (tens of thousands of pounds), you will immediately think it shit because of that significance value. Funding bodies are not made of money, and most scientists do very well at getting good results with the minimum of expenditure.

To cut a rant short: PHYSICS /= BIOCHEMISTRY

And before you annoy me any more, Biochemistry is a 'REAL' science. Just like physics is. Remember that next time you get all hot and bothered when reading a medical paper and need to stick another pill down your neck to stem the heart palpitations it gave you.
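
To put rough numbers on that six-mice scenario, here's a sketch of a power calculation (assuming a two-sample t-test, with purely illustrative effect sizes, using statsmodels):

    # Rough power estimates for a two-sample t-test with 6 animals
    # per group at alpha = 0.05 (effect sizes are illustrative only).
    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()
    for d in (0.5, 1.0, 2.0):  # Cohen's d: medium, large, very large
        power = analysis.power(effect_size=d, nobs1=6, alpha=0.05, ratio=1.0)
        print(f"d = {d}: power = {power:.2f}")

Only very large effects come out with respectable power at n = 6 per group, which is precisely the underpowering the paper complains about.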

3
2

Re: p < 0.05 ???

> already super expensive experiment (tens of thousands of pounds)

What, you mean like CERN?

*facepalm*

1
0
Silver badge

Re: p < 0.05 ???

What Dr Potatohead said...

Physics does not equate to biochemistry does not equate to "neuroscience".

The LHC cost a fair bundle, but it got the scientists working with it millions upon millions of collisions, observed by multiple experiments. That's a fair bang for your buck. Now imagine stripping and replacing the LHC after every single shot, for each and every experiment; you'd find a way to make do with smaller samples rather quickly...

The 2-sigma criterion in biology is, if correctly applied, sufficient to get valid results. The statistical techniques assume, and require, a stable equilibrium state, which in itself is representative of the biological process studied in an organism, and in which only one variable is changed. Even then you're running into the usual random mutations, unexpected interference from other processes, and a host of other things that make biological experiments both complex and frustrating, so getting 2-sigma is about as good as you can get given the level of control you have over the experiment.

1
0
Bronze badge
FAIL

Re: p < 0.05 ???

Maybe Goldacre's "Bad Pharma" should be compulsory reading.

0
0
Silver badge

Re: difference in an already super expensive experiment

You might not like it, but statistics don't give a crap how expensive your experiment is, only whether or not it is statistically significant.

0
0
Silver badge

Don't believe everything you read

I don't even believe everything I write.

3
0
Ru

Re: Don't believe everything you read

Oh? Do you have any evidence of that?

2
0

Re: Don't believe everything you read

No, he believes everything he writes, just not this one.

0
0
Silver badge

no surprise

Anybody who has ever worked in a university psychology department can tell you that the majority of them are closer to pseudo social "scientists" than real scientists. They do teach a special statistics class for the grad students, but that is only so they can pretend to be worth the grant money and to keep from being sued.

2
0
Anonymous Coward

you can prove anything with statistics.

0
0
Bronze badge
Thumb Down

"you can prove anything with statistics."

Correction - you can justify anything with statistics. A subtle, but significant difference.

5
0

...and you can do statistics on anything without any proof.

0
0
Silver badge
Coat

"you can prove anything with statistics."

> Correction - you can justify anything with statistics. A subtle, but significant difference.

Ah, but is it a statistically significant difference?

Okay, okay, I'm leaving already...

0
1
Anonymous Coward

I don't think anyone here, including the original poster, actually read the article in question. The results are specific to meta-analyses (essentially, analyses of analyses) for a specific year, don't apply to neuroscience as a whole, and the same concerns could be leveled at any scientific discipline in which sampled data is used to support conclusions. The "prescriptions" made by the authors of the paper (a priori power calculations, data management and transparency), rather than being some novel idea the authors have suddenly happened upon, are in fact standard procedure where I work, and I assume elsewhere.

So it seems to me that someone is either careless or has an axe to grind.
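
For what it's worth, the a priori power calculation the paper prescribes is cheap to do; here's a sketch with hypothetical numbers (assuming a two-sample t-test, using statsmodels):

    # A priori sample-size calculation: subjects per group needed to
    # detect a given effect with 80% power at alpha = 0.05.
    # (The effect size and targets are hypothetical, for illustration.)
    from statsmodels.stats.power import TTestIndPower

    n = TTestIndPower().solve_power(effect_size=0.5,  # expected Cohen's d
                                    alpha=0.05,
                                    power=0.8,
                                    ratio=1.0)
    print(f"Roughly {n:.0f} subjects per group")  # about 64 per group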

1
2
This topic is closed for new posts.