Wee 'P' values
There's an idea that a small 'P' value always means something, and alongside it the idea that statistics isn't something that needs a specialist: that your graduate course in stats and a few Excel functions are enough to see you through...
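To make that first idea concrete, here's a minimal sketch (hypothetical, stdlib-only Python, with an invented crude permutation test) of why a wee p-value on its own means little: run enough tests on pure noise and some of them will come up 'significant' anyway.

```python
import random

random.seed(1)

def two_sample_p(a, b, n_perm=200):
    """Crude permutation-test p-value for a difference in means."""
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = a + b
    hits = 0
    for _ in range(n_perm):
        random.shuffle(pooled)
        x, y = pooled[:len(a)], pooled[len(a):]
        if abs(sum(x) / len(x) - sum(y) / len(y)) >= observed:
            hits += 1
    return hits / n_perm

# Simulate 100 'studies' where the null hypothesis is TRUE:
# both samples come from the same distribution, so any
# 'significant' result is a false positive.
false_positives = 0
n_studies = 100
for _ in range(n_studies):
    a = [random.gauss(0, 1) for _ in range(20)]
    b = [random.gauss(0, 1) for _ in range(20)]
    if two_sample_p(a, b) < 0.05:
        false_positives += 1

print(f"{false_positives} of {n_studies} null 'studies' gave p < 0.05")
```

With a 0.05 threshold you expect roughly 5 in 100 pure-noise comparisons to look 'significant', which is exactly why a lone small p-value, unaccompanied by a statistician's judgement, proves nothing.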
I can't help feeling that every science team needs a statistician or three to analyse their data and produce results. Sort of like system/acceptance testers: members of the team, but kept apart enough that they can at least make a show of independence.
As for publishing code and data to allow reproducibility: why would anyone do that unless they're forced to? IMO, for peer review to be worth anything, every published paper's code and data should have been examined by at least one independent reviewer, and both should be published so that anyone can reproduce the results.
Remember the paper published last year that claimed to show a correlation between holding right-wing views and having psychotic tendencies? Someone got hold of the data and showed that a bug in the researchers' Excel spreadsheets meant they'd got the 'results' the wrong way round.
If forcing reproducibility during review means that guff like this never gets published, job done.