> You know, the whole publication + review + reproduction thing really helped science become a more solid process, but we need something more elaborate now.
No, we don’t. Increasing demands for rigour in pre-publication peer review are why publication times from submission in sociology and economics reach and exceed two years, and why papers that start off thirty pages long end up at eighty after adding robustness checks and citing every tangentially relevant paper in the literature. We know that post-publication peer review works perfectly well because it was the norm until after WWII, when the rise of state funding of big science and the accompanying ass-covering, form-filling proceduralism made pre-publication review popular.
As always only replication counts, whether that’s checking that an experiment has the results claimed or that an argument follows from its premises.
We certainly don’t need to lean on reputation more. Science isn’t law; arguments from authority aren’t valid.
I think the accessibility of publication has led to a broader sampling of the normal curve, and as a result the overall quality of scientific literature is in decline. I imagine that just 50 years ago university was for a select elite, whether by nature or nurture, and the cost of running and printing journals pre-internet ensured prioritization of a scarce resource. Nowadays publishing is relatively cheap, and that coupled with what I imagine is a modern use of publication counts as a KPI means lots of noise.
I felt it personally in grad school. If you objectively observe your own work and that of your peers in such an environment, you may notice there's a reason none of you got into the Ivy League.
Actually, I totally botched the intent of my first message. My point is that the publication process is used as a reputation metric, which is something the research world (not science itself) needs. It needs it for valid purposes, and using citation counts and impact factor for it is a hack that is now becoming very noisy due to the various ways the metric can be gamed.
The publication+review+reproduction process is fine to discover scientific fact, I totally agree.
> We know that post-publication peer review works perfectly well because it was the norm until after WWII, when the rise of state funding of big science and the accompanying ass-covering, form-filling proceduralism made pre-publication review popular.
Do you know if there's anything written about the history of the modern scientific process, specifically on the rise of state funding of science? I'm particularly interested in when academics started to be essentially required to bring in external funding. I've only read offhand remarks like this and don't feel I have the full story.
This is an active area of interest in History of Science scholarship these days, which is steadily dismantling a lot of myths about how long peer review has existed and where it came from. It is in fact linked to the need to bring in funds from big grants agencies during the Cold War. You might try for example Melinda Baldwin, Scientific Autonomy, Public Accountability, and the Rise of “Peer Review” in the Cold War United States, which has a lot of references to recent scholarship: https://www.journals.uchicago.edu/doi/pdfplus/10.1086/700070
But there is zero reward for publishing replication results. Not novel enough, so you won't get published. And if you're unable to replicate, then maybe you just did it wrong, or there was a small trick in the code which they left out of the paper, etc.
Have replication studies/papers be on par with "innovation" studies/papers at academic conferences and in journals. Or maybe not on par, but at least consider them worthy of publication.