Science; Institution design for

“Scientist, falsify thyself”. Peer review, academic incentives, credentials, evidence and funding…

May 17, 2020 — August 7, 2023

academe
agents
collective knowledge
economics
faster pussycat
game theory
how do science
incentive mechanisms
information provenance
institutions
mind
networks
sociology
wonk

On the thing that I presume academic publishing is supposed to do: further science. Reputation systems and other mechanisms for trust in science, a.k.a. our collective knowledge of reality itself.

I would like to consider the system of peer review, networking, conferencing, publishing and acclaim, see how closely it approximates an ideal system for uncovering truth, and, further, imagine how we could make a better system. But I do not do that right now; I just collect some provocative links on that theme, in the hope of finding time for more thought later.

Figure 2: Vesalius pioneers scientific review by peering. Credit: the University of Basel

1 Open review processes, practical

PubPeer (who are behind Peeriodicals) produces a peer-review overlay for web browsers, to spread their commentary and peer critique more widely. The site is itself brusquely confusing, but well blogged; you’ll get the idea. They are not afraid of invective, and I initially thought they looked more amateurish than effective. But I was wrong; they are quite selective, and they seem to be near the best elective peer review today. This system has been implicated in topical high-profile retractions (e.g. 1, 2).

Figure 3: xkcd 2304

2 Mathematical models of the reviewing process

e.g. Cole, Cole, and Simon (1981); Lindsey (1988); Ragone et al. (2013); Nihar B. Shah et al. (2016); Whitehurst (1984).

The experimental data from the NeurIPS experiments might be useful: see e.g. Nihar B. Shah et al. (2016) or blog posts on the 2014 experiment (1, 2).
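To build intuition for what those models are up against, here is a toy Monte Carlo sketch of the two-committee setup (my own construction; the noise scale and acceptance rate are illustrative assumptions, not estimates from the actual experiments):

```python
# Toy model of the two-committee NeurIPS-style experiment: papers have a
# latent quality, each committee observes it through independent noise
# and accepts a fixed fraction. All numbers are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)
n_papers, accept_rate, review_noise = 2000, 0.25, 1.0

quality = rng.normal(size=n_papers)  # latent "true" paper quality

def committee_decisions(quality, noise, rng):
    """Accept the top `accept_rate` fraction by noisily observed quality."""
    scores = quality + rng.normal(scale=noise, size=quality.size)
    return scores >= np.quantile(scores, 1 - accept_rate)

a = committee_decisions(quality, review_noise, rng)
b = committee_decisions(quality, review_noise, rng)

# Among papers accepted by committee A, how many does committee B reject?
disagreement = 1 - (a & b).sum() / a.sum()
print(f"B rejects {disagreement:.0%} of A's accepted papers")
```

With review noise comparable to the spread in underlying quality, roughly half of one committee’s accepted papers are rejected by the other, which is the order of disagreement the NeurIPS experiments turned up.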

3 Economics of publishing

See academic publishing.


4 Mechanism design for the peer review process

There is some fun mechanism design in this, e.g. Charlin and Zemel (2013); Gasparyan et al. (2015); Jan (n.d.); Merrifield and Saari (2009); Solomon (2007); Xiao, Dörfler, and van der Schaar (2014); Xu, Zhao, and Shi (n.d.).
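The matching side of this is concrete enough to sketch. Charlin and Zemel (2013) automate paper-reviewer assignment at conference scale; below is a minimal toy version of the assignment step alone, assuming affinity scores are already given (their system estimates them from reviewers’ publications). The sizes and scores are placeholders.

```python
# Minimal sketch of load-constrained reviewer assignment with made-up
# affinity scores; real systems first estimate these affinities.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(1)
n_papers, n_reviewers, max_load = 8, 6, 2

# affinity[i, j]: how suitable reviewer j is for paper i (placeholders).
affinity = rng.random((n_papers, n_reviewers))

# Give each reviewer `max_load` slots by tiling the columns, then solve
# the resulting one-to-one problem, maximizing total affinity.
slots = np.tile(affinity, (1, max_load))
paper_idx, slot_idx = linear_sum_assignment(slots, maximize=True)
assignment = {int(p): int(s % n_reviewers) for p, s in zip(paper_idx, slot_idx)}
print(assignment)  # paper -> reviewer; no reviewer exceeds max_load papers
```

Real venues additionally need several reviewers per paper (duplicate the paper rows the same way), conflict-of-interest constraints, and bidding; Leyton-Brown et al. (2022) and Nihar B. Shah (2022) describe what that takes at scale.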

An interesting edge case in peer review and scientific reputation: Adam Becker, Junk Science or the Real Thing? ‘Inference’ Publishes Both. As far as I’m concerned, publishing crap is not in itself catastrophic; a process that fails to discourage crap by eventually identifying it as such would be bad.

5 How well does academia gatekeep?

Baldwin (2018):

This essay traces the history of refereeing at specialist scientific journals and at funding bodies and shows that it was only in the late twentieth century that peer review came to be seen as a process central to scientific practice. Throughout the nineteenth century and into much of the twentieth, external referee reports were considered an optional part of journal editing or grant making. The idea that refereeing is a requirement for scientific legitimacy seems to have arisen first in the Cold War United States. In the 1970s, in the wake of a series of attacks on scientific funding, American scientists faced a dilemma: there was increasing pressure for science to be accountable to those who funded it, but scientists wanted to ensure their continuing influence over funding decisions. Scientists and their supporters cast expert refereeing—or “peer review,” as it was increasingly called—as the crucial process that ensured the credibility of science as a whole. Taking funding decisions out of expert hands, they argued, would be a corruption of science itself. This public elevation of peer review both reinforced and spread the belief that only peer-reviewed science was scientifically legitimate.

Thomas Basbøll says

It is commonplace today to talk about “knowledge production” and the university as a site of innovation. But the institution was never designed to “produce” something nor even to be especially innovative. Its function was to conserve what we know. It just happens to be in the nature of knowledge that it cannot be conserved if it does not grow.

Andrew Marzoni, Academia is a cult.

Adam Becker on the assumptions and pathologies revealed by Wolfram’s latest branding and positioning:

So why did Wolfram announce his ideas this way? Why not go the traditional route? “I don’t really believe in anonymous peer review,” he says. “I think it’s corrupt. It’s all a giant story of somewhat corrupt gaming, I would say. I think it’s sort of inevitable that happens with these very large systems. It’s a pity.”

So what are Wolfram’s goals? He says he wants the attention and feedback of the physics community. But his unconventional approach—soliciting public comments on an exceedingly long paper—almost ensures it shall remain obscure. Wolfram says he wants physicists’ respect. The ones consulted for this story said gaining it would require him to recognize and engage with the prior work of others in the scientific community.

And when provided with some of the responses from other physicists regarding his work, Wolfram is singularly unenthused. “I’m disappointed by the naivete of the questions that you’re communicating,” he grumbles. “I deserve better.”

6 Style guide for reviews and rebuttals

See scientific writing.

7 Reformers

8 Incoming

Andrew Gelman in conversation with Noah Smith

Anyway, one other thing I wanted to get your thoughts on was the publication system and the quality of published research. The replication crisis and other skeptical reviews of empirical work have got lots of people thinking about ways to systematically improve the quality of what gets published in journals. Apart from things you’ve already mentioned, do you have any suggestions for doing that?

I wrote about some potential solutions in pages 19–21 of Gelman (2018) from a few years ago. But it’s hard to give more than my personal impression. As statisticians or methodologists we rake people over the coals for jumping to causal conclusions based on uncontrolled data, but when it comes to science reform, we’re all too quick to say, Do this or Do that. Fair enough: policy exists already and we shouldn’t wait on definitive evidence before moving forward to reform science publication, any more than journals waited on such evidence before growing to become what they are today. But we should just be aware of the role of theory and assumptions in making such recommendations. Eric Loken and I made this point several years ago in the context of statistics teaching (Gelman and Loken 2012), and Berna Devezer et al. published an article (Devezer et al. 2020) last year critically examining some of the assumptions that have at times been taken for granted in science reform.

When talking about reform, there are so many useful directions to go, I don’t know where to start. There’s post-publication review (which, among other things, should be much more efficient than the current system for reasons discussed here); there are all sorts of things having to do with incentives and norms (for example, I’ve argued that one reason that scientists act so defensive when their work is criticized is because of how they’re trained to react to referee reports in the journal review process); and there are various ideas adapted to specific fields. One idea I saw recently that I liked was from the psychology researcher Gerd Gigerenzer, who wrote that we should consider stimuli in an experiment as being a sample from a population rather than thinking of them as fixed rules (Gigerenzer n.d.), which is an interesting idea in part because of its connection to issues of external validity or out-of-sample generalization that are so important when trying to make statements about the outside world.

Jocelynn Pearl proposes some fun ideas, including blockchainy ones, in Time for a Change: How Scientific Publishing is Changing For The Better.
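Gigerenzer’s stimuli-as-sample idea in the interview above has a direct modelling translation: include stimulus as a random effect instead of conditioning on the particular stimuli used. A minimal simulated sketch (my construction, not Gigerenzer’s or Gelman’s; all names and numbers are illustrative):

```python
# Simulated illustration: the same data analysed with stimuli treated as
# fixed vs. as a sample from a population of possible stimuli.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_subjects, n_stimuli = 50, 20
stim_effect = rng.normal(0, 0.5, n_stimuli)  # stimuli genuinely vary

rows = []
for subj in range(n_subjects):
    for stim in range(n_stimuli):
        cond = stim % 2  # each stimulus belongs to one condition
        y = 0.3 * cond + stim_effect[stim] + rng.normal()
        rows.append(dict(subject=subj, stimulus=stim, condition=cond, y=y))
df = pd.DataFrame(rows)

# Stimuli-as-fixed: treats these 20 stimuli as the whole universe of interest.
fixed = smf.ols("y ~ condition", data=df).fit()

# Stimuli-as-sample: a random intercept per stimulus, so the standard
# error for `condition` includes stimulus-sampling variability.
sampled = smf.mixedlm("y ~ condition", data=df, groups=df["stimulus"]).fit()

print(f"fixed-stimuli SE:   {fixed.bse['condition']:.3f}")
print(f"sampled-stimuli SE: {sampled.bse['condition']:.3f}")  # noticeably larger
```

The widened standard error under the random-stimuli analysis is the external-validity point in miniature: if the stimuli are a sample, generalizing beyond them costs you certainty.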

9 References

a Literal Banana. 2020. “Extended Sniff Test,” 7.
Aczel, Balazs, Barnabas Szaszi, and Alex O. Holcombe. 2021. A Billion-Dollar Donation: Estimating the Cost of Researchers’ Time Spent on Peer Review.” Research Integrity and Peer Review 6 (1): 14.
Afonso, Alexandre. 2014. How Academia Resembles a Drug Gang.” SSRN Scholarly Paper. Rochester, NY.
Agassi, Joseph. 1974. The Logic of Scientific Inquiry.” Synthese 26: 498–514.
Alon, Uri. 2009. How to Choose a Good Scientific Problem.” Molecular Cell 35 (6): 726–28.
Arbesman, Samuel, and Nicholas A Christakis. 2011. Eurekometrics: Analyzing the Nature of Discovery.” PLoS Comput Biol 7 (6): e1002072.
Arbilly, Michal, and Kevin N. Laland. 2017. The Magnitude of Innovation and Its Evolution in Social Animals.” Proceedings of the Royal Society B: Biological Sciences 284 (1848).
Arvan, Marcus, Liam Kofi Bright, and Remco Heesen. 2022. Jury Theorems for Peer Review.” The British Journal for the Philosophy of Science, January.
Azoulay, Pierre, Christian Fons-Rosen, and Joshua S. Graff Zivin. 2015. Does Science Advance One Funeral at a Time? Working Paper 21788. National Bureau of Economic Research.
Baldwin, Melinda. 2018. Scientific Autonomy, Public Accountability, and the Rise of ‘Peer Review’ in the Cold War United States.” Isis 109 (3): 538–58.
Björk, Bo-Christer, and David Solomon. 2013. The Publishing Delay in Scholarly Peer-Reviewed Journals.” Journal of Informetrics 7 (4): 914–23.
Bogich, Tiffany L, Sebastien Ballesteros, Robin Berjon, Chris Callahan, and Leon Chen. n.d. On the Marginal Cost of Scholarly Communication.”
Charlin, Laurent, and Richard Zemel. 2013. The Toronto Paper Matching System: An Automated Paper-Reviewer Assignment System,” May.
Cole, S., J. R. Cole, and G. A. Simon. 1981. Chance and Consensus in Peer Review.” Science 214 (4523): 881–86.
Coscia, Michele, and Luca Rossi. 2020. Distortions of Political Bias in Crowdsourced Misinformation Flagging.” Journal of The Royal Society Interface 17 (167): 20200020.
Couzin-Frankel, Jennifer. 2015. PubPeer Co-Founder Reveals Identity—and New Plans.” Science 349 (6252): 1036–36.
Dang, Haixin, and Liam Kofi Bright. 2021. Scientific Conclusions Need Not Be Accurate, Justified, or Believed by Their Authors.” Synthese 199 (3-4): 8187–8203.
Devezer, Berna, Danielle J. Navarro, Joachim Vandekerckhove, and Erkan Ozge Buzbas. 2020. The Case for Formal Methodology in Scientific Reform.” Royal Society Open Science 8 (3): 200805.
Gasparyan, Armen Yuri, Alexey N. Gerasimov, Alexander A. Voronov, and George D. Kitas. 2015. Rewarding Peer Reviewers: Maintaining the Integrity of Science Communication.” Journal of Korean Medical Science 30 (4): 360–64.
Gelman, Andrew. 2011. “Experimental Reasoning in Social Science.” In Field Experiments and Their Critics.
———. 2018. The Failure of Null Hypothesis Significance Testing When Studying Incremental Changes, and What to Do About It.” Personality and Social Psychology Bulletin 44 (1): 16–23.
Gelman, Andrew, and Eric Loken. 2012. “Statisticians: When We Teach, We Don’t Practice What We Preach.” Chance 25 (1): 47–48.
Gharbi, Musa al-. 2020. Race and the Race for the White House: On Social Research in the Age of Trump.” Preprint. SocArXiv.
Gigerenzer, Gerd. n.d. We Need to Think More about How We Conduct Research.” Behavioral and Brain Sciences 45.
Go Forth and Replicate!” 2016. Nature News 536 (7617): 373.
Greenberg, Steven A. 2009. How Citation Distortions Create Unfounded Authority: Analysis of a Citation Network.” BMJ 339 (July): b2680.
Hallsson, Bjørn G., and Klemens Kappel. 2020. Disagreement and the Division of Epistemic Labor.” Synthese 197 (7): 2823–47.
Heesen, Remco, and Liam Kofi Bright. 2021. Is Peer Review a Good Idea? The British Journal for the Philosophy of Science 72 (3): 635–63.
Hodges, James S. 2019. Statistical Methods Research Done as Science Rather Than Mathematics.” arXiv:1905.08381 [Stat], May.
Ioannidis, John P. 2005. Why Most Published Research Findings Are False.” PLoS Medicine 2 (8): e124.
Jan, Zeeshan. n.d. “Recognition and Reward System for Peer-Reviewers,” 9.
Jiménez, Ángel V., and Alex Mesoudi. 2019. Prestige-Biased Social Learning: Current Evidence and Outstanding Questions.” Palgrave Communications 5 (1): 1–12.
Kirman, Alan. 1992. Whom or What Does the Representative Individual Represent? The Journal of Economic Perspectives 6 (2): 117–36.
———. 2010. “Learning in Agent Based Models.”
Krikorian, Gaëlle, and Amy Kapczynski. 2010. Access to knowledge in the age of intellectual property. New York; Cambridge, Mass.: Zone Books ; Distributed by the MIT Press.
Lakatos, Imre. 1980. The Methodology of Scientific Research Programmes: Volume 1 : Philosophical Papers. Cambridge University Press.
Laland, Kevin N. 2004. Social Learning Strategies.” Animal Learning & Behavior 32 (1): 4–14.
Leyton-Brown, Kevin, Mausam, Yatin Nandwani, Hedayat Zarkoob, Chris Cameron, Neil Newman, and Dinesh Raghu. 2022. Matching Papers and Reviewers at Large Conferences,” February.
Lindsey, D. 1988. Assessing Precision in the Manuscript Review Process: A Little Better Than a Dice Roll.” Scientometrics 14 (1): 75–82.
Maxwell, N. 2009. What’s wrong with science? Edited by L. S. D. Santamaria. Sublime, no. 17/09: 90–93.
McCook, Alison. 2017. Meet PubPeer 2.0: New Version of Post-Publication Peer Review Site Launches Today.” Retraction Watch (blog).
Medawar, Peter B. 1969. Induction and Intuition in Scientific Thought. American Philosophical Society Philadelphia.
———. 1982. Pluto’s Republic. Oxford University Press.
———. 1984. The Limits of Science. Harper & Row.
Merali, Zeeya. 2010. Computational Science: Error.” Nature 467: 775–77.
Merrifield, Michael R, and Donald G Saari. 2009. Telescope Time Without Tears: A Distributed Approach to Peer Review.” Astronomy & Geophysics 50 (4): 4.16–20.
Nature Editors: All Hat and No Cattle.” 2016, December.
Nguyen, C. Thi. 2020. Cognitive Islands and Runaway Echo Chambers: Problems for Epistemic Dependence on Experts.” Synthese 197 (7): 2803–21.
Noorden, Richard van. 2013. Open Access: The True Cost of Science Publishing.” Nature 495 (7442): 426–29.
Post, Daniel J. van der, Mathias Franz, and Kevin N. Laland. 2016. Skill Learning and the Evolution of Social Learning Mechanisms.” BMC Evolutionary Biology 16 (1): 166.
Post-Publication Criticism Is Crucial, but Should Be Constructive.” 2016. Nature News 540 (7631): 7.
Potts, Jason, John Hartley, Lucy Montgomery, Cameron Neylon, and Ellie Rennie. 2016. A Journal Is a Club: A New Economic Model for Scholarly Publishing.” SSRN Scholarly Paper ID 2763975. Rochester, NY: Social Science Research Network.
Ragone, Azzurra, Katsiaryna Mirylenka, Fabio Casati, and Maurizio Marchese. 2013. On Peer Review in Computer Science: Analysis of Its Effectiveness and Suggestions for Improvement.” Scientometrics 97 (2): 317–56.
Rekdal, Ole Bjørn. 2014. Academic Urban Legends.” Social Studies of Science 44 (4): 638–54.
Ridley, J, N Kolm, R P Freckelton, and M J G Gage. 2007. An Unexpected Influence of Widely Used Significance Thresholds on the Distribution of Reported P-Values.” Journal of Evolutionary Biology 20: 1082–89.
Ritchie, Stuart. 2020. Science Fictions: How Fraud, Bias, Negligence, and Hype Undermine the Search for Truth. First edition. New York: Metropolitan Books ; Henry Holt and Company.
Rzhetsky, Andrey, Jacob G. Foster, Ian T. Foster, and James A. Evans. 2015. Choosing Experiments to Accelerate Collective Discovery.” Proceedings of the National Academy of Sciences 112 (47): 14569–74.
Schimmer, Ralf, Kai Karin Geschuhn, and Andreas Vogler. 2015. Disrupting the subscription journals’ business model for the necessary large-scale transformation to open access.”
Sekara, Vedran, Pierre Deville, Sebastian E. Ahnert, Albert-László Barabási, Roberta Sinatra, and Sune Lehmann. 2018. The Chaperone Effect in Scientific Publishing.” Proceedings of the National Academy of Sciences 115 (50): 12603–7.
Sen, Amartya K. 1977. Rational Fools: A Critique of the Behavioral Foundations of Economic Theory.” Philosophy and Public Affairs 6: 317–44.
Shah, Nihar B. 2022. Challenges, Experiments, and Computational Solutions in Peer Review.” Communications of the ACM 65 (6): 76–87.
Shah, Nihar B, Behzad Tabibian, Krikamol Muandet, and Isabelle Guyon. 2016. “Design and Analysis of the NIPS 2016 Review Process,” 34.
Smith, Richard. 2006. Peer Review: A Flawed Process at the Heart of Science and Journals.” Journal of the Royal Society of Medicine 99 (4): 178–82.
Solomon, David J. 2007. The Role of Peer Review for Scholarly Journals in the Information Age.” Journal of Electronic Publishing 10 (1).
Spranzi, Marta. 2004. Galileo and the Mountains of the Moon: Analogical Reasoning, Models and Metaphors in Scientific Discovery.” Journal of Cognition and Culture 4 (3): 451–83.
Stove, David Charles. 1982. Popper and After: Four Modern Irrationalists. Pergamon.
Suppes, Patrick. 2002. Representation and Invariance of Scientific Structures. CSLI Publications.
Thagard, Paul. 1993. “Societies of Minds: Science as Distributed Computing.” Studies in History and Philosophy of Modern Physics 24: 49.
———. 1994. “Mind, Society, and the Growth of Knowledge.” Philosophy of Science 61.
———. 1997. “Collaborative Knowledge.” Noûs 31 (2): 242–61.
———. 2005. “How to Be a Successful Scientist.” Scientific and Technological Thinking, 159–71.
———. 2007. Coherence, Truth, and the Development of Scientific Knowledge.” Philosophy of Science 74: 28–47.
Thagard, Paul, and Abninder Litt. 2008. “Models of Scientific Explanation.” In The Cambridge Handbook of Computational Psychology. Cambridge: Cambridge University Press.
Thagard, Paul, and Jing Zhu. 2003. “Acupuncture, Incommensurability, and Conceptual Change.” Intentional Conceptual Change, 79–102.
Thurner, Stefan, and Rudolf Hanel. 2010. “Peer-Review in a World with Rational Scientists: Toward Selection of the Average.”
Vazire, Simine. 2017. Our Obsession with Eminence Warps Research.” Nature News 547 (7661): 7.
Whitehurst, Grover J. 1984. Interrater Agreement for Journal Manuscript Reviews.” American Psychologist 39 (1): 22–28.
Wible, James R. 1998. Economics of Science. Routledge.
Xiao, Yuanzhang, Florian Dörfler, and Mihaela van der Schaar. 2014. Incentive Design in Peer Review: Rating and Repeated Endogenous Matching.” arXiv:1411.2139 [Cs], November.
Xu, Yichong, Han Zhao, and Xiaofei Shi. n.d. “Mechanism Design for Paper Review,” 9.
Yarkoni, Tal. 2019. The Generalizability Crisis.” Preprint. PsyArXiv.