A reminder not to be overly impressed when presented with statistically significant coefficients, from FiveThirtyEight.com.
Economics as Bullshit Detection
In separate blog posts, Russ Roberts and John Cochrane have called for humility on the part of economists. Asking “What do economists know?”, Roberts and Cochrane point out—correctly—that economics is not as strong on quantification as some economists and many pseudo-economists pretend, and as is often expected of economists. Economics is not the same as applied statistics, although the latter can help clarify, at least to some extent, the empirical relevance of economic theories....
Doubts about Empirical Research
The Economist reports on research by Paul Smaldino and Richard McElreath indicating that studies in psychology, neuroscience, and medicine have low statistical power (the probability of correctly rejecting a false null hypothesis). If, nevertheless, almost all published studies report significant results (i.e., rejections of null hypotheses), this is suspicious. Furthermore, Smaldino and McElreath’s research suggests that the process of replication, by which published results are tested...
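As a rough illustration of what statistical power means, a small Monte Carlo sketch can estimate it directly (the effect size, sample size, and choice of test below are illustrative assumptions, not figures from the study): it counts how often a two-sided z-test at the 5% level rejects H0: μ = 0 when the data actually come from N(0.5, 1).

```python
import math
import random
import statistics

def simulated_power(effect=0.5, n=20, crit_z=1.96, trials=5000, seed=42):
    """Monte Carlo estimate of the power of a two-sided z-test of H0: mu = 0
    (known sigma = 1) when samples of size n are drawn from N(effect, 1).
    All parameter values are illustrative."""
    rng = random.Random(seed)
    rejections = 0
    for _ in range(trials):
        sample_mean = statistics.fmean(rng.gauss(effect, 1) for _ in range(n))
        # Reject H0 when the standardized sample mean exceeds the critical value.
        if abs(sample_mean) * math.sqrt(n) > crit_z:
            rejections += 1
    return rejections / trials
```

With these illustrative numbers the estimated power comes out near 0.6 — meaning roughly 40% of true effects of this size would be missed, which is the kind of shortfall the research describes.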
Research Funding in Economics
In the Journal of Economic Perspectives, Tyler Cowen and Alex Tabarrok question whether NSF funds are allocated efficiently. They write: First, a key question is not whether NSF funding is justified relative to laissez-faire, but rather, what is the marginal value of NSF funding given already existing government and nongovernment support for economic research? Second, we consider whether NSF funding might more productively be shifted in various directions that remain within the legal and...
GRIM Test
The Economist reports on the GRIM test, a simple check of the plausibility of statistics in published research, and notes that many widely cited psychology papers failed it.
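The idea behind the GRIM (Granularity-Related Inconsistency of Means) test is easy to sketch: if n observations are integers (as with Likert-scale responses), their mean must be a multiple of 1/n, so a reported mean can be checked for consistency with the reported sample size. A minimal sketch (the function name and the two-decimal rounding convention are my assumptions):

```python
def grim_consistent(mean, n, decimals=2):
    """Return True if `mean`, reported to `decimals` places, could be the
    mean of n integer-valued observations (i.e., some total/n rounds to it).
    Note: Python's round() uses banker's rounding; papers may round half up."""
    target = round(mean, decimals)
    nearest_total = round(mean * n)
    # Check the integer totals adjacent to the implied total.
    return any(round(t / n, decimals) == target
               for t in (nearest_total - 1, nearest_total, nearest_total + 1))
```

For example, with n = 28 integer responses, a reported mean of 5.18 is attainable (145/28 ≈ 5.179), but 5.19 is not — no integer total divided by 28 rounds to 5.19.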
Outcome Switching
The Economist reports on “outcome switching”—promoting empirical evidence collected to test one specific hypothesis (which failed) as support for a different hypothesis. Outcome switching is a good example of the ways in which science can go wrong. This is a hot topic at the moment, with fields from psychology to cancer research going through a “replication crisis”, in which published results evaporate when people try to duplicate them.