Placing economics on the science spectrum
Where does economics fit on the spectrum of sciences? ‘Hard scientists’ argue that the subjectivity of economics research differentiates it from biology, chemistry, or other disciplines that require strict laboratory experimentation. Meanwhile, many economists try to set their field apart from the ‘social sciences’, lumping sociology, psychology, and the like into a quasi-mathematical abyss reserved for ‘touchy-feely’ subjects deemed unable to match the rigor required of economics research. However, the dismal science’s poor (and thin) replication record does little to lend credence to the claim that economics is more rigorous than the other ‘social sciences’.
The blogosphere and media are abuzz with news of the most recent case of flawed economics research. Herndon, Ash, and Pollin’s replication study uncovered coding errors, unconventional weighting, and selective exclusion of data in Reinhart and Rogoff’s influential work on the effects of national debt on GDP growth. Krugman explains here and here how these errors undermine the original finding of a causal relationship between high debt and slow growth. As others (Fox here; Konczal here; Krugman here and here) debate the validity of the original study, a bigger question should be asked of the field in general: When will we economists rigorously hold ourselves accountable for what we publish?
This latest controversy shines a bright light on a major difference between economics and the ‘hard sciences’. The scientific method demands replication and accountability from ‘hard science’ research. Replication, or the independent reexamination of data and code to ensure that original results are reproducible, deserves an equally important place in economics. Economists directly advise policymakers, who make life-altering decisions based on the results of sometimes fallible research. Shouldn’t these researchers be held to the same replication standard?
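To make the idea concrete, here is a minimal sketch of what such a reexamination can look like in practice: recomputing a reported summary statistic from the data and code an author shares, and comparing it with the published figure. Everything in the sketch – the observations and the ‘reported’ number – is hypothetical and purely illustrative; it is not drawn from the Reinhart and Rogoff dataset.

# A toy replication check (Python). The data and the reported figure are invented.
observations = [  # hypothetical (debt-to-GDP %, real GDP growth %) country-year pairs
    (95.0, 1.2), (102.0, -0.3), (110.0, 2.1),
    (91.0, 0.8), (120.0, 1.5), (98.0, 2.4),
]
reported_mean_growth = 1.3  # hypothetical figure "published" in the original paper

# Recompute average growth for the high-debt (>90% of GDP) group from the shared data
high_debt_growth = [growth for debt, growth in observations if debt > 90.0]
recomputed_mean = sum(high_debt_growth) / len(high_debt_growth)

# A replication flags any gap between the recomputed and reported numbers
print(f"recomputed: {recomputed_mean:.2f}%, reported: {reported_mean_growth:.2f}%")
print(f"discrepancy: {abs(recomputed_mean - reported_mean_growth):.2f} percentage points")

The point is not the arithmetic but the workflow: without the underlying data and code, even a check this simple is impossible.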
Every couple of years a major economic finding is questioned because of a coding error, a limited data range, or an inaccurate dataset. In response to these events, there is usually a call for increased data sharing and replication in economics (see the famous Journal of Money, Credit, and Banking example). But eventually the commotion dies down, the attention fades, and we return to the insular world of empirical economics research.
There are some notable exceptions to the dearth of data documentation and replication in economics. The American Economic Review, for example, requires authors to publish data and code along with their research (with mixed results). But the list of journals that make submitting data and code a criterion for publication is a fairly short one – beyond the AER, Econometrica and The American Journal of Agricultural Economics immediately come to mind. And unearthing raw data that have not already gone through a (typically undocumented) cleaning process is still extraordinarily rare. Until providing raw data and code becomes a publication requirement, replication will remain the exception rather than the rule in economics.
Efforts are underway to increase the amount of replication research conducted on influential, innovative, and controversial economic studies. The University of Göttingen’s Replication in Economics program warehouses a large number of replication studies in one repository. The International Initiative for Impact Evaluation’s Replication Programme (small plug for the currently open Replication Window) incentivizes researchers to begin filling the replication gap in development economics by reexamining existing studies. But unless economics broadly embraces the importance of replication, we will continue to stumble from one occasional retraction to the next, never reaching the standards required of the ‘hard sciences’.