EconTalk host Russ Roberts has made no secret of his misgivings about high-level statistical analysis. So it’s no surprise that his skepticism is brought to bear in his interview this week with Columbia University’s Andrew Gelman. However, Roberts magnanimously starts the conversation by wondering aloud whether he’s gone too far in his skepticism. Maybe there are indeed things we can learn, and that we could not learn otherwise, via data analysis.
Gelman, a statistician, suggests that reliance on statistical significance is answering the wrong question… There is an extended discussion on the extent to which “p-hacking” is a problem in statistical research, as well as a fascinating thread on the prevalence of “priming.” (At the end of the conversation, Roberts refers to Brian Nosek’s replication project as “God’s work.”)
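To see why p-hacking worries statisticians like Gelman, here is a minimal simulation sketch (my own illustration, not from the episode). It assumes only the standard fact that, under a true null hypothesis, p-values are uniformly distributed, so a researcher who tests many outcomes and reports whichever clears p < 0.05 will frequently "find" an effect that isn't there:

```python
import random

random.seed(0)

def run_study(n_outcomes=20, alpha=0.05):
    # Under the null hypothesis, each test's p-value is uniform on [0, 1].
    # Testing n_outcomes and reporting any p < alpha mimics a researcher
    # hunting through many specifications for a "significant" result.
    pvals = [random.random() for _ in range(n_outcomes)]
    return any(p < alpha for p in pvals)

trials = 10_000
hits = sum(run_study() for _ in range(trials))
print(f"Share of null studies with a 'significant' finding: {hits / trials:.2f}")
```

Analytically the share should be about 1 − 0.95²⁰ ≈ 0.64: with twenty tries, a nominal 5% error rate becomes a roughly two-in-three chance of a publishable false positive.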
For me, though, the real point of the conversation is the big questions it raises. Roberts, about halfway through, genuinely asks, “So, now what?” Are we to discard all data analyses and resort once again to pure theory? Can statistical analyses ever avoid becoming ideological cudgels employed to beat down one’s opponents? Should we reconsider the place of social science in policy altogether, or even what counts as social science? Is it enough to rely on your “gut” and be honest about it, as Roberts suggests?
These are just some of the questions I’m left thinking about after this week’s conversation. I’m not really comforted by Gelman’s contention that things would be better if only people had a better understanding of what statistical significance does (and does not) convey. I’m even less optimistic that more social scientists will go Gelman’s route and endeavor to better integrate theory into their data modeling. But I always aspire to be proven wrong…
READER COMMENTS
ScottA
Mar 24 2017 at 6:31pm
Psyched to listen to that episode, but it’s worth commenting on one part of the above: the “back to pure theory vs. stats” framing is a flawed dichotomy.
The answer is to accept that we can’t know what we can’t measure. This has a few implications. First, too many social scientists (economists are particularly bad offenders) spend their careers analyzing data collected by someone else, with methods they don’t know, incorporating errors, imputations, and assumptions they don’t understand, and using techniques that may or may not be valid. It’s one of my favorite ironies that empirical public choice typically relies on government-produced data to draw its conclusions. Good thing the theory apparently doesn’t apply to the data-generating process… There are some clever exceptions to this point, but still.
Second, the amount of academic energy spent analyzing data is astronomically greater than the energy spent collecting and verifying it. That, I think, is the great need right now: we have too much bad data and not nearly enough good data.
Last, some things simply can’t be studied well with the data we have. A common Russ example: there has been only one Great Depression, and only a handful of significant recessions, in the US during the era of data availability. That isn’t enough to go on if you’re a macroeconomist who refuses to consider any economy smaller than a nation-state. Either stop trying, or at least look at every nation-state, not just the US; there’s no good reason for a macroeconomic study to focus on the US alone.
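The small-sample point above can be put in numbers. A minimal sketch, assuming a purely hypothetical setup: each severe downturn yields one noisy observation of some policy effect, measured in percentage points with a standard deviation of 4 (an invented figure for illustration). The width of a 95% confidence interval then shrinks only as 1/√n:

```python
import math

sigma = 4.0  # hypothetical per-observation noise, in percentage points

for n in (2, 10, 100):
    se = sigma / math.sqrt(n)       # standard error of the mean
    half_width = 1.96 * se          # approximate 95% CI half-width
    print(f"n={n:>3}: estimate ± {half_width:.2f} points")
```

With n = 2 severe US downturns the interval spans more than ±5 points, swallowing almost any plausible effect size; only pooling across many economies (or many episodes) narrows it to something informative.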