Of particular interest is how they code their dependent variable, "transparency": they go to the World Development Indicators database and count each country's missing data points, treating missingness as evidence of opacity. This is a clever way to get at a concept that is notoriously difficult to measure.
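Just to make the coding rule concrete, here is a minimal sketch of the counting idea; the toy panel, indicator names, and simple tally below are my own stand-ins, not the authors' actual WDI extract or aggregation.

```python
# Toy WDI-style panel: count how many indicators a country leaves blank
# in a given year; more blanks = more opaque. Purely illustrative data.
import numpy as np
import pandas as pd

wdi = pd.DataFrame({
    "country":      ["A", "A", "B", "B"],
    "year":         [2000, 2001, 2000, 2001],
    "gdp_growth":   [2.1, np.nan, 3.4, 3.1],
    "inflation":    [np.nan, np.nan, 1.2, 0.9],
    "unemployment": [5.0, np.nan, 4.4, np.nan],
})

indicators = ["gdp_growth", "inflation", "unemployment"]
opacity = (wdi.set_index(["country", "year"])[indicators]
              .isna()          # True where a value is unreported
              .sum(axis=1)     # number of missing indicators per country-year
              .rename("missing_indicators"))
print(opacity)
```

Clever as the measure is, the project raises a few questions that I, along with others, posed in the Q&A: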
1) Once countries start reporting, they rarely stop. So it may be that "bad-ass" dictators (to borrow from other Rosendorff work and from Vreeland's concise explanation of the term) simply refuse to report their data. But it also might be that leaders find ways to fudge the numbers instead (and if you think the WDI has a good vetting process, look at its economic indicators for Greece over the past few years). That, combined with the other reasons states fail to report (mainly issues of capacity) and the path-dependent nature of the decision to report, makes me wonder whether we can really use missing data as a proxy for "transparency."
2) The much larger issue stems from the authors' conclusions, in which they argue that their findings on missing data imply that political scientists should use multiple imputation to correct for the biases it creates. But whether multiple imputation helps depends on why the data are missing. In fact, if their empirics hold up, then in many situations we face non-ignorable missingness, and multiple imputation won't be able to help us. More specifically, if the probability that a covariate is missing depends on the dependent variable (or, as their own argument implies, on the very values that go unreported), then multiple imputation will produce systematically biased imputations, as the simulation sketched below suggests. So the moral is that multiple imputation can work in some situations but not in others.
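To see why non-ignorable missingness defeats the fix, here is a minimal simulation sketch; the data-generating process, the missingness rule, and the single-shot regression imputation stand in for a full multiple-imputation routine and are my own invention, not anything from the paper.

```python
# Minimal sketch of non-ignorable (MNAR) missingness: a covariate is more
# likely to go unreported the worse its value, so an imputation model fit
# on the reported cases systematically overstates the values it fills in.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

x = rng.normal(0, 1, n)                      # covariate (the eventual IV)
y = 1.0 + 2.0 * x + rng.normal(0, 1, n)      # dependent variable

# The lower (worse) the covariate, the more likely it goes unreported.
p_missing = 1 / (1 + np.exp(3 * x))
missing = rng.random(n) < p_missing
obs = ~missing

# Regression imputation from the reported cases: predict x from y.
# (Proper MI adds noise and repeats the draw, but it fits the same kind
# of model on the same selectively reported cases, so the bias remains.)
slope, intercept = np.polyfit(y[obs], x[obs], 1)
x_imputed = np.where(missing, intercept + slope * y, x)

print("true mean of x among missing cases:    %.2f" % x[missing].mean())
print("imputed mean of x among missing cases: %.2f" % x_imputed[missing].mean())
# The imputations are systematically too optimistic: the model never sees
# the cases whose very absence depends on how bad their values were.
```

If the missingness instead depended only on things we observe (the missing-at-random case), an imputation model that conditions on those observed variables would do fine; it is precisely the opaque-governments-hide-bad-numbers story that breaks it.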
Overall, though, a very entertaining and intellectually interesting project and presentation. Here's a link to Vreeland's blog as well.