The RG begins with a concern about the theoretical orientation of IPE and then highlights the consequences of this theoretical orientation for empirical work. Here, I reverse the logic and highlight the empirical manifestation of the problem using statistical theory and then work back to the theoretical problem from there.
Assume a system that is constituted by n units over which we collect data. Assume further that two distinct data generating processes (“DGPs”) coexist in this system. DGP ‘A’ generates observations for each unit that are independent of one another. DGP ‘B’ generates observations for each unit that are not independent of one another. The relative frequency of A and B in the system is unknown.
Assume that the only statistical tool employed to analyze observations collected on the units generates unbiased estimates of the effect of x on y if and only if the observations are generated by DGP 'A.' Statistical analysis of the data collected on this system will thus generate biased estimates at a rate proportional to the relative frequency of DGP 'B' in the system.
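This logic is easy to demonstrate by simulation. Below is a minimal numpy sketch: DGP 'B' is modeled as a spatial lag process in which each unit's outcome feeds back on its neighbors' outcomes, and the average OLS estimate drifts away from the true effect (beta = 1) as the share of DGP 'B' rises. All of the particulars here--the ring-shaped weight matrix, the strength of the feedback, the spatial clustering of x--are illustrative assumptions, not anyone's estimate of the real global economy.

```python
import numpy as np

rng = np.random.default_rng(0)
n, beta, rho, reps = 100, 1.0, 0.6, 500

# Row-standardized "ring" weight matrix: each unit is linked to two neighbours.
W = np.zeros((n, n))
for i in range(n):
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.5

I = np.eye(n)
M_b = np.linalg.inv(I - rho * W)   # spatial multiplier under DGP 'B'
M_x = np.linalg.inv(I - 0.5 * W)   # makes x itself spatially clustered

def ols_slope(x, y):
    x, y = x - x.mean(), y - y.mean()
    return (x @ y) / (x @ x)

for p in (0.0, 0.25, 0.5, 0.75, 1.0):   # relative frequency of DGP 'B'
    estimates = []
    for _ in range(reps):
        x = M_x @ rng.normal(size=n)
        eps = rng.normal(size=n)
        if rng.random() < p:             # DGP 'B': outcomes feed back on one another
            y = M_b @ (beta * x + eps)
        else:                            # DGP 'A': outcomes independent
            y = beta * x + eps
        estimates.append(ols_slope(x, y))
    print(f"share of DGP 'B' = {p:.2f} -> mean OLS slope = {np.mean(estimates):.3f}")
```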
Within this framework, therefore, the problem the RG identifies is that OEP scholarship published in IO and APSR between 1996 and 2006 (and thus at the center of mainstream American IPE for the last 15-20 years) has relied upon statistical techniques (the general linear model implemented in a time-series cross-section, or TSCS, framework) that generate unbiased estimates if the observations for each country are independent of one another. In fact, however, the observations often are not independent of one another. Hence, the estimates these articles report are biased at a rate proportional to the relative frequency of DGP 'B' in the contemporary global economy. We do not know the frequency of DGP 'B,' but we suspect that it increases with the density of interaction among the units--a density which we call interdependence.
The RG develops this argument as a problem of theory rather than as a statistical problem. Read my discussion of Herbert Simon (RG pages 317-19) and then map that discussion back onto the discussion here of DGPs. Here is the concise version.

Simon suggests that the denser the web of interactions that link units to one another in a complex system, the more the outcomes in each unit are driven by systemic mechanisms and the less they are driven by unit-specific mechanisms. Hence, as the density of cross-unit relationships increases, the ability to theorize about the units independently of the system diminishes. Simon's argument is thus a theoretical analog to the two DGPs highlighted above, though he conceives of the problem in continuous rather than discrete terms.
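That continuum can be made concrete with a toy calculation (this is an illustration, not Simon's own formalism; rho stands in for the density of cross-unit interaction). As the coupling rises, the share of each unit's outcome variance traceable to its own, unit-specific shock falls, and the systemic share rises.

```python
import numpy as np

n = 100
W = np.zeros((n, n))
for i in range(n):
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.5  # row-standardized ring

# y = (I - rho*W)^(-1) @ eps: each unit's outcome mixes its own shock with
# everyone else's. The diagonal of the squared multiplier gives the share
# of each unit's outcome variance traceable to its own shock.
for rho in (0.0, 0.3, 0.6, 0.9):
    M = np.linalg.inv(np.eye(n) - rho * W)
    contrib = M ** 2                   # variance contribution of each shock
    own_share = np.diag(contrib) / contrib.sum(axis=1)
    print(f"coupling rho = {rho:.1f}: unit-specific share of variance = {own_share.mean():.2f}")
```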
The "complex" characterization in Simon's conception of systems is an important one; in a complex system, "more is different." This means that one cannot understand how a complex system works by studying the units that constitute it, and one cannot understand the units by examining them (singly or in large groups) as if they were independent of one another or of the system. A complex system is irreducible; one must study it as a system. Complex systems are thus characterized by DGP 'B' and therefore must be modeled empirically as systems.
The RG focuses on the theory problem rather than on its statistical manifestation for a number of reasons.
- The problem is not a statistical problem because statistical solutions to (some of) the challenges posed by dependence among observations exist (or might be fashioned). We have spatial regression, ERGMs, and latent space techniques. Consequently, the problem isn't necessarily that we lack statistical solutions (though problems do exist for which we currently lack solutions); the problem is that until quite recently we rarely implemented them (or even tested whether we needed to). These techniques are now being applied with greater frequency (see, e.g., Cao for a recent application; see Nexon on the network books by Zeev Maoz and Miles Kahler). I note some of these statistical solutions in the conclusion (RG, 334-5), and I sketch one of them just after this list.
- The problem is a theory problem because in order to implement the appropriate statistical solutions, one needs theory that recognizes the potential importance of DGP 'B'.
- OEP as a theoretical enterprise doesn't encourage us to recognize the potential importance of DGP 'B'. Consider David Lake's characterization of OEP. "OEP begins with individuals, sectors, or factors of production as the units of analysis and derives their interests over economic policy from each unit's position within the international economy. It conceives of domestic political institutions as mechanisms that aggregate interests (with more or less bias) and structure the bargaining of competing societal groups. Finally, it introduces, when necessary, bargaining between states with different interests. Analysis within OEP proceeds from the most micro- to the most macro-level in a linear and orderly fashion, reflecting an implicit uni-directional conception of politics as flowing up from individuals to interstate bargaining" (Lake 2009, 225).
- The core assumption here is that every sub-system can be examined in isolation from the rest (this is what I mean by methodological "reductionism," by the way). The assumption applies to vertical disaggregation (we don't need to factor in domestic institutional structure to understand individual preferences) and to horizontal disaggregation (we don't need to factor in bargaining between states to understand domestic aggregation and policy outcomes).
- In Simon’s terms, OEP assumes the international system is nearly decomposable. In the language of statistical theory, OEP assumes the system contains a single DGP: DGP 'A'. In practice, this assumption has led to the modal empirical research design in which observations are assumed to be independent of one another. The implication is that this approach generates biased estimates in proportion to the relative frequency of DGP 'B' in the international economy.
- If the prevailing theory held that the global political economy was a complex system in which developments in one sub-system were dependent upon developments in other sub-systems, vertically and horizontally, then we would be more inclined to design research that incorporated those relationships and thus less likely to implement statistical techniques that assume observations are independent.
- In short, it is a theory problem not a statistical problem because our theory drives our choice of statistical tool. We need theory that encourages us to think about the potential presence of DGP 'B'. We don't presently have such theory.
- Theory could usefully move beyond classical IR thinking that remains wedded to the ontology of Newtonian mechanics.
- IPE could make this move by drawing a bit more from complexity science than it has done to date (yes, I know, Axelrod and Cederman; but also, I think one could count on one hand the number of IPE articles on the international financial system that cite Sornette or Gabaix).
- One way to begin moving down this path is to draw on the science of complex networks as potentially useful theoretical models of the international system.
- We have a paper forthcoming in Perspectives on Politics that applies this approach to the international financial system. The paper explores how the network topography of international financial relationships shapes the stability of the global financial system (the global spread of local crises; the stability of the topography in the face of changes in the underlying distribution of economic "power").
- Some of Kindred's dissertation moves in this direction by applying network models to the global financial system, but I will let him write about that when he is ready to do so.
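As promised above, here is a minimal sketch of one statistical solution appropriate to DGP 'B': spatial two-stage least squares for the spatial lag model, in which the endogenous term Wy is instrumented with the spatially lagged exogenous variables Wx and W^2x. The instrument choice follows Kelejian and Prucha; this is a self-contained illustration, and production-grade implementations exist in packages such as PySAL's spreg.

```python
import numpy as np

def spatial_2sls(y, x, W):
    """Estimate y = const + beta*x + rho*(W @ y) + eps.
    W @ y is endogenous under DGP 'B', so it is instrumented with the
    spatially lagged exogenous variables W @ x and W @ W @ x."""
    n = len(y)
    Z = np.column_stack([np.ones(n), x, W @ y])             # regressors
    H = np.column_stack([np.ones(n), x, W @ x, W @ W @ x])  # instruments
    P = H @ np.linalg.solve(H.T @ H, H.T)                   # projection onto H
    return np.linalg.solve(Z.T @ P @ Z, Z.T @ P @ y)        # [const, beta, rho]
```

Run on data simulated from DGP 'B' in the first sketch above (y = M_b @ (beta * x + eps)), this returns a slope near the true beta = 1 where naive OLS did not. The point, again, is that one only reaches for such a tool if one's theory admits the possibility of DGP 'B' in the first place.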
Thanks for listening.
4 comments:
Social network analysis may help get around this problem to an extent if I understand it correctly.
It's also curious that political science types don't use structural equation modelling (SEM) more, since it has the potential to deal with recursive relationships, IVs in turn being DVs, and so forth--although there are still quite a few methodological limitations. Still, SEM is better able to address the issues of causal order that Herbert Simon identified.
I think the point is that we have the appropriate statistical tools. We just don't have the theoretical imagination.
Nor the imagination in practice.
Sorry, couldn't resist.
Har.