Tuesday, February 26, 2013

UNC Everywhere


One of my fellow graduate students, John Cluverius (whom I still owe some drinks for proctoring an exam for me last semester... I haven't forgotten, John), has a post at the Monkey Cage on the American public's support for social spending across time. Using the "policy mood" data created by UNC Prof Jim Stimson, he shows a different result from the one being bandied about in the press recently. It's an interesting post. Check it out.

Saturday, February 23, 2013

Cultural Amnesia: We Must Invest in STEM or We Will Lose the Future!


[E]ven in the lower grades I found evidence of a much stronger emphasis on science than we give in the US. It is quite probable that within a generation Russia may have twice as many well-trained scientists as we. Russian resources are separated by vast distances. Her climate is exceedingly difficult. Transportation problems will always be most serious. But much of her soil is rich in the elements which when combined with a severe climate produce a most vital type of human being.
This could have been said by almost any politician in the US today, with the possible substitution of China for Russia. But, via Brad DeLong, it was actually said by Henry Wallace in 1952. There was a lot of threat inflation concerning the USSR in the 1950s and 1960s, and much of it was related to the knowledge economy and how that translated into national security ("missile gap", etc.), combined with a sense that the US had underestimated the USSR in the 1940s. Not all of this was unreasonable, as Tony Judt recounted in a wonderful 1997 essay in The New Republic on the Chambers/Hiss affair (it's collected in Reappraisals but I can't find it online), but it got blown out of all proportion: without Hiss (and Chambers' special pumpkin) there wouldn't have been McCarthyism, and in the end McCarthyism mostly served McCarthy. In a similar vein, we now know that many of the principles that dominated US foreign policy from 1960-1990 -- brinksmanship, domino theory, MAD -- were simply unnecessary dangers, as the Soviet Union was not nearly as capable as the US leadership imagined it was.

Is there a lesson to be learnt? Towards the end of the Cold War we heard a lot about institutional superiority -- "end of history" and all that. Some of it was hyperbole, but there is some shred of truth in it as well. We genuinely believe that some form of democratic capitalism produces better outcomes than other systems, and we genuinely believe that societies which have norms of liberté, égalité, et fraternité, even if they are imperfectly applied, fare better than those which do not. Maybe we should take our own rhetoric seriously, for once: maybe we should worry a lot less about scores on science exams of 13 year old Chinese, and focus instead on translating the principles of liberté, égalité, et fraternité into practice.

P.S. the whole speech by Henry Wallace linked above is well worth reading, and could possibly be useful in an Introduction to International Relations course. There's a lot packed into a short document: commitment problems, asymmetric information, perception/misperception, etc.

UPDATE: Thanks to Noah Smith for driving traffic here. For new readers I should note that, contrary to what Smith wrote, as a quantitative social scientist I have nothing against math and science education. To say the least. I do have an issue with politicizing math and science education as a national security imperative, however, particularly when done in a zero-sum "either we win the future or the Chinese win the future" way. I thought I'd made that clear, but perhaps not.

Friday, February 22, 2013

How the World Works, Redux


Michael Pettis has written "A brief history of the Chinese growth model", which reads quite a lot like James Fallows' classic 1993 essay on Japan's growth model, "How the World Works". Pettis hits many of the same notes: it isn't new, having been advocated by Alexander Hamilton in the US and Friedrich List in Germany; it is focused on enhancing national capabilities as much as improving the actual standards of living of citizens; to that end it prioritizes investment and exports over consumption and imports. There are other similarities as well.


Anyway, I know that Fallows' essay gets assigned in undergrad IPE classes a lot. This could be an interesting/useful update or companion piece.

In other (related?) news, here's a right-up-to-date essay on the economic situation in China from Caixin (sort of a Financial Times of China). It's not very optimistic; the title is "Waiting for a Crisis".

FWIW, Fallows' article on Japan appeared about a year before that country's financial crisis in 1994.

Thursday, February 21, 2013

It Isn't Always Appropriate in Comparative Politics, Either


Jay over at the Dart-Throwing Chimp extends the argument we have been making here and in print about the mismatch between the complexity that characterizes the real-world causal mechanisms we study and the research designs we typically employ to study them. Though we have focused on the implications of this mismatch for studying politics in the global economy, Jay extends the logic to comparative politics. He suggests that it is "impossible to understand persistence and change in national political institutions without thinking about how those institutions are embedded in a larger global context." As he elaborates:

We’re stuck in a complex adaptive system that doesn’t really distinguish between national and international, political and economic, human and natural, and our theories of stability and change in political institutions should take that whole more seriously...The simplifying assumption that states are separable units certainly has its uses, but we shouldn’t conflate that utility with causal relevance. Like maps, all models are simplifications, but those simplifications aren’t useful if they ignore the very causes they’re meant to locate.
Of course, I think Jay is absolutely correct. It makes no sense to speak of the third wave of democratization (or the second or the first waves, for that matter) without recognition that some sort of global process may be at work. It makes no sense to speak of the Arab Spring (or is this the fourth wave?) without recognition that some sort of regional process may be at work. And, as Jay points out, the issue isn't that students of comparative politics fail to recognize these cross-country dependencies, and the issue isn't that we lack statistical tools to address them. The issue is that the modal approach to theorizing is to theorize about states as if they are independent. And so, as a general rule, these dependencies are omitted from theories and empirical models.

It seems that theories and methods are now emerging that allow us to develop and evaluate theories that explicitly incorporate these dependencies. They could be applied fruitfully to comparative political development, as well as to international and comparative political economy.



A BIT (sorry) More on ISDs


A nice discussion of my article with SBD at The National Interest has taken place in comments at the International Economic Law and Policy Blog, mostly centering on the question of investor-state dispute (ISD) clauses in trade deals. A bit wonkish (okay very wonkish), but potentially very important as well.

Mark Kantor, who is affiliated with Georgetown and Columbia Universities, has disagreed with our take on ISDs, which is that a US-EU trade deal could precipitate a general decline in their usage. He raises some very good points; you should read them. He may very well be correct. (Although he's wrong to say that we don't take into account recent US and EU ISD behaviors, including the inclusion of ISDs in the model BITs of the US and EU; in fact we mention that specifically.)

But I'm not quite ready to give up our claim just yet. Via Nathan Jensen, here's a recent report in Columbia FDI Perspectives by Joachim Karl of UNCTAD demonstrating, among other things, the increased costliness to governments (including those of developed economies) of ISDs. One highlight:

Governments face a dilemma. While many governments consider ISDS a key element of international investment protection, ISDS is becoming increasingly risky. For one, governments’ risk of being sued by foreign investors is growing. Second, when a dispute arises, the defence requires enormous resources; if a case is lost, damages can be very high. Third, governments live with an unpredictable arbitration practice without having the legal safety net of an appellate body like in the WTO. Fourth, complex domestic legal issues reaching beyond international investment law are examined by international arbitrators. Fifth, as more disputes are directed against countries with highly developed domestic judicial systems, governments need to ask themselves how positive discrimination of foreign investors in respect of ISDS can be justified.
Karl notes that many countries are in something of a holding pattern regarding ISDs: not ready to do away with them, but not exactly expressing enthusiasm for them either. He also notes that the US is one of the leaders in restrictions to and regulations of ISDs. As such, if the US decides to de-emphasize ISDs it could provide momentum for a more general movement in that direction. Here's part of the crux:
Overall, the existing ISDS system is no longer recognized as an indispensable core part of IIAs. Discontent is not limited to a few developing countries, but has spread to G-20 countries, including some of the BRICs. Further momentum could jeopardize the ISDS system as a whole.
We suggest that, for political reasons, a US-EU FTA/BIT could be part of that momentum if it excludes an ISD clause, and there are good reasons to believe that it might. In fact, that is our argument.

We could be wrong, but it's nice to know that we're not the only ones thinking along these lines.

Against Ceteris Paribus Theories of International Relations, A Prelude


A quick hit from me, as I'm swamped with dissertation work and am also in grading purgatory. But this, via Tyler Cowen, is interesting:

Since 2008, in response to the economic downturn, most big European countries have cut defence spending by 10-15 per cent. The longer-term trends are even more striking. Britain’s Royal Air Force now has just a quarter of the number of combat aircraft it had in the 1970s. The Royal Navy has 19 destroyers and frigates, compared with 69 in 1977. The British army is scheduled to shrink to 82,000 soldiers, its smallest size since the Napoleonic wars. In 1990 Britain had 27 submarines (excluding those that carry ballistic missiles) and France had 17. The two countries now have seven and six respectively.

And yet Britain and France are commonly regarded as the only two European countries that still take defence seriously. The British point out that, even after the current round of cuts, the UK will have the fourth-largest military budget in the world. Britain is also, for the moment, one of only two European nations to meet the Nato target of devoting 2 per cent of gross domestic product to defence – the other is Greece.

The situation in most other European countries is worse – Spain devotes less than 1 per cent of GDP to military spending. And much European military spending goes on pensions or pay, not equipment. The Belgians distinguished themselves in the Libyan campaign of 2011. But about 75 per cent of Belgian military spending now goes on personnel – causing one critic to call the Belgian military “an unusually well-armed pension fund”.
Britain and France now have one aircraft carrier... combined (they share it). What this means is that the US now accounts for 50% of the world's naval power.

Why does this matter? For a while I've been hoping to carve out time for a short article with the working title "Against Ceteris Paribus Theories of International Relations". It is intended as a retort to those who claim, based on stylized comparative statics, that US military spending is essentially wasted and that those funds should therefore be redirected either to shoring up the social welfare state or to tax rebates. (As of now I'm planning on targeting John Quiggin particularly, who has made arguments like these in especially egregious form on numerous occasions, but the scope may broaden, narrow, or shift in some other direction at the point of writing.)

In other words, the article will argue that if the US significantly lowered its military spending the effect would be dynamic, not static: an increase in security dilemmas worldwide, which would be marked by a large and sustained increase in military spending in the non-US world. The net effect could very well be greater global military spending; this would be something of a tragedy on its own, since most military spending does not go to uses which expand human dignity and well-being. But it could be worse if security dilemmas lead to arms races which spiral into conflict.

The core of the argument will be this: take away 50% of the world's naval power... do you think anything else might change? Put into the language of economics, some sectors of the economy are natural monopolies; it just wouldn't be efficient to have 100 different telephone companies putting up poles all over the place. Similarly, from the perspective of global welfare, it is probably more efficient for the US to spend disproportionately on its military. There are downsides to this, of course, similar to market monopolies. Nevertheless, the most efficient outcome is likely for the US to out-spend the rest of the world to such an extent that security dilemmas (and concomitant arms races and conflict) do not result, accepting the negatives that go along with this positive.

Anyway, that paper is roughly 20th in the queue so I'll probably never get around to it.

Monday, February 18, 2013

Outside Options


Apropos of my recent article with SBD in The National Interest is this (via Jonathan Dingell):

UPDATE: There's some good discussion of our article over at the always-excellent International Economic Law and Policy Blog (my go-to source for information on trade disputes and agreements-under-negotiation). (ht: Simon Lester)



When Is Reductionism Not Appropriate in Theory?


In "The Reductionist Gamble" Oatley actually makes a weak argument: we can't know when outcomes we observe are independent from each other (what he calls data-generating process (DGP) "A") and when they are not ("DGP B"). In Oatley's framework, assumptions regarding the DGP may be trivial in many cases, but they are not trivial in all of them. Oatley, in other words, is arguing that our inability to directly observe the data-generating process should make us humble and encourage us to investigate international politics under a range of assumptions regarding the DGP. Instead, IPE -- at least as it is represented in the APSR and IO -- has homed in on one: the Open Economy Politics (OEP) paradigm, which he calls "reductionist" because it assumes independence of observations, both theoretically and methodologically, and because it recommends beginning analysis at the lowest unit level (individuals or firms), bringing in systemic effects only when necessary. Oatley argues, persuasively, that in some cases the assumption of independence will be false, and when it is false the reported empirical results are not reliable.

In my view this argument should be uncontroversial. Oatley is not saying that all OEP research is wrong, nor that most of it is; he's merely saying that OEP contains a critical assumption which ought to be justified more regularly than it is. This is Research Design 101. Since it is not, there is a strong likelihood that some unknown percentage of OEP research has been done under false premises, and is therefore biased in an unknown direction. Oatley re-estimates several empirical models central to the literature, while adding terms which allow for conditional "systemic" effects, and finds that the reported results are called into question.
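What's at stake in the DGP A/B distinction can be shown with a toy simulation (a minimal sketch with entirely hypothetical numbers, not a re-creation of Oatley's models): when a common systemic shock hits every unit in every period and is correlated with the regressor, a pooled estimator that treats observations as independent attributes the systemic effect to the unit-level variable.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 50, 40   # units (say, countries) and time periods
beta = 1.0      # true unit-level effect

def ols_slope(x, y):
    """Pooled bivariate OLS slope, treating every observation as independent."""
    x, y = x.ravel(), y.ravel()
    return np.cov(x, y)[0, 1] / np.var(x, ddof=1)

# DGP A: outcomes really are independent across units and periods
x_a = rng.normal(size=(N, T))
y_a = beta * x_a + rng.normal(size=(N, T))

# DGP B: a common systemic shock u_t hits every unit each period,
# and the regressor is itself correlated with that shock
u = rng.normal(size=T)
x_b = u + rng.normal(size=(N, T))
y_b = beta * x_b + u + rng.normal(size=(N, T))

slope_a = ols_slope(x_a, y_a)  # close to the true beta
slope_b = ols_slope(x_b, y_b)  # biased upward: the systemic term is omitted
print(slope_a, slope_b)
```

Under DGP A the estimate recovers beta; under DGP B the same estimator overstates the unit-level effect because the omitted systemic shock loads onto the regressor. The point is not that pooled OLS is always wrong, only that the independence assumption is doing real work.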

I'm going to make a stronger argument, in speculative form (i.e. I'm trying to be a bit sensational): almost all of the theoretical traditions in IR/IPE suggest DGP A is not a trivial assumption, but most of our research designs go on to assume DGP A anyway. My argument is that, in the typical case, our empirical design does not match our theoretical structure.

Regarding Oatley's post (and article), it might be helpful to think of some of the major theoretical orientations in IPE -- within which mid-level theories are ostensibly embedded -- and place them into different bins: those which assume independence of observations (DGP A) and those which assume non-independence of observations (DGP B). For these purposes, "observations" refers to realizations of some outcome at a given unit of analysis. The outcome could be "amount of trade" or "incidence of war" or "occurrence of a financial crisis". The unit of analysis could be states, firms, or advocacy groups. To a greater or lesser extent, all of the prominent grand theories involve an assumption of non-independence: DGP B. David Lake (and presumably other OEPers) hates isms, so this might not be of interest to them, but it's useful as a starting exercise.


1. Various realist theories suggest that states will modify their behavior in response to the international environment within which they operate. In particular, state behavior is conditional upon the underlying distribution of power in the system. If that is the case, then treating states as if they are realizations of independent and identically distributed observations is not theoretically sound: the behavior of state A is reactive to the behavior of state B, and is contingent upon the underlying distribution of power in the international system. Economic coercion, "redistributive cooperation", the structure of trade, and other outcomes are responsive to interactions between states.

2. Institutionalists argue that the behavior of international actors is modified by the ways in which they interact. Membership in common organizations -- e.g. the WTO -- can have an impact on common outcomes -- e.g. increased dyadic trade -- even after controlling for domestic factors. The US, for example, does not decide how much it should trade based solely on local factors; how much the US trades depends at least partially on its ability to find willing trading partners. This logic is easily extended from dyadic relationships to systemic relationships: the amount that the US trades with France may not be independent of the amount that the US trades with the UK, by virtue of the common membership of the US, UK, and France in organizations which are designed to facilitate trade. In this way, we are starting to understand that even pairs of states are not independent from other pairs of states, much less individual states. Indeed, even the most robust findings in a dyadic context -- such as the democratic peace -- often have trouble holding up in a systemic analysis.

3. Varieties of constructivism also think in terms of DGP B. The identities, beliefs, and actions of agents cannot be understood outside of the social structure within which they exist. For constructivists, interests are not assigned according to immutable materialist Laws of the Cosmos, but emerge via processes of interaction. Thus, it is nearly impossible for constructivists to theorize about social actors and entities as if they existed independently from one another.

4. In its most basic form, which has obviously been complicated in thousands of ways, Marxism holds that the material conditions of society lead to exploitation and imperialism, and that this is a global phenomenon. E.g., the exploitation of labor in the global South is a necessary component of the enrichment of capital in the global North. It is thus impossible to explain the outcomes in the "base" of society -- e.g., employer-employee relations -- without reference to the "superstructure" of society -- its organization of political power. Because capitalism is a global phenomenon, the superstructure is itself global in nature. Therefore, employer-employee relations in Britain are not independent of the British imperial exploitation of the Indian subcontinent. Treating the two as if they were independent misses the entire point of the theory.


5. First wave IPE was concerned with "complex interdependence". The name itself implies that an assumption of independence of observations is not warranted (even though its authors didn't generally argue in those terms), and that patterns of dependency will not be simple or monotonic. There were varying types of interdependence that Keohane and Nye were concerned with. Among them were the non-independence of the economic system from the political system; the impact of transnational nonstate actors in influencing power politics; the effect of international integration on the formation of states' perceptions of interests; the effectual nature of international institutions; etc. There is no contemporary research paradigm centered on complex interdependence (yet), but this rhetoric has colored IPE theory ever since the 1960s. In a recent essay, Robert Keohane professed some nostalgia for the "Old IPE", which focused on complex interdependence, over the new IPE which assumes it away (at least methodologically). He expresses hope for the future by way of diffusion and network analyses, methods which do not assume unit independence:
[T]he null hypothesis that national governments make decisions independently is not sufficient to explain the spread of liberalism or of convergence in certain sectors. Competitive pressures seem to play a major role, and there is some evidence, less clear, that coercion, learning and emulation also are significant factors. In other words, the distinctively international and transnational processes studied by IPE have to be brought back into the picture. 

Moving to theoretical orientations which are often thought of as being "middle range", and thus in the wheelhouse of OEP, we can consider:

6. Theories of globalization posit non-independence in many ways. "Golden straitjacket" and "race to the bottom" models hold that global forces coerce national governments to behave differently than they otherwise would, largely in response to market structures and the nature of strategic interactions with other states as well as market actors. Even careful studies which have complicated these accounts have concluded that policymaking at the national level responds to international developments in strong, if narrow, ways. Other theories of globalization focused on cultural hegemony, homogeneity/"Americanization", and a diffusion of norms and practices across countries all argue that what happens in Country B is not independent of what happens in Country A. That runs directly counter to DGP A, which requires independence.

7. The long tradition of dependency theory obviously requires non-independence. Some read Oatley's "Reductionist Gamble" (and a subsequent article forthcoming in Perspectives on Politics) and thought he was advocating for a return to dependency theory, because of language like "core" and "periphery". That's not unreasonable, although I don't know if Oatley would claim the mantle. Dependency theory has been making a comeback in other corners of mainstream IPE as well.

8. Many developments in trade theory suggest that a focus on domestic factor endowments and political institutions (as exogenously given) is missing a lot of the picture. New trade theory in economics suggests that scale economies play a major role in determining the type and size of trade; quantum trade theory argues that "beachhead costs" and other factors influence how and why governments might invest in trade-generating policies. We've employed gravity models which explicitly assume that the probability of trade relationships forming is not equal across all observations (while only allowing for several forms of dependence, especially proximity and the size of a state's internal market). We're beginning to look at how the structure of trade networks impacts the establishment of future trade. All of the major advancements in economic and political theories of trade over the past three decades have come at the expense of an assumption of independent and identically distributed observations. All of our theoretical improvements, in other words, make it harder and harder to assume DGP A.
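For readers who haven't seen one, the gravity model mentioned above can be sketched in its simplest log-linear form (a hedged illustration on synthetic data; every number and variable name here is hypothetical, not drawn from any study cited in this post): bilateral trade rises with the economic size of both partners and falls with the distance between them, so each dyad's trade depends on characteristics of two countries at once.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500  # hypothetical country pairs

# Hypothetical covariates: log GDP of each partner and log bilateral distance
log_gdp_i = rng.normal(10.0, 1.0, n)
log_gdp_j = rng.normal(10.0, 1.0, n)
log_dist = rng.normal(8.0, 0.5, n)

# Gravity DGP: trade scales with both economies' size and decays with distance
log_trade = (1.0 * log_gdp_i + 1.0 * log_gdp_j
             - 1.0 * log_dist + rng.normal(0.0, 0.5, n))

# Log-linear OLS recovers the size and distance elasticities
X = np.column_stack([np.ones(n), log_gdp_i, log_gdp_j, log_dist])
coef, *_ = np.linalg.lstsq(X, log_trade, rcond=None)
print(coef[1:])  # approximately [1.0, 1.0, -1.0]
```

Even this bare-bones version builds dependence in: the same country's GDP appears in every dyad it belongs to, which is exactly why treating dyads as independent draws is hard to defend.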

9. Transnational financial flows also involve complex processes which make independence a questionable assumption. How do countries get "lumped in" to categories by investors, so that what happens in Thailand has an effect on Indonesia? How does petrodollar recycling lead to a crisis in Brazil? How might US Federal Reserve policy have influenced the Arab Spring? How could a collapse in home prices in the US threaten the existence of the European Union? All of these questions require a theoretical structure which assumes non-independence: DGP B.

I could go on, but I'll stop here. The point is that it's hard to think of a major theoretical orientation -- whether "grand theory" or "mid-range theory" -- in IPE that establishes DGP A. And yet the dominant research paradigm -- open economy politics -- explicitly argues in favor of such an assumption. There is a disjuncture here.

I believe this is why an article like "The Reductionist Gamble" resonates with some and infuriates (or confuses) others. Why should such an article exist in the first place? How did we get to this point? That will be the subject of my next post.

Sunday, February 17, 2013

How Not to Write an Abstract


We've been blogging about trade and trade networks a decent amount lately, so I was interested to see a new NBER working paper titled "Multinational Firms and the Structure of International Trade". I clicked through and read the abstract, which is:

This article reviews the state of the international trade literature on multinational firms. This literature addresses three main questions. First, why do some firms operate in more than one country while others do not? Second, what determines in which countries production facilities are located? Finally, why do firms own foreign facilities rather than simply contract with local producers or distributors? We organize our exposition of the trade literature on multinational firms around the workhorse monopolistic competition model with constant-elasticity-of-substitution (CES) preferences. On the theoretical side, we review alternative ways to introduce multinational activity into this unifying framework, illustrating some key mechanisms emphasized in the literature. On the empirical side, we discuss the key studies and provide updated empirical results and further robustness tests using new sources of data.
Got it? No? Me either. The point of an abstract is to give a precis of the article's scope, method, and findings. This abstract mentions that there are findings ("provide updated empirical results and further robustness tests") but doesn't say what they are. Neither does it mention what empirical design it uses other than that there is a review of literature. It lists a few questions asked by that literature, but does not answer them.

I've read this abstract four or five times now, and all I want to know is what the structure of international trade is, and I can't tell.

I guess I'll actually have to read the thing.

Saturday, February 16, 2013

The Downside of a Currency War


Matt Yglesias has suggested, many times, that there are no downsides to a “currency war”. Here is the most recent example. Now Paul Krugman and Greg Ip are on board. The basic argument is as follows: much of the world needs more monetary stimulus, and currency wars are a form of quasi-coordinated monetary stimulus. Hence, if we start a currency war we’ll all get monetary stimulus and a bunch of national economies will improve simultaneously. According to this view, currency wars not only are not negative-sum (the Barry Eichengreen finding concerning the 1930s devaluations), they aren't even neutral; they're positive-sum.

The problem with this way of thinking is common in a lot of commentary about economics: it refers to global dynamics purely in terms of local effects (if it refers to global dynamics at all). But global dynamics have global effects.

That is, this is a conceptual problem. If we think of the global economy as a single system composed of many different interdependent units, then we would ask what the effect of a currency war would be on the system. There is no reason to think it would be the same everywhere. At the very least, we should think that parts of the system which control systemically important currencies would react differently than parts of the system which do not. If we conceptualize the global economy in terms of the units, and not the interdependencies between them, then we end up only caring about what happens in the U.S. and EU.

Thinking in terms of systemic, rather than just local, effects leads us to the conclusion that there may be some significant downsides to a currency war, although they are likely not to be primarily in the U.S. There is mounting evidence suggesting that the several food crises since 2007 have been a result of ever-expansionary Federal Reserve policy. For example, a 2009 paper suggested:

While the market dynamics during this period are still not well understood, a combination of macroeconomic factors such as the depreciation of the dollar and lower interest rates in the United States...
Krugman considered this view in 2011, and rejected it. He blamed the food crises on climate change. While that may be a contributing factor -- and noting that boosting demand in industrialized societies would only increase carbon emissions, and thus exacerbate the problem -- the role of the Federal Reserve cannot be dismissed so easily:



That's a pretty close association, and even more impressive as it tracks non-linearly.

And now the scholarly evidence is starting to mount. Here's another paper:
Released in July 2008, What’s Driving Food Prices? identified three major drivers of prices—depreciation of the U.S. dollar, changes in production and consumption, and growth in biofuels production.
More recently, David Leblang of the University of Virginia has presented a conference paper arguing that Federal Reserve interventions, through the channel of commodity price volatility, did have a role in the onset of the Arab Spring. I can't find an online version of the paper, but he makes a persuasive argument.

This follows a lot of previous literature suggesting that commodity price volatility is associated with conflict in less-developed areas, and that Fed actions are associated with commodity price movements. In other words, the theoretical story is pretty sound. All you have to do is connect the dots. To connect the dots, you need to think systemically.

So this is the potential downside of a currency war: monetary stimulus in the countries which control major currencies, and volatility everywhere else. This is why the term "currency war" was coined by a Brazilian (and reiterated by a Russian), not a German. In the worst case scenario relatively rich people in what used to be called the global North get richer, while relatively poor people in the global South get poorer, and possibly even face political instability and violence.

I'd like to see what Jay Ulfelder thinks about this line of thinking. In any case, the logic appeals to me.

Friday, February 15, 2013

NSF Recipient Has "No Idea" if He Should Be an NSF Recipient

. Friday, February 15, 2013
0 comments

I'm not sure why I'm jumping in on this again. Oh well.

I suggested in a previous post that if social scientists want to continue to get federal funding we should be prepared to explain why we should continue to get federal funding. I further explained that a simple recitation of "interesting" or "important" findings was not sufficient. Neither was it enough to say that research is worthy of subsidy because it is a public good (which it actually isn't). We needed to explain why our work was more in the public interest than some other spending program. After all, if poor folks now have to take drug tests before they can get food aid, the least we should be expected to do is explain why our work deserves federal money instead of, I don't know, condition-less food aid. Or biomedical research, or deficit reduction, or tax cuts, or early childhood education, or universal post-secondary education or or or.

Either we need to explain why our work is in the public interest or we need to admit we're rent-seekers and start trying to be better at it. Presumably we should know how to do that.

So we were given a chance! Krugman went after Cantor, Cantor fired back by naming a particular NSF-funded political science research program, and the recipient political scientist -- Walter Stone of UC-Davis -- took to the Monkey Cage to state his case. Here's his (Stone's! Not Cantor's!) answer to the question:

Are these and other results we are reporting worth the $267,000 support NSF granted the project? I have no idea. Could the money have created more value for the nation if it had been devoted to medical or biological research? Possibly.
Nice. Credit to Walter Stone for honesty and humility, but I doubt Cantor feels chastised.

The actual findings of the study are interesting to me. They also support things we generally already knew: proximity models of elections work pretty well in most cases. They worked pretty well in 2010, just as they had before. 2010 is actually an interesting case for this, since many had assumed that the rise of the Tea Party had thrown a wrench into proximity models. It hadn't. The findings reinforce my priors, so I like that too.

But in the end it doesn't matter what the findings were. The question is whether they were worth $267,000. I don't know any more than Stone, but it'd be easier for me to argue that they weren't than that they were. The problem with social science is not that we haven't done enough voter surveys or tested the median voter theorem enough, at least in my view. Even if it were... is there any public interest in it? Other than in the "knowledge for its own sake" sense, I can't see it. There's an academic interest in it, but that is not the same as a public interest. Just because findings are interesting or important doesn't mean that they are worth public subsidy. Lots of things are interesting or important that do not merit it.

John Sides previously objected to my suggestion that social scientists were acting like rent-seekers when they defend their, erm, rents. I didn't mean it pejoratively, but the first commenter at the Monkey Cage (an anonymous grad student, apparently) made my point:
I thought we political scientists were supposed to know something about the practice of politics. Quotes likes this are why we are at serious risk of losing our funding. No professional lobbyist would EVER make these kinds of concessions. If I were in the APSA executive office, I would be furious after having read this response.
Indeed. But if we acted like lobbyists it'd be even harder to argue that our work was in the public interest, wouldn't it?

Stone ends his response to Cantor with this:
I am confident, however, that some small investment in understanding citizen behavior in the world’s oldest democracy is worthwhile. If we find that voters act reasonably in selecting candidates for seats in the “people’s House”—that they are not dominated by money and other distorting influences—perhaps we will learn to trust that deliberations in Congress, including over how best to spend federal research dollars, will ultimately reflect the public interest.
Maybe. But the APSA will disagree if the House votes to cease funding political science through the NSF.

Thursday, February 14, 2013

Another Shameless Plug

. Thursday, February 14, 2013
2 comments

Talk of a potential bilateral trade deal between the US and EU is heating up, probably because Obama gave it attention in his State of the Union Address. Matt Yglesias thinks the deal is going to be difficult, because it will focus on "thornier" issues like agriculture and regulatory policies. Tyler Cowen thinks that regulatory barriers make a deal unlikely as well. The Financial Times also sounds a somewhat skeptical note. There's plenty more where that came from if you look around the commentariat.

Well, in a stunning reversal of our typical demeanor, Sarah and I are here to make the optimistic case for a deal getting done. The full essay is in The National Interest, but the basic argument is that normal political problems standing in the way of a deal are reversed in this case: there are no easily-identifiable domestic interest groups likely to mobilize politically to lobby against it, it might provide the US and EU with much-needed leverage in WTO negotiations, and it could impact the future of international investment law.

Check it out.

A wonkish complaint about gravity models

.
0 comments

Recently, I've been thinking a lot about how to apply gravity models of trade to FDI flows, especially to infer "missing FDI." [And, if you are also interested in gravity models, Will (aka Kindred) has a recent post about quantum gravity models of trade.] One thing I've learned is that several economists are trying to create comprehensive gravity models of FDI that have solid theoretical and empirical footing, but there currently is no FDI gravity model that is as widely accepted as the standard gravity models of trade. The firm-level locational decisions that drive FDI are theoretically more complex than trade decisions. Perhaps the most straightforward example of this is that while some argue FDI is a theoretical substitute for trade, bilateral FDI and trade are actually positively correlated. There are some new models that are much better equipped to estimate firm entry as a function of firm productivity, building off work by Melitz as well as Helpman. We seem to be getting closer to a standard gravity model for FDI. I want to make two points about issues I see with this agenda.

1) Any gravity model (of trade, of FDI, of any flow) is necessarily retrospective. We establish the veracity of models by how well they predict previous flows. But what happens when the decision rules firms use to determine economic activity fundamentally change? The gravity model can predict economic exchange that follows previous patterns, but it runs into trouble when production networks follow new logics. A quick look at patterns of sales by overseas affiliates of US MNCs in 1999 and 2009 illustrates that the logic behind FDI is shifting. In 2009, about 60% of overseas affiliate sales were local. This is relatively consistent across regions, although local sales by overseas affiliates were as low as 51% in Africa. Overseas affiliate sales back to the US were just under 9%, indicating vertical FDI is less prevalent than most assume. Overseas affiliate sales to other markets, whether to other affiliates or to unaffiliated buyers, were about 30%. These sorts of sales indicate FDI locational decisions based on export platform models and complex supply chains. What is particularly interesting is how the composition of sales by US overseas affiliates has changed over the past 10 years. In 1999, local market, home market, and third market sales stood at 67%, 10%, and 13%, respectively. Thus, FDI over the past 10 years has experienced an important change: MNCs, at least from the US, are shifting strategies from horizontal FDI to FDI motivated by export platform strategies and by the increasing complexity of supply chains.
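The compositional shift described above can be tabulated in a few lines. This is just a sketch using the shares quoted in the post (the percentages do not sum to 100 because minor sales categories are omitted):

```python
# Affiliate-sales shares quoted in the post (not the underlying BEA data).
shares = {
    # year: (local market, home market (US), third markets)
    1999: (0.67, 0.10, 0.13),
    2009: (0.60, 0.09, 0.30),
}

for year, (local, home, third) in shares.items():
    print(f"{year}: local={local:.0%}  home={home:.0%}  third={third:.0%}")

# The shift between snapshots: local-market (horizontal) FDI shrinks while
# third-market (export-platform / supply-chain) sales grow sharply.
delta_third = shares[2009][2] - shares[1999][2]
print(f"Third-market share grew by {delta_third:.0%} points")
```

That 17-point swing toward third-market sales is the whole argument in miniature: the decision rules generating the flows changed between the two snapshots.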

The takeaway from these changes is that gravity models of FDI built on patterns of FDI flows in the 1990s may not generate appropriate predictions for FDI today. And, as complex supply chains and export platform models drive a greater percentage of MNCs' investment decisions, estimation techniques that are fundamentally bilateral may significantly lose their theoretical appeal as well as their predictive capabilities (heteroskedasticity in the error term is going to be increasingly difficult to fix!).

2) All these changes in patterns of FDI probably have important implications for gravity models of trade. After all, intrafirm trade accounts for half of global trade.

I guess the implications of all this are threefold. In a world in which we love datasets with long time series, we need to rethink when it is appropriate to pool temporally. We will need to continually revisit the structure of gravity models to ascertain whether they remain good predictors of economic flows. And we must consider when modeling economic exchange in a bilateral context misses so much of the underlying structural dynamics that dyadic estimation techniques will simply be wrong.



Tuesday, February 12, 2013

A Shameless Plug

. Tuesday, February 12, 2013
0 comments

While the world of economics is busy debating Jeremy Stein's recent argument that the Fed can -- and sometimes should -- use monetary policy as a de facto regulatory tool (see here for a recent discussion and links), I figure I should use the opportunity to plug a paper of mine which analyzes a similar question in a cross-sectional context. My conclusion is that banks respond to institutional incentives, not just actual central bank policies.

The gist: Copelovitch and Singer found that monetary policy is fundamentally different when central banks are also regulators. Specifically, they argue that when central banks are regulators they tailor monetary policy toward the needs of the firms they regulate. I build on their framework to show that banks respond to these differences in predictable ways: they act more riskily when central banks are regulators, and this effect is independent of actual monetary policies themselves.
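The identification idea can be sketched with simulated data: regress a bank-risk measure on a regulator dummy while controlling for the policy rate, so the dummy's coefficient captures the institutional effect independent of policy itself. Everything here -- variable names, coefficients, the data -- is hypothetical illustration, not the paper's actual design or estimates:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Hypothetical cross-section of banks: whether the central bank is also
# the bank regulator (0/1), and the prevailing policy rate.
regulator = rng.integers(0, 2, n)
policy_rate = rng.normal(3.0, 1.0, n)

# Simulated DGP: risk rises when the central bank regulates (+0.5)
# and falls with tighter policy (-0.2), plus noise.
risk = 1.0 + 0.5 * regulator - 0.2 * policy_rate + rng.normal(0, 0.3, n)

# OLS with both terms: the regulator coefficient recovers ~0.5 even after
# conditioning on the policy rate -- the "independent effect" in the post.
X = np.column_stack([np.ones(n), regulator, policy_rate])
beta, *_ = np.linalg.lstsq(X, risk, rcond=None)
print(f"regulator effect: {beta[1]:.2f}, policy effect: {beta[2]:.2f}")
```

The point of the sketch is only that an institutional dummy and the policy stance can be separated in estimation, which is what "independent of actual monetary policies" requires.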

The paper is currently in the "revise and resubmit" phase at a journal (it has been revised and resubmitted and I am awaiting a decision), but a previous version is here.

There. Is. No. Technocracy. Dammit.

.
0 comments

Felix Salmon is one of my favorite journalists, but he routinely makes a common error: forgetting that there is such a thing as politics. Take this reflection on Tim Geithner in which Salmon wonders what made the former Treasury Secretary "change his mind" on how to deal with financial crises:
[T]he most obvious case in which Geithner has done a complete U-turn from his former views is that of Indonesia. The great Australian financial journalist Peter Hartcher explained this very well back in 2009, when Geithner took over as Treasury secretary. He quoted former Australian prime minister Paul Keating explaining in a nutshell exactly what Geithner did wrong: “Tim Geithner was the Treasury line officer who wrote the IMF program for Indonesia in 1997-98, which was to apply current account solutions to a capital account crisis.” With hindsight, Geithner did the exact opposite of what he is now prescribing in the event of a crisis... 
Indonesia in 1998 had a problem not dissimilar to what we saw in the US 20 years later: a sudden credit crunch afflicting a country whose government finances were fundamentally sound. Geithner’s solution, now, is for the government to “be very aggressive” spending money, and for the central bank to provide its own monetary support, all in the service of “compensating for the huge collapse in private sector demand”. But that’s not what he thought in 1998, when he forced the Indonesian government to cut spending and raise interest rates — precipitating a recession much larger than anything the US saw during the financial crisis.

Now that Geithner is going to write a book, I very much hope he goes as far back as Indonesia, and covers his two-year tenure at the IMF as well, rather than glossing over those episodes on the way to the juicy stuff about the more recent crisis. For one thing, it will be fascinating to see when and how his mind changed on such issues. And for another thing, it’s conceivable that the book might shed light on how this consummate career government technocrat thinks — and thereby shed light on much of the system of global governance.
This type of commentary bothers me because it is so common (which is why I keep harping on it). Isn't it possible, just possible, that an American public official might respond to a crisis in the United States differently than to a crisis in Indonesia for political reasons? Isn't it possible, just possible, that the reason why the IMF pushed Asian (and Latin American) countries into austerity in exchange for emergency finance is because the IMF's creditors cared more about getting their money back than about finding the most optimal solution to the problem? Isn't it possible, just possible, that an American central banker or Treasury Secretary might care more about American interests (and interest groups) than those of, say, Thailand? Of course those things are possible. So why doesn't Salmon mention them as a possibility?

There is quite a lot of political economy research on the IMF. None of it concludes that it is an impartial technocratic institution. It is involved in power politics, generally in ways which benefit US interests. It lends in a way that benefits the American financial sector. It trades lax conditionality for UN votes on the Security Council and in the General Assembly. It adjusts conditionality requirements based on the recipient's geopolitical importance, and enforces conditionality more or less strictly based on a country's ties to major powers. This is but a small sampling of the literature demonstrating that the IMF is a political, and politicized, institution. It acts in the interests of the major stakeholders in the major powers, especially the United States (which is the only country that possesses an effective veto on IMF funding decisions). The IMF is not on a relentless pursuit of the Most Optimal Policy as determined by the economists' imagined technocratic Benevolent Social Planner. (Needless to say, the US Treasury Department and Federal Reserve are even more political.)

In other words, when parsing Geithner's career we do not need to make an assumption that he has been on a quest to find technocratic nirvana. We don't have to assume that he's had a Road to Damascus moment which caused him to change his mind on key issues. All we have to note is that an American policymaker, when faced with very different crises in very different countries with very different levels of geopolitical importance reacted... very differently. That makes sense! That is what we should expect from an interested government official.

Friday, February 8, 2013

Global Trade Network, 2006

. Friday, February 8, 2013
2 comments





Kindred wondered, in a comment on my prior post, how the trade network has evolved since 2000. Here is a quick reply that brings us up to 2006. These are two screenshots of the full global trade network; the left is the center of the starburst pictured on the right. Node size is weighted degree, or total imports. I encourage you to click this link and look at the full image, which is an SVG that allows you to zoom in and retain resolution. (And sorry, I did this quickly and labeled only the large nodes at the center.) Data come from Barbieri. The network here is larger than the one I presented yesterday: 185 nodes. But the trade network is not very dense: 0.39. This indicates a hierarchical rather than a random or flat network structure, a conclusion supported by the starburst evident in the full visualization. I ran a community detection algorithm which identified the three groups reflected in the graph colors: the largest (tealish), centered on the US, includes 50% of the nodes; a second, European community (pea greenish) contains about 34%; and a third, composed of sub-Saharan African states (the rust-colored group), holds about 13% of the nodes.

Hence, even by 2006, as the discussion of decoupling was beginning to emerge, we see no evidence of a regionalization of Asian trade that could provide a substitute for ties to the North American block.

Here is a version of the visualization that omits the community structure.
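For readers who want to poke at these measures themselves: density and weighted in-degree (the node-size metric above) are simple to compute from any dyadic import table. A minimal sketch on a toy six-country table, since the Barbieri data themselves aren't reproduced here:

```python
# Toy dyadic import table: (exporter, importer, import value).
# Purely illustrative numbers -- the post's data come from Barbieri.
flows = [
    ("DEU", "USA", 90), ("CHN", "USA", 240), ("USA", "CHN", 70),
    ("USA", "DEU", 50), ("FRA", "DEU", 60), ("NGA", "ZAF", 5),
]

nodes = {c for exp, imp, _ in flows for c in (exp, imp)}
n = len(nodes)

# Density of a directed network: observed ties / possible ties n*(n-1).
density = len(flows) / (n * (n - 1))

# Weighted in-degree (total imports) -- the node-size metric in the figure.
total_imports = {}
for exporter, importer, value in flows:
    total_imports[importer] = total_imports.get(importer, 0) + value

print(f"nodes={n}, density={density:.2f}")
print(max(total_imports, key=total_imports.get))  # the most central importer
```

Low density plus a heavy concentration of in-flows on a few nodes is exactly the signature of the hierarchical, starburst-shaped structure in the visualization.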


Thursday, February 7, 2013

More on Trade Politics

. Thursday, February 7, 2013
0 comments

Continuing my mini-trend on research on the domestic politics of trade, here's a new one from Hicks, Milner, and Tingley in ISQ. Abstract:

Developing countries have increasingly opened their economies to trade. Research about trade policy in developed countries focuses on a bottom-up process by identifying economic preferences of domestic groups. We know less about developing countries. We analyze how economic and political variables influenced Costa Rican voters in a referendum on CAFTA-DR, an international trade agreement. We find little support for Stolper–Samuelson models of economic preferences, but more support for specific factor models. We also isolate the effects of political parties on the referendum, controlling for many economic factors; we document how at least one party influenced voters and this made the difference for CAFTA-DR passage. Politics, namely parties using their organizational strength to cue and frame messages for voters, influenced this important trade policy decision. Theories about trade policy need to take into account top-down political factors along with economic interests.
CAFTA's hot, apparently.

Global Trade Network

.
6 comments


This is the network structure of global trade in 2000.* We do not often employ network models to characterize the global economy, so I thought I would offer one that I generated in connection with a book I am writing on the political economy of American hegemony. The visualization depicts the 31 largest trading economies in 2000. As a group, their imports accounted for 91% of total world trade, while trade among these 31 countries accounted for 76% of total world trade. Node size is total imports, and thus gives a sense of relative importance of each national market. Ties are directed, and the weight is import value. One can appreciate the US trade deficit, for instance, by comparing the relative weights of the two ties that connect the US to China (or Japan). I used the Force Atlas 2 algorithm in Gephi to generate the layout and made small adjustments by hand to move overlapping nodes. The algorithm approximates the gravitational pull among nodes based on tie structure and weights. Hence, the clustering present in the figure is an attempt by this algorithm to represent the underlying structure of the network, not a structure I imposed on the data. The underlying data are from the Gleditsch dyadic trade data.

Three patterns seem worth emphasizing.

First, as we know, trade has a structure that is not fully captured by geographic proximity. The cluster at the top of the graph nicely illustrates the regional nature of some EU trade, especially among the original six plus the UK. The cluster at the bottom left, centered on the United States, in contrast, highlights the degree to which trade of the major East Asian economies is oriented around the US economy rather than being primarily intra-regional.

Second, some countries that we might expect, on geographic grounds, to orient their trade toward the EU instead orient toward the US. The central European countries, the Scandinavian countries, Ireland, as well as Israel, the UAE, and India, appear to be pulled toward the US and away from the EU cluster, in spite of the geographic proximity of the latter. This nicely illustrates the importance of economic mass as a counter to geographic proximity.

Third, overall, the system is strongly hierarchical. The US is central -- it trades larger volumes with more partners than any other country. The remaining countries fall into three tiers. The second tier contains large economies that engage in substantial trade with multiple parties but are (at best) regionally rather than globally central: Germany, for instance, is large and regionally central; Japan and China are both large but not central. The third tier contains medium-sized countries that engage in an appreciable amount of trade but are not regionally central: the small open economies of Scandinavia and Eastern Europe, for instance. The fourth tier is the set of countries not represented -- the 140 or so for which trade may be important but which as individual economies account for an imperceptible amount of world trade.

Nothing terribly surprising here, perhaps, but one interesting question arises--to what extent will Asia evolve toward a regional cluster that is more independent of the US? Are we likely to observe a future Asia that resembles contemporary Europe, or will future East Asia look like East Asia of 2000? This is obviously the decoupling question--but the network visualization might suggest why decoupling has not occurred to the extent people believed it would. Decoupling requires a fundamental reorientation of trade relationships throughout the region. And if much of this trade is intra-firm, then the underlying production network must also be reoriented. These are substantial changes; and it is not obvious how the global economy treats sunk costs. Are sunk costs sunk, or are important path dependencies at work?

*If you would like an SVG version of the image, leave a comment. 

Tuesday, February 5, 2013

Ditch the Job Talk...

. Tuesday, February 5, 2013
4 comments

Dan Nexon sparked a conversation about job talks. The whole exchange is curiously ahistorical. I searched the web for something that might document the historical development of this tradition. I found nothing. So, let me advance an hypothesis. (If someone knows a source that discusses this historical development, please give me the cite).

People invented job talks because the cost of reproducing non-published written work was too high to allow distribution of large volumes of printed material in support of job applications. It seems that in the pre-photocopier era, the cost of reproducing written work, such as an academic paper, was about $0.15 to $0.25 per page. This was, according to wiki, at a time when the minimum wage was $1.65. This per-page cost did not include the labor of typing the manuscript onto a stencil which could then be used to generate the mimeograph. In this environment, it was cost-prohibitive to circulate hard copies of non-published research for three candidates to an entire faculty (somewhere in the range of $540 to $900 in 1968 dollars). How then, by golly, would a department faculty ever come to learn about the current research being conducted by the persons they are considering for a position? And thus was invented the job talk: a low-cost method of disseminating knowledge across a large community of interested listeners.
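The back-of-the-envelope range above is easy to reproduce. A sketch under assumed parameters -- the three candidates are from the post, but the faculty size and paper length are my illustrative guesses, chosen so the post's per-page costs bracket its $540-$900 figure:

```python
# Hypothetical parameters (the post gives only per-page costs and the
# final dollar range; faculty size and page count are illustrative).
candidates = 3
faculty = 30
pages_per_paper = 40

pages_to_copy = candidates * faculty * pages_per_paper  # 3,600 pages

for cost_per_page in (0.15, 0.25):
    print(f"${cost_per_page:.2f}/page -> ${pages_to_copy * cost_per_page:,.0f}")
# 3,600 pages at $0.15-$0.25 per page yields the $540-$900 range.
```

At a $1.65 minimum wage, the high end of that range is roughly 550 hours of minimum-wage work, which makes the cost-prohibitive claim concrete.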

This hypothesized explanation for job talks hews close to what I take to be a pretty reasonable explanation for the origins of lecturing in the university. In the pre-Gutenberg era, books were costly to reproduce. Brad DeLong suggests that a single student's textbook costs for an eight-course-per-year, four-year college degree based on today's "all students buy books" model would have been around $1.6 million. There were not many students who could afford that. Hence, teachers stood at the podium and read to their students from a single copy of an illuminated manuscript.

Back to job talks. I think the marginal cost of reproducing and distributing unpublished work today is about zero (Yes, neither the transmission nor the storage of data is free. But, still). Arguably, therefore, the function for which job talks were created no longer needs to be filled by job talks. And yet, we show up at the appointed hour to listen to nervous candidates present less sophisticated versions of the very paper they uploaded to the personnel website when they applied for the job. The very same paper that the committee sent, or could have sent, to the rest of the faculty as an email attachment when they notified us about the forthcoming job talk. In short, the tradition persists long after the technology that made it necessary has been supplanted by technology that renders it obsolete (arguably, the identical logic applies to conferences, but I'll save that for another post).

And thus, robbed of its original, quite useful purpose, we invent new purposes that the job talk supposedly serves. It's an initiation. It's how I can determine what kind of teacher you will be. Faculty are lazy, so it's the only way they will ever become familiar with candidates' work. We couldn't run a pro-seminar (why would we want to?). A much simpler explanation seems more compelling: we have job talks because that's how we do things, and nobody stops to consider whether they actually provide any useful information that isn't better and more cheaply attained elsewhere. (Indeed, as Uncle Wuffle notes, there are two main purposes for giving a job talk: (1) to get a job, and (2) to practice the job talk so you can get a job.) We give job talks because we ask people to give job talks because that's what we do.

In fact, I might even suggest that the job talk creates the suboptimal outcomes that Dan's initial post deplored. Our belief that we can learn enough about a candidate's work by attending the job talk removes the obligation to take the time to read the research. As a result, we make decisions based on dumbed-down and overly general presentations of the candidate's research. If we dumped the talk, faculty might feel a greater obligation to read the work in order to have an informed opinion. That might generate more informed discussions and better hiring decisions.

I am sure our practice eventually will catch up with technology. I mean, how many of us teach big lecture classes anymore?

Does Social Science Deserve Public Funding?

.
1 comments

John Sides notes that the Congressional Republicans have resumed their attack on federal funding of social science. Here's Eric Cantor, as quoted by Sides:

There is an appropriate and necessary role for the federal government to ensure funding for basic medical research. Doing all we can to facilitate medical breakthroughs for people … should be a priority. We can and must do better. 
This includes cutting unnecessary red tape in order to speed up the availability of life saving drugs and treatments and reprioritizing existing federal research spending. Funds currently spent by the government on social science – including on politics of all things – would be better spent helping find cures to diseases.
Cantor's argument is not that social science has no merit; it is that other policy goals should have priority. This is possibly wrong, but it is not unreasonable on its face. When development organizations start programs in less developed countries, they do not fund social science. They fund health care, infrastructure, and basic education. This suggests that publicly-funded social science is, to some extent at least, a luxury good. The US government obviously does not face the same budget constraint as, say, Liberia, but at some margin there is a tradeoff between funding program A and funding program B. If 'A' is medical research and 'B' is social science research, it might make sense to prioritize the medical research.

Many people believe that the US is not spending nearly enough on infrastructure, health care for all, education, alternative energy programs, public transportation systems, and biomedical research. Or foreign aid, for that matter. Indeed, social scientists frequently make these claims. Cantor has laid down a challenge: can the social sciences demonstrate that their work is a better investment than research into new medical procedures, alternative energy sources, infrastructure upgrades, etc.? Can the social sciences demonstrate that money spent on their programs is worth more to society than whatever the next-best option is? More technically, Cantor is asking us to think about the relative opportunity costs given actual budget constraints.

This is an opportunity for the social sciences to demonstrate their value by making a clear, coherent argument. Simply pointing to research on topics of possible public interest (as Sides does) is not enough... it must be accompanied by an argument that that research is more deserving of public funding than something else. So far I have not seen such an argument made. I have seen social scientists act like any other interest group: they want public spending on programs that benefit them because those programs benefit them. There's nothing wrong with that, but it's a bit distasteful to equate common rent-seeking behavior with a broad public interest. If the social sciences deserve public funding they ought to be able to make the case on its merits. In a way, Cantor is challenging us to think like civically-minded social scientists.

Sides concludes his post:
The broader point is that Cantor’s goal, curing disease and saving lives, can be better accomplished by including social and political science alongside the “hard” sciences and medicine.
Maybe that's true (I'm not being sarcastic here), but it is indisputable that we cannot cure diseases without medicine. Yet we can administer medicine without studies of how we have previously administered it, however useful those studies might be. (If that weren't the case, we social scientists would have no cases to study!) If the efficiency gains and complementarity effects from combining research in the social and physical sciences are sufficiently high that they outweigh the costs, then we ought to be able to demonstrate that fact using the tools of social science. In other words, it is incumbent upon social scientists -- not congressional representatives -- to demonstrate their value to society. The question is whether we can do it.

UPDATE: John Sides has responded with a good post. I don't disagree with much of it; maybe not any of it. But also see my comment.

Tim Harford on Thomas Schelling

.
0 comments

[Embedded video]

Monday, February 4, 2013

Quantum Gravity Trade Models

. Monday, February 4, 2013
3 comments

Richard Baldwin on the state of the art in empirical international trade economics:

The most empirically successful model in international trade – the so-called gravity equation – is based on “Newtonian” trade theory. The amount of trade between two nations varies with the product of the economic mass of the two nations and inversely with the distance between them. Strange as it may seem to students of Ricardo, Heckscher-Ohlin and the Krugman trade models, these three variables ‘explain’ well over 50% of all variation in bilateral trade flows. No other trade model comes even close. The gravity model, in short, works impeccably at the level of aggregation available to empiricists – until recently. ... 
In standard gravity theory, distance matters since it affects relative prices (distance proxies for all manner of trade costs that raise the price of imported goods relative to local goods) and destination GDP matters since it affects total expenditure on all goods in the market. In the new ‘quantum’ gravity theory (expounded by, for example, Melitz, Helpman and Rubinstein, or Chaney) a key new feature is the impact on firms’ decision to enter a market or not. Importantly, this decision depends upon the fixed cost of establishing a beachhead in a new market – what I have in the distant past called ‘beachhead costs’.
Plenty more here. For political scientists these "beachhead costs" might contain some of the most interesting features of trade regimes. These costs may be natural or artificial (i.e. political), and they may be strengthened or lessened by governments. While trade is not my primary area of expertise, I know of no applications of quantum gravity models in political science despite the prevalence of "Newtonian" gravity models. (If I am incorrect about this I'm sure I'll be corrected in the comments.) This could be a rewarding avenue for future IPE research, and it is a nice illustration of the importance of structure interacting with agency to condition outcomes as discussed in Oatley's recent post on the reductionist gamble.
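To make the "Newtonian" part concrete, here is a toy sketch of the gravity equation with made-up numbers (my own illustration, not Baldwin's; in empirical work the relationship is typically estimated in logs, with separate elasticities on mass and distance):

```python
def gravity_trade(g, gdp_i, gdp_j, distance):
    """'Newtonian' gravity prediction: bilateral trade rises with the
    product of the two economies' masses and falls with distance."""
    return g * gdp_i * gdp_j / distance

# Illustrative numbers only: two large, nearby economies trade far
# more than a large-small pair separated by a long distance.
print(gravity_trade(1.0, 10.0, 8.0, 2.0))   # 40.0
print(gravity_trade(1.0, 10.0, 1.0, 10.0))  # 1.0
```

The empirical version regresses ln(trade) on ln(GDP_i x GDP_j) and ln(distance); it is that log-linear specification which "explains" the bulk of the variation in bilateral flows.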

Friday, February 1, 2013

The Reductionist Gamble

. Friday, February 1, 2013
4 comments



The Reductionist Gamble (RG) appeared in IO two years ago this spring. It has been met with some puzzlement, and it has been misunderstood. I can understand both reactions, as the paper asks people to think differently about the world, and yet it does so by using terms and concepts in ways that depart from more typical usage. I say reductionism, and people hear Waltz. I say system, and people hear system level. That's fine, to a point. Yet when people begin to offer solutions to the problems the RG identifies as criticisms of the RG itself, it becomes apparent that some clarification of the argument may be not only useful but essential. I offer the following in that spirit. I apologize for the length of the post; the paper was 13,000 words.

The RG begins with a concern about the theoretical orientation of IPE and then highlights the consequences of this theoretical orientation for empirical work. Here, I reverse the logic and highlight the empirical manifestation of the problem using statistical theory and then work back to the theoretical problem from there.

Assume a system that is constituted by n units over which we collect data. Assume further that two distinct data generating processes (“DGPs”) coexist in this system. DGP ‘A’ generates observations for each unit that are independent of one another. DGP ‘B’ generates observations for each unit that are not independent of one another. The relative frequency of A and B in the system is unknown.

Assume that the only statistical tool employed to analyze observations collected on the units generates unbiased estimates of the effect of x on y if and only if the observations are generated by DGP ‘A.’ Statistical analysis of the data collected on this system will thus generate biased estimates at a rate proportional to the relative frequency of DGP ‘B’ in the system.

Within this framework, the problem RG claims to identify, therefore, is that OEP scholarship published in IO and APSR between 1996 and 2006 (and thus at the center of mainstream American IPE for the last 15-20 years) has relied upon statistical techniques (the general linear model implemented in a TSCS framework) that generate unbiased estimates only if the observations for each country are independent of one another. In fact, however, the observations often are not independent of one another. Hence, the estimates these articles report are biased at a rate proportional to the relative frequency of DGP ‘B’ in the contemporary global economy. We do not know the frequency of DGP ‘B,’ but we suspect that it increases with the density of interaction among the units -- a density which we call interdependence.
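A minimal simulation makes the bias claim tangible. This is my own toy construction, not the paper's setup: a systemic factor s drives both x and y across units, so observations are not independent; a researcher who assumes DGP 'A' and regresses y on x alone recovers a slope well above the true unit-level effect.

```python
import random

random.seed(42)

TRUE_BETA = 2.0   # the true unit-level effect of x on y
n = 5000

# DGP 'B': a common systemic factor s enters both x and y, so the
# observations are not independent draws from a unit-level process.
s = [random.gauss(0, 1) for _ in range(n)]
x = [si + random.gauss(0, 1) for si in s]
y = [TRUE_BETA * xi + si + random.gauss(0, 1) for xi, si in zip(x, s)]

# OLS slope of y on x alone: the DGP-'A' research design.
mx, my = sum(x) / n, sum(y) / n
slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))

print(round(slope, 2))  # near 2.5: biased well away from the true 2.0
```

The bias here is systematic, not sampling noise; adding more observations drawn from the same interdependent system does not make it go away.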

The RG develops this argument as a problem of theory rather than as a statistical problem. Read my discussion of Herbert Simon (RG, pages 317-19) and then map that discussion back onto the discussion of DGPs here. Here is the concise version: Simon suggests that the denser the web of interactions that link units to one another in a complex system, the more the outcomes in each unit are driven by systemic mechanisms and the less they are driven by unit-specific mechanisms. Hence, as the density of cross-unit relationships increases, the ability to theorize about the units independent of the system diminishes. Simon's argument is a theoretical analog to the two DGPs highlighted above, though he conceives of the problem in continuous rather than discrete terms.

The “complex” characterization in Simon's conception of systems is an important one; in a complex system, “more is different.” This means that one cannot understand how a complex system works by studying the units that compose it, and one cannot understand the units by examining them (singly or in large groups) as if they were independent of one another or of the system. A complex system is irreducible; one must study it as a system. Complex systems are thus characterized by DGP 'B' and therefore must be modeled empirically as systems.

The RG focuses on the theory problem rather than the statistical manifestation of the problem for a number of reasons.

  • The problem is not a statistical problem because statistical solutions to (some of) the challenges posed by dependence among observations exist (or might be fashioned). We have spatial regression, ERGMs, and latent space techniques. Consequently, the problem isn't necessarily that we lack statistical solutions (though problems do exist for which we currently lack solutions); the problem is that until quite recently we rarely implemented them (or even tested whether we needed to). These techniques are now being applied with greater frequency (see, e.g., Cao for a recent application; see Nexon on the Maoz and Kahler network books). I note some of these statistical solutions in the conclusion (RG, 334-5).
  • The problem is a theory problem because in order to implement the appropriate statistical solutions, one needs theory that recognizes the potential importance of DGP 'B'.
  • OEP as a theoretical enterprise doesn't encourage us to recognize the potential importance of DGP 'B'. Consider David Lake’s characterization of OEP. "OEP begins with individuals, sectors, or factors of production as the units of analysis and derives their interests over economic policy from each unit's position within the international economy. It conceives of domestic political institutions as mechanisms that aggregate interests (with more or less bias) and structure the bargaining of competing societal groups. Finally, it introduces, when necessary, bargaining between states with different interests. Analysis within OEP proceeds from the most micro- to the most macro-level in a linear and orderly fashion, reflecting an implicit uni-directional conception of politics as flowing up from individuals to interstate bargaining" (Lake 2009, 225).
  • The core assumption here is that every sub-system can be examined in isolation from the rest (this is what I mean by methodological "reductionism," by the way). The assumption applies to vertical disaggregation (we don’t need to factor in domestic institutional structure to understand individual preferences) and horizontal disaggregation (we don’t need to factor in bargaining between states to understand domestic aggregation and policy outcome).
  • In Simon’s terms, OEP assumes the international system is nearly decomposable. In the language of statistical theory, OEP assumes the system contains a single DGP: DGP 'A'. In practice, this assumption has led to the modal empirical research design in which observations are assumed to be independent of one another. The implication is that this approach generates biased estimates in proportion to the relative frequency of DGP 'B' in the international economy.
  • If the prevailing theory held that the global political economy was a complex system in which developments in one sub-system were dependent upon developments in other sub-systems, vertically and horizontally, then we would be more inclined to design research that incorporated those relationships and thus less likely to implement statistical techniques that assume observations are independent.
  • In short, it is a theory problem not a statistical problem because our theory drives our choice of statistical tool. We need theory that encourages us to think about the potential presence of DGP 'B'. We don't presently have such theory.
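A toy example of why theory drives the choice of tool (my own construction, not from the RG): in a simulated world where a systemic factor s drives both x and y, regressing y on x alone yields a biased slope, while including s -- a step a researcher takes only if theory says the systemic factor matters -- recovers the true unit-level effect.

```python
import random

random.seed(0)

TRUE_BETA = 2.0
n = 5000

# Stylised DGP 'B': a systemic factor s drives both x and y,
# so the units are not independent of the system they sit in.
s = [random.gauss(0, 1) for _ in range(n)]
x = [si + random.gauss(0, 1) for si in s]
y = [TRUE_BETA * xi + si + random.gauss(0, 1) for xi, si in zip(x, s)]

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

# Naive regression of y on x alone: the DGP-'A' research design.
b_naive = dot(x, y) / dot(x, x)

# Regression of y on both x and s (no intercept; variables are
# approximately mean-zero): possible only if theory points us to s.
det = dot(x, x) * dot(s, s) - dot(x, s) ** 2
b_full = (dot(s, s) * dot(x, y) - dot(x, s) * dot(s, y)) / det

print(round(b_naive, 2))  # biased upward, near 2.5
print(round(b_full, 2))   # close to the true 2.0
```

The estimator that fixes the problem was available all along; what determines whether it gets used is whether the researcher's theory flags the systemic factor in the first place.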
So what is the theory? I don’t have fully developed answers to this question. But here is where I am.

  • Theory could usefully move beyond classical IR thinking that remains wedded to the ontology of Newtonian mechanics.
  • IPE could make this move by drawing a bit more from complexity science than it has done to date (yes, I know, Axelrod and Cederman; but I also think one could count on one hand the number of IPE articles on the international financial system that cite Sornette or Gabaix).
  • One way to begin moving down this path is to draw on the science of complex networks as potentially useful theoretical models of the international system.
    • We have a paper forthcoming in Perspectives on Politics that applies this approach to the international financial system. The paper explores how the network topography of international financial relationships shapes the stability of the global financial system (the global spread of local crises; the stability of the topography in the face of changes in the underlying distribution of economic “power”).
    • Some of Kindred's dissertation moves in this direction by applying network models to the global financial system, but I will let him write about that when he is ready to do so.
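To make the network intuition concrete, here is a minimal sketch (my own toy model, not the forthcoming paper's): a threshold-contagion process on a stylised core-periphery network, where the same initial shock either spreads system-wide or stays contained depending on where in the topography it hits.

```python
def cascade(adj, start, threshold=0.5):
    """Spread failures through a network: a node fails once the share
    of its failed neighbours reaches `threshold`."""
    failed = {start}
    changed = True
    while changed:
        changed = False
        for node, nbrs in adj.items():
            if node in failed or not nbrs:
                continue
            if sum(n in failed for n in nbrs) / len(nbrs) >= threshold:
                failed.add(node)
                changed = True
    return failed

# A stylised core-periphery ("hub and spoke") financial network:
# node 0 is the core; nodes 1-4 are exposed only to the core.
hub_and_spokes = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0]}

print(len(cascade(hub_and_spokes, start=0)))  # core fails: all 5 fail
print(len(cascade(hub_and_spokes, start=1)))  # spoke fails: contained, 1
```

The units and the shock are identical in both runs; only the position in the network differs, which is the sense in which outcomes here are a property of the system rather than of the units.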
In short, RG constitutes the first step in a call to theorize in terms of (complex) systems, because OEP's theoretical orientation encourages an approach to empirical research that is likely to produce biased estimates systematically. And while it is true that statistical solutions to these problems exist, we need theories that encourage us to employ them. Multiple critiques of the argument exist. The assertion "but, we have spatial regression" isn't one of them.

Thanks for listening.


International Political Economy at the University of North Carolina: February 2013