Showing posts with label Academia.

Tuesday, March 26, 2013

All Networks Are Not Equal (and Financial Crises Are Not Viruses)


Many thanks to Henry Farrell for discussing some research co-written by a decent chunk of this blog's contributors, which was just released (and is currently ungated, thanks!) by Perspectives on Politics as part of an issue on inequality and the global financial crisis. It's been kicked around the internet a bit, and already I've come across a major misinterpretation* of the central argument from Mark Thoma:

Are highly interconnected networks better at dispersing risk? It depends upon the type of risk. Suppose a toxin hits a network. If diluting the toxin across the network also dilutes its effects to practically nothing, then we want the network to be as large and interconnected as possible. When shocks hit they will be quickly diluted and rendered relatively harmless. But for toxins that are deadly in minute doses, toxins that kill whatever they touch even when they are highly diluted, we want the infected node on the network to be isolated as much as possible.
This is, we believe, the dominant view of financial contagion in the social sciences and in particular in international economics. In the paper we cite several different formulations of this view in the academic and policy literatures. If we reiterated this view it would not be noteworthy, and it probably would not be publishable. We think our article is noteworthy (and was published) because it argues that this conceptualization of risk in networks is fundamentally misguided: it places undue focus on the strength of the shock and the density of the network, rather than the location of the shock and the topology of the network.

To see the difference consider two shocks of equal strength which hit two networks of equal density. The only difference in the two networks is in the distribution of that density: in one of the networks the connections are distributed more or less equally -- most nodes in the network have about the same number of connections to other nodes -- but in the other network the connections are distributed very unequally -- a few nodes have a lot of connections, while most nodes have few.

We believe that we should expect very different outcomes from the same shock and the same overall density because of different distributions of connections. All networks are not equal. Outcomes do not just depend on the strength of the toxin, but whom it contaminates.

We show empirically that different crises have different impacts on the global system: crises originating in the US have adverse consequences throughout the entire network, while crises that hit other places do not. We show empirically that the global financial network is highly unequal: it is centered around the US (as Farrell notes in the bit Thoma quotes). And we argue that it is this variation in the distribution of connections, which we call "topology", which made the subprime crisis so severe from a global (i.e. "systemic") perspective. Or, as Farrell put it in his useful discussion:
Oatley et al. argue that you get two kinds of financial crisis in this kind of world. First, you get financial crises in the periphery, which tend to be limited to a particular region because few other countries are directly exposed to the countries undergoing crisis, and to fizzle out. Here, US dominance serves as a dampener – since it is large enough to absorb shocks itself, it can prevent financial contagion from spreading. In contrast, when a crisis occurs within the US, it tends to spread everywhere, since every other country is heavily linked to the US. When US mortgage markets sneeze, everyone catches cold.
I'd say that when the US sneezes everyone catches pneumonia. So in our view the question isn't whether the toxin is "diluted"**. Nor is it whether a denser network might be more or less capable of absorbing a shock. In our view the performance of the system in the face of a shock depends on the structural properties of the system, such as its topology, and the location of the shock within that structure: if it hits the periphery, the impact is narrow and remains in the periphery; if it hits the core, the impact is broad and emanates throughout the system.
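The intuition can be sketched with a toy threshold-contagion model (in the spirit of Watts-style cascade models, and emphatically not the analysis in the paper itself; the network sizes and the failure threshold here are arbitrary assumptions of mine). Two networks with a nearly identical number of links, one with evenly spread connections and one concentrated on a hub, respond very differently depending on where the shock lands:

```python
# Toy threshold contagion: a node fails once more than `theta` of its
# neighbours have failed. Illustrative sketch only.

def cascade(adj, seed, theta=0.6):
    """Return how many nodes fail after seeding a shock at `seed`."""
    failed = {seed}
    changed = True
    while changed:
        changed = False
        for node, nbrs in adj.items():
            if node not in failed and sum(n in failed for n in nbrs) / len(nbrs) > theta:
                failed.add(node)
                changed = True
    return len(failed)

n = 20
# Ring: connections spread evenly (20 links, every node has degree 2).
ring = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
# Hub-and-spoke: nearly the same density (19 links) but concentrated on
# node 0, the "core"; every other node is periphery with a single link.
star = {0: list(range(1, n)), **{i: [0] for i in range(1, n)}}

print(cascade(ring, 5))   # -> 1: shock to the evenly wired network stays local
print(cascade(star, 5))   # -> 1: shock to the periphery stays local
print(cascade(star, 0))   # -> 20: shock to the core engulfs the whole system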

This may seem obvious. We believe it is obvious, after you've read the article. Before you've read it (as Thoma obviously has not) you may end up writing things like this (as Thoma has):
We have been told that problems in places like Cyprus have been walled off -- nodes in the network have been isolated -- but so long as a few isolated connections still exist that are difficult to cut, highly toxic shocks can pollute the rest of the network. In addition, as we saw today when "Jeroen Dijsselbloem, the current head of the Eurogroup, held a formal, on-the-record joint interview with Reuters and the FT today, saying that the messy and chaotic Cyprus solution is a model for future bailouts" and financial markets reacted negatively (the statement is being walked back), some connections -- those involving expectations -- cannot be severed in any case.

Highly interconnected networks are highly desirable so long as (1) we can quickly identify trouble, and (2) nodes can be quickly and effectively isolated. But when those conditions are not present, the occasional highly toxic shock will cause quite a bit of damage.
Though this is the conventional view (here's another example, also from yesterday), we think it is totally wrong. We think that the ongoing collapse in Cyprus is unlikely to have a major effect on the global economy, just as the collapses in Iceland and Ireland did not: the effects were devastating for those economies, and had some impact on the few countries which were strongly tied to them (mostly regional partners), but did not advance beyond that. Indeed, as the eurozone crisis has deepened over the past few years, the world economy has gone from recession to growth and global financial markets have posted strong gains (especially in the West; less in the "Rest").

We don't think this is a coincidence. We don't think we have just gotten lucky. We don't think that we were saved by wise and prudential crisis management (does anyone?). We think crises in peripheral nodes are very unlikely to spread to the core of the system because of the structural properties of complex networks. We think, in other words, that the global financial network is not some abstract quantity, but something that can be modeled and understood.

Thoma says that markets "reacted negatively" yesterday. The S&P was off 0.33% yesterday -- a totally normal fluctuation -- after increasing by 4.3% over the past month, during which time the botched Italian election and worsening situation in Cyprus were supposed to send financial markets into turmoil. As I write this, European markets are up today.

So financial turmoil from Cyprus hasn't happened yet, just as it didn't happen last summer when the Greek crisis flared up. We wrote about that too, and said the same thing then as we're saying now, just as many economists and pundits argued then that we might be on the brink of doom just as they are now. We were right then and we're right now. Greece has defaulted at least twice since last May, yet the global economy has hardly even noticed. The Cypriot financial sector has essentially disappeared overnight (from a network perspective the links connecting that node have been effectively severed) and financial markets are up.

I'm being pedantic about this because the message doesn't seem to get across. The belief that a crisis anywhere can lead to a crisis everywhere is so ingrained that very intelligent people don't even recognize contradictory arguments with supporting evidence when they are quite literally staring them directly in the face, as Farrell's precis of our research (and link to the article) was peering through the monitor right into Thoma's cornea.

So I'm afraid I might have to be boring on this point until folks start internalizing it: all networks are not equal.

*Thoma doesn't explicitly assign this view to us, but he quotes part of Farrell's summary of our article -- which says something different from what Thoma says -- and then goes on to the bit I quoted above as if we were making a similar case.

**In fact, we refrain from discussing viral contagion at all, as we believe it is not an appropriate analogy for financial contagion. We spent a bit of time discussing this in a previous draft, but as it was somewhat tangential to our main argument we eliminated it in the final version for reasons of space. The general point is that a virus can infect anyone it comes into contact with regardless of who that person is: a king is no less vulnerable than a peasant. Our argument is that not all financial crises are capable of infecting all nodes with which they come into contact. Most crises are not contagious at all, in fact. Our theory provides an explanation for that. But the virus language was referenced by both Farrell and Thoma, so I'll run with it for the purposes of this post.

Sunday, February 17, 2013

How Not to Write an Abstract


We've been blogging about trade and trade networks a decent amount lately, so I was interested to see a new NBER working paper titled "Multinational Firms and the Structure of International Trade". I clicked through and read the abstract, which is:

This article reviews the state of the international trade literature on multinational firms. This literature addresses three main questions. First, why do some firms operate in more than one country while others do not? Second, what determines in which countries production facilities are located? Finally, why do firms own foreign facilities rather than simply contract with local producers or distributors? We organize our exposition of the trade literature on multinational firms around the workhorse monopolistic competition model with constant-elasticity-of-substitution (CES) preferences. On the theoretical side, we review alternative ways to introduce multinational activity into this unifying framework, illustrating some key mechanisms emphasized in the literature. On the empirical side, we discuss the key studies and provide updated empirical results and further robustness tests using new sources of data.
Got it? No? Me either. The point of an abstract is to give a precis of the article's scope, method, and findings. This abstract mentions that there are findings ("provide updated empirical results and further robustness tests") but doesn't say what they are. Neither does it mention what empirical design it uses other than that there is a review of literature. It lists a few questions asked by that literature, but does not answer them.

I've read this abstract four or five times now, and all I want to know is what the structure of international trade is, and I can't tell.

I guess I'll actually have to read the thing.

Friday, February 1, 2013

The Reductionist Gamble




The Reductionist Gamble (RG) appeared in IO two years ago this spring. It has been met with some puzzlement and it has been misunderstood. I can understand both reactions, as the paper asks people to think differently about the world, and yet it does so by using terms and concepts in ways that depart from more typical usage. I say reductionism, and people hear Waltz. I say system, and people hear system-level. That's fine, to a point. Yet, when people begin to offer solutions to the problems the RG identifies as criticisms of the RG itself, it becomes apparent that some clarification of the argument may be not only useful but essential. I offer the following in that spirit. I apologize for the length of the post; the paper was 13,000 words.

The RG begins with a concern about the theoretical orientation of IPE and then highlights the consequences of this theoretical orientation for empirical work. Here, I reverse the logic and highlight the empirical manifestation of the problem using statistical theory and then work back to the theoretical problem from there.

Assume a system that is constituted by n units over which we collect data. Assume further that two distinct data generating processes (“DGPs”) coexist in this system. DGP ‘A’ generates observations for each unit that are independent of one another. DGP ‘B’ generates observations for each unit that are not independent of one another. The relative frequency of A and B in the system is unknown.

Assume that the only statistical tool employed to analyze observations collected on the units generates unbiased estimates of the effect of x on y if and only if the observations are generated by DGP ‘A.’ Statistical analysis of the data collected on this system will thus generate biased estimates at a rate proportional to the relative frequency of DGP ‘B’ in the system.

Within this framework, the problem the RG claims to identify is that OEP scholarship published in IO and APSR between 1996 and 2006 (and thus at the center of mainstream American IPE for the last 15-20 years) has relied upon statistical techniques (the general linear model implemented in a TSCS framework) that generate unbiased estimates only if the observations for each country are independent of one another. In fact, however, the observations often are not independent of one another. Hence, the estimates these articles report are biased at a rate proportional to the relative frequency of DGP ‘B’ in the contemporary global economy. We do not know the frequency of DGP ‘B’ but we suspect that it increases with the density of interaction among the units--a density which we call interdependence.
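A minimal Monte Carlo sketch makes the statistical manifestation concrete (this is my own illustration, not an analysis from the RG: the ring-shaped dependence structure, the spatial-lag form of DGP 'B', and all parameter values are assumptions). Naive OLS recovers the true beta = 1 under DGP 'A' but systematically overshoots it under DGP 'B':

```python
import numpy as np

rng = np.random.default_rng(0)
n, beta, rho, reps = 50, 1.0, 0.7, 2000

# Assumed interdependence for DGP 'B': each unit's outcome spills over
# to its two neighbours via a row-normalised spatial weight matrix W.
W = np.zeros((n, n))
for i in range(n):
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.5
M = np.linalg.inv(np.eye(n) - rho * W)   # spatial multiplier (I - rho*W)^-1

est_A, est_B = [], []
for _ in range(reps):
    x = rng.standard_normal(n)
    e = rng.standard_normal(n)
    est_A.append(x @ (beta * x + e) / (x @ x))        # DGP 'A': independent obs
    est_B.append(x @ (M @ (beta * x + e)) / (x @ x))  # DGP 'B': interdependent obs

print(round(float(np.mean(est_A)), 2))  # close to 1.0: OLS is unbiased
print(round(float(np.mean(est_B)), 2))  # well above 1.0: OLS is biased
```

The bias under DGP 'B' comes entirely from the dependence structure; nothing else differs between the two simulations.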

The RG develops this argument as a problem of theory rather than as a statistical problem. Read my discussion of Herbert Simon (RG pages 317-19) and then map that discussion back onto the discussion here of DGPs. Here is the concise version. Simon suggests that the denser the web of interactions that link units to one another in a complex system, the more the outcomes in each unit are driven by systemic mechanisms and the less they are driven by unit-specific mechanisms. Hence, as the density of cross-unit relationships increases, the ability to theorize about the units independent of the system diminishes. Simon's distinction is evidently a theoretical analog to the two DGPs highlighted above, though he conceives of the problem in continuous rather than discrete terms.

The “complex” characterization in Simon's conception of systems is an important one; in a complex system “more is different.” What this means is that one cannot understand how a complex system works by studying the units that comprise it, and one cannot understand the units by examining them (singly or in large groups) as if they are independent of one another or the system. A complex system is irreducible; one must study it as a system. Complex systems are thus characterized by DGP 'B' and therefore must be modeled empirically as systems.

The RG focuses on the theory problem rather than the statistical manifestation of the problem for a number of reasons.

  • The problem is not a statistical problem because statistical solutions to (some of) the challenges posed by dependence among observations exist (or might be fashioned). We have spatial regression, ERGMs, and latent space techniques. Consequently, the problem isn’t necessarily that we lack statistical solutions (though problems do exist for which we currently lack solutions); the problem is that until quite recently we rarely implemented them (or even tested whether we need to). These techniques are now being applied with greater frequency (see, e.g., Cao for a recent application; see Nexon on Zeev Maoz's and Kahler's network books). I note some of these statistical solutions in the conclusion (RG, 334-5).
  • The problem is a theory problem because in order to implement the appropriate statistical solutions, one needs theory that recognizes the potential importance of DGP 'B'.
  • OEP as a theoretical enterprise doesn't encourage us to recognize the potential importance of DGP 'B'. Consider David Lake’s characterization of OEP. "OEP begins with individuals, sectors, or factors of production as the units of analysis and derives their interests over economic policy from each unit's position within the international economy. It conceives of domestic political institutions as mechanisms that aggregate interests (with more or less bias) and structure the bargaining of competing societal groups. Finally, it introduces, when necessary, bargaining between states with different interests. Analysis within OEP proceeds from the most micro- to the most macro-level in a linear and orderly fashion, reflecting an implicit uni-directional conception of politics as flowing up from individuals to interstate bargaining" (Lake 2009, 225).
  • The core assumption here is that every sub-system can be examined in isolation from the rest (this is what I mean by methodological "reductionism," by the way). The assumption applies to vertical disaggregation (we don’t need to factor in domestic institutional structure to understand individual preferences) and horizontal disaggregation (we don’t need to factor in bargaining between states to understand domestic aggregation and policy outcome).
  • In Simon’s terms, OEP assumes the international system is nearly decomposable. In the language of statistical theory, OEP assumes the system contains a single DGP: DGP 'A'. In practice, this assumption has led to the modal empirical research design in which observations are assumed to be independent of one another. The implication is that this approach generates biased estimates in proportion to the relative frequency of DGP 'B' in the international economy.
  • If the prevailing theory held that the global political economy was a complex system in which developments in one sub-system were dependent upon developments in other sub-systems, vertically and horizontally, then we would be more inclined to design research that incorporated those relationships and thus less likely to implement statistical techniques that assume observations are independent.
  • In short, it is a theory problem not a statistical problem because our theory drives our choice of statistical tool. We need theory that encourages us to think about the potential presence of DGP 'B'. We don't presently have such theory.
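The "statistical solutions exist" point from the bullets above can be illustrated with the same kind of stylised setup (again my own sketch, not the RG's: it assumes the ring-shaped weight matrix W is known to the researcher, and all parameters are arbitrary). A Kelejian-Prucha-style spatial two-stage least squares, which treats the spatial lag W@y as an endogenous regressor and instruments it with W@x, recovers the true beta where naive OLS does not:

```python
import numpy as np

rng = np.random.default_rng(1)
n, beta, rho, reps = 100, 1.0, 0.7, 500

W = np.zeros((n, n))                      # row-normalised ring weights
for i in range(n):
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.5
M = np.linalg.inv(np.eye(n) - rho * W)    # spatial multiplier

ols, s2sls = [], []
for _ in range(reps):
    x = rng.standard_normal(n)
    y = M @ (beta * x + rng.standard_normal(n))  # interdependent outcomes
    ols.append(x @ y / (x @ x))                  # naive OLS ignores dependence
    X = np.column_stack([x, W @ y])              # W@y is endogenous...
    Z = np.column_stack([x, W @ x])              # ...so instrument it with W@x
    s2sls.append(np.linalg.solve(Z.T @ X, Z.T @ y)[0])

print(round(float(np.median(ols)), 2))    # biased, well above the true 1.0
print(round(float(np.median(s2sls)), 2))  # close to the true beta = 1.0
```

The fix works here only because the researcher's model acknowledges DGP 'B' and specifies W, which is precisely the theory problem: without a theory that points at the dependence, nothing prompts the estimator choice.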
So what is the theory? I don’t have fully developed answers to this question. But here is where I am.

  • Theory could usefully move beyond classical IR thinking that remains wedded to the ontology of Newtonian mechanics.
  • IPE could make this move by drawing a bit more from complexity science than it has done to date (yes, I know, Axelrod and Cederman; but I think one could count on one hand the number of IPE articles on the international financial system that cite Sornette or Gabaix).
  • One way to begin moving down this path is to draw on the science of complex networks as potentially useful theoretical models of the international system.
    • We have a paper forthcoming in Perspectives on Politics that applies this approach to the international financial system. The paper explores how the network topography of international financial relationships shapes the stability of the global financial system (the global spread of local crises; the stability of the topography in the face of changes in the underlying distribution of economic “power”).
    • Some of Kindred's dissertation moves in this direction by applying network models to the global financial system, but I will let him write about that when he is ready to do so.
In short, RG constitutes the first step in a call to theorize in terms of (complex) systems, because OEP's theoretical orientation encourages an approach to empirical research that is likely to produce biased estimates systematically. And while it is true that statistical solutions to these problems exist, we need theories that encourage us to employ them. Multiple critiques of the argument exist. The assertion "but, we have spatial regression" isn't one of them.

Thanks for listening.


Thursday, January 24, 2013

Don't Read; Write!



In re Will's comments on Chris Blattman's advice to graduate students about selecting research topics, I ran across this advice to young scholars from Richard Thaler:

Work on your own ideas, not your advisor’s ideas (or at least in addition to her ideas). And spend more time thinking and less time reading. Too much reading leads people to think of small variations on existing studies. Admittedly my strategy of writing the paper first and only then reading the literature (or, more likely, letting the referees tell me what they think I should have read) is an extreme one, but it is better than trying to read everything. Try writing the first paper on some topic, not the tenth, and never the 50th.
Part of this seems like solid advice for graduate students and junior faculty. No one gets tenure by reading a lot, so "don't read, write" is probably good advice (I seem to recall this being a favorite piece of advice of Munger's). Moreover, given the curse of "shrinkingly important papers" (Thaler's term), there really isn't very much that one needs to read. However, no paper is easier to publish than the one which appears to establish precisely that which everyone already believes to be true but that has not yet been established precisely. And no papers are harder to publish than those that cut against conventional wisdom. So, shooting to write the nth paper is probably a safer tenure strategy than writing the first one.




The Scorched Earth Method of Research Design



Thomas previously had some great thoughts on how to engage in the peer review process, from the perspective of an experienced reviewer. I also like this framing from Chris Blattman, which says many of the same things as Thomas in a different way:

The PhD slides have my first inklings of a framework for thinking about research in political economy of development. My idea is that we should be able to draw a tree from the fundamental questions (the trunk), the big questions (the boughs), and the little questions (the branches). We should be able to hang every paper on that tree. It’s a device I use when I get a paper to referee.
At this stage of my career I take this sort of advice for "reviewing" as advice for how to conceive of and carry out my research. In other words, these types of discussions help me think about what my goals should be for the program of research that I'm engaged in. So I also appreciated Blattman's conclusion:
If you are wondering what the roots to the tree are, well of course it’s the egos and established interests of faculty in the field. So of course the big lesson for my students is that they should mainly aim to burn it down.
Not every paper (or research track) can destroy the entire edifice of all previous research of course, but those that can will certainly get folks' attention.

I used to think that research programs could be divided into high-risk/high-reward strategies and lower-risk/lower-reward strategies: if you strike gold with the former, you'll do well in journals and job markets; but if you strike out you'll... strike out. On the other hand, if you aim a bit lower you'll be more likely to hit the mark. Maybe you'll never be an academic superstar, but you'll never be unemployed either. Using Blattman's metaphor, this is the difference between a research program that exists on the trunk (or sets fire to the roots) and one which lives among the branches.

That may be true, but having recently gone through the job market process for the first time* I'm beginning to think that the "safe" path is actually not low-risk at all. By that I mean that the success of a research program which only asks branch questions is idiosyncratic: some hiring committee better be really interested in those particular branches, or else generating sufficient interest in your research to get a job offer will be difficult. At the same time, there better not be anyone else on the market investigating these particular branches; or if there is, you need to be doing it noticeably better than them.

Branch-work, by definition, does not have immediate appeal to the broad discipline. And broad appeal is helpful when trying to convince hiring committees (and then entire departments) that your work is interesting and important enough that they should pay you to do it, even if most of them don't really understand the particulars of what you're doing.

That doesn't mean that every grad student should try to upend the discipline with every dissertation. That's not my strategy, and I don't think it's a good one. It does mean that research programs which ask big questions of broad interest -- bough and trunk -- are at an advantage to those which do not, holding the quality of the research constant. And those which can ignite the roots are better still.**

*About which more another time, I suppose. I've been planning to write a post about this for a while now, but haven't been motivated.

**Unless you're trying to get hired in a department where those roots are buried.

Friday, January 4, 2013

No Social Science Among Social Scientists


Matt Yglesias, one of the most econ-literate journalists going, is at the annual flagship conference for economists and notes something odd:
I'm in San Diego for the American Economics Association's annual meeting, and so naturally I wanted to grab a cup of coffee in the hotel lobby before the 8AM sessions started. Imagine my surprise to find that as of 7:45 AM the lines were punishingly long and there was no way I'd be able to get to the session on high-skill immigration if I waited around.  
Sad. And a result of a shocking lack of economics. Clearly the price charged should have been much, much higher. You only have the logistical capacity to serve so many people between 7:30 and 8:00 AM, so you ought to serve the people with the most willingness to pay. Let folks who don't care about being on time to an 8AM session just wait around and buy their coffee later after prices fall. Let those who place a strong premium on both coffee and punctuality pay through the nose. It seems so simple, and yet even at a conference of economists nobody wants to apply economic ideas.
I find the same thing is true in political science. For example, when Jeff Flake launched a campaign to cut federal funding of political science research what did political scientists do? Did we utilize our theories of politics to form an effective lobbying organization? Did we use our professional organization to overcome the collective action problem and secure our rents? No. APSA released an outraged statement on its website and encouraged members to... write their Congressperson. How imaginative, how theory-driven, how efficacious. The Monkey Cage reviewed a bunch of studies which had received NSF funding. That's about the sum total of the response from political scientists, excepting snarky posts on Facebook and Twitter.

Similarly, it is a cliche that political science departments are often governed, shall we say, sub-optimally. While I'm not yet a faculty member, and thus don't yet have much experience in this area, I hear stories all the time (not just from my department) about how weird and screwed up things frequently get in faculty meetings. Why not use our theories to remedy this in Pareto-improving ways? We supposedly know how to do that. While we're at it, why don't political scientists control governance at the university level? Don't we know how insurgencies succeed?

And why do most of us vote, contribute to campaigns, and even volunteer to work for particular candidates? Leaving aside whether voting is "rational" for instrumental reasons, most of our theories suggest that actual policy differences between candidates will be minimal and that most of politics occurs in a bureaucratic setting anyway. Why not direct our efforts towards influencing that process instead? For that matter, why are the successful politicians a bunch of lawyers rather than political scientists?

I could go on, but you get the point: all too often social scientists tend not to take their theories seriously enough to actually make use of them in the real world. That says something. Not sure what, but something.

Monday, December 24, 2012

Update on L'Affaire Loomis


Via Dan Nexon. While I see Nexon's point that we are dealing with mealy-mouthed university administrators, I must completely disagree with his ("modest") level of satisfaction. This represents no victory at all because this new statement from URI officials, like the first one, completely misses the point. This is not about First Amendment rights. Nobody was saying that Loomis should be thrown into the deepest darkest dungeon never to be heard from again. They were saying that he should be fired or otherwise professionally damaged for an emotional -- and politically motivated -- response to a mass killing.

The relevant standard here is academic freedom, not First Amendment rights. The University of Rhode Island subscribes to the 1940 "Statement of Principles on Academic Freedom and Tenure" issued by the American Association of University Professors. This Statement indicates that Loomis deserves the full support of the University of Rhode Island even if he was speaking under the banner of the University. (Which he always is, implicitly, contra the views of the CT commenters.) Instead of espousing that principle, which is fundamental to the mission of public universities, the University has repudiated it by saying that Loomis deserves no greater protection than those who have written to the University on this matter, whether in solidarity with or opposition to Loomis.

Loomis does not need the University to protect him from the threats of violence he has received; he has the FBI and the Rhode Island police for that. Loomis does not need the University to protect him from those who would suppress his speech; he has the U.S. Constitution for that. Loomis needs the University to protect him from professional damage as the result of a campaign of sabotage in response to his expression of a political nature. The University has failed to do that. Therefore the University has failed.

This new statement from URI is no better than the first. It simultaneously misses the point and refuses to honor its obligations to its faculty. A better statement would have read, in toto:

"The University of Rhode Island does not comment on the statements of individual faculty members, but it steadfastly defends the principles of academic freedom which are an essential component of the University's commitment to 'fostering a collective and individual propensity for inquiry' so that students may 'communicate, understand, and engage productively with people very different from themselves', including those with different beliefs and values."

UPDATE: Dan Nexon further explains his position. I respond in comments.

Tuesday, October 2, 2012

Academia QotD


Jim Dixon, the fortunate one in Kingsley Amis' Lucky Jim, is described by Matthew Walther:

Its eponymous hero, Jim Dixon, is a junior lecturer in history at an undistinguished Welsh college. Dixon’s pleasures are simple: he smokes a carefully allotted number of cigarettes each day and drinks a rather less measured amount of beer most nights at pubs. His single goal is to coast successfully through his two-year probation period and become a permanent faculty member in the history department.

Standing in his way is the departmental supervisor, Professor Welch. (“No other professor in Great Britain, Dixon thought, set such store by being called Professor.”) Welch is a dedicated amateur flautist—or, as he insists, recorder player—and busybody who forces Dixon to attend chamber music recitals during impossibly dull weekend visits to the professor’s home and perform quotidian tasks such as doing Welch’s research for him and proofing his manuscripts.

In order to remain in good standing with his department, Dixon must also publish an article, “The Economic Influence of Shipbuilding Techniques, 1450 to 1485,” in a scholarly journal. Dixon, despite his having little knowledge and even less interest in the period, is a medievalist. Amis’s description of Dixon’s article will ring true for anyone who has ever been forced into academic writing: “It was a perfect article, in that it crystallized the article’s niggling mindlessness, its funereal parade of yawn-enforcing facts, the pseudo-light it threw upon non-problems. Dixon had read, or begun to read, dozens like it, but his own seemed worse than most in its air of being convinced of its own usefulness and significance.”
That last bit, the "niggling mindlessness" and most especially the "pseudo-light it threw upon non-problems," is one of the great indictments of any profession in literature. As a description of quite a lot of social science it has the markings of the best satire: hilarity (it is even laugh-out-loud funny), recognizable truth, and melancholy.

Of course it isn't all that way. But still.

Tuesday, June 26, 2012

PSA

. Tuesday, June 26, 2012
0 comments

There is a new site called Footnote, which is dedicated to translating academic work for a general audience. They asked to re-run selected blog posts written by me, and I've agreed. Here is a link to the first one. Presumably more to follow.

So far I like the site. It's clearly still growing, but it's a really good idea and they have a decent cast already assembled. Give it a look.

Wednesday, April 25, 2012

Not Quite Crony Capitalism?

. Wednesday, April 25, 2012
0 comments

I haven't read this yet, but Lucas Puente -- a PhD student at Stanford -- has an interesting-looking article in the new PS (I don't see an ungated version). Abstract:

I investigate one mechanism through which financial institutions could have used political influence to receive preferential treatment in the US Department of the Treasury-administered “bailout.” I find that neither proxies of political influence nor other political variables, such as public interest in specific deals, can explain variance in the sale price of warrants (a type of financial asset) Treasury acquired through TARP's Capital Purchase Program. Moreover, I find that the more politically active the firm is, the more likely Treasury is to auction its warrants (thereby receiving fair market value). This conclusion is not consistent with recent studies investigating the role of such variables in the initial administration of TARP and can be interpreted as good news for American taxpayers.
PS summary (bold added):
In the wake of the recent global financial crisis, many have suggested that the US government's administration of the taxpayer-funded rescue of the financial industry offered disproportionate benefits to politically active firms. However, quite the opposite occurred. Puente's research into Treasury's handling of the disposition of warrants (assets similar to stock call options) acquired through the Capital Purchase Program (CPP) shows that, at least in this phase of the "bailout," political variables did not matter. That is, lobbying expenditures, campaign contributions, and connections with Secretary of the Treasury Geithner, among other independent variables, cannot explain variance in the percentage of market value Treasury received for these warrants. Moreover, according to Puente, the more politically active a firm is, the more likely Treasury is to auction its warrants (thereby receiving fair market value). This suggests that Treasury is attempting to counteract allegations of preferential treatment. Taxpayers should be pleased. By insulating itself from politics and making efforts to maximize the taxpayer return on the warrants, Treasury may have prevented billions of dollars in taxpayer losses.
I personally don't find this very surprising. Nor would I find it surprising if preferential treatment came mainly through less transparent channels, e.g. the Fed. It looks like Puente might be investigating that question in his ongoing research.

Friday, March 30, 2012

Academia vs. Policy Snark

. Friday, March 30, 2012
0 comments

Andrew Exum, today:

Nonetheless, in case anyone is interested, these are the [academic] journals I dutifully scan for articles, listed in the order I typically read them. ...
3. The American Political Science Review
Andrew Exum, 2010:
Anyway, you guys could probably care less why I never read the APSR. 
Wonder what made him change his mind.

Wednesday, March 28, 2012

On IR Professionalization, Grad School, and What I Think I've Been Doing for the Past Four Years

. Wednesday, March 28, 2012
9 comments

Dan Nexon has written a lengthy diatribe opposing what he calls the "over-professionalization" of IR, particularly as it involves graduate student instruction. Over-professionalization seems to reduce to, basically, extensive training in neopositivist approaches (mostly but not only quantitative) to the study of international relations, with less space for idiosyncratic approaches. Nexon believes this has led to, and is leading to, an impoverishment of IR theory. It's well worth reading, as is Erik Voeten's critique, and comments on both posts.

No one has responded to Nexon from my perspective -- a grad student about to go on the job market -- and I've been thinking a decent amount about these issues lately, so I'm going to chime in. Before I get into more specific discussion, let me note that it's been very interesting to watch the ways in which my approach to grad school has evolved over the past four years, and the ways in which my fellow grad students have taken different approaches to their time in grad school. For me, having an academic career is only valuable if I can do it more or less my way. I'm not a "careerist" in that sense... if what it takes for me to get a job at a good school is to do a bunch of work that I don't enjoy or find interesting, then I don't want a job at a good school. I'd much rather have a place at a lesser school or in a non-academic setting. Therefore, I'm somewhat risk-acceptant regarding both my professionalization (evidence: my willingness to pick blog fights with tenured faculty) and my research program (which carries a nonzero chance of being an abject failure). Some of my fellow graduate students don't seem to feel the same way, and they make their choices accordingly. There's nothing at all wrong with that; it's just not the approach I'm comfortable taking.

At this point I have almost no idea what the values of "the discipline" at large are, and I'm somewhat skeptical that there is any such thing.* It seems to depend very much on which departments we're talking about, and even then it's somewhat contingent upon the balance of recent hires/turnover within departments. Same with the journals. While different journals (naturally) tend to publish different types of work, it's not clear whether that is because authors are submitting strategically, editors are dedicated to advancing their preferred research paradigms, both, or neither. There are so many journals that any discussion of them as doing any one thing -- or privileging any one type of work -- seems like painting with much too wide a brush. Nevertheless, on to Nexon's claims.

Perhaps ironically, I think the phenomenon that Nexon is trying to describe (to the extent that it exists at all) is best understood using language taken from formal bargaining theory. That is, in an environment where competition for any decent academic job is fierce -- much less a so-called "top tier" job -- there is no possible way that grad students will not be "over-professionalized". Grad students will always be incentivized to game the job market as best as they can, and their advisors and departments are incentivized to help them do it. The job market rewards graduate students that can strongly signal an ability to maintain a productive research agenda into the indefinite future. Even if the efficient outcome is for a less professionalized discipline (if e.g. it leads to more interesting theorizing), and Nexon does little to establish the case that it is (more below), those who defect and become more professionalized are more likely to get jobs because they will be the ones who can better signal their quality to hiring departments. In politics it is often said that you can't enact the policy without being elected, and you can't have a long career doing interesting research in IR without first getting hired somewhere. Getting a job is therefore the first goal of every student, advisor, and department. There is no other measure of success, nor can there be.

From where I sit, as a grad student there appear to be only so many ways you can signal your value to hiring committees that receive tons of applications for each job -- particularly if you're coming from a department outside of the top 10. One way is to demonstrate mastery over difficult material and/or techniques for inquiry that will be useful in conducting research over a long career. Another is by having work published -- indicating that you actually can do research -- and in progress -- indicating that you don't have only one decent idea -- and by getting well-regarded scholars to write strong letters on your behalf. For the first, taking an extra methods course adds to the "toolkit" in ways that taking an extra class out of your major field does not. Or hell, even within your major field, if it's on a topic that isn't directly relevant to your dissertation. Sure, the substantive class will expose you to new ideas, but nothing's stopping you from reading those books on your own. The value added of discussing the substantive work in seminars with other grad students who only had time to skim the reading is almost surely less than the time spent learning a new method. Having more training in more methods gives you a greater ability to ask a wider range of questions and a greater ability to answer them. We're always being told to not let method determine research question, but if you only know one method you have no other option. Moreover, without some methodological chops it is difficult to conduct research that is publishable (the second signal) or that is interesting enough to excite jaded faculty that are writing letters for you (the third). In short, it's important for graduate students in political science to be able to do political science. Methods training is a prerequisite for that.

Unlike Nexon, I don't see this as a very bad thing.** I've taken more methods classes in my graduate education than substantive classes. I don't regret that. I've come to believe that the majority of coursework in a graduate education in most disciplines should be learning methods of inquiry. Theory-development should be a smaller percentage of classes and (most importantly) come from time spent working with your advisor and dissertation committee. While there are strategic reasons for this -- signaling to hiring committees, etc. -- there are also good practical reasons for it. The time I spent on my first few substantive classes was little more than wasted; I had no way to evaluate the quality of the work. I had no ability to question whether the theoretical and empirical assumptions the authors were making were valid. I did not even have the ability to locate what assumptions were being made, and why it was important to know what those were. Those questions, which are central to any study in political science (or should be, anyway), can only be answered once one has some sense of how theory is constructed, how models perform, how data is collected and analyzed, and why every choice made by the analyst is important. Coming into grad school I didn't have that ability despite being fairly comfortable with basic statistics (what Nexon lumps into GLR -- "general linear reality") and models as an undergraduate economics major. (In my opinion it's still a weakness for me, and for nearly everyone I come across in academia.)

The fact remains: students at architecture schools are not asked to design a skyscraper on the first day. Someone taking their first saxophone lesson is not encouraged to abandon traditional chord structures in favor of free improvisation. Nor would it be productive for most graduate students to be "thinking outside the box" during their early years in graduate school. At that point I didn't even know where the box was. I couldn't possibly have been any less professionalized, and so I was incapable of producing anything of theoretical or substantive value. I don't think a single one of my grad student colleagues was any different.

More importantly, I don't think that learning a bunch of methods closed off avenues of research for me; on the contrary, it opened them up! Learning that not every real-world variable has a Gaussian distribution, that assumptions regarding the data-generating process are very important, that (yes) not every process in the world is linear, that bias can come in all shapes and sizes (regardless of what method one uses, and I don't just mean quant here)... this is important knowledge to possess! I could not acquire it without "professionalizing" myself.

So with the caveat that he has much more experience and knowledge than me, I don't think I can agree with Nexon's core argument: that professionalization of graduate students has led to a lack of interesting theorizing. I believe a decent amount of professionalization is a prerequisite for understanding how good theories are constructed and how they may be evaluated. I do think that there is not all that much interesting theory being produced in IR currently, but I think that's because it's really hard to come up with original theory. Not just in IR... theoretical development moves slowly in all sciences. Most of the work is in pushing at the edges of existing theory in more-or-less straightforward empirical ways.*** It takes something approaching genius to develop original theories and let's face it... most graduate students (and professors, practitioners, and lay-people) just don't possess the necessary quality whether they've been professionalized or not.

If there is one thing that I wish I had spent more time on in my graduate classes it would be taking the philosophy of science more seriously. I suspect, from other things he's written, that Nexon would agree. Other than the first-semester scope and methods class -- which came too soon to be of much use, and doesn't seem to be conducted super-rigorously almost anywhere -- attention paid to the philosophy of science has been spotty, sporadic, and mostly ad hoc. This is, I believe, a problem with social science more generally and not just graduate instruction at the top 20 (or so) IR grad programs. But such a focus would almost necessarily have to lead to more professionalization rather than less.

*When we discuss what "IR" wants in terms of hiring decisions, it seems relevant to note that hiring decisions are generally made by departments as a whole and not just the IR people. I'm sure that in many cases faculty from other subfields will defer to IR people when making IR hires, but I imagine that there's quite a lot of variation there.

**I'm not sure he does either. He seems to be saying different things at different times. I think what he'd like to say is that a strong methods training would be a very good thing if it didn't lead to the sort of professionalization that he doesn't like. But he doesn't quite say that, and I can't tell if it's really true.

***On a somewhat related point, tangential to my post as well as Nexon's: if I were more familiar with the literature I think I'd like to write a post arguing that political "science" must mean something specific. Political practice is different from political science, and I think there is a lot of use in keeping the distinction between normative and positive theory intact. The falsifiability criterion seems to me to be as good as any. I think there are good reasons for arguing that certain types of theory are not political "science" without diminishing their utility for people who are interested in politics, theories about the world, etc. The word "science", if it is to have any meaning, must distinguish between styles of inquiry based on their approach, their methodology. I'm not sure where that line falls, but it must exist.

Monday, January 16, 2012

Academic Publishers Are Evil

. Monday, January 16, 2012
4 comments

Yeah, nothing new. There seems to have been a recent uptick in people getting angry about it. This rant in particular was pretty satisfying. And while there are some positive trends towards increasing access to research -- e.g. JSTOR is moving towards open access -- in general the barriers to dissemination of research are silly.

Because I have access to university facilities, usually I find things like journal access to be more of an annoyance than anything. I have to log onto the university's library's web page, navigate through five or six screens, enter passwords a few times, and then I get the article. That's annoying, but at the end of the day I get access to almost everything for free.

Almost everything. I currently want to read an article in the newest issue of Political Science Quarterly, but my university's library only has online access for PSQ issues that are at least six months old. So I can't read this article. And I can't find an ungated version anywhere else. I guess I could go to the physical library and try to navigate the hundreds of journals on the racks, but by the time I find it (assuming I do), check it out, and get back to my office I will have spent half an hour or more of my time, which is probably about as long as it would take me to read the article. Plus I won't be able to keep an electronic copy to reference in the future unless I scan it.

In this case making their material difficult to read is bad for the author and publisher as well, because I likely would have blogged the article. That's (a very small amount of) free publicity, now lost. I might have assigned it to my class, as I'm looking for one more current reading to add to the syllabus on this topic. But not if my students can't find it. And -- not that they really care -- I can't imagine ever submitting any of my own work to a journal where I know that no one will be able to read it until it's lost currency.

Perhaps the intention of the policy is to motivate me to pay for a subscription to PSQ. Instead it's motivated me to ignore it entirely. It's not like I don't have other things to read.

Monday, November 7, 2011

Making a Mystery Where None Exists

. Monday, November 7, 2011
2 comments

Ryan Avent, at Free Exchange:

It is remarkable to me how readily old, successful professionals dismiss the labour-market difficulties of young adults as the product of their poorly-chosen majors and general lack of ambition, and on what flimsy evidence they're prepared to base these views. There are now 3.3m unemployed workers between the ages of 25 and 34. That's more than twice the level in 2007. There are over 2m unemployed college graduates of all ages; nearly three times the level of 2007. There are many millions more that are underemployed—unwillingly working less than full-time or unwillingly working in a job outside their field which pays less than jobs in their field. As far as I know, the distribution of college majors didn't swing dramatically from quantitative fields to art history over the past half decade.

Meanwhile, the Wall Street Journal provides us with a handy interactive graphic examining unemployment rates by major according to the 2010 Census. Coming in toward the top of the list and ahead of "art history and criticism" are the sorts of degrees you'd expect, like those falling into "miscellaneous fine arts", but also "computer administration management and security", "engineering and industrial management", "international business", "electrical and mechanic repairs and technologies", "materials engineering and materials science", "genetics", "neuroscience", "biochemical sciences", and "computer engineering". I bet those graduates are all trying to break into puppetry!
Avent is correct that this recession has driven up unemployment among college graduates, but their rates of unemployment remain roughly half the national average, less than half the average of those with only high school degrees, and less than a third of the average of those without a high school degree. (Those with postgraduate degrees are in even better shape.) Those with freshly-minted bachelor's degrees but little experience and few professional connections aren't doing as well as those with many years in the professional world, as one would expect, but it still seems clear that having a college degree greatly enhances your ability to remain employed.

I agree that the WSJ's graphic is handy, but I see different things in it than Avent does. Here are the top professions by median wages:



And here are those by lowest unemployment rate:



There are a lot of quant degrees on both lists. The rest are mainly high-skill services. No humanities, no puppetry, no arts of any kind. (I'd guess that the low rates of unemployment -- albeit with fairly low wages -- in teaching and student counseling are related to the strength of those unions in the public sector, but I can't do any better than guess. And I'd wager that some of the surprisingly high rates of unemployment in some technical fields are related to educations that are out of date, but again that's just a guess.) Compare these charts to the data in this recent post by Alex Tabarrok and it seems pretty hard to deny that many students are not earning degrees that give them an advantage in labor markets.




Monday, October 3, 2011

Thoughts on an Article I Haven't Read

. Monday, October 3, 2011
6 comments

That would be this one, which tells me in the headline and subtitle that North Carolina "grooms its best students to be good teachers". I strongly suspect that this is not true empirically, and I certainly hope it is not. Teaching requires basic competence in the subject material plus the ability to lesson-plan effectively and communicate well. While this is not an easy job relative to many other tasks, it's not on the same level of difficulty as, oh, say, developing new medical procedures, inventing new technologies, or devising and testing new theories of human interaction.* Given that, I'd rather our best students focus on the most difficult tasks and/or those with the highest social benefit, while our capable-but-definitely-not-the-best students focus on getting first graders to color inside the lines or getting eighth graders to dissect a frog without vomiting.**

*The inclusion of the latter is me puffing out my chest, in case you couldn't tell.

**Not sure if I have those activities assigned to the proper class because I skipped 8th grade and never dissected a frog, so I assume that's when that happened.

Friday, September 23, 2011

Understanding

. Friday, September 23, 2011
0 comments

Angus Deaton has some new research that may help explain why political scientists are all neurotic:

According to Deaton’s analysis, the very act of thinking about politics makes Americans feel less happy and satisfied with their lives — an effect that’s almost as big as being unemployed.  
“People appear to dislike politics and politicians so much that prompting them to think about them has a very large downward effect on their assessment of their own lives,” he writes. “The effect of asking the political questions on well-being is only a little less than the effect of someone becoming unemployed, so that to get the same effect on average well-being, three-quarters of the population would have to lose their jobs.”
Welp, that explains a lot. Here's the paper (pdf).

Tuesday, September 20, 2011

Advice

. Tuesday, September 20, 2011
0 comments

Fabio Rojas:

For example, when I was in graduate school, I often obsessed about work even when I was on vacation. But over the years, I learned to do what I want with whom I want and not to care about what people think. Not caring about what other people think is an important life skill. Just relax as much as you can and enjoy life.
I imagine that gets easier after tenure.

Saturday, September 3, 2011

Universities Are Not (Only) About Education

. Saturday, September 3, 2011
3 comments


Angus writes:

College football is a mess, with Ohio State and The U providing the latest "scandals" and with the pattern of conference jumping we've seen lately.

I think it's time to split big time football from academics. Dissolve the NCAA. Pay the players. Don't even force them to be students if they don't want to be students. Treat college football like an age 21 and under pro league. The schools rent out their facilities, names, supporters, etc. and the football program is separate from the school itself, just like the food service program.

I've long viewed college football, and college athletics in general, as a sort of "loss leader" for the university. A prominent sports program raises the university's status. Saying that it doesn't do much for the university's core mission only makes sense if you think the core mission is to allocate resources efficiently toward the best possible educational environment.

But it clearly isn't. Universities exist for a host of reasons, most related to status and social networking rather than actual education. Which is why so many people are willing to pay huge premia to go to 4-year universities for basic classes rather than 2-year colleges, even though the class quality will usually be comparable or even better at the 2-year schools (because of smaller class sizes, professional teachers rather than researchers teaching those classes, same texts and curricula, etc.). And why many people are willing to pay even higher premia to go to flagship 4-year colleges rather than Eastern Small Town State, even though the actual education will be very similar.

So having high-profile sports programs does serve universities' core mission: it raises the university's status, and that attracts students and other sources of funding.

Tuesday, August 16, 2011

Social Science, Democracy, and Being Contrary

. Tuesday, August 16, 2011
3 comments

Fabio Rojas:

It’s often thought that social science is a tool for progress and democracy. Overall, I agree. However, there’s a sense in which social science is anti-democratic:

- Social scientists may discover that popular behaviors have bad outcomes.
- Social scientists may discover that popular government policies have bad outcomes.
- Social scientists privilege experts over the “person in the street.”
- Social scientists may find that policies favoring certain political, social or corporate actors may be bad.

You might think of this as the Ibsen view of politics and social science. And you see this already. It’s now a ritual among some politicians to trash social scientists.


I believe my thoughts on social science are pretty idiosyncratic, but it has always been part of my thinking that social scientists should have a pretty strong streak of contrarianism in them. This can manifest itself in different ways. There's the lack of one-handed economists, e.g., but what I really mean was once articulated by Christopher Hitchens like so: "Sit me down across a table with an ashtray and a bottle on it, and cue the other person to make an argument, and I am programmed by the practice of a lifetime to take a contrary position." This is not contrarianism for its own sake, but instead a constant probing and refinement of argument and evidence, as if it were a battle. Which it is. In any case it's an impulse for me, as fellow students and professors who have been unfortunate enough to have me in seminars would no doubt attest. More than anything else that impulse is what attracted me to academia in the first place.

Other social scientists seem not to share this impulse, instead enjoying the pursuit of consensus. And there's nothing wrong with that. I certainly don't want to disparage hard-won insights from social science that enjoy broad acceptance among experts. There are usually very good reasons why they are broadly accepted. But the enduring works in almost any social science field -- some of my favorites in political economy are Hume, Mill, Smith, and Marx -- are those which seek to overthrow the received wisdom rather than reinforce it. Even intellectual failures can be much more interesting than successes.

Rojas' post reminded me of Joshua Tucker's post on social science and torture from a while back:

My original thought was that good social science research that shows that torture does not extract useful intelligence information would be the final nail in the coffin in any public argument in support of torture. But what happens if one of us gets access to the relevant data, does the empirical analysis, and then discovers the opposite: that torture does lead to useful intelligence information. What do you do then? Sit on the results? Would any political science journal publish such a paper? How would that look in a tenure review? (“Right, she’s the one who said torture was valuable…”).


I couldn't imagine not publishing that paper if I'd written it. Not because I think such a finding would suggest that we should torture more, but because I think that moral and intellectual progress comes from tackling difficult questions head-on rather than shrinking from them. And because I strongly believe that all beliefs, especially core beliefs, must be carefully weighed against the best arguments in opposition to them. Without that there's no point in any intellectual enterprise. I'm very much a Mill-ian in that respect.

I agree with Rojas that social scientists often avow the merits of democracy despite the fact that much of our own research, behavior, and intuition suggests that democracy has deep flaws along many dimensions. Many of our assumptions about democracy do not actually make much sense in theory, and the empirical record is equally spotty. I think that many social scientists are too slow to ask who benefits from political institutions, and too quick to assume that in the case of democracy the answer is more or less "everyone". But we know that isn't true. I was recently reading debates from state legislatures over constitutional reforms (at the state level) in America in the 1820s, and it was amazing how up-front some statesmen were in arguing that democracy threatened their interests, while their opponents admitted that that was indeed the point. Of course the whole point of constitutions as legal documents is to restrict democratic tendencies. So why should we start from the assumption that democracy is somehow normatively "good"? Good for whom?

I don't really have a takeaway point for this post, other than I'd like to see more social scientists really question core assumptions, admit (and then defend) the normative biases in their work, and give more value to interesting intellectual experiments, even interesting intellectual failures.

Saturday, July 9, 2011

We Are All Mostly Irrelevant QOTD

. Saturday, July 9, 2011
0 comments

Tim Harford:

Approximately 3,000 scientific articles are published per day – roughly one every 10 seconds of a working day. We can now expect that these papers will, each year, cite around five million previous publications. And the rate of production of scientific papers is quadrupling every generation. (All these estimates are based on data from the Institute for Scientific Information.) The percentage of human knowledge that one scientist can absorb is rapidly heading towards zero. This side of a new Dark Age, there will never be another Da Vinci.


Whenever Tom Coburn proposes cutting NSF funding and everyone squeals, something like this comes to mind. And, like Harford, I support public (and private!) research funding, but we need to be realistic about what exactly the return to that investment is. The old romantic ideal of the scholar-polymath is over, and that was inevitable. Moreover, it's a good thing; it means that knowledge has progressed massively, to the point where no single person can absorb it all. We all have to fill our niches, and then try to make our niche-work known so it has some relevance for the outside world. So let's just recognize that we're working on the margins and try to do the best we can with that.
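For what it's worth, Harford's figures hang together on a quick back-of-envelope check. This is just a sketch of the arithmetic implied by the quote; the 8-hour working day and the 25-year generation are my assumptions, not Harford's:

```python
import math

# Figures from the quoted passage
papers_per_day = 3000
citations_per_year = 5_000_000

# Assumption: an 8-hour working day
seconds_per_working_day = 8 * 60 * 60

# "Roughly one every 10 seconds of a working day"
seconds_per_paper = seconds_per_working_day / papers_per_day
print(f"one paper every {seconds_per_paper:.1f} seconds")  # → 9.6

# Annual output and average citations per new paper
papers_per_year = papers_per_day * 365
print(f"{papers_per_year:,} papers/year, "
      f"~{citations_per_year / papers_per_year:.1f} citations each")  # → ~4.6

# "Quadrupling every generation" implies doubling in half a generation
generation_years = 25  # assumption
doubling_time = generation_years * math.log(2) / math.log(4)
print(f"output doubles every ~{doubling_time:.1f} years")  # → 12.5
```

So the quote is internally consistent: one paper roughly every ten seconds of a working day, around a million papers a year citing a handful of predecessors each, and output doubling about every dozen years.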

Meanwhile human progress marches on.

International Political Economy at the University of North Carolina: Academia
 
