Monday, February 17, 2014

Brief considerations on policy relevance, rigor, and political science

In a piece in the New York Times that I won't bother linking to, the Kristof dude laments that political science is not sufficiently relevant, yadda yadda. Most readers will know about it anyway. Usual stuff, probably not even worth discussing. I have the impression that the NYT runs an article like that (with optional snippets from Perestroika supporters thrown in) every month. Erik Voeten has already provided a response, which lists important policy-relevant work that political science has produced recently.

My take is going to be different. From the privileged position of someone who, at least temporarily, failed in his career as a political scientist, I see things from a different angle. In fact, due to circumstances not always under my control, I found myself working as an empirical social scientist on projects that are first and foremost policy-minded. Hence I can offer some insights on what "policy people" might mean when they complain about political science not being policy relevant, and what they might mean when they complain about "math" or "methodological complications." In my view, these are two somewhat distinct perspectives, though Kristof's achievement is to implicitly combine them into a single fetid mix.

First of all, some policy-minded people resent the fact that we are slow. Every paper that appears in a top journal takes a significant amount of work, including multiple conference and seminar presentations and several rounds of revision, and in the end addresses only a very small, well-defined question, with stringent scope conditions attached. But policy people often need a lot of advice, and fast. And, most importantly, they misunderstand the radical difference that exists between a hunch and an evidence-based claim.

I'll give an example. One can read ten years of the APSR and find fewer claims there than a billionaire and a journalist can cram into a single book. Try reading the book. As the Venetian saying goes, the wise man knows little, the ignorant man knows too much, but the fool knows everything. In a few hundred pages, the authors demonstrate that they have an answer to basically every important question about development, democracy, culture, and institutions that has been puzzling humanity for two-and-a-half millennia; that has occupied minds like those of Aristotle, Adam Smith, and Immanuel Kant; and that keeps thousands of living political scientists and economists busy seven days a week. [The authors go as far as sketching a complicated institutional framework for a new "intelligent governance", with specific provisions for elections, agenda setting, etc. There's even a Quadrumvirate (!). This would be at best mildly amusing in its megalomaniac childishness, if it were not tragic: the book is on some "best book" list in the Financial Times (!)]

In sum, what "policy people" do not understand is the huge rift between assertions or hunches and claims like this or this or this (just to pick some cool pieces from the most recent issues of the top polisci journals). And what people like the authors of the book linked above probably also do not understand is that Laura, Kate, and Cyrus, and most of my colleagues for that matter, could write ten books of assertions a year, if so inclined. It's not that they are slow because they are not smart enough; they produce knowledge at the correct pace, given that they aim to make valid claims.

The second problem policy people have is that rigor appears, to them, as a straitjacket that prevents them from supporting their favorite foregone conclusion. Deductive work (e.g., game-theoretic modeling) and statistical analysis enforce rigor by construction. So in part, the hostility towards "math" work is just hostility towards rigor. For example, over the past months I have received reminders from the higher-ups about how one project I am working on is "not an academic project." It was not clear to me what this meant. I kept saying that academic rigor is needed to find the correct answer, and that the policy community should work based on the best available knowledge. But then I started getting hunches about what "not being an academic project" really meant: in fact, I have a strong impression that one result of the data analysis seems to be already known, even before collecting the data. It's a minor one though: just that "democracy is not good for governance." Good luck with that.


2 comments:

  1. One of my responses to the Kristof piece was that political science is, if anything, becoming more policy relevant because the increasing use of experiments and the more rigorous focus on causal identification often necessarily involve studying interventions that could in fact be affected by policy (as compared to studying more structural variables that cannot be changed through policy levers). The articles that you highlight are certainly in this vein.

    ReplyDelete
  2. Matt: Indeed, if the complaints were motivated by a genuine concern for applied insights, the policy community should be happier about the recent intervention-based research. Again, there are two problems: a) intervention-based research is perceived as "too narrow"; b) it shows - if done properly - what actually works, not what one would want to work. Armchair speculation is preferred, then, because it can be arbitrarily broad, and it can more easily reach the desired foregone conclusion.

    ReplyDelete