Archive

Lab projects

Over the past few months we have been regularly updating our irregular leadership change (ILC) models and forecasts in order to provide monthly, 6-month-ahead forecasts of the probability of irregular leadership change for a large number of countries worldwide (excluding the US). Part of that effort has been the occasional glance back at our previous predictions, and in particular more in-depth examinations of notable cases that we missed or got right, to see whether we can improve our modeling as a result. This note is one of those glances back: a postmortem of our Yemen predictions for the first half of 2015.

To provide some background, the ILC forecasts are generated from an ensemble of seven thematic split-population duration models. For more details on how this works, or on what irregular leadership changes are and how we code them, take a look at our R&P paper or this longer arXiv write-up.
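To give a rough sense of the ensemble step, the sketch below combines component-model probabilities for a single country into one weighted forecast. The seven themes, the weights, and the probabilities are all made up for illustration; the actual ensemble is calibrated as described in the papers linked above.

```python
import numpy as np

# Hypothetical 6-month ILC probabilities from seven thematic component models
# for a single country; the numbers are invented for illustration only.
component_probs = np.array([0.04, 0.07, 0.03, 0.05, 0.06, 0.02, 0.05])

# Hypothetical ensemble weights; in practice these come from calibrating the
# ensemble on a held-out period rather than being set by hand.
weights = np.array([0.20, 0.18, 0.15, 0.15, 0.12, 0.10, 0.10])

ensemble_prob = float(weights @ component_probs / weights.sum())
print(f"Ensemble 6-month ILC probability: {ensemble_prob:.3f}")
```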

We made a couple of changes this year, notably adding data for the 1990s, which in turn cascaded into more changes because of the variation in ICEWS event data volume. This delayed things a bit, but eventually we were able to generate new forecasts for the period from January to June 2015, using data through December 2014. Here are the top predictions:

Country         6-month Prob.
Burkina Faso    0.058
Egypt           0.055
Ukraine         0.044
India           0.038
Somalia         0.038
Afghanistan     0.035
Nigeria         0.030

Read More

Alexander Noyes and Sebastian Elischer wrote about good coups on the Monkey Cage a few weeks ago, in the shadow of the fallout from the LaCour revelations. Good coups, in this usage, are those that lead to democratization, rather than the outcomes one might more commonly associate with coups, like military rule, dictatorship, or instability. Elischer, although on the whole less optimistic about good coups than Noyes, writes:

There is some good news for those who want to believe in “good coups.” A number of military interventions in Africa have led to competitive multiparty elections, creating a necessary condition for successful democratization. These cases include the often (perhaps too often)-cited Malian coup of 1991, the Lesotho coup of 1991, the Nigerien coups of 1999 and 2000, the Guinean coup of 2008, the Malian coup of 2012 and potentially Burkina Faso’s 2014 coup, among others.

Here is a quick look at the larger picture. I took the same Powell and Thyne data on coups that is referenced in the blog posts and added the Polity data on regimes to it. Specifically, I added the Polity score 7 days before a coup and 1 and 2 years afterwards, although I'll focus on the changes 2 years later. The Polity score measures, on a scale from -10 to 10, how autocratic or democratic a regime is. The scale is in turn based on a larger number of items coded by the Polity project. It's not quite an ordinal or interval scale, in part because there are a few special codes for regimes that are in transition, or where a country is occupied or without a national government (a failed state). Rather than exclude these special codes or convert them to regular Polity scores, I grouped the Polity scores into several broader categories ranging from autocracy to full democracy, and kept the special codes under the label "unstable", which may or may not be a good description for them.
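For concreteness, here is a rough sketch in Python of the kind of recoding just described. The cut points and category labels are my own assumptions, since the post does not list them, and the special Polity codes (-66, -77, and -88 for interruption, interregnum, and transition) are the ones mapped to "unstable".

```python
import pandas as pd

POLITY_SPECIAL = {-66, -77, -88}  # interruption, interregnum, transition

def regime_category(score):
    """Collapse a Polity score into a broad regime category.

    The cut points below are illustrative assumptions, not necessarily
    the ones used in the post.
    """
    if score in POLITY_SPECIAL:
        return "unstable"
    if score <= -6:
        return "autocracy"
    if score <= 0:
        return "closed anocracy"
    if score <= 5:
        return "open anocracy"
    if score <= 9:
        return "democracy"
    return "full democracy"

# Toy example: Polity scores 7 days before a coup and 2 years afterwards.
coups = pd.DataFrame({
    "polity_before": [-7, 6, -88],
    "polity_after_2yr": [-88, 8, -2],
})
for col in ["polity_before", "polity_after_2yr"]:
    coups[col + "_cat"] = coups[col].map(regime_category)
```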

The overwhelming pattern for all 227 successful coups that the data cover is that things stay the same (0.41 of cases) or get worse (0.40 of cases). The plot below shows the number of times specific category-to-category switches took place, with the regime 7 days before a successful coup on the y-axis and the regime 2 years later on the x-axis. It's really just a slightly fancier version of a transition matrix.

Regime categories 2 years after a successful coup, by category 7 days before.
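The counts behind a plot like this amount to a cross-tabulation of the before and after categories. Continuing the sketch above (the ordering used to decide what counts as "worse" is again my assumption):

```python
import pandas as pd

# Continuing from the `coups` data frame sketched above.
transitions = pd.crosstab(coups["polity_before_cat"], coups["polity_after_2yr_cat"])
print(transitions)

# Share of coups where the regime category stayed the same or moved down,
# treating "unstable" as the bottom of an assumed ordering.
order = ["unstable", "autocracy", "closed anocracy",
         "open anocracy", "democracy", "full democracy"]
before = coups["polity_before_cat"].map(order.index)
after = coups["polity_after_2yr_cat"].map(order.index)
print("same: ", (after == before).mean())
print("worse:", (after < before).mean())
```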

Read More

This post was written by Jay Ulfelder and originally appeared on Dart-Throwing Chimp. The work it describes is part of the NSF-funded MADCOW project to automate the coding of common political science datasets.

Guess what? Text mining isn’t push-button, data-making magic, either. As Phil Schrodt likes to say, there is no Data Fairy.

I’m quickly learning this point from my first real foray into text mining. Under a grant from the National Science Foundation, I’m working with Phil Schrodt and Mike Ward to use these techniques to develop new measures of several things, including national political regime type.

I wish I could say that I’m doing the programming for this task, but I’m not there yet. For the regime-data project, the heavy lifting is being done by Shahryar Minhas, a sharp and able Ph.D. student in political science at Duke University, where Mike leads the WardLab. Shahryar and I are scheduled to present preliminary results from this project at the upcoming Annual Meeting of the American Political Science Association in Washington, DC (see here for details).

When we started work on the project, I imagined a relatively simple and mostly automatic process, running from locating and ingesting the relevant texts to data extraction, model training, and, finally, data production. Now that we're actually doing it, though, I'm finding that, as always, the devil is in the details. Here are just a few of the difficulties and decision points we've had to confront so far.
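For a sense of what that "simple" version looks like on paper, here is a bare-bones sketch of such a pipeline: ingest a handful of documents, extract features, train a classifier, and produce labels for new texts. The texts, labels, and model choice are all invented for illustration and are not the project's actual pipeline.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy "corpus": country texts with hand-coded regime labels (all made up).
texts = [
    "opposition parties contested free and fair elections",
    "the ruling party banned opposition media and jailed critics",
    "parliament and independent courts checked executive power",
    "the military council suspended the constitution",
]
labels = ["democracy", "autocracy", "democracy", "autocracy"]

# Feature extraction and model training in one step.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# "Data production": label a new, unseen text.
print(model.predict(["the junta postponed elections indefinitely"]))
```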

Read More

Improvised explosive devices, or IEDs, were extensively used during the US wars in Iraq and Afghanistan, causing half of all US and coalition casualties despite increasingly sophisticated countermeasures. Although both of these wars have come to a close, it is unlikely that the threat of IEDs will disappear. If anything, their success implies that US and European forces are more likely to face them in similar future conflicts. As a result there is value in understanding the process by which they are employed, and being able to predict where and when they will be used. This is a goal we have been working on for some time now as part of a project funded by the Office of Naval Research, using SIGACT event data on IEDs and other forms of violence in Afghanistan.

Explosive hazards, which include IEDs, in our SIGACT data.

Read More

Thailand's Army chief General Prayuth announces the coup on television on 22 May 2014. Source: SCMP

This morning (22 May 2014, East Coast time), the Thai military staged a coup against the caretaker government that had been in power for the past several weeks, after months of protests and political turmoil directed at the government of Yingluck Shinawatra, who had herself been ordered by the judiciary to resign on 7 May. This follows a military coup in 2006, and more than a dozen successful or attempted coups before then.

We predicted this event last month, in a report commissioned by the CIA-funded Political Instability Task Force (which we can’t quite share yet). In the report, we forecast irregular regime changes, which include coups but also successful protest campaigns and armed rebellions, for 168 countries around the world for the 6-month period from April to September 2014. Thailand was number 4 on our list, shown below alongside our top 20 forecasts. It was number 10 on Jay Ulfelder’s 2014 coup forecasts. So much for our inability to forecast (very rare) political events, and the irrelevance of what we do.

Read More

The prediction community owes a great deal to Phil Tetlock, who has been involved in some of the largest and longest evaluations of expert forecasts to date. Tetlock is perhaps most widely known for his two-decade-long study of political forecasters, which found that "foxes" (who know a little about a lot of different topics) typically outperform "hedgehogs" (who know a lot about one specific domain) in near-term forecasting. Over the last three years, Tetlock, Barbara Mellers, and Don Moore have led the Good Judgment Project, a large-scale forecasting tournament.

The Good Judgment Project began in mid-2011 as a forecasting tournament between five teams, sponsored by the US Government. (Read early coverage of the project from The Economist here.) Each of these teams had its own methods for leveraging the knowledge of its members to generate accurate forecasts about political and economic developments around the world. For example, the Good Judgment Team now assigns its forecasters to smaller teams of about a dozen members. This allows for collaboration in sharing information, discussing questions, and keeping each member motivated. Example questions include “What will the highest price of one ounce of gold be between January 1, 2014 and May 1, 2014?” or “Who will be the King of Saudi Arabia on March 15, 2014?” Predictions are scored both individually and as a team using Brier scores.
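As a refresher on the scoring: a Brier score is the mean squared difference between the forecast probabilities and the observed outcomes (1 if the event happened, 0 otherwise), so lower is better. A minimal sketch for a binary question follows; the tournament's exact rules, for example how multi-option questions and daily forecast updates are handled, differ in the details.

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between forecast probabilities and 0/1 outcomes."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# e.g. three forecasts of 0.7, 0.8, and 0.9 for an event that did occur:
print(brier_score([0.7, 0.8, 0.9], [1, 1, 1]))  # ~0.047
```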

Season 3 of the tournament began this summer, and for the first time forecasters have access to information from ICEWS, provided directly by the ICEWS project. ICEWS covers five events of interest (insurgency, rebellion, ethnic or religious violence, domestic political crisis, and international crisis) around the world on a monthly basis, and makes forecasts six months into the future. Two current Good Judgment questions related to ICEWS are:

  • Will Chad experience an onset of insurgency between October 2013 and March 2014?
  • Will Mozambique experience an onset of insurgency between October 2013 and March 2014?

Read More

GDELT (gdelt.utdallas.edu) is a global database of events coded from vast quantities of publicly available text produced by the world's news media. It has created a great deal of excitement in the social science community, especially within the field of international relations. But it has had wider visibility as well: in August 2013, there were 150,000 views of a map of protest activity around the world based on the GDELT database. Event data have been around for several decades, but the GDELT project has generated new interest.

ICEWS is an early warning system designed to help US policy analysts predict a variety of international crises to which the US might have to respond. These include international and domestic crises, ethnic and religious violence, as well as rebellion and insurgency. The project was created at the Defense Advanced Research Projects Agency, but has since been funded (through 2013) by the Office of Naval Research. ICEWS also produces a rich corpus of text, which is analyzed with powerful techniques of automated event-data production. Since GDELT and ICEWS are based on similar, though not identical, methods and sources, it is interesting to compare them.

ICEWS event data, 2001-2013: the gray line shows stories and the black line shows events.

One area in which they differ most conceptually is that ICEWS follows a more traditional approach to event data, seeking to encode a chronology of events that reflects, in some sense, the putative ground truth of what occurred. The figure above shows the corpus of stories in ICEWS (gray) and the resulting events (black): total events are fairly stable over time even though the number of media stories increases. GDELT is more concerned with building a comprehensive catalogue of all media stories (and other text) on reported events, and the corpus of those media stories is increasing exponentially, as the figure below shows. As a result, the number of events in GDELT is also increasing over time, much more so than in ICEWS.

Read More