Prediction as a device to evaluate theory

Given that explanation in the absence of prediction is ‘pre-scientific’ (Schrodt 2014), the strong emphasis on evaluating out-of-sample predictive performance allows us to subject the theoretical and empirical model components underlying ViEWS to very demanding tests. In doing so, we will also explore why some statistically significant variables fail to improve predictive performance. One possible reason is over-fitting, which is common when data are sparse (Ward et al. 2010). Our methodological work on model selection and averaging will also shed light on whether predictive performance depends on specification problems. Finally, we will explore how our integrated empirical models speak to the theoretical literature.


Kick-off meeting for the ViEWS project

The ViEWS project started up with a kick-off meeting on 17–18 January. The project, directed by Håvard Hegre and involving Hanne Fjelde, Lisa Hultman, and Desiree Nilsson, as well as an international team of researchers, will develop, test, and iteratively improve a pilot Violence Early-Warning System (ViEWS). It will provide early warnings for four forms of political violence: armed conflict involving states and rebel groups, armed conflict between non-state actors, violence against civilians, and forced population displacement. These warnings will be applied to specific actors, sub-national geographical units, and countries. The system will leverage the data resources of the UCDP and other data sources developed by the Department and the project’s international partners.

The project has five years of funding from the European Research Council and involves collaboration with the Peace Research Institute Oslo (PRIO).