From June 6th to June 10th I was fortunate to be able to attend the 3rd international conference on Governance, Crime and Justice Statistics, magnificently organised by the Center for Excellence in Statistical Information on Government, Crime, Victimization and Justice with support from the United Nations Office on Drugs and Crime and the Mexican Institute of Statistics and Geography. The programme brought together researchers and experts from universities and government agencies to discuss current progress in collecting data on crime and justice, continued challenges and novel solutions (see http://www.gsj.inegi.org.mx/programme.html). During four days of wide-ranging conversations, the topics ranged from the difficulties of collecting standardised data on murder across different countries, through new procedures for sifting electronically among the exploding number of media outlets for reported cases of terrorism or arrest-related deaths, to the challenges of measuring corruption. Cumulatively, the sessions allowed an assessment of the state of the art and discussion about ways forward.
What also struck me about the meetings were two things which were NOT discussed. First, speaker after speaker bemoaned the patchy availability of data: governments and other organisations cannot be compelled to collect, organise or publish data. And a related problem was the lack of comparability between data sets: for example, one country includes traffic deaths in murder statistics while another does not. From the perspective of research, quantitative approaches demand standardised data for all cases, but the production of statistical information on crime and justice does not comply with that methodological diktat. So in a way, the topic left silent was not about statistics on governance but about the governance of statistics. Adherence to the standards of methodology might require an authoritarian (or at least centralised) model of data collection; yet currently, the production of statistical data seems to be a model of anarchy – with varying degrees of organisation. Which might be the best model for statistical data production: organised anarchy; a federal arrangement; or a highly centralised bureaucracy?
Second, many papers presented the findings from research projects using different kinds of quantitative data, and most concluded with a call for further research. If murder rates had been compared with state-level social indicators, now it would be important to compare them with municipal-level indicators. If fear of crime had been asked about in relation to the neighbourhood, now it needed to be asked in relation to the city centre. Calls such as these reflect, among other things, the inherent possibilities offered by quantitative research to develop multiple permutations of the measurement strategy by making just one change in any of the variables in the study. But how often does this call for additional research actually lead to new studies, particularly in light of the finite resources available for research (a common gripe); and would it be better to see this type of call as the ‘performance’ of a research project, which concludes by saying that the project really has not concluded (even though it effectively has)?
Of course, these two matters are linked. The multiple permutations available for data collection exercises exist alongside the social organisation of the data collection itself. Would a different model of organisation lead to a different style of research project ‘performance’?