One of the top journals in political science, the Journal of Politics, announced this week that all experimental research (laboratory, field, and survey experiments) must be preregistered. For those unfamiliar, a “pre-analysis plan” (PAP) is a document describing how a researcher will collect and analyze data, submitted to a public repository before a project begins (for a great how-to tweet thread, see here). By committing to an analysis plan in advance, researchers are prevented, among other things, from “cherry-picking”: presenting only the data, hypotheses, and model specifications that yielded significant results.
Research in historical political economy uses observational data; it is experimental only insofar as it relies on design-based inference or natural experiments. Interestingly, the new JOP rules apply only to experimental papers, even though the scope for p-hacking an observational study is arguably much greater. A brilliantly named study of publication bias in economics finds evidence to this effect when comparing RCTs to observational and other studies (see figure below). One might therefore expect PAPs to be MORE common in fields like historical political economy that rely on observational data, but that’s not the case.
I think PAPs, and the exercise of creating them, are useful (there are many great resources here, here, here, and here), and my goal here is NOT to resolve the debate over making them mandatory (see the lively discussion on Twitter and various blogs here, here, here).
Instead, I’d like to talk about how this affects research using historical data: some challenges with preregistering historical work, and why an interdisciplinary field like historical political economy may not always use PAPs. In particular, some features of historical work prevent it from exploiting the purported advantages of PAPs (TL;DR: I still suspect all HPE scholars should try them, because writing one is a useful exercise). There are also some great new resources for observational PAPs, including the PAP-Q, which I’ll discuss below.
The Challenges of PAPs for Historical Research
First, PAPs are more difficult to execute when there is incomplete information about the case, context, or data structures. PAPs are ideally registered prior to data collection, but typically in a context where the researcher is already familiar with the research setting. Historical cases are sometimes incomplete, and historical data can feature many holes: missing data; missing information about the case, culture, or time period in question; archival silences; and so on. While some of these gaps can be anticipated in a pre-analysis plan, the incomplete nature of historical cases hinders the researcher’s ability to formulate, and stick to, the specific hypotheses preregistration requires, and can make PAPs impractical (see my comments on exploratory research below).
Second, and ironically, historical data can raise the exact OPPOSITE concern: researchers have most likely seen some or all of the data before writing the preregistration plan (because, by definition, the events generating historical data happened in the past). This means there’s no way to guarantee that researchers haven’t “cheated” by writing a PAP to match their informal data analysis. As a result, historical PAPs might not be seen as credible, and therefore not useful. (A PAP filed before the release of previously unreleased archival records would bypass this concern.) Further, as Volha Charnysh points out, historical datasets are exceptionally time-intensive to create. This not only makes it difficult to pre-specify data collection that unfolds over years, but also means these massive datasets are often used for many different projects.
Third, PAPs are best suited to hypothesis testing, not exploratory research or theory generation; preregistration is thus more applicable to some types of studies than others. Historical research is often both deductive AND inductive. Researchers often learn brand-new facts about the case through data collection and archival records, which means the variables, theories, and hypotheses that would feature in a PAP might be constantly changing, not because of p-hacking but because of the contextual learning that comes with historical research. As a result, a good deal of work in historical political economy falls outside the scope of a typical PAP (and might not enjoy its advantages; this echoes claims like Gelman’s that pre-analysis plans could inhibit exploratory work).
That said, there’s also a case to be made that PAPs can credibly signal exploration. Put another way, the researcher would define the “scope of the exploration” ex ante, so future readers could be confident the work was, in fact, exploratory. And to be clear: once written, it’s always acceptable to deviate from a PAP as long as the researcher explains and justifies the deviation. For some historical projects, however, the prospect of having to justify every exploratory step might discourage researchers from attempting a PAP at all.
Finally, if preregistration is sometimes challenging for historical, observational research, there may be other, better solutions, including results-blind review (sometimes called “preacceptance”), particularly when combined with preregistration. Scholars are already arguing you can’t have one without the other (see here, or here), and historical papers can more easily meet the conditions for results-blind review. Preregistration could help bring a much-needed emphasis on theory back to empirical fields; results-blind acceptance would encourage testing interesting and important questions, as opposed to preregistration alone, which might skew toward consistent and/or significant results.
The Way Forward
All this being said, it’s entirely possible to preregister studies in historical political economy. Many studies focus on complete historical cases with clear hypotheses to test, and could more or less credibly preregister their analyses; for a great empirical overview of the conditions under which this is credible, see an article by Alan Jacobs.
There are also some great new advances in preregistration with observational and qualitative data; these articles provide frameworks that can be adapted to a wider range of research methods. For example, Piñero and Rosenblatt (2016) have proposed a PAP-Q, with a comprehensive set of instructions that mitigate some of the challenges I discussed earlier in the post: among other things, the protocol is established before the trip to the archives, and there’s a section to delineate the inductive/deductive relationship.
Scholars such as Haven and Grootel (2019) are also proposing tailored preregistration templates; Kern and Gleditsch (2017) likewise make the case for qualitative preregistration, and there’s even a template set up that researchers can use out of the gate. (Kudos to Jeff Jenkins for flagging these.)
Finally, this preregistration debate builds on prior discussions about transparency in qualitative research. On the political science side, I’d like to highlight some resources being developed to increase transparency in qualitative and mixed-methods research. The Qualitative Transparency Deliberations (QTD, see http://www.qualtd.net/), organized by the Qualitative and Multi-Method Research (QMMR) section of the American Political Science Association (APSA), led to a new set of publications as well as the report from the deliberations. Relatedly, Moravcsik (2019) presents a nice discussion of transparency in qualitative research.
I think it’s clear that HPE scholars should, when possible, try to prepare PAPs; preregistration is a useful tool (and thought experiment). One weakness of the tool is that there’s no uniform standard for how to create PAPs or how to use them to evaluate research across a wide variety of methods (or for judging how successful they are), but hopefully this will change. Preregistration also simply might not be practical or advantageous for historical data, and the contexts in which it is useful for historical research will depend on the particular study. But regardless of what research one does, there should be more transparency, and HPE scholars should definitely take note of this debate!