IQMR 2025 Modules

Below please find descriptions of the module sequences that will be offered at IQMR 2025. The module sequences are listed in alphabetical order (which does not match the order in which they will be offered).

Bayesian Inference for Qualitative Research

Tasha Fairfield

The way we intuitively approach qualitative research is similar to how we read detective novels. We consider different hypotheses to explain what happened—whether democratization in South Africa, or the death of Samuel Ratchett on the Orient Express—drawing on the literature we have read (e.g. theories of regime change, or other Agatha Christie mysteries) and any other salient knowledge we have. As we gather evidence and discover clues, we update our views about which hypothesis provides the best explanation—or we may introduce a new alternative that we think up along the way. Bayesianism provides a logically rigorous and intuitive framework that governs how we should revise our views about which hypothesis is more plausible, given our relevant prior knowledge and the evidence that we find during our investigation. Bayesianism is enjoying a revival across many fields, and it offers a powerful tool for improving inference and analytic transparency in qualitative research. The principles we will cover in this module can be applied to single case studies (within-case analysis), comparative case studies (cross-case analysis), and multi-method research that draws on both qualitative evidence and quantitative data. Throughout, we will work with examples and exercises drawn from published social science research.
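The updating logic described here can be made concrete with a small numerical sketch. The two rival hypotheses and all probabilities below are hypothetical, chosen only to illustrate Bayes' rule rather than drawn from the module materials:

```python
# Minimal sketch of Bayesian updating over two rival hypotheses.
# All numbers are hypothetical, for illustration only.
priors = {"H1": 0.5, "H2": 0.5}        # prior plausibility of each hypothesis
likelihoods = {"H1": 0.8, "H2": 0.2}   # P(evidence | hypothesis)

# Bayes' rule: posterior is prior times likelihood, normalized.
marginal = sum(priors[h] * likelihoods[h] for h in priors)
posteriors = {h: priors[h] * likelihoods[h] / marginal for h in priors}
# H1's posterior rises to 0.8 because H1 makes the evidence four times likelier.
```

The same arithmetic iterates as new clues arrive: today's posterior becomes tomorrow's prior.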

Causal Inference from Causal Models

Alan M. Jacobs

This module explores the use of causal models to design and implement qualitative and mixed-method empirical strategies of causal inference. A great deal of recent methodological progress in the social sciences has focused on how features of a research design – such as randomization by the researcher or by nature – can allow for causal identification with minimal assumptions. Yet, for many of the questions of greatest interest to social scientists and policymakers, randomization or its close equivalents are unavailable. We are, in short, often forced to rely on beliefs about how the world works – that is, on models. Based on a book by Macartan Humphreys and Alan Jacobs, this module will examine how we can engage in systematic model-based causal inference. Specifically, we will explore how researchers can encode their prior knowledge in a probabilistic causal model (or Bayesian network) and an associated directed acyclic graph (DAG), use the model to make research design choices (including selecting cases and choosing observations), and draw inferences about causation at the level of both individual cases and populations, using both qualitative and quantitative data. While this module will focus on the methodology at a theoretical and intuitive level, students will then have the opportunity to take a series of online IQMR workshops later in the summer that teach the R package CausalQueries, which implements the approach.
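The core idea of encoding priors and updating them can be illustrated outside R. The following toy Python sketch (not the CausalQueries implementation) places a flat prior over the four possible causal "types" of a single binary-cause, binary-outcome case and updates it after one observation; the type labels, flat prior, and data are illustrative assumptions:

```python
# Toy model-based inference for one case with binary X and Y.
# Each "type" fixes how Y responds to X; labels and priors are illustrative.
types = {
    "adverse":    lambda x: 1 - x,  # X prevents Y
    "beneficial": lambda x: x,      # X causes Y
    "never":      lambda x: 0,      # Y = 0 regardless of X
    "always":     lambda x: 1,      # Y = 1 regardless of X
}
prior = {t: 0.25 for t in types}    # flat prior over the four types

def posterior(x_obs, y_obs):
    """Update beliefs about the case's causal type after observing (X, Y)."""
    like = {t: 1.0 if f(x_obs) == y_obs else 0.0 for t, f in types.items()}
    z = sum(prior[t] * like[t] for t in types)
    return {t: prior[t] * like[t] / z for t in types}

post = posterior(1, 1)
# Seeing X=1, Y=1 rules out "adverse" and "never"; under the flat prior,
# the probability that X actually caused Y in this case is then 0.5.
```

Within-case evidence (the qualitative data the module discusses) enters the same machinery as further likelihood terms that discriminate among the surviving types.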

Choosing Spatial Units of Analysis

Hillel David Soifer

Recent decades have seen major advances in the methodology of both qualitative and quantitative research, as scholarship has become much more purposive and precise both about the selection of cases and about the analysis of the data generated in the investigation of those cases. Yet the selection of cases and analysis of data are predicated on a prior, less studied, research design choice: the identification of the spatial unit of analysis. This issue, moreover, is no small detail: as geographers have shown, phenomena vary according to how spatial units are defined – a pattern known as the modifiable areal unit problem. As the size, shape, and location of the borders of a spatial unit change, the associations we find among the social and political phenomena we seek to study will change in fundamentally unpredictable ways. Indeed, even studies of individual-level attributes and behavior that seek to control for characteristics of context will be affected by how the researcher chooses to draw the boundaries within which context is measured. In short, the spatial units we choose affect the answers we get, and even the questions we ask. Yet even as the discipline of geography has been roiled by this issue for several decades, cognate social science disciplines have not grappled with it. The result is that we have little guidance for choosing an appropriate unit of analysis, and little sense of how choices about the units of analysis in existing studies might shape findings we take as robust instantiations of conventional wisdom. This module will explore these issues. We will first outline the depth of the problem in terms of the threats it poses to inference, and the range of scholarship to which it potentially applies. We will then turn to the role of theory in justifying the validity of the spatial units that we choose, and to ways to demonstrate the reliability of our findings through empirical analysis.
Participants will have the opportunity to work through issues in their own research designs in addition to exploring the implications of this problem for their confidence in the findings of existing research in their areas of interest.

Comparative Historical Analysis

Marcus Kreuzer

We live in a constantly emerging world in which studying changes across time is just as crucial as analyzing differences across cases for understanding contemporary politics. Comparative historical analysis (CHA) has long studied such historical changes and made important contributions to our understanding of how to use time to study the past. Its roots go back to the nineteenth-century classics, and it more recently shares its ambitions with American Political Development, historical institutionalism, and a long historical tradition in international relations. These approaches all point out that time is to the past what grammar is to language and maps are to space: an essential tool of analysis.

This module explores three distinct contributions that CHA makes to our understanding of time.

Overall, the module encourages students to spot elements of time that are hidden in their fields of research and explore how CHA can help them think about such elements more systematically, and thus enrich their analysis.

Designing and Conducting Fieldwork

Diana Kapiszewski, Lauren MacLean, Robert Mickey, and Jessie Trudeau

This module sequence discusses strategies for designing, planning, and conducting fieldwork in the social sciences. We begin by considering the multiple aspects of preparing for field research, and then discuss some practical elements – with intellectual implications – of operating in the field. On the second day we talk through key questions relating to research ethics – the importance of which is discussed throughout the module sequence – and consider two “more-interactive” forms of data collection, surveys and interviews. The third module continues the discussion of interview techniques, and also covers focus groups, as well as the types of observation in which all social scientists who conduct field research engage. Finally, the fourth module considers the conduct of archival research, and the various ways in which scholars iterate on their research design and field research design as they conduct fieldwork. Each session of each module is conducted with the understanding that participants have carefully read the assigned materials. The instructors present key points drawing on the assigned readings, other published work on field research, and the experiences they and others have had with managing fieldwork’s diverse challenges. Interaction and discussion in small and large groups are encouraged.

Ethnographic Methods

Sarah E. Parkinson & Kanisha Bond

How can ethnographic methodologies and methods inform the study of politics? The immersive, meaning-centric, everyday-life focus that ethnographic methodologies offer affords social scientists insight into processes, practices, and understandings of political worlds that might otherwise remain hidden or obscure. Though ethnography is sometimes portrayed as “a craft” rather than “a method,” or caricatured as “just hanging out,” the research approaches we examine in the Ethnographic Methods sequence demonstrate its immense potential in the study of power and politics. The sequence addresses questions such as: What commitments does an ethnographic researcher make to herself, her interlocutors, the communities she studies, and to the discipline? What are the various ways in which a researcher can situate herself in a research practice where she is fundamentally the instrument? How does a researcher develop a robust ethnographic approach to a project, whether as a guiding methodology for an entire book or as part of a multi-method endeavor? What types of data or evidence do those leveraging ethnographic methods generate, through which methods, and how do they analyse them? How have ethnographers challenged, expanded, and innovated upon the presumed fundamentals of ethnography?

Geographic Information Systems

Jonnell Robinson

The module sequence introduces participants to spatial data visualization and analysis using Geographic Information Systems (GIS). Six sessions provide participants with hands-on experience using ESRI’s ArcGIS software suite and a variety of open-source mapping programs, including QGIS, OpenStreetMap, and Google My Maps. Participants will learn to locate and generate high-quality spatial data, display mapped data using professional cartographic principles, perform basic spatial data analysis, and further hone their GIS skills. The modules also introduce critical GIS and review important ethical concerns when mapping socially constructed data. Participants are welcome to work with their own data during the mapping exercises. Participants will leave the module with the skills and confidence to create simple yet powerful maps.

Integrating Qualitative and Experimental Methods

Chris Carter, Charles Crabtree, Tesalia Rizzo Reyes, Guadalupe Tuñón

In this module sequence, we introduce natural and randomized experiments and discuss their strengths and limitations through a survey of recent examples from political science and economics. We introduce a common framework for understanding and assessing natural and randomized experiments based on the credibility of causal and statistical assumptions. We discuss tools for developing and assessing experimental designs, such as instrumental variable analysis, sampling principles, power analysis, data collection do’s and don’ts, as well as a variety of robustness tests. We then discuss how to bolster the credibility of natural and randomized experiments in the design stage. We will focus on the role of “ex-ante” approaches to improve the quality and transparency of research designs, such as the use of pre-analysis plans. The module incorporates applied research and practical advice, especially on how to conduct fieldwork, collect data, and analyze the logistics and ethics surrounding experiments. We end the module by evaluating the promise of and obstacles to multi-method research in the analysis of natural and randomized experiments. We discuss how qualitative methods can help address some of the criticisms of experiments, as well as how experiments can bolster the inferences drawn from qualitative evidence.

Interpretation and History

Amel Ahmed

What is historical interpretation? In one sense interpretation is a part of all historical analysis. Typically we cannot observe history directly; we learn of it only through documents and artifacts that we have to make sense of. Historical interpretation is not separate from other modes of historical analysis but lies on a continuum. Emphasizing the interpretive aspects of historical analysis means that we do not take at face value the documentary evidence of history we encounter. We question the text as well as its source, we compare narratives, placing them in their historical context, we look for silences and gaps in evidence, as well as voices that may not be heard as easily. Importantly, we also interrogate our own objectives in questioning history and examine the ways in which they may shape our own narratives. Historical interpretation shares with other interpretive methods the search for meaning in subjects’ actions and utterances. But with historical interpretation, the distance of the researcher from the subject matter produces distinctive epistemological challenges and requires a methodological orientation aimed at achieving understanding without the possibility of direct engagement or immersion. In this module we will grapple with some of the dilemmas of historical interpretation including reading history, questioning history, analyzing history, and writing history. We will also engage with enduring epistemological debates about the nature of historical inquiry as well as the challenges of discerning historical lessons.

Interpretive Methods

Lisa Wedeen and William Mazzarella

This two-module sequence provides students with an introduction to various modes of discourse analysis and ideology critique. Students will learn to “read” texts while becoming familiar with contemporary thinking about interpretation, narrative, genre, and criticism. In the first four sessions we shall explore the following methods: Wittgenstein’s understanding of language as activity and its practical relevance to ordinary language-use analysis; Foucault’s “interpretive analytics” with hands-on exercises applying his genealogical method; and various versions (two sessions) of cultural Marxism, with specific attention to “ideology critique.” The last two classes will consider how anthropological discussions of participant observation can unsettle current versions of fieldwork in political science, including the limits of standardized codes of ethics for ethnographic research.

Logic of Qualitative Methods

James Mahoney, Gary Goertz, and Laura Garcia Montoya

These modules cover many classic and standard topics of qualitative methodology, with a special focus on how to write a qualitative dissertation or manuscript for publication as a book at an excellent university press. We survey the key research design, case selection, and theoretical issues that arise with such a project. The sessions use logic and set theory as a foundation for discussing and elucidating qualitative methods. The individual topics for the first module include a regularity theory of causality, a session on concepts, a session on multi-method research designs including Large-N qualitative analysis (LNQA), and case study research. The second module focuses on process tracing. After an introduction to process tracing, the module zooms in on two key topics: causal mechanisms and counterfactual analysis. The module concludes by demonstrating how to move from theoretical frameworks to practical applications, using real research examples that integrate process tracing and critical event analysis.

Multi-Method Research

Jaye Seawright

This module sequence looks at how to productively combine qualitative and quantitative methods. The first module focuses on multi-method designs that involve regression and related statistical techniques as the quantitative component, with various roles considered for qualitative components. We will discuss research designs for testing assumptions connected with measurement, confounding, and the existence of a hypothesized causal path. We will also analyze proposed rules for case selection, asking how cases should best be selected from a larger population. The second module considers multi-method designs involving other combinations of methods. We will discuss ways multi-method research can work in the context of random (or as-if random) assignment, exploring how to design case studies in conjunction with experimental or natural-experimental research. We will also ask what tools from statistics and machine learning can add to causal inferences based on process tracing. We will also discuss mixed-method designs aimed at concept formation and measurement.

Process Tracing and Typological Theories

Andrew Bennett

This module begins with the philosophy of science of causal mechanisms and the basics of case study research design. These lay the foundations for exploring the inferential logic of process tracing, a key form of within-case analysis. We will use Bayesian probability as the underlying logic of process tracing, which entails assessing which hypothesis or theory provides the best explanation for the evidence at hand. We will cover practical advice for conducting process-tracing research as well as best practices for applying Bayesian reasoning in case study analysis. Finally, the module introduces typological theorizing as a way to address interaction effects and an aid in selecting cases for process tracing, and we will discuss examples of typological theories proposed in students’ own work as well as in published research.

Qualitative Causal Inference: From Fundamentals to Process Tracing Applications

Ezequiel Gonzalez-Ocantos and David Waldner

This module introduces students to Qualitative Causal Inference (QCI), a hybrid model of inference that has important implications for how we think about and implement Process Tracing. In Part I, David Waldner outlines the fundamentals of QCI, contrasting it to two alternative models of inference: design-based causal inference, associated with the ‘credibility revolution’ in quantitative social science, and the detective model of causal inference championed by some Process Tracing methodologists. In these sessions students will become familiar with the distinct understanding of ‘causal mechanisms’ that underpins QCI. They will also learn to specify causal graphs to achieve unit-level causal inference, with attention to the mitigation of three potential sources of bias: (1) spuriousness and endogeneity via the construction of a ‘front-door path’ that indicates causal continuity between X and Y; (2) confounding variable bias by checking for the presence of a ‘back-door path’ that produces a non-causal association between X and Y; and (3) measurement error by checking for the presence of a ‘side-door path’ that complicates taking the pre-treatment value of the outcome variable as a proxy measure of the counterfactual (and inherently unobservable) value of the outcome under control. In Part II, Ezequiel Gonzalez-Ocantos examines QCI’s implications for Process Tracing. The sessions (a) offer practical modelling guidelines, with a focus on the construction of ‘front-door paths;’ (b) discuss how causal graphs orient and discipline data collection; (c) present techniques for dealing with missing data, a key challenge to meet the ‘front-door’ criterion; and (d) propose mitigation strategies for bias introduced by the ‘back and side-door paths.’
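The path vocabulary used above can be made concrete with a toy graph. This plain-Python sketch uses a hypothetical structure, with M as a mediator and U as a confounder, that is not drawn from the module readings:

```python
# Toy causal graph: X -> M -> Y is a front-door (causal) path;
# X <- U -> Y is a back-door (confounding) path. The structure is hypothetical.
edges = [("X", "M"), ("M", "Y"), ("U", "X"), ("U", "Y")]

def simple_paths(links, start, end, path=None):
    """Enumerate simple paths from start to end over the given links."""
    path = (path or []) + [start]
    if start == end:
        return [path]
    found = []
    for a, b in links:
        if a == start and b not in path:
            found += simple_paths(links, b, end, path)
    return found

# Front-door paths follow edge directions; back-door paths may also traverse
# edges "backwards" (e.g. stepping from X up to its common cause U).
undirected = edges + [(b, a) for a, b in edges]
front_door = simple_paths(edges, "X", "Y")
back_door = [p for p in simple_paths(undirected, "X", "Y") if p not in front_door]
```

Here the front-door path is X-M-Y and the back-door path is X-U-Y; evidence of continuity along the former supports causal inference, while the latter must be blocked or shown absent.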

Re-thinking Small-N Comparisons

Nicholas Rush Smith

Why do we compare? Typically, in political science research, causal inference is taken as the primary goal. Similarly, research that is generalizable to as many cases as possible tends to be valued more than research which can explain only a few. This unit will push past these assumptions in three ways. First, it will provide logics for generalization not rooted in ideas of statistical generalizability or mechanical reproduction. Second, it will expand the goals of comparison from causal inference to alternative practices like creative redescription or conceptual development. Third, we will explore how we can leverage strategies for rethinking comparison to address the practical challenges and unexpected discoveries that often upend pre-established research designs. When a “crisis of research design” strikes, how can researchers cope with partially implemented data collection plans to still generate meaningful theoretical and empirical insights? How can scholars salvage their research designs while maintaining methodological rigor? Finally, we will critique short research designs that will be provided in advance. Among other questions, we will ask ourselves: What kinds of claims can the author make with this research design and why? What are the limits on the kinds of claims they can make? How convincing is this research design? If you were on the selection committee of a funding agency, how would you rate this research design?

Text as Data

Fiona Shen-Bayh

What does it mean to transform texts into data? How do computers read and analyze qualitative information quantitatively? Do computational analyses of texts map onto qualitative understandings of human language? This unit explores these questions and more by introducing students to the “text as data” pipeline, beginning with the curation of digital texts and concluding with the measurement of political concepts in lexical terms. Our first lab-based session will examine what it means to transform a collection of documents into machine-readable texts, after which we will cover step-by-step how to build and clean a digital corpus in Python. The next three lab-based sessions will examine a variety of quantitative approaches to analyzing a digital corpus, including counting, vectorizing, and embedding techniques. Our final session will conclude with a group discussion of text-based measurement strategies wherein we critically question whether such methods can produce reliable and valid measures of the concepts political scientists care about.
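The curation-and-counting stage of such a pipeline can be sketched in miniature. The two documents and the cleaning rule below are hypothetical; real corpora are read from files and cleaned with much richer rules:

```python
# Hypothetical mini-corpus illustrating the clean-then-count stage of the
# text-as-data pipeline.
import re
from collections import Counter

corpus = {
    "doc1": "The court ruled AGAINST the decree.",
    "doc2": "The decree was upheld by the court!",
}

def tokenize(text):
    """Lowercase and keep alphabetic tokens only (a common first cleaning pass)."""
    return re.findall(r"[a-z]+", text.lower())

# Document-term counts: the raw material for later vectorizing and
# embedding steps.
dtm = {doc: Counter(tokenize(text)) for doc, text in corpus.items()}
```

Even this tiny example raises the unit's measurement question: the counts treat "ruled against" and "upheld" as unrelated tokens, so whether such representations capture the concept of judicial deference is exactly what must be validated.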

Unified Sessions

James Mahoney, Jaye Seawright, Lisa Wedeen

These “Unified Sessions,” which all IQMR participants attend, welcome them to the Institute and introduce some of its intellectual foci. Three faculty who have taught at IQMR since its inception will consider the epistemological diversity that underpins qualitative and multi-method research, survey some of the core methods used by scholars in the QMMR community, and explore ways that qualitative and quantitative methods can be combined. The sessions serve as general introductions to several of the module sequences that participants can take at IQMR.