Research

Ongoing Projects

Reproduction, Reproduction. Assessing reproducibility in observational social science research (ongoing project with Katrin Auspurg, Andreas Schneck, and Laura Schächtele).

Reproducibility relies on open data and code. Even when data and code are available, though, studies may fail to reproduce for various reasons (e.g., technical hurdles, unclear documentation). This project provides a large-scale assessment of the reproducibility of quantitative observational social science research, drawing on articles that use data from the European Social Survey (ESS). For each selected article, we first perform a standardized computational reproduction. Second, articles that were successfully reproduced undergo a revised reproduction. In this process, we document the prevalence of computational and statistical errors (e.g., in outlier correction or the recoding of missing values) and assess their impact on the robustness of findings. This allows us to a) draw conclusions about aggregate levels of reproducibility in observational social science research, b) pinpoint key obstacles to reproducibility, and c) provide empirically grounded guidelines on how to make research reproducible.
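
To illustrate the kind of recoding error the revised reproduction screens for, here is a minimal, purely hypothetical Stata sketch; the data file, variable names, and reserved codes 77/88/99 are assumptions loosely modeled on common ESS conventions, not material from the project:

    * Hypothetical example: reserved missing-value codes left in the data.
    use ess_extract.dta, clear

    * Naive analysis: codes such as 77/88/99 are treated as valid values,
    * which can distort the estimated association.
    regress trust lrscale agea

    * Reproduction check: recode the reserved codes to system missing first;
    * a diverging coefficient flags a recoding error in the original analysis.
    mvdecode trust lrscale, mv(77 88 99)
    regress trust lrscale agea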

Publications

Krähmer D., Schächtele L., Schneck A. (2023). Care to share? Experimental evidence on code sharing behavior in the social sciences. PLOS ONE 18(8): e0289380. DOI: 10.1371/journal.pone.0289380

Transparency and peer control are cornerstones of good scientific practice and entail the replication and reproduction of findings. The feasibility of replications, however, hinges on the premise that original researchers make their data and research code publicly available. To investigate which factors influence researchers’ code sharing behavior upon request, we emailed code requests to 1,206 authors who published research articles based on data from the European Social Survey between 2015 and 2020. In this preregistered field experiment, we randomly varied three aspects of our code request’s wording: the overall framing of our request, the appeal for why researchers should share their code, and the perceived effort associated with code sharing. Overall, 37.5% of successfully contacted authors supplied their analysis code. Of our experimental treatments, only framing affected researchers’ code sharing behavior, though in the opposite direction to what we expected: scientists who received the negative wording alluding to the replication crisis were more likely to share their research code. Taken together, our results highlight that small-scale individual interventions will not suffice to ensure the availability of research code.
Read Open Access →
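
As a rough illustration of the design only, a hedged Stata sketch of how a factorial randomization of the three wording aspects might be set up; treating each aspect as binary, the file and variable names, and the seed are assumptions, not the authors’ preregistered code:

    * Assumed 2x2x2 factorial assignment of wording aspects to contacted authors.
    set seed 12345
    use author_contacts.dta, clear

    gen byte framing = runiform() < 0.5   // 1 = negative framing (replication crisis)
    gen byte appeal  = runiform() < 0.5   // 1 = alternative appeal for sharing code
    gen byte effort  = runiform() < 0.5   // 1 = wording signaling low sharing effort

    * Check cell sizes across the eight resulting conditions.
    egen cell = group(framing appeal effort)
    tab cell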

Krähmer D. (2023). MFCURVE: Stata module for plotting results from multifactorial research designs. Statistical Software Components S459224. Boston College Department of Economics.

Multifactorial research designs (e.g., factorial survey experiments, conjoint analysis) allow researchers to study the joint impact of multiple factors on an outcome. They are widespread, versatile, and epistemologically useful but notoriously hard to visualize. Even a simple design of two factors with three levels each yields 3² = 9 unique treatment combinations. More elaborate setups quickly spawn spiraling numbers of distinct treatment combinations, rendering the visualization of results difficult. The Stata command mfcurve provides a remedy. Mimicking the appearance of a specification curve, mfcurve produces a two-part chart: the graph’s upper panel displays average effects for all distinct treatment combinations; its lower panel indicates which factor levels are present or absent in the respective treatment combination. This enables researchers to plot and inspect results from multifactorial designs much more comprehensively. Optional features include replacing point estimates with box plots and testing results for statistical significance.
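
A brief sketch of getting started: installation via SSC and the help lookup are standard Stata; the toy data set below only illustrates the combinatorics, its variable names are assumptions, and the actual mfcurve call should follow the syntax documented in help mfcurve.

    * Install the module and look up its documented syntax.
    ssc install mfcurve
    help mfcurve

    * Toy multifactorial design: two three-level factors already produce
    * 3^2 = 9 distinct treatment combinations to visualize.
    clear
    set obs 900
    gen byte factor1 = runiformint(1, 3)
    gen byte factor2 = runiformint(1, 3)
    gen double y = 0.3*factor1 - 0.2*factor2 + rnormal()
    tab factor1 factor2   // nine cells, one per treatment combination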