Selected Working Papers

Large Language Model Hacking: Quantifying the Hidden Risks of Using LLMs for Text Annotation

Preprint available on arXiv.

With Joachim Baumann, Paul Röttger, Aleksandra Urman, Flor Miriam Plaza-del-Arco, Johannes B. Gruber, and Dirk Hovy

Large language models (LLMs) are rapidly transforming social science research by enabling the automation of labor-intensive tasks like data annotation and text analysis. However, LLM outputs vary significantly depending on the implementation choices made by researchers (e.g., model selection, prompting strategy, or temperature settings). Such variation can introduce systematic biases and random errors, which propagate to downstream analyses and cause Type I (false positive), Type II (false negative), Type S (wrong sign for a significant effect), or Type M (correct sign but exaggerated effect size) errors. We call this LLM hacking.

We quantify the risk of LLM hacking by replicating 37 data annotation tasks from 21 published social science research studies with 18 different models. Analyzing 13 million LLM labels, we test 2,361 realistic hypotheses to measure how plausible researcher choices affect statistical conclusions. We find incorrect conclusions based on LLM-annotated data in approximately one in three hypotheses for state-of-the-art (SOTA) models, and in half the hypotheses for small language models. While our findings show that higher task performance and better general model capabilities reduce LLM hacking risk, even highly accurate models do not completely eliminate it. The risk of LLM hacking decreases as effect sizes increase, indicating the need for more rigorous verification of findings near significance thresholds. Our extensive analysis of LLM hacking mitigation techniques emphasizes the importance of human annotations in reducing false positive findings and improving model selection. Surprisingly, common regression estimator correction techniques are largely ineffective in reducing LLM hacking risk, as they heavily trade off Type I vs. Type II errors.

Beyond accidental errors, we find that intentional LLM hacking is unacceptably simple. With just a few LLMs and a handful of prompt paraphrases, anything can be presented as statistically significant. Overall, our findings advocate for a fundamental shift in LLM-assisted research practices, from viewing LLMs as convenient black-box annotators to seeing them as complex instruments that require rigorous validation. Based on our findings, we publish a list of practical recommendations to limit accidental and deliberate LLM hacking for various common tasks.

The Geographical Dimension of Group-Based Appeals: Evidence from 50 Years of Swedish Parliamentary Speech

Revise and Resubmit, Political Geography

Which places do politicians appeal to, and how do they appeal to them? While previous research has emphasized how politicians appeal to different social groups, we know less about the geographical dimension of politicians' group-based appeals. In this paper I argue that politicians use place-based appeals to display local awareness of their constituency, and that rural parliamentarians are most responsive. To test this claim, I use a new approach for identifying geographical mentions in speeches that combines named entity recognition and geocoding (a minimal sketch of such a pipeline follows below). Analyzing fifty years of parliamentary speech in Sweden, I demonstrate that politicians are more likely to mention areas in their constituency, and that the topics of those speeches reflect local socioeconomic conditions. Moreover, I show that rural parliamentarians are more responsive to areas in their constituency than urban parliamentarians. My findings also show that most speeches that use geographical appeals simultaneously mention social groups. Further analysis shows that an urban-rural divide in social group appeals has grown more pronounced in recent years. Overall, this study improves our understanding of geographical representation and how local conditions structure social group appeals.
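For readers curious what a named-entity-recognition-plus-geocoding pipeline can look like in practice, here is a minimal sketch. It assumes spaCy's Swedish model (sv_core_news_sm) and geopy's Nominatim geocoder, and restricts matches to location entities with ", Sweden" appended to bias the geocoder; these are illustrative assumptions, and the paper's actual implementation and toolchain may differ.

```python
# Minimal sketch: extract place mentions with NER, then geocode them.
# Assumes spaCy's Swedish pipeline (sv_core_news_sm) and geopy's Nominatim
# geocoder; the paper's actual approach may differ.
import spacy
from geopy.geocoders import Nominatim

nlp = spacy.load("sv_core_news_sm")                    # Swedish NER model
geolocator = Nominatim(user_agent="speech-geocoder")   # hypothetical user agent

def geocode_speech(text):
    """Return (place name, latitude, longitude) tuples for places mentioned in a speech."""
    doc = nlp(text)
    results = []
    for ent in doc.ents:
        if ent.label_ == "LOC":                        # keep only location entities
            loc = geolocator.geocode(f"{ent.text}, Sweden")  # bias toward Swedish places
            if loc is not None:
                results.append((ent.text, loc.latitude, loc.longitude))
    return results

# Example: a short (invented) speech fragment mentioning two municipalities.
print(geocode_speech("Arbetslösheten i Kiruna och Malmö har ökat."))
```

The geocoded coordinates can then be matched against constituency boundaries to determine whether a mentioned place lies inside the speaker's own district, which is the quantity the analysis above relies on.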