Extracting Social Connections from Finnish Karelian Refugee Interviews Using LLMs

📅 2025-02-19
🤖 AI Summary
This study addresses the challenge of quantifying social integration in historical, low-resource linguistic contexts. Using 89,339 post-WWII Finnish-language interview transcripts from Karelian refugee families, it employs zero-shot extraction, per family unit, to identify organizational participation and personal hobbies as proxy indicators of social integration. It presents the first systematic comparison of generative large language models (GPT-4, Llama-3-70B-Instruct) against supervised fine-tuning (FinBERT) on non-English historical text: GPT-4 achieves 88.8% F1 (near-human performance) and Llama-3-70B reaches 87.7%, both significantly outperforming baselines. A lightweight paradigm is also proposed: fine-tuning FinBERT on synthetic training data generated by GPT-4 reaches 84.1% F1 with only 6K annotated interviews and 86.3% with 30K. Contributions include: (1) empirical validation of open-weight LLMs' high efficacy on resource-scarce historical texts; (2) a reusable NER evaluation framework for Nordic languages; and (3) a novel methodological approach to quantifying social integration in digital humanities.
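The F1 figures above compare extracted mentions against a gold standard. A minimal sketch of how such a micro-averaged span-level score can be computed (a hypothetical helper, not the authors' evaluation code; the tuple format is an assumption):

```python
from collections import Counter

def extraction_f1(gold, predicted):
    """Micro-averaged precision/recall/F1 over extracted mentions.

    gold, predicted: lists of (person, category, mention) tuples;
    duplicates are counted via multiset intersection.
    """
    gold_counts = Counter(gold)
    pred_counts = Counter(predicted)
    # True positives: overlap of the two multisets.
    tp = sum((gold_counts & pred_counts).values())
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```

For example, predicting one of two gold mentions correctly yields precision 1.0, recall 0.5, and F1 of 2/3.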

📝 Abstract
We performed a zero-shot information extraction study on a historical collection of 89,339 brief Finnish-language interviews of refugee families relocated post-WWII from Finnish Eastern Karelia. Our research objective is two-fold. First, we aim to extract social organizations and hobbies from the free text of the interviews, separately for each family member. These can act as a proxy variable indicating the degree of social integration of refugees in their new environment. Second, we aim to evaluate several alternative ways to approach this task, comparing a number of generative models and a supervised learning approach, to gain a broader insight into the relative merits of these different approaches and their applicability in similar studies. We find that the best generative model (GPT-4) is roughly on par with human performance, at an F-score of 88.8%. Interestingly, the best open generative model (Llama-3-70B-Instruct) reaches almost the same performance, at 87.7% F-score, demonstrating that open models are becoming a viable alternative for some practical tasks even on non-English data. Additionally, we test a supervised learning alternative, where we fine-tune a Finnish BERT model (FinBERT) using GPT-4 generated training data. With this method, we achieve an F-score of 84.1% with as few as 6K interviews, rising to 86.3% with 30K interviews. Such an approach would be particularly appealing in cases where computational resources are limited, or there is a substantial mass of data to process.
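The zero-shot setup described in the abstract could be sketched as follows. The prompt wording, the JSON output schema, and the lenient parsing are my assumptions for illustration, not the paper's exact protocol:

```python
import json

def build_prompt(interview_text: str) -> str:
    """Assemble a zero-shot extraction prompt for one family interview.
    The instruction wording is illustrative, not the paper's prompt."""
    return (
        "From the Finnish interview below, list for each family member "
        "the social organizations they belong to and their hobbies. "
        "Answer as a JSON list of objects with keys "
        '"person", "organizations", "hobbies".\n\n'
        f"Interview:\n{interview_text}\n"
    )

def parse_response(raw: str) -> list[dict]:
    """Parse the model's JSON answer, tolerating surrounding prose."""
    start, end = raw.find("["), raw.rfind("]") + 1
    if start == -1 or end == 0:
        return []  # no JSON list found; treat as an empty extraction
    try:
        return json.loads(raw[start:end])
    except json.JSONDecodeError:
        return []  # malformed JSON; likewise treat as empty
```

The same prompt can then be sent per family unit to any chat-style model (GPT-4 or Llama-3-70B-Instruct in the study), with `parse_response` normalizing the free-form reply into structured records.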
Problem

Research questions and friction points this paper is trying to address.

Extract social organizations from refugee interviews
Evaluate generative models for information extraction
Compare supervised learning with generative approaches
Innovation

Methods, ideas, or system contributions that make the work stand out.

Utilized GPT-4 for social data extraction
Compared generative and supervised learning models
Fine-tuned FinBERT with GPT-4 data
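One plausible way the GPT-4 extractions could be turned into FinBERT training data is by projecting each extracted mention back onto the interview text as token-level BIO tags. This is a sketch under that assumption, with a whitespace tokenizer standing in for the real subword tokenizer:

```python
def project_mentions_to_bio(text: str, mentions: list[tuple[str, str]]) -> list[tuple[str, str]]:
    """Project (mention, label) pairs extracted by an LLM onto the
    source text as token-level BIO tags, suitable as silver-standard
    training data for a BERT-style tagger.
    Whitespace tokenization is a stand-in for a subword tokenizer."""
    tokens = text.split()
    tags = ["O"] * len(tokens)
    for mention, label in mentions:
        m_toks = mention.split()
        # Scan for the mention's token sequence in the text.
        for i in range(len(tokens) - len(m_toks) + 1):
            if tokens[i:i + len(m_toks)] == m_toks:
                tags[i] = f"B-{label}"
                for j in range(i + 1, i + len(m_toks)):
                    tags[j] = f"I-{label}"
                break  # tag only the first occurrence
    return list(zip(tokens, tags))
```

The resulting (token, tag) pairs have the shape expected by standard token-classification fine-tuning loops.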
Joonatan Laato
TurkuNLP, Department of Computing, University of Turku, Finland
Jenna Kanerva
Department of Computing, University of Turku
Natural Language Processing · Machine Learning
John Loehr
Lammi Biological Station, Faculty of Biological and Environmental Sciences, University of Helsinki, Finland
V. Lummaa
Department of Biology, University of Turku, Finland
Filip Ginter
University of Turku
language technology · natural language processing