🤖 AI Summary
This study investigates whether linguistic features can reliably characterize individual cognitive styles—particularly dynamic preference patterns during decision-making. Method: We propose an experiment-driven language–cognition mapping framework that integrates multi-attribute choice behavioral experiments with natural-language decision descriptions, extracting linguistic features from decision narratives and predicting cognitive style categories via machine learning (AUC ≈ 0.8). Contribution/Results: This work pioneers the deep integration of controlled cognitive experimentation with computational language modeling, eliminating reliance on subjective annotations and establishing a verifiable, reproducible objective evaluation paradigm. Results demonstrate that individuals’ language use in describing decisions effectively quantifies latent cognitive dispositions, offering a novel methodological foundation for interdisciplinary research at the intersection of cognitive science and computational linguistics.
📝 Abstract
While NLP models often seek to capture cognitive states via language, the validity of the predicted states is typically assessed by comparing them to annotations created without access to the authors' actual cognitive states. In the behavioral sciences, cognitive states are instead measured via experiments. Here, we introduce an experiment-based framework for evaluating language-based cognitive style models against human behavior. We explore the phenomenon of decision making and its relationship to the linguistic style of an individual describing a recent decision they made. Participants then complete a classical decision-making experiment that captures their cognitive style, determined by how their preferences change during a decision exercise. We find that language features intended to capture cognitive style can predict participants' decision style with moderate-to-high discriminative power (AUC ≈ 0.8), demonstrating that cognitive style can be partly captured and revealed by discourse patterns.
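The pipeline the abstract describes, extracting linguistic features from decision narratives and scoring a classifier with AUC, can be sketched as follows. This is an illustrative toy, not the authors' actual feature set or model: the hedge/comparative word lists, the narratives, and the from-scratch logistic regression are all hypothetical stand-ins.

```python
import math

# Hypothetical linguistic feature lexicons (NOT the paper's actual features).
HEDGES = {"maybe", "perhaps", "might", "possibly"}
COMPARATIVES = {"better", "worse", "rather", "instead"}

def features(text):
    """Map a decision narrative to a small numeric feature vector."""
    toks = text.lower().split()
    n = max(len(toks), 1)
    return [sum(t in HEDGES for t in toks) / n,        # hedging rate
            sum(t in COMPARATIVES for t in toks) / n,  # comparative rate
            len(toks) / 50.0]                          # crude length feature

def train_logreg(X, y, lr=0.5, epochs=200):
    """Minimal logistic regression via stochastic gradient descent."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = 1 / (1 + math.exp(-(b + sum(wj * xj for wj, xj in zip(w, xi)))))
            g = p - yi  # gradient of log-loss w.r.t. the logit
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    return 1 / (1 + math.exp(-(b + sum(wj * xj for wj, xj in zip(w, x)))))

def auc(scores, labels):
    """AUC as the probability a positive outranks a negative (ties count 0.5)."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Invented toy narratives: label 1 = deliberative/hedging style, 0 = decisive.
texts = [
    "maybe the cheaper option might be better but perhaps not",
    "i might possibly choose the second one rather than the first",
    "perhaps it was better although maybe i should have waited",
    "i picked the first option because it was cheap",
    "the decision was easy and i chose quickly",
    "i knew what i wanted and took it right away",
]
labels = [1, 1, 1, 0, 0, 0]

X = [features(t) for t in texts]
w, b = train_logreg(X, labels)
scores = [predict(w, b, x) for x in X]
print(f"training AUC: {auc(scores, labels):.2f}")
```

In the actual study the AUC (≈ 0.8) is of course measured on held-out participants, not on the training data as in this toy.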