🤖 AI Summary
This study investigates whether large language models (LLMs), after alignment to a specific online community, generalize behavioral patterns that reflect that community's cognitive stance, rather than merely reproducing superficial correlations from training data. Method: We propose the "cognitive stance transfer" evaluation framework, integrating targeted factual knowledge deletion, multi-dimensional behavioral probing, and comparative analysis against community-specific corpora, tested empirically on Russian–Ukrainian military discourse and U.S. partisan Twitter data. Contribution/Results: Even after systematic removal of key factual knowledge, aligned models consistently reproduce community-specific uncertainty-handling strategies, including attribution biases, affective modulation, and argumentative structures, demonstrating that alignment encodes deep, persistent behavioral priors. This work provides the first empirical evidence that community alignment induces transferable, entrenched cognitive stances in LLMs, establishing a novel paradigm for studying alignment risks and socio-cognitive modeling.
📄 Abstract
When large language models (LLMs) are aligned to a specific online community, do they exhibit generalizable behavioral patterns that mirror the community's attitudes and responses to new uncertainty, or are they simply recalling patterns from training data? We introduce a framework to test epistemic stance transfer: targeted deletion of event knowledge, validated with multiple probes, followed by evaluation of whether models still reproduce the community's organic response patterns under ignorance. Using Russian–Ukrainian military discourse and U.S. partisan Twitter data, we find that even after aggressive fact removal, aligned LLMs maintain stable, community-specific behavioral patterns for handling uncertainty. These results provide evidence that alignment encodes structured, generalizable behaviors beyond surface mimicry. Our framework offers a systematic way to detect behavioral biases that persist under ignorance, advancing efforts toward safer and more transparent LLM deployments.
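To make the two-stage evaluation loop concrete, the sketch below shows one way the pieces could fit together: first verify that the targeted fact deletion actually held, then score the unlearned model's responses against the community corpus. All names here (`query_model`, the bag-of-words stance proxy, the 5% leakage threshold) are hypothetical illustrations, not the paper's implementation; the actual framework uses multi-dimensional behavioral probing rather than simple word-frequency overlap.

```python
# Minimal sketch of the stance-transfer evaluation loop, under assumed
# interfaces. query_model is any callable mapping a prompt string to a
# model response string; none of these names come from the paper.
from collections import Counter
from math import sqrt

def knowledge_probe(query_model, facts):
    """Fraction of deleted facts the model can still state.

    `facts` is a list of (question, expected_fact) pairs; after successful
    unlearning this leakage rate should be near zero.
    """
    hits = sum(fact.lower() in query_model(q).lower() for q, fact in facts)
    return hits / len(facts)

def stance_similarity(responses, community_texts):
    """Cosine similarity of word-frequency profiles, a crude stance proxy."""
    a = Counter(w for r in responses for w in r.lower().split())
    b = Counter(w for t in community_texts for w in t.lower().split())
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def evaluate_stance_transfer(query_model, facts, uncertainty_prompts,
                             community_texts, max_leakage=0.05):
    # Stage 1: confirm the targeted deletion held, so any remaining
    # community-like behavior cannot be explained by residual fact recall.
    if knowledge_probe(query_model, facts) > max_leakage:
        raise ValueError("fact deletion failed; stance result would be confounded")
    # Stage 2: elicit responses to novel uncertainty and compare them
    # against the community's organic corpus.
    responses = [query_model(p) for p in uncertainty_prompts]
    return stance_similarity(responses, community_texts)
```

In this framing, a high similarity score after stage 1 passes is the signal of interest: the model reproduces the community's uncertainty-handling style even when it demonstrably lacks the underlying facts.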