🤖 AI Summary
How do pretrained language models like BERT disentangle narrative content from authorial style across layers?
Method: We construct a multi-style (e.g., fairy tale, science fiction) and multi-author narrative dataset using GPT-4, then apply PCA, MDS, and layer-wise activation visualization to analyze clustering behavior of intermediate representations across BERT layers.
Contribution/Results: We find that deeper BERT layers sharpen semantic clustering of narrative content, yielding compact, well-separated content clusters. Genre-level style (e.g., the same narrative rewritten as a fairy tale or science fiction) also clusters strongly and statistically significantly, but individual author identity shows negligible clustering and no stable, layer-specific representation. This provides the first empirical evidence that BERT follows a “semantics-first” representational bias: deeper layers predominantly encode narrative meaning without structurally isolating authorial style. Our findings offer novel insights into the representational inductive biases of pretrained language models.
📝 Abstract
This study investigates the internal mechanisms of BERT, a transformer-based language model, focusing on its ability to cluster narrative content and authorial style across its layers. Using a dataset of narratives generated with GPT-4, featuring diverse semantic content and stylistic variations, we analyze BERT's layer-wise activations to uncover patterns of localized neural processing. Through dimensionality reduction techniques such as Principal Component Analysis (PCA) and Multidimensional Scaling (MDS), we show that BERT exhibits strong clustering by narrative content in its later layers, with progressively more compact and distinct clusters. While strong stylistic clustering emerges when narratives are rephrased into different text types (e.g., fables, science fiction, children's stories), minimal clustering is observed for the style of individual authors. These findings highlight BERT's prioritization of semantic content over stylistic features, offering insights into its representational capabilities and processing hierarchy. This study contributes to understanding how transformer models like BERT encode linguistic information, paving the way for future interdisciplinary research in artificial intelligence and cognitive neuroscience.
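The core analysis step described above — collecting per-layer activations and projecting each layer's representations into a low-dimensional space to inspect clustering — can be sketched as follows. This is a minimal, self-contained illustration, not the paper's actual pipeline: random arrays stand in for BERT's 12 layers of [CLS] activations (in practice these would come from a BERT model run with hidden-state outputs enabled), a simulated mean shift stands in for two content categories, and PCA is implemented directly via SVD.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for BERT activations: 12 layers, 30 narratives, 768 dimensions.
# Two "content" groups are simulated by shifting the first 15 samples,
# mimicking the semantic separation the study reports in deeper layers.
layers = []
for _ in range(12):
    acts = rng.normal(size=(30, 768))
    acts[:15] += 3.0  # simulated content-cluster separation (assumption)
    layers.append(acts)

def pca_2d(x):
    """Project rows of x onto their top-2 principal components."""
    xc = x - x.mean(axis=0)          # center each feature
    # SVD of the centered data; rows of vt are principal directions
    _, _, vt = np.linalg.svd(xc, full_matrices=False)
    return xc @ vt[:2].T             # (n_samples, 2) coordinates

# One 2-D embedding per layer, ready for layer-wise cluster visualization
coords = [pca_2d(a) for a in layers]
```

In the study's setting, plotting `coords` layer by layer (and coloring points by content, genre, or author) is what reveals whether clusters tighten with depth; MDS on a pairwise-distance matrix would play an analogous role to the PCA step here.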