Layered Insights: Generalizable Analysis of Authorial Style by Leveraging All Transformer Layers

📅 2025-03-02
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the limited robustness of authorship attribution in cross-domain scenarios, this paper proposes a general style-modeling approach that fuses hidden states from all Transformer layers. The authors characterize a layer-wise specialization of Transformer representations for stylistic features: lower layers capture surface-level linguistic patterns, middle layers model syntactic and rhetorical preferences, and upper layers encode abstract stylistic tendencies. Guided by this insight, they design an inter-layer attention-weighted aggregation mechanism and a style-sensitive feature-disentanglement module to enable holistic, cross-layer style modeling. Evaluated on three cross-domain authorship attribution benchmarks, the method substantially improves out-of-domain generalization, setting new state-of-the-art accuracy and stability, and empirically validates the role of hierarchical style representation in cross-domain robustness.
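The inter-layer attention-weighted aggregation described above can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the function name, the learned style query vector, and the assumption of one pooled vector per layer are all stand-ins.

```python
import numpy as np

def layer_attention_pool(layer_vecs, query):
    """Attention-weighted aggregation across transformer layers.

    layer_vecs: (num_layers, dim) array, one pooled sentence vector per
    layer; query: (dim,) learned style query. Returns the softmax weights
    over layers and the weighted combination as the style representation.
    """
    d = layer_vecs.shape[-1]
    scores = layer_vecs @ query / np.sqrt(d)   # scaled dot-product scores
    scores -= scores.max()                     # numerical stability
    weights = np.exp(scores) / np.exp(scores).sum()
    return weights, weights @ layer_vecs

rng = np.random.default_rng(0)
layers = rng.normal(size=(13, 64))  # e.g. BERT-base: embeddings + 12 layers
query = rng.normal(size=64)
weights, style_vec = layer_attention_pool(layers, query)
```

In a trained model the query would be a learned parameter, so the network itself decides how much each layer's "stylistic level" contributes to the final representation.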

📝 Abstract
We propose a new approach for the authorship attribution task that leverages the various linguistic representations learned at different layers of pre-trained transformer-based models. We evaluate our approach on three datasets, comparing it to a state-of-the-art baseline in in-domain and out-of-domain scenarios. We found that utilizing various transformer layers improves the robustness of authorship attribution models when tested on out-of-domain data, resulting in new state-of-the-art results. Our analysis gives further insights into how our model's different layers get specialized in representing certain stylistic features that benefit the model when tested out of the domain.
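As a toy illustration of how features drawn from multiple layers could feed an attribution model, the sketch below concatenates per-layer vectors into a single document feature and classifies by nearest author centroid. The centroid classifier and the synthetic data are illustrative assumptions, not the paper's method.

```python
import numpy as np

def author_centroids(features, labels):
    """Mean feature vector per author label."""
    return {a: features[labels == a].mean(axis=0) for a in np.unique(labels)}

def attribute(x, centroids):
    """Nearest-centroid attribution by cosine similarity."""
    best, best_sim = None, -np.inf
    for author, c in centroids.items():
        sim = x @ c / (np.linalg.norm(x) * np.linalg.norm(c) + 1e-9)
        if sim > best_sim:
            best, best_sim = author, sim
    return best

# Toy data: each document's feature is the concatenation of per-layer vectors.
rng = np.random.default_rng(1)
num_layers, dim = 13, 8
docs = rng.normal(size=(6, num_layers * dim))
labels = np.array([0, 0, 0, 1, 1, 1])
docs[labels == 1] += 1.0            # give author 1 a consistent stylistic offset
cents = author_centroids(docs, labels)
pred = attribute(docs[4], cents)    # attribute one of author 1's documents
```

Concatenating all layers (rather than using only the last layer) is what lets surface-level, syntactic, and abstract stylistic signals all reach the classifier.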
Problem

Research questions and friction points this paper is trying to address.

How can the representations learned at different transformer layers improve the robustness of authorship attribution?
How well do authorship attribution models generalize from in-domain to out-of-domain data?
Which stylistic features do individual layers specialize in, and how does that specialization help out of domain?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverages representations from all transformer layers, not just the last, for authorship attribution
Improves robustness in out-of-domain evaluation, yielding new state-of-the-art results
Shows that different layers specialize in representing different stylistic features