🤖 AI Summary
Existing benchmarks for semantic similarity in computational literary studies lack fine-grained, domain-adapted evaluation protocols for long texts. Method: This paper introduces the first multidimensional semantic similarity dataset designed specifically for full-length novels. Built from recently written fiction to mitigate the data-contamination risks of public-domain literature, it defines 12 literature-specific similarity dimensions grounded in author-produced metadata and validated by digital humanities scholars. The paper further proposes the first fine-grained, multidimensional embedding evaluation framework tailored to long-form fiction, emphasizing author-guided annotation and scholarly validation of construct validity. Results: Empirical evaluation shows that mainstream embedding models are strongly biased toward surface-level linguistic features and fail to capture deeper literary semantic categories. The work addresses gaps in both data and methodology for fine-grained literary evaluation, establishing a rigorous, domain-grounded benchmark to guide future model development.
📝 Abstract
As language models become capable of processing increasingly long and complex texts, there has been growing interest in their application within computational literary studies. However, evaluating the usefulness of these models for such tasks remains challenging due to the cost of fine-grained annotation for long-form texts and the data contamination concerns inherent in using public-domain literature. Current embedding similarity datasets are unsuitable for evaluating literary-domain tasks because they focus on coarse-grained similarity and primarily on very short texts. We assemble and release FICSIM, a dataset of long-form, recently written fiction, including scores along 12 axes of similarity informed by author-produced metadata and validated by digital humanities scholars. We evaluate a suite of embedding models on this task, demonstrating a tendency across models to focus on surface-level features over semantic categories that would be useful for computational literary studies tasks. Throughout our data-collection process, we prioritize author agency and rely on continual, informed author consent.