Adam's Law: Textual Frequency Law on Large Language Models

📅 2026-04-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study systematically investigates the impact of textual frequency on large language model performance, addressing a notable gap in existing research. The authors propose the Textual Frequency Law (TFL), which advocates prioritizing high-frequency text in both prompting and fine-tuning. To operationalize this principle, they introduce an optimization framework comprising sentence-level frequency estimation, input rewriting, Textual Frequency Distillation (TFD), and Curriculum Textual Frequency Training (CTFT). Experimental results show that this frequency-guided approach significantly improves model performance across diverse tasks, including mathematical reasoning, machine translation, commonsense reasoning, and agentic tool calling, supporting the effectiveness and generality of textual frequency as a guiding signal for language model development.
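
The frequency-guided pipeline can be pictured with a short sketch. The snippet below is a minimal illustration under stated assumptions, not the authors' implementation: unigram counts over a local reference corpus stand in for the paper's online frequency estimates, and `candidate_paraphrases` is a hypothetical stand-in for the output of the paper's input paraphraser.

```python
# Minimal sketch of sentence-level frequency estimation and input
# rewriting in the spirit of TFL. Unigram counts over a local reference
# corpus are used as a proxy for web-scale frequency; the real system
# estimates frequency from online resources.
import math
from collections import Counter


def build_unigram_counts(reference_corpus: list[str]) -> Counter:
    """Count token occurrences in a reference corpus (proxy for web-scale counts)."""
    counts: Counter = Counter()
    for sentence in reference_corpus:
        counts.update(sentence.lower().split())
    return counts


def sentence_frequency_score(sentence: str, counts: Counter) -> float:
    """Mean log token frequency with add-one smoothing; higher means more frequent text."""
    tokens = sentence.lower().split()
    if not tokens:
        return float("-inf")
    return sum(math.log(counts[t] + 1) for t in tokens) / len(tokens)


def rewrite_to_frequent(original: str, candidate_paraphrases: list[str], counts: Counter) -> str:
    """Return the candidate (or the original) with the highest estimated frequency."""
    return max([original, *candidate_paraphrases],
               key=lambda s: sentence_frequency_score(s, counts))
```

Under TFL, the highest-scoring rewrite is the one forwarded to the LLM for prompting or retained for fine-tuning.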
📝 Abstract
While textual frequency is well validated as relevant to human cognition in reading speed, its relationship to Large Language Models (LLMs) has seldom been studied. We propose a new research direction centered on textual data frequency, a topic that, to the best of our knowledge, remains understudied. Our framework is composed of three units. First, this paper proposes the Textual Frequency Law (TFL), which states that frequent textual data should be preferred for LLMs in both prompting and fine-tuning. Since the training data of many LLMs is not publicly released, we propose using online resources to estimate sentence-level frequency, and we then utilize an input paraphraser to rewrite the input into a more frequent textual expression. Next, we propose Textual Frequency Distillation (TFD), which queries LLMs to perform story completion that further extends the sentences in the datasets; the resulting corpora are used to adjust the initial frequency estimates. Finally, we propose Curriculum Textual Frequency Training (CTFT), which fine-tunes LLMs in increasing order of sentence-level frequency. Experiments are conducted on our curated Textual Frequency Paired Dataset (TFPD), covering math reasoning, machine translation, commonsense reasoning, and agentic tool calling. The results show the effectiveness of our framework.
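
As a reading aid for the CTFT step, here is a hedged sketch of a frequency-ordered curriculum. It assumes the proxy scorer from the previous snippet; `fine_tune_step` is a hypothetical placeholder for a single supervised update, since the paper's training setup is not detailed on this page.

```python
# Sketch of Curriculum Textual Frequency Training (CTFT): sort training
# examples by estimated sentence-level frequency and fine-tune in
# increasing order, as the abstract describes.
from typing import Callable


def ctft_order(examples: list[dict], score: Callable[[str], float]) -> list[dict]:
    """Order examples by estimated frequency of their input text, ascending."""
    return sorted(examples, key=lambda ex: score(ex["input"]))


def run_ctft(examples: list[dict],
             score: Callable[[str], float],
             fine_tune_step: Callable[[dict], None],
             epochs: int = 1) -> None:
    """Fine-tune over the frequency-ordered curriculum for a few passes."""
    curriculum = ctft_order(examples, score)
    for _ in range(epochs):
        for example in curriculum:
            fine_tune_step(example)  # one supervised update on (input, target)
```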
Problem

Research questions and friction points this paper is trying to address.

textual frequency
large language models
prompting
fine-tuning
sentence-level frequency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Textual Frequency Law
Textual Frequency Distillation
Curriculum Textual Frequency Training
sentence-level frequency
large language models
Hongyuan Adam Lu
FaceMind Corporation
Z. L.
FaceMind Corporation
Victor Wei
FaceMind Corporation
Zefan Zhang
FaceMind Corporation
Zhao Hong
FaceMind Corporation
Qiqi Xiang
FaceMind Corporation
Bowen Cao
The Chinese University of Hong Kong
Wai Lam
The Chinese University of Hong Kong
Text Mining and Machine Learning
Intelligent Information Retrieval