Graph-Based Spectral Decomposition for Parameter Coordination in Language Model Fine-Tuning

📅 2025-04-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing fine-tuning methods for large language models (LLMs) lack structural awareness during parameter updates, leading to suboptimal adaptation and instability. Method: the paper proposes the first graph-spectrum-theoretic framework for parameter co-optimization. It models the model's weights as a weighted graph and applies Laplacian spectral decomposition to obtain frequency-domain representations of the parameter space; it then introduces a joint loss with a spectral regularization term and a structure-aware gradient filtering mechanism to coordinate parameter updates in the spectral domain. The approach adds no trainable parameters and is natively compatible with parameter-efficient fine-tuning. Contribution/Results: extensive experiments show that the method consistently outperforms state-of-the-art baselines on multi-task fine-tuning, few-shot generalization, and convergence stability, and that it suppresses parameter perturbation while preserving pre-trained performance, improving both fine-tuning quality and robustness.
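The summary describes building a weighted graph over the model's parameters and taking the Laplacian spectral decomposition to get a frequency-domain view of the parameter space. A minimal numpy sketch of one plausible construction follows; the Gaussian-kernel adjacency, the grouping of parameters into row vectors, and the function name are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def laplacian_spectrum(weights, sigma=1.0):
    """Build a weighted similarity graph over parameter groups and return
    the eigendecomposition of its graph Laplacian L = D - A.

    weights : (n, d) array; each row is one parameter group (e.g. a layer's
              flattened weights, possibly projected down to d dimensions).
    sigma   : Gaussian kernel bandwidth (illustrative choice).
    Returns eigenvalues in ascending order and the matching eigenvectors,
    i.e. the graph-frequency basis of the parameter space.
    """
    # Pairwise squared Euclidean distances between parameter groups
    sq = np.sum(weights ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * weights @ weights.T
    # Gaussian-kernel adjacency; no self-loops
    A = np.exp(-d2 / (2.0 * sigma ** 2))
    np.fill_diagonal(A, 0.0)
    D = np.diag(A.sum(axis=1))
    L = D - A                          # unnormalised graph Laplacian
    evals, evecs = np.linalg.eigh(L)   # L is symmetric PSD
    return evals, evecs
```

Small eigenvalues correspond to smooth ("low-frequency") variation across strongly connected parameter groups; large eigenvalues capture disagreement between neighbours, which is the structure the regularizer and filter below act on.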

📝 Abstract
This paper proposes a parameter collaborative optimization algorithm for large language models, enhanced with graph spectral analysis, aiming to improve both fine-tuning efficiency and structural awareness during training. The parameters of a pre-trained language model are treated as nodes of a weighted graph, and Laplacian spectral decomposition is applied to enable frequency-domain modeling and structural representation of the parameter space. On this basis, a joint loss function combines the task loss with a spectral regularization term to facilitate collaborative updates among parameters. In addition, a spectral filtering mechanism introduced during the optimization phase adjusts gradients in a structure-aware manner, improving training stability and convergence behavior. The method is evaluated on multiple tasks, including traditional fine-tuning comparisons, few-shot generalization tests, and convergence speed analysis, and outperforms the baselines in all settings. The experimental results confirm that the spectral collaborative optimization framework reduces parameter perturbations and improves fine-tuning quality while preserving overall model performance. The work advances parameter-efficient training methodologies for large-scale models, underscores the value of structural signal processing in deep learning optimization, and offers a robust, generalizable framework for enhancing language model adaptability and performance.
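The abstract's joint objective combines the task loss with a spectral regularization term defined on the parameter graph. One natural candidate, sketched below, is the Dirichlet energy of the parameter update with respect to the graph Laplacian; the exact form of the paper's regularizer is not given here, so this function, its name, and the weighting `lam` are assumptions for illustration.

```python
import numpy as np

def spectral_joint_loss(task_loss, delta, L, lam=0.1):
    """Hypothetical joint objective: task loss plus a spectral smoothness
    penalty on the parameter update `delta` over the parameter graph.

    task_loss : scalar task loss (e.g. cross-entropy on the current batch).
    delta     : (n,) update aggregated per parameter group (graph node).
    L         : (n, n) graph Laplacian of the parameter graph.
    lam       : regularization weight (illustrative default).

    The Dirichlet energy delta^T L delta is zero for updates that are
    constant across connected parameter groups and grows when neighbouring
    groups move inconsistently, i.e. when the update has high
    graph-frequency content.
    """
    reg = float(delta @ L @ delta)   # Dirichlet energy on the graph
    return task_loss + lam * reg
```

Because Laplacian rows sum to zero, a perfectly coordinated (constant) update incurs no penalty, so the term only discourages structurally inconsistent updates rather than shrinking all updates toward zero.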
Problem

Research questions and friction points this paper is trying to address.

Improving fine-tuning efficiency and structural awareness in language models
Enabling frequency-domain modeling of parameters via graph spectral decomposition
Enhancing training stability and convergence with spectral regularization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Graph spectral analysis enhances parameter coordination
Laplacian spectral decomposition enables frequency-domain modeling
Spectral regularization and filtering improve training stability
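The innovation list mentions a structure-aware gradient filtering mechanism operating in the spectral domain. A minimal sketch of what such a filter could look like is given below: the gradient is transformed into the Laplacian eigenbasis (a graph Fourier transform), high graph-frequency components are zeroed by an idealised low-pass mask, and the result is transformed back. The hard cutoff and the function name are assumptions; the paper's actual filter design may be softer or learned.

```python
import numpy as np

def spectral_filter_gradient(grad, evals, evecs, cutoff):
    """Attenuate high graph-frequency components of a gradient.

    grad   : (n,) gradient aggregated per parameter group (graph node).
    evals  : (n,) Laplacian eigenvalues in ascending order.
    evecs  : (n, n) matching eigenvectors, one per column.
    cutoff : eigenvalue threshold; components with larger eigenvalues
             (higher graph frequency) are removed.
    """
    coeffs = evecs.T @ grad                      # graph Fourier transform
    mask = (evals <= cutoff).astype(grad.dtype)  # ideal low-pass filter
    return evecs @ (coeffs * mask)               # inverse transform
```

Filtering out high-frequency components keeps the parts of the gradient on which neighbouring parameter groups agree, which is one plausible reading of how such a mechanism would damp parameter perturbation and stabilise convergence.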
Hanlu Zhang
Stevens Institute of Technology
Computer Science; Artificial Intelligence
Yumeng Ma
University of California, San Diego, La Jolla, USA
Shuo Wang
Purdue University, Indianapolis, USA; San Francisco State University, San Francisco, USA
Guiran Liu
San Francisco, USA
Binrong Zhu
San Francisco State University, San Francisco, USA