SAFT: Structure-Aware Fine-Tuning of LLMs for AMR-to-Text Generation

📅 2025-07-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the challenge of modeling graph-structured information in AMR-to-text generation with large language models (LLMs), this paper proposes Structure-Aware Fine-Tuning (SAFT). SAFT is the first method to use a magnetic Laplacian operator to derive direction-sensitive positional encodings that capture the directed topology of AMR graphs via graph signal processing. These encodings are integrated into the input embedding space of a pretrained LLM through a linear projection, requiring no architectural modifications and enabling end-to-end, structure-aware training. On the AMR 3.0 benchmark, SAFT achieves a BLEU score of 38.2, outperforming strong baselines by 3.5 points. Crucially, the gains grow with AMR graph complexity, empirically validating both the effectiveness and the scalability of the proposed structural encoding.
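The key ingredient is the magnetic Laplacian: a Hermitian matrix whose complex phase encodes edge direction, so its eigenvectors yield positional encodings that distinguish a node's predecessors from its successors. The sketch below shows one common way to compute such encodings; the function name, the charge parameter value `q=0.25`, and the real/imaginary stacking are illustrative assumptions, not the paper's exact recipe.

```python
import numpy as np

def magnetic_laplacian_pe(A, q=0.25, k=8):
    """Direction-sensitive positional encodings from the magnetic Laplacian.

    A: (n, n) binary adjacency matrix of a directed graph.
    q: charge parameter controlling direction sensitivity (illustrative value).
    k: number of low-frequency eigenvectors to keep.
    """
    A_sym = ((A + A.T) > 0).astype(float)      # symmetrized adjacency
    D = np.diag(A_sym.sum(axis=1))             # degree matrix
    # The phase encodes direction: +q for a forward edge, -q for its reverse.
    theta = 2 * np.pi * q * (A - A.T)
    H = A_sym * np.exp(1j * theta)             # Hermitian "magnetic" adjacency
    L = D - H                                  # magnetic Laplacian (Hermitian)
    # eigh returns real eigenvalues in ascending order with complex eigenvectors.
    eigvals, eigvecs = np.linalg.eigh(L)
    V = eigvecs[:, :k]                         # k lowest-frequency modes
    # Stack real and imaginary parts into a (n, 2k) real-valued encoding.
    return np.concatenate([V.real, V.imag], axis=1)
```

Because `L` is Hermitian, its eigenvalues are real even though the eigenvectors are complex; reversing an edge flips the phase, which is what makes the resulting encodings direction-aware.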

📝 Abstract
Large Language Models (LLMs) are increasingly applied to tasks involving structured inputs such as graphs. Abstract Meaning Representations (AMRs), which encode rich semantics as directed graphs, offer a rigorous testbed for evaluating LLMs on text generation from such structures. Yet, current methods often arbitrarily linearize AMRs, discarding key structural cues, or rely on architectures incompatible with standard LLMs. We introduce SAFT, a structure-aware fine-tuning approach that injects graph topology into pretrained LLMs without architectural changes. We compute direction-sensitive positional encodings from the magnetic Laplacian of transformed AMRs and project them into the embedding space of the LLM. While possibly applicable to any graph-structured inputs, we focus on AMR-to-text generation as a representative and challenging benchmark. SAFT sets a new state-of-the-art on AMR 3.0 with a 3.5 BLEU improvement over baselines. Gains scale with graph complexity, highlighting the value of structure-aware representations in enhancing LLM performance. SAFT offers a general and effective pathway for bridging structured data and language models.
Problem

Research questions and friction points this paper is trying to address.

Enhancing AMR-to-text generation using structure-aware LLMs
Injecting graph topology into LLMs without architectural changes
Improving performance on complex graph-structured inputs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Structure-aware fine-tuning without architectural changes
Direction-sensitive encodings from magnetic Laplacian
Projecting graph topology into LLM embedding space
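The last point above amounts to a learned linear map from the structural-encoding space into the LLM's token embedding space, added to the embeddings of tokens that realize graph nodes. A minimal numpy sketch, assuming one encoding per node and a known alignment from nodes to token positions (the function name and the additive injection are illustrative, not the paper's exact implementation):

```python
import numpy as np

def inject_structure(token_embeds, pe, node_token_idx, W, b):
    """Add linearly projected structural encodings to node-aligned tokens.

    token_embeds:   (seq_len, hidden_dim) embeddings of the linearized AMR.
    pe:             (num_nodes, pe_dim) structural encodings, one per node.
    node_token_idx: (num_nodes,) token positions that realize each node.
    W, b:           learned projection, (pe_dim, hidden_dim) and (hidden_dim,).
    """
    out = token_embeds.copy()
    # Only node-aligned token embeddings are shifted; all others pass through.
    out[node_token_idx] += pe @ W + b
    return out
```

Because the projection touches only the input embeddings, the pretrained transformer itself is untouched, which is what lets the approach work with any standard LLM.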