🤖 AI Summary
This work addresses the limitations of large language models (LLMs) in generating revisions that reflect multiple entangled intentions, as well as the heavy reliance of existing fine-tuning approaches on extensive labeled data. The authors propose Intention-Tuning, an intention-adaptive layer-wise fine-tuning framework within the parameter-efficient fine-tuning (PEFT) paradigm that dynamically selects a subset of LLM layers to learn intentions. By updating only the layers relevant to specific intentions, Intention-Tuning disentangles complex multi-intent representations and improves revision quality even with limited annotated data. Experimental results show that the proposed method consistently outperforms several PEFT baselines on small revision corpora, confirming its effectiveness and efficiency in low-resource settings.
📝 Abstract
Large Language Models (LLMs) have achieved impressive capabilities in various context-based text generation tasks, such as summarization and reasoning; however, their applications in intention-based generation tasks remain underexplored. One such example is revision generation, which requires the generated text to explicitly reflect the writer's actual intentions. Identifying intentions and generating desirable revisions are challenging due to their complex and diverse nature. Although prior work has employed LLMs to generate revisions with few-shot learning, these approaches struggle with entangled multi-intent scenarios. While fine-tuning LLMs using intention-based instructions appears promising, it demands large amounts of annotated data, which are expensive to obtain and scarce in the revision community. To address these challenges, we propose Intention-Tuning, an intention-adaptive layer-wise LLM fine-tuning framework that dynamically selects a subset of LLM layers to learn the intentions and subsequently transfers their representations to revision generation. Experimental results suggest that Intention-Tuning is effective and efficient on small revision corpora, outperforming several PEFT baselines.
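The core idea of selecting a subset of layers per intention can be illustrated with a minimal sketch. Note that the abstract does not specify the actual selection mechanism, so the scoring function, the top-k selection rule, and all names below (`select_layers`, `trainable_mask`, `intent_scores`) are hypothetical assumptions, not the paper's method:

```python
# Hypothetical sketch of intent-driven layer selection.
# Assumption: each layer has a relevance score for the current intention,
# and only the top-k scoring layers are unfrozen for fine-tuning.

def select_layers(intent_scores, k):
    """Return the indices of the k layers with the highest relevance scores."""
    ranked = sorted(range(len(intent_scores)),
                    key=lambda i: intent_scores[i], reverse=True)
    return sorted(ranked[:k])

def trainable_mask(num_layers, selected):
    """Build a per-layer mask: True = layer is updated, False = frozen."""
    chosen = set(selected)
    return [i in chosen for i in range(num_layers)]

# Example: a 12-layer model where layers 3, 7, and 11 score highest
# for the current intention, so only those three layers are tuned.
scores = [0.1, 0.2, 0.05, 0.9, 0.3, 0.1, 0.2, 0.8, 0.15, 0.1, 0.25, 0.85]
sel = select_layers(scores, k=3)       # → [3, 7, 11]
mask = trainable_mask(12, sel)         # only 3 of 12 layers trainable
```

In a real PEFT setup, such a mask would gate which transformer blocks receive gradient updates while the rest stay frozen, which is what keeps the parameter and data budget small.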