ASDA: Automated Skill Distillation and Adaptation for Financial Reasoning

📅 2026-03-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limitations of existing training-free methods on complex, multi-step financial reasoning tasks and the high cost of fine-tuning large language models. The authors propose a novel training-free framework that preserves model weights while leveraging a teacher model to analyze student-model failures, cluster error patterns, and automatically generate structured skill artifacts—comprising reasoning workflows, code templates, and worked examples—which are dynamically injected at inference time to enhance performance. This approach is the first to produce financial reasoning skills that are human-readable, version-controllable, and compatible with the Agent Skills open standard, thereby enabling auditability and reuse. Evaluated on the FAMMA benchmark, the method improves accuracy by up to 17.33% on arithmetic reasoning and 5.95% on non-arithmetic reasoning, substantially outperforming current training-free alternatives.

📝 Abstract
Adapting large language models (LLMs) to specialized financial reasoning typically requires expensive fine-tuning that produces model-locked expertise. Training-free alternatives have emerged, yet our experiments show that leading methods (GEPA and ACE) achieve only marginal gains on the FAMMA financial reasoning benchmark, exposing the limits of unstructured text optimization for complex, multi-step domain reasoning. We introduce Automated Skill Distillation and Adaptation (ASDA), a framework that automatically generates structured skill artifacts through iterative error-corrective learning without modifying model weights. A teacher model analyzes a student model's failures on financial reasoning tasks, clusters errors by subfield and error type, and synthesizes skill files containing reasoning procedures, code templates, and worked examples, which are dynamically injected during inference. Evaluated on FAMMA, ASDA achieves up to +17.33% improvement on arithmetic reasoning and +5.95% on non-arithmetic reasoning, substantially outperforming all training-free baselines. The resulting skill artifacts are human-readable, version-controlled, and compatible with the Agent Skills open standard, offering any organization with a labeled domain dataset a practical and auditable path to domain adaptation without weight access or retraining.
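The abstract describes ASDA's loop: cluster a student model's failures by subfield and error type, have a teacher synthesize a skill file per cluster, then inject matching skills at inference. A minimal sketch of that loop follows; all names here (`Failure`, `SkillArtifact`, `cluster_failures`, `distill_skills`, `inject_skills`) are illustrative assumptions, not APIs from the paper.

```python
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class Failure:
    subfield: str      # e.g. "derivatives", "fixed income"
    error_type: str    # e.g. "formula misuse", "unit error"
    question: str      # the task the student model got wrong

@dataclass
class SkillArtifact:
    """Human-readable, version-controllable skill file (sketch)."""
    subfield: str
    error_type: str
    procedure: str                       # step-by-step reasoning workflow
    code_template: str = ""              # reusable calculation snippet
    examples: list = field(default_factory=list)

def cluster_failures(failures):
    """Group student failures by (subfield, error_type), as the
    teacher would before synthesizing corrective skills."""
    clusters = defaultdict(list)
    for f in failures:
        clusters[(f.subfield, f.error_type)].append(f)
    return clusters

def distill_skills(failures, teacher):
    """One error-corrective iteration: the teacher (here just a
    callable) turns each error cluster into a skill artifact."""
    return [
        SkillArtifact(
            subfield=sub,
            error_type=err,
            procedure=teacher(sub, err, cases),
            examples=[c.question for c in cases],
        )
        for (sub, err), cases in cluster_failures(failures).items()
    ]

def inject_skills(prompt, skills, subfield):
    """Dynamically prepend skills matching the task's subfield
    to the inference prompt; model weights are never touched."""
    relevant = [s.procedure for s in skills if s.subfield == subfield]
    return "\n".join(relevant + [prompt])
```

In this sketch the "teacher" is any callable mapping an error cluster to a corrective procedure; in the paper it is a stronger LLM, and the resulting artifacts are plain files that can be diffed and audited.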
Problem

Research questions and friction points this paper is trying to address.

financial reasoning
domain adaptation
training-free adaptation
large language models
structured skill artifacts
Innovation

Methods, ideas, or system contributions that make the work stand out.

Automated Skill Distillation
Training-free Adaptation
Structured Skill Artifacts
Financial Reasoning
Error-corrective Learning
Tik Yu Yim
The University of Hong Kong, Hong Kong SAR, China
Wenting Tan
The University of Hong Kong, Hong Kong SAR, China
Sum Yee Chan
The University of Hong Kong, Hong Kong SAR, China
Tak-Wah Lam
Professor, School of Computing & Data Science, University of Hong Kong
Algorithms, Bioinformatics, Big Data Analytics
Siu Ming Yiu
The University of Hong Kong, Hong Kong SAR, China