🤖 AI Summary
To address privacy risks posed by sensitive financial data in edge-deployed large language models (LLMs), this paper proposes a lightweight privacy-enhanced language model tailored for financial applications. Methodologically, it integrates differentially private fine-tuning with a streamlined network architecture to achieve both strong privacy guarantees and efficient on-device inference. The key contributions are: (1) the first systematic integration of rigorous differential privacy—formally satisfying $(\varepsilon,\delta)$-DP—into the LLM deployment pipeline for financial edge computing; (2) performance comparable to full fine-tuning while maintaining model compactness (e.g., <50M parameters); and (3) empirical validation across multiple financial sentiment analysis benchmarks, demonstrating significantly stronger privacy protection than existing baselines with only marginal accuracy degradation (average drop <1.2%).
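The differentially private fine-tuning the summary refers to typically follows the DP-SGD recipe: clip each per-example gradient to a fixed norm, average, and add Gaussian noise calibrated to that clipping norm, which is what yields an $(\varepsilon,\delta)$-DP guarantee via standard accounting. A minimal pure-Python sketch of one such update step (the function name, flat-vector gradients, and hyperparameter values are illustrative, not the paper's actual implementation):

```python
import math
import random

def dp_sgd_step(per_example_grads, clip_norm, noise_multiplier, lr, params, rng):
    """One DP-SGD-style update (illustrative sketch, not the paper's code).

    Each per-example gradient is rescaled so its L2 norm is at most
    clip_norm, the clipped gradients are summed, Gaussian noise with
    standard deviation noise_multiplier * clip_norm is added, and the
    noisy average is used for a plain SGD step.
    """
    n = len(per_example_grads)
    d = len(params)
    summed = [0.0] * d
    for g in per_example_grads:
        norm = math.sqrt(sum(x * x for x in g))
        scale = min(1.0, clip_norm / (norm + 1e-12))  # clip to L2 ball
        for j in range(d):
            summed[j] += g[j] * scale
    sigma = noise_multiplier * clip_norm  # noise scale tied to clip norm
    noisy_avg = [(summed[j] + rng.gauss(0.0, sigma)) / n for j in range(d)]
    return [params[j] - lr * noisy_avg[j] for j in range(d)]

# Example: two per-example gradients with L2 norms 5.0 and 0.5;
# only the first exceeds clip_norm=1.0 and gets rescaled.
rng = random.Random(0)
grads = [[3.0, 4.0], [0.3, 0.4]]
new_params = dp_sgd_step(grads, clip_norm=1.0, noise_multiplier=1.0,
                         lr=0.1, params=[0.0, 0.0], rng=rng)
```

In a real fine-tuning run the privacy budget $(\varepsilon,\delta)$ is then derived from `noise_multiplier`, the sampling rate, and the number of steps by a privacy accountant; in practice this is usually handled by a DP training library rather than hand-rolled as above.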
📝 Abstract
The integration of Large Language Models (LLMs) into financial technology (FinTech) has revolutionized the analysis and processing of complex financial data, driving advancements in real-time decision-making and analytics. With the growing trend of deploying AI models on edge devices for financial applications, ensuring the privacy of sensitive financial data has become a significant challenge. To address this, we propose DPFinLLM, a privacy-enhanced, lightweight LLM specifically designed for on-device financial applications. DPFinLLM combines a robust differential privacy mechanism with a streamlined architecture inspired by state-of-the-art models, enabling secure and efficient processing of financial data. DPFinLLM not only safeguards user data against privacy breaches but also maintains high performance across diverse financial tasks. Extensive experiments on multiple financial sentiment datasets validate the effectiveness of DPFinLLM, demonstrating performance comparable to fully fine-tuned models even under strict privacy constraints.