🤖 AI Summary
In federated learning, zeroth-order stochastic gradient descent (ZO-SGD) enables scalar-level communication and is well suited to fine-tuning large language models (LLMs), but it suffers from high variance and slow convergence; incorporating Hessian acceleration is hindered by clients' sparse local data and the requirement to keep communication dimension-free. This paper proposes HiSo, the first general framework that decouples scalar-only communication from standard ZO-SGD updates, enabling effective integration of global Hessian information while still transmitting only a single scalar per round. We theoretically establish that HiSo's convergence rate is independent of the global Lipschitz constant and achieves significant acceleration under the low-effective-rank Hessian assumption typical of LLMs. Experiments demonstrate up to 3.2× faster convergence over state-of-the-art ZO-FL methods while preserving minimal scalar-level communication overhead.
📝 Abstract
Recent dimension-free communication frameworks in Federated Learning (FL), such as DeComFL, significantly reduce per-round communication by transmitting only scalars via zeroth-order stochastic gradient descent (ZO-SGD). This approach is particularly advantageous for federated fine-tuning of Large Language Models (LLMs). Yet the high variance of ZO gradient estimates typically leads to slow convergence. Although leveraging Hessian information is known to enhance optimization speed, integrating it into FL presents significant challenges, including restrictions on clients' local data and the critical need to maintain the dimension-free communication property. To overcome these challenges, we first introduce a generalized scalar-only communication FL framework that decouples dimension-free communication from standard ZO-SGD, enabling the integration of more advanced optimization strategies. Building on this framework, we propose HiSo, a fast federated fine-tuning method via Hessian-informed zeroth-order optimization and Scalar-only communication. Specifically, it leverages global curvature information to accelerate convergence while preserving the same minimal communication cost per round. Theoretically, we establish convergence guarantees that are independent of the global Lipschitz constant, and further show that HiSo achieves faster rates when the global Hessian exhibits a low effective rank -- a common phenomenon in LLMs. Extensive experiments on benchmark datasets and LLM fine-tuning tasks confirm that HiSo significantly outperforms existing ZO-based FL methods in both convergence speed and communication efficiency.
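To make the scalar-only communication idea concrete, here is a minimal sketch of the two-point zeroth-order estimator underlying DeComFL-style methods: server and clients share a PRNG seed each round, so the random perturbation direction can be reconstructed on both sides and only the scalar finite-difference value ever needs to be transmitted. This is an illustrative toy (plain ZO-SGD on a quadratic, not the paper's Hessian-informed HiSo update); the function names and hyperparameters are assumptions for the example.

```python
import numpy as np

def zo_scalar_round(params, loss_fn, seed, mu=1e-3, lr=1e-2):
    """One communication round of seeded two-point ZO-SGD.

    The client evaluates the loss at params + mu*z and params - mu*z for a
    seeded random direction z. The resulting finite-difference value g is a
    single scalar -- the only quantity that must be communicated. Any party
    holding the seed can regenerate z and apply the identical update.
    """
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(params.shape)          # reconstructible from seed
    g = (loss_fn(params + mu * z) - loss_fn(params - mu * z)) / (2 * mu)
    return params - lr * g * z, g                  # g is the scalar "payload"

# Toy usage: minimize a quadratic; the loss shrinks over simulated rounds.
loss = lambda w: float(np.sum(w ** 2))
w = np.ones(5)
for t in range(200):                               # seed t plays the role of
    w, g = zo_scalar_round(w, loss, seed=t)        # the shared per-round seed
```

The key point is that `g` is dimension-free: its size is independent of the number of model parameters, which is what makes the approach attractive for LLM-scale federated fine-tuning.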