Low-resource domain adaptation while minimizing energy and hardware resource consumption

📅 2025-06-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the energy-efficiency, GPU-memory, and data-scarcity bottlenecks in cross-domain adaptation of large language models (LLMs) under low-resource settings, this paper proposes a lightweight, sustainability-oriented domain adaptation paradigm. Methodologically, it presents the first systematic integration of mixed-precision training (FP16/BF16), layer-wise precision adaptation, gradient accumulation, lightweight data parallelism, and LoRA-based fine-tuning, jointly optimizing computational efficiency and alignment with cultural value diversity. Experiments on a single NVIDIA A10 GPU show that the approach maintains comparable accuracy across multiple domains while reducing energy consumption by 47% and GPU memory usage by 32%. These improvements substantially enhance training feasibility and scalability in scenarios constrained by limited compute, scarce labeled data, and strict energy budgets.
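The core recipe in the summary (low-precision FP16 compute against FP32 master weights, with gradients accumulated over micro-batches) can be sketched in a few lines. This is a minimal illustrative numpy mock-up, not the paper's implementation: the function name, the linear model, and all hyperparameters are assumptions for demonstration only.

```python
import numpy as np

def sgd_step_mixed_precision(master_w, micro_batches, lr=0.1, accum_steps=4):
    """One optimizer step on a linear model: FP16 forward pass,
    FP32 master weights, gradients averaged over micro-batches."""
    grad_accum = np.zeros_like(master_w, dtype=np.float32)
    for x, y in micro_batches[:accum_steps]:
        w16 = master_w.astype(np.float16)       # low-precision copy for compute
        pred = x.astype(np.float16) @ w16       # FP16 forward pass
        err = pred.astype(np.float32) - y       # residual accumulated in FP32
        grad = x.T @ err / len(x)               # gradient of mean squared error / 2
        grad_accum += grad / accum_steps        # average across micro-batches
    master_w -= lr * grad_accum                 # update kept in full precision
    return master_w
```

Keeping the master copy in FP32 avoids the drift that pure-FP16 updates would accumulate, which is the standard rationale behind mixed-precision training.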

📝 Abstract
Training Large Language Models (LLMs) is costly in terms of energy, hardware, and annotated data, often resulting in a positionality rooted in predominant cultures and values (Santy et al., 2023). Domain adaptation has emerged as a promising strategy to better align models with diverse cultural and value contexts (Hershcovich et al., 2022), but its computational cost remains a significant barrier, particularly for research groups lacking access to large-scale infrastructure. In this paper, we evaluate how the use of different numerical precisions and data parallelization strategies impacts both training speed (as a proxy to energy and hardware consumption) and model accuracy, with the goal of facilitating domain adaptation in low-resource environments. Our findings are relevant to any setting where energy efficiency, accessibility, or limited hardware availability are key concerns.
Problem

Research questions and friction points this paper is trying to address.

Minimize energy and hardware consumption in low-resource domain adaptation
Evaluate impact of numerical precisions on training speed and accuracy
Facilitate domain adaptation for groups with limited infrastructure access
Innovation

Methods, ideas, or system contributions that make the work stand out.

Utilizes low numerical precision for efficiency
Implements data parallelization to speed training
Optimizes for low-resource domain adaptation
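The bullets above highlight precision and parallelism, but the summary also credits LoRA-based fine-tuning for part of the memory savings. As a hedged illustration (not the paper's code; function names and shapes are assumed), a rank-r adapter keeps the base weight matrix frozen and trains only two small factors:

```python
import numpy as np

def lora_adapt(W_frozen, A, B, x):
    """Forward pass through a frozen weight W plus a trainable low-rank
    LoRA update: y = x @ (W + B @ A), with B (d_in x r) and A (r x d_out).
    The full delta matrix is never materialized."""
    return x @ W_frozen + (x @ B) @ A

def lora_param_savings(d_in, d_out, r):
    """Trainable-parameter count: full fine-tuning vs. a rank-r adapter."""
    full = d_in * d_out          # every entry of W is trainable
    lora = r * (d_in + d_out)    # only the two low-rank factors are trained
    return full, lora
```

For a 4096x4096 layer at rank 8, the adapter trains roughly 65K parameters instead of ~16.8M, which is why LoRA pairs naturally with the single-GPU, low-memory setting the paper targets.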
Hernán Maina
FAMAF, Universidad Nacional de Córdoba, CONICET, Argentina
N. Wolovick
FAMAF, Universidad Nacional de Córdoba, CONICET, Argentina
Luciana Benotti
Universidad Nacional de Córdoba, Argentina
Natural Language Processing, Ethics, Conversational Agents, Language Models, Education