🤖 AI Summary
To address weak long-text comprehension, limited cross-concept reasoning, and poor answer interpretability in textbook question answering, this paper proposes a domain-adaptive approach that integrates retrieval augmentation with instruction tuning. The core innovation is the first dynamic coupling of fine-grained textbook passage retrieval (BERT-based) with a LoRA-finetuned LLaMA-2 instruction model inside a RAG framework, enabling evidence-aware, adaptively confidence-weighted generation. This design supports multi-step reasoning and fine-grained factual integration for complex educational queries. On the TextbookQA benchmark, the method achieves a 12.7% absolute accuracy gain and a 34% reduction in factual error rate over standard fine-tuning and naive RAG baselines. These results demonstrate that dynamic evidence integration significantly improves both the performance and the trustworthiness of large language models in educational applications.
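The summary does not spell out how confidence weighting enters generation; one plausible reading is that retrieval scores are normalized into weights that gate which passages reach the generator. The sketch below illustrates that idea in plain Python. The retriever, the `min_weight` cutoff, and the prompt layout are all illustrative assumptions, not details from the paper.

```python
import math

def softmax(scores):
    """Normalize raw retrieval scores into confidence weights that sum to 1."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def build_evidence_prompt(question, passages, scores, min_weight=0.15):
    """Keep only passages whose confidence weight clears a threshold and
    annotate each with its weight, so the generator can condition on
    retrieval confidence.

    `passages`/`scores` would come from a BERT-style retriever;
    `min_weight` is a hypothetical cutoff chosen for illustration.
    """
    weights = softmax(scores)
    kept = [(p, w) for p, w in zip(passages, weights) if w >= min_weight]
    kept.sort(key=lambda pw: pw[1], reverse=True)  # most confident first
    evidence = "\n".join(f"[confidence={w:.2f}] {p}" for p, w in kept)
    return (
        "Answer using only the evidence below, preferring the most "
        "confident passages.\n\n"
        f"Evidence:\n{evidence}\n\nQuestion: {question}\nAnswer:"
    )
```

In a full pipeline the returned prompt would be fed to the LoRA-finetuned LLaMA-2 model; low-confidence passages are dropped before generation rather than diluting the context.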