AI Summary
To address the limitations of complex multi-hop reasoning and comparative disease analysis in tobacco pest and disease management, this paper proposes a graph-enhanced large language model (LLM) reasoning framework. Methodologically: (1) we construct a high-quality, domain-specific tobacco knowledge graph; (2) we pioneer the integration of GraphRAG with domain knowledge graphs for agricultural AI reasoning; and (3) we incorporate graph neural networks (GNNs) for multi-granularity relational modeling, while fine-tuning ChatGLM via LoRA to enable graph-structure-aware retrieval-augmented generation. Our key contributions include automated, LLM-driven knowledge graph construction and structured, graph-guided deep reasoning. Experiments demonstrate significant improvements over baselines on multi-hop question answering and disease comparison tasks, achieving notably higher reasoning accuracy and greater logical depth. This work establishes a reusable technical paradigm for deploying domain-specific LLMs in precision agriculture.
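The graph-guided retrieval underlying the multi-hop setting above can be sketched as follows. This is a minimal illustration only: the triples, entity names, and the `retrieve` helper are hypothetical assumptions for exposition, not the paper's actual knowledge graph, retrieval algorithm, or prompt format.

```python
# Hypothetical mini knowledge graph as (head, relation, tail) triples.
# These example entities are illustrative, not the paper's real data.
triples = [
    ("tobacco mosaic virus", "has_symptom", "mottled leaves"),
    ("tobacco mosaic virus", "controlled_by", "resistant cultivars"),
    ("black shank", "has_symptom", "stem blackening"),
    ("black shank", "controlled_by", "crop rotation"),
]

def retrieve(entity, hops=2):
    """Collect triples reachable from `entity` within `hops` graph steps."""
    frontier, seen, facts = {entity}, set(), []
    for _ in range(hops):
        nxt = set()
        for h, r, t in triples:
            if (h in frontier or t in frontier) and (h, r, t) not in seen:
                seen.add((h, r, t))
                facts.append((h, r, t))
                nxt.update({h, t})
        frontier = nxt
    return facts

# Retrieved facts are serialized into the LLM prompt as structured context.
context = retrieve("tobacco mosaic virus")
prompt = "Answer using these facts:\n" + "\n".join(
    f"{h} {r} {t}" for h, r, t in context
)
```

The key design point this sketch captures is that retrieval follows graph edges rather than flat text similarity, so facts several hops from the question entity can still reach the generator's context.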
Abstract
This paper proposes a large language model (LLM) approach that integrates graph-structured information for knowledge reasoning in tobacco pest and disease control. Built upon the GraphRAG framework, the proposed method enhances knowledge retrieval and reasoning by explicitly incorporating structured information from a domain-specific knowledge graph. Specifically, LLMs are first leveraged to assist in the construction of a tobacco pest and disease knowledge graph, which organizes key entities such as diseases, symptoms, control methods, and their relationships. Based on this graph, relevant knowledge is retrieved and integrated into the reasoning process to support accurate answer generation. The Transformer architecture is adopted as the core inference model, while a graph neural network (GNN) is employed to learn expressive node representations that capture both local and global relational information within the knowledge graph. A ChatGLM-based model serves as the backbone LLM and is fine-tuned using LoRA to achieve parameter-efficient adaptation. Extensive experimental results demonstrate that the proposed approach consistently outperforms baseline methods across multiple evaluation metrics, significantly improving both the accuracy and depth of reasoning, particularly in complex multi-hop and comparative reasoning scenarios.
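The GNN component described above learns node representations that mix local neighborhood information. A minimal sketch of one normalized propagation step (GCN-style) over a toy graph is shown below; the entities, adjacency, and one-hot features are illustrative assumptions, not the paper's actual model, graph, or training setup.

```python
import numpy as np

# Toy knowledge graph: hypothetical entities for illustration only.
entities = ["tobacco_mosaic_virus", "mottled_leaves",
            "resistant_cultivar", "black_shank"]
edges = [(0, 1), (0, 2), (3, 1)]  # disease-symptom / disease-control links

n = len(entities)
A = np.zeros((n, n))
for u, v in edges:
    A[u, v] = A[v, u] = 1.0          # symmetric adjacency

A_hat = A + np.eye(n)                # add self-loops
deg = A_hat.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))

X = np.eye(n)                        # one-hot initial node features
H = D_inv_sqrt @ A_hat @ D_inv_sqrt @ X   # one propagation step
```

After one step, the two diseases that share the symptom `mottled_leaves` have overlapping representations, which is the kind of relational signal useful for disease comparison; a real model would stack learned weight matrices and nonlinearities on top of this propagation.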