GIAT: A Geologically-Informed Attention Transformer for Lithology Identification

📅 2026-03-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limitations of Transformer models in lithology identification from well-log data—namely, the lack of geological priors, poor interpretability, and constrained performance—by proposing a novel framework that integrates geological prior knowledge with attention mechanisms. Specifically, a class-level sequential correlation filter is designed to construct a geological relationship matrix, which is then incorporated as an attention bias into the self-attention computation to guide the model toward geologically consistent sequential patterns. Evaluated on two challenging datasets, the proposed method achieves a peak accuracy of 95.4%, significantly outperforming existing approaches. Moreover, it demonstrates enhanced explanation fidelity under perturbations and superior capability in producing geologically coherent predictions.

📝 Abstract
Accurate lithology identification from well logs is crucial for subsurface resource evaluation. Although Transformer-based models excel at sequence modeling, their "black-box" nature and lack of geological guidance limit their performance and trustworthiness. To overcome these limitations, this letter proposes the Geologically-Informed Attention Transformer (GIAT), a novel framework that deeply fuses data-driven geological priors with the Transformer's attention mechanism. The core of GIAT is a new attention-biasing mechanism. We repurpose Category-Wise Sequence Correlation (CSC) filters to generate a geologically-informed relational matrix, which is injected into the self-attention calculation to explicitly guide the model toward geologically coherent patterns. On two challenging datasets, GIAT achieves state-of-the-art performance with an accuracy of up to 95.4%, significantly outperforming existing models. More importantly, GIAT demonstrates exceptional interpretation faithfulness under input perturbations and generates geologically coherent predictions. Our work presents a new paradigm for building more accurate, reliable, and interpretable deep learning models for geoscience applications.
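The abstract's core idea, injecting a geologically informed relational matrix as an additive bias into the self-attention scores, can be illustrated with a minimal sketch. This is not GIAT's implementation: the function and variable names (`biased_self_attention`, `geo_bias`) are hypothetical, and the paper's CSC-filter construction of the bias matrix is not reproduced here; a hand-set matrix stands in for it.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def biased_self_attention(Q, K, V, geo_bias):
    """Scaled dot-product self-attention with an additive prior bias.

    geo_bias[i, j] raises (or lowers) the attention score between depth
    positions i and j before the softmax, steering attention toward
    geologically consistent sequential patterns.
    """
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)   # standard attention scores
    scores = scores + geo_bias        # inject the geological prior
    return softmax(scores, axis=-1) @ V

# Toy example: 4 depth samples with 8-dim well-log features.
rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))

geo_bias = np.zeros((4, 4))
# Hypothetical prior: positions 0 and 1 belong to classes that
# frequently co-occur in sequence, so boost their mutual attention.
geo_bias[0, 1] = geo_bias[1, 0] = 2.0

out = biased_self_attention(Q, K, V, geo_bias)
print(out.shape)  # (4, 8)
```

Because the bias is added before the softmax, it reweights attention smoothly rather than hard-masking it, which matches the abstract's description of "guiding" rather than constraining the model.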
Problem

Research questions and friction points this paper is trying to address.

lithology identification
well logs
Transformer
geological priors
interpretability
Innovation

Methods, ideas, or system contributions that make the work stand out.

Geologically-Informed Attention
Transformer
Lithology Identification
Attention Biasing
Interpretable Deep Learning
Jie Li
China University of Mining and Technology
Qishun Yang
College of Petroleum, China University of Petroleum-Beijing at Karamay, Karamay 834000, China
Nuo Li
College of Petroleum, China University of Petroleum-Beijing at Karamay, Karamay 834000, China