A Vision-Language Model for Focal Liver Lesion Classification

📅 2025-05-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the scarcity of annotated medical imaging data and poor model generalizability in classifying focal liver lesions (FLLs), this paper proposes Liver-VLM, a dedicated vision-language model. Methodologically, it embeds fine-grained lesion-category semantics into the text encoder, providing semantic guidance with no additional inference overhead. Leveraging a lightweight ResNet-18 visual backbone and a customized text encoder, Liver-VLM jointly optimizes image–text embedding alignment via cosine similarity and cross-entropy loss within the CLIP framework, thereby improving few-shot cross-modal matching accuracy. Evaluated on the MPCT-FLLs dataset, Liver-VLM outperforms both CLIP and MedCLIP, particularly under few-shot settings, achieving notable gains in classification accuracy and AUC. This work establishes an efficient, deployable paradigm for low-resource hepatic imaging diagnosis.

📝 Abstract
Accurate classification of focal liver lesions is crucial for diagnosis and treatment in hepatology. However, traditional supervised deep learning models depend on large-scale annotated datasets, which are often limited in medical imaging. Recently, Vision-Language Models (VLMs) such as the Contrastive Language-Image Pre-training model (CLIP) have been applied to image classification. Compared to conventional convolutional neural networks (CNNs), which classify images based on visual information only, VLMs leverage multimodal learning with text and images, allowing them to learn effectively even with a limited amount of labeled data. Inspired by CLIP, we propose Liver-VLM, a model specifically designed for focal liver lesion (FLL) classification. First, Liver-VLM incorporates class information into the text encoder without introducing additional inference overhead. Second, by calculating the pairwise cosine similarities between image and text embeddings and optimizing the model with a cross-entropy loss, Liver-VLM effectively aligns image features with class-level text features. Experimental results on the MPCT-FLLs dataset demonstrate that Liver-VLM outperforms both the standard CLIP and MedCLIP models in terms of accuracy and Area Under the Curve (AUC). Further analysis shows that using a lightweight ResNet-18 backbone enhances classification performance, particularly under data-constrained conditions.
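The alignment step the abstract describes (pairwise cosine similarities between image and class-text embeddings, optimized with cross-entropy) can be sketched as follows. This is a minimal NumPy illustration of the CLIP-style objective, not the authors' implementation; the temperature value and the toy embedding dimensions are assumptions.

```python
import numpy as np

def cosine_similarity_matrix(img_emb, txt_emb):
    # L2-normalize both sets of embeddings; pairwise dot products
    # of unit vectors are then cosine similarities.
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    return img @ txt.T  # shape: (num_images, num_classes)

def cross_entropy_from_similarities(sims, labels, temperature=0.07):
    # Scale similarities by a temperature (as in CLIP), softmax over
    # the class texts, then average the negative log-likelihood of
    # each image's correct class.
    logits = sims / temperature
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    return -np.mean(np.log(probs[np.arange(len(labels)), labels]))

# Toy example: 4 image embeddings, 3 class-text embeddings (dim 8).
rng = np.random.default_rng(0)
img_emb = rng.normal(size=(4, 8))
txt_emb = rng.normal(size=(3, 8))
labels = np.array([0, 1, 2, 1])  # correct class index per image

sims = cosine_similarity_matrix(img_emb, txt_emb)
loss = cross_entropy_from_similarities(sims, labels)
print(sims.shape, float(loss))
```

In training, gradients of this loss would flow back into both encoders so that each image embedding moves toward its class-level text embedding; at inference, the predicted class is simply the text embedding with the highest cosine similarity, which is why no extra inference overhead is incurred.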
Problem

Research questions and friction points this paper is trying to address.

Classifying focal liver lesions accurately with limited annotated data
Leveraging vision-language models for multimodal medical image classification
Improving classification performance using lightweight backbone under data constraints
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverages Vision-Language Model for lesion classification
Incorporates class info into text encoder efficiently
Uses cosine similarity to align image-text features
Song Jian
Tsinghua University
Yuchang Hu
School of Mathematical Sciences, Huaqiao University, Fujian, China
Wang Hui
School of Information Science and Engineering, Shandong University, Qingdao, China
Chen Yen-Wei
College of Information Science and Engineering, Ritsumeikan University, Osaka, Japan