Annotation-Efficient Vision-Language Model Adaptation to the Polish Language Using the LLaVA Framework

📅 2026-02-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limited adaptability of mainstream vision-language models to low-resource languages like Polish and their cultural contexts, as these models are predominantly trained on English data. Building upon the LLaVA-Next framework, the authors propose an efficient adaptation method that leverages fully automatic translation and lightweight filtering of existing English multimodal datasets, supplemented with synthetically generated Polish OCR text and culturally relevant samples. This approach enables the construction of a high-quality Polish vision-language model with minimal human annotation, substantially reducing labeling costs while enhancing both linguistic accuracy and cultural alignment. Evaluated on a Polish-adapted version of MMBench, the model outperforms LLaVA-1.6-Vicuna-13B by 9.5% and generates captions rated superior in human evaluations. The code and evaluation dataset are publicly released.

📝 Abstract
Most vision-language models (VLMs) are trained on English-centric data, limiting their performance in other languages and cultural contexts. This restricts their usability for non-English-speaking users and hinders the development of multimodal systems that reflect diverse linguistic and cultural realities. In this work, we reproduce and adapt the LLaVA-Next methodology to create a set of Polish VLMs. We rely on a fully automated pipeline for translating and filtering existing multimodal datasets, and complement this with synthetic Polish data for OCR and culturally specific tasks. Despite relying almost entirely on automatic translation and minimal manual intervention to the training data, our approach yields strong results: we observe a +9.5% improvement over LLaVA-1.6-Vicuna-13B on a Polish-adapted MMBench, along with higher-quality captions in generative evaluations, as measured by human annotators in terms of linguistic correctness. These findings highlight that large-scale automated translation, combined with lightweight filtering, can effectively bootstrap high-quality multimodal models for low-resource languages. Some challenges remain, particularly in cultural coverage and evaluation. To facilitate further research, we make our models and evaluation dataset publicly available.
Problem

Research questions and friction points this paper is trying to address.

vision-language models
low-resource languages
multimodal adaptation
language bias
cultural context
Innovation

Methods, ideas, or system contributions that make the work stand out.

annotation-efficient
automated translation
vision-language model
low-resource language
LLaVA adaptation
Grzegorz Statkiewicz
NASK National Research Institute, Warsaw, Poland
Alicja Dobrzeniecka
NASK National Research Institute, Warsaw, Poland
Karolina Seweryn
NASK National Research Institute, Warsaw University of Technology
Aleksandra Krasnodębska
NASK National Research Institute, Warsaw, Poland
Karolina Piosek
NASK National Research Institute, Warsaw, Poland
Katarzyna Bogusz
NASK National Research Institute, Warsaw, Poland
Sebastian Cygert
NASK National Research Institute, Politechnika Gdańska
computer vision, machine learning, trustworthy ML
Wojciech Kusa
NASK National Research Institute
Natural Language Processing, Information Retrieval, Machine Learning, LLMs