🤖 AI Summary
This work addresses the limited adaptability of mainstream vision-language models to lower-resource languages such as Polish and their cultural contexts, as these models are predominantly trained on English data. Building upon the LLaVA-Next framework, the authors propose an efficient adaptation method that leverages fully automatic translation and lightweight filtering of existing English multimodal datasets, supplemented with synthetically generated Polish OCR text and culturally relevant samples. This approach enables the construction of a high-quality Polish vision-language model with minimal human annotation, substantially reducing labeling costs while enhancing both linguistic accuracy and cultural alignment. Evaluated on a Polish-adapted version of MMBench, the model outperforms LLaVA-1.6-Vicuna-13B by 9.5% and generates captions that human annotators rated higher for linguistic correctness. The code and evaluation dataset are publicly released.
📝 Abstract
Most vision-language models (VLMs) are trained on English-centric data, limiting their performance in other languages and cultural contexts. This restricts their usability for non-English-speaking users and hinders the development of multimodal systems that reflect diverse linguistic and cultural realities. In this work, we reproduce and adapt the LLaVA-Next methodology to create a set of Polish VLMs. We rely on a fully automated pipeline for translating and filtering existing multimodal datasets, and complement this with synthetic Polish data for OCR and culturally specific tasks. Despite relying almost entirely on automatic translation, with minimal manual intervention in the training data, our approach yields strong results: we observe a +9.5% improvement over LLaVA-1.6-Vicuna-13B on a Polish-adapted MMBench, along with higher-quality captions in generative evaluations, as judged by human annotators for linguistic correctness. These findings highlight that large-scale automated translation, combined with lightweight filtering, can effectively bootstrap high-quality multimodal models for low-resource languages. Some challenges remain, particularly in cultural coverage and evaluation. To facilitate further research, we make our models and evaluation dataset publicly available.
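The translate-then-filter bootstrap described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the authors' actual pipeline: `translate_to_polish` is a hypothetical stand-in for a real machine-translation system, and the length-ratio heuristic is one plausible example of "lightweight filtering".

```python
# Sketch of a translate-then-filter bootstrap for multimodal instruction data.
# ASSUMPTIONS: translate_to_polish is a hypothetical placeholder for an MT
# model/API; the filtering heuristic below is illustrative, not the paper's.

def translate_to_polish(text: str) -> str:
    # Placeholder: a real pipeline would call an MT model here.
    return text  # identity stub for illustration only

def looks_valid(source: str, translation: str,
                min_ratio: float = 0.5, max_ratio: float = 2.0) -> bool:
    """Lightweight filter: reject empty or wildly length-mismatched outputs."""
    if not translation.strip():
        return False
    ratio = len(translation) / max(len(source), 1)
    return min_ratio <= ratio <= max_ratio

def build_dataset(samples):
    """Translate each English instruction/answer pair and keep only pairs
    whose translations pass the lightweight filter."""
    kept = []
    for sample in samples:
        pl_instruction = translate_to_polish(sample["instruction"])
        pl_answer = translate_to_polish(sample["answer"])
        if (looks_valid(sample["instruction"], pl_instruction)
                and looks_valid(sample["answer"], pl_answer)):
            kept.append({"image": sample["image"],
                         "instruction": pl_instruction,
                         "answer": pl_answer})
    return kept
```

In this sketch, filtering discards samples where translation produced empty or suspiciously short/long text; a production pipeline would likely add language identification and other quality checks on top.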