AI Summary
This work addresses the open question of whether multimodal large language models (MLLMs) can understand electronic navigational charts (ENCs), a specialized form of maritime data comprising standardized vector symbols, scale-dependent rendering, and precise geometric structures. To this end, we introduce ENC-Bench, the first benchmark for ENC comprehension, constructed from 840 real NOAA ENC charts. Through a calibrated vector-to-image rendering pipeline, we generate 20,490 expert-validated samples spanning a three-tier evaluation framework: perception, spatial reasoning, and maritime decision-making. Under a unified zero-shot protocol, we evaluate ten state-of-the-art MLLMs, revealing that even the best-performing model achieves only 47.88% accuracy. This highlights systemic deficiencies in symbol grounding, spatial computation, multi-constraint reasoning, and robustness to variations in illumination and scale.
Abstract
Electronic Navigational Charts (ENCs) are the safety-critical backbone of modern maritime navigation, yet it remains unclear whether multimodal large language models (MLLMs) can reliably interpret them. Unlike natural images or conventional charts, ENCs encode regulations, bathymetry, and route constraints via standardized vector symbols, scale-dependent rendering, and precise geometric structure, requiring specialized maritime expertise to interpret. We introduce ENC-Bench, the first benchmark dedicated to professional ENC understanding. ENC-Bench contains 20,490 expert-validated samples from 840 authentic National Oceanic and Atmospheric Administration (NOAA) ENCs, organized into a three-level hierarchy: Perception (symbol and feature recognition), Spatial Reasoning (coordinate localization, bearing, distance), and Maritime Decision-Making (route legality, safety assessment, emergency planning under multiple constraints). All samples are generated from raw S-57 data through a calibrated vector-to-image pipeline with automated consistency checks and expert review. We evaluate 10 state-of-the-art MLLMs, including GPT-4o, Gemini 2.5, Qwen3-VL, InternVL-3, and GLM-4.5V, under a unified zero-shot protocol. The best model achieves only 47.88% accuracy, with systematic challenges in symbolic grounding, spatial computation, multi-constraint reasoning, and robustness to lighting and scale variations. By establishing the first rigorous ENC benchmark, we open a new research frontier at the intersection of specialized symbolic reasoning and safety-critical AI, providing essential infrastructure for advancing MLLMs toward professional maritime applications.
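To make the Spatial Reasoning tier concrete: bearing and distance questions of the kind the abstract describes reduce to standard great-circle geometry between two chart coordinates. The sketch below is illustrative only (it is not taken from the ENC-Bench codebase, and the buoy positions are hypothetical); it uses the well-known haversine distance and initial-bearing formulas on a spherical Earth model.

```python
import math

R_NM = 3440.065  # mean Earth radius in nautical miles (spherical approximation)

def haversine_nm(lat1, lon1, lat2, lon2):
    """Great-circle distance in nautical miles between two lat/lon points (degrees)."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * R_NM * math.asin(math.sqrt(a))

def initial_bearing_deg(lat1, lon1, lat2, lon2):
    """Initial true bearing in degrees (0-360) from point 1 toward point 2."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlam = math.radians(lon2 - lon1)
    y = math.sin(dlam) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlam)
    return (math.degrees(math.atan2(y, x)) + 360) % 360

# Two hypothetical buoy positions near a harbor approach
d = haversine_nm(40.50, -74.05, 40.60, -73.95)
b = initial_bearing_deg(40.50, -74.05, 40.60, -73.95)
print(f"distance = {d:.2f} NM, bearing = {b:.1f} deg")
```

A model answering such a question from a rendered chart must first localize both symbols in pixel space, map them back to geographic coordinates via the chart's projection, and only then apply this geometry, which is one reason spatial computation proves difficult in the zero-shot setting.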