DEEP: Docker-based Execution and Evaluation Platform

📅 2026-02-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the lack of a unified and reproducible automated evaluation platform for machine translation and optical character recognition models. The authors propose a Docker-based evaluation framework that enables automatic execution and performance assessment of containerized models. The framework incorporates a clustering methodology driven by statistical significance testing to group models according to their empirical performance. To enhance interpretability, the system integrates a web-based visualization interface and features a modular architecture designed for extensibility. Experimental results demonstrate that the platform consistently and efficiently evaluates multiple models, accurately identifies performance clusters, and validates its practical utility and effectiveness across both machine translation and OCR tasks.
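The paper does not spell out its clustering algorithm beyond "driven by statistical significance testing," so the following is only an illustrative sketch of one common approach: a paired sign-flip randomization test on per-sample scores, combined with a greedy rule that keeps adding models to the current cluster while their difference from the cluster head is not significant. All function names, the grouping rule, and the significance level are assumptions, not DEEP's actual implementation.

```python
import random

def sign_flip_pvalue(scores_a, scores_b, n_trials=1000, seed=0):
    """Approximate randomization (sign-flip) test on paired per-sample
    scores. Returns a p-value for the null hypothesis that the two
    systems perform equally well. Illustrative; not DEEP's exact test."""
    rng = random.Random(seed)
    diffs = [a - b for a, b in zip(scores_a, scores_b)]
    observed = abs(sum(diffs))
    hits = 0
    for _ in range(n_trials):
        # Under the null, each paired difference is equally likely
        # to have either sign, so flip signs at random.
        flipped = sum(d if rng.random() < 0.5 else -d for d in diffs)
        if abs(flipped) >= observed:
            hits += 1
    # Add-one smoothing keeps the p-value strictly positive.
    return (hits + 1) / (n_trials + 1)

def cluster_by_significance(per_model_scores, alpha=0.05):
    """Greedy grouping: rank models by mean score; a model joins the
    current cluster while its difference with the cluster's first
    (best) model is not significant at level alpha."""
    ranked = sorted(per_model_scores.items(),
                    key=lambda kv: sum(kv[1]) / len(kv[1]), reverse=True)
    clusters, head = [], None
    for name, scores in ranked:
        if head is None or sign_flip_pvalue(head, scores) < alpha:
            clusters.append([name])  # significant gap: start a new cluster
            head = scores
        else:
            clusters[-1].append(name)
    return clusters
```

For example, a model scoring consistently at 0.9 would form its own cluster, while two models with indistinguishable per-sample scores around 0.5 would be grouped together.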

📝 Abstract
Comparative evaluation of several systems is a recurrent task in research. It is a key step before deciding which system to use for our work or, once our research has been conducted, to demonstrate the potential of the resulting model. Furthermore, it is the main task in the evaluation of competitive public challenges. Our proposed software (DEEP) automates both the execution and scoring of machine translation and optical character recognition models, and it is easily extensible to other tasks. DEEP receives dockerized systems, runs them (extracting information at the same time), and assesses hypotheses against a set of references. With this approach, evaluators gain a better understanding of the performance of each model. Moreover, the software uses a clustering algorithm based on a statistical analysis of the significance of the results yielded by each model according to the evaluation metrics. As a result, evaluators can identify clusters of performance among the swarm of proposals and better understand the significance of their differences. Additionally, we offer a visualization web app to ensure that the results can be adequately understood and interpreted. Finally, we present an exemplary use case of DEEP.
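The run-then-score pipeline described in the abstract can be sketched as follows. The container contract (the image reads `src.txt` from a mounted directory and writes `hyp.txt` back) is a hypothetical interface for illustration; DEEP's actual conventions may differ. The scoring function shown is character error rate (CER), a standard OCR metric computed as Levenshtein distance over reference length.

```python
import subprocess

def run_dockerized_system(image, data_dir):
    """Run a containerized model. Hypothetical contract: the image reads
    /data/src.txt and writes /data/hyp.txt; mount data_dir at /data.
    Illustrative only, not DEEP's actual interface."""
    subprocess.run(
        ["docker", "run", "--rm", "-v", f"{data_dir}:/data", image],
        check=True,
    )

def cer(reference, hypothesis):
    """Character error rate: Levenshtein edit distance between the
    hypothesis and the reference, divided by the reference length."""
    m, n = len(reference), len(hypothesis)
    prev = list(range(n + 1))  # edit distances for an empty reference prefix
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if reference[i - 1] == hypothesis[j - 1] else 1
            cur[j] = min(prev[j] + 1,        # deletion
                         cur[j - 1] + 1,     # insertion
                         prev[j - 1] + cost) # substitution / match
        prev = cur
    return prev[n] / max(m, 1)
```

A perfect hypothesis yields a CER of 0.0, and the classic "kitten" vs. "sitting" pair yields 3 edits over 6 reference characters, i.e. 0.5.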
Problem

Research questions and friction points this paper is trying to address.

comparative evaluation
machine translation
optical character recognition
model performance
automated assessment
Innovation

Methods, ideas, or system contributions that make the work stand out.

Docker-based evaluation
automated model scoring
statistical significance clustering
machine translation evaluation
OCR benchmarking
Sergio Gómez González
PRHLT Research Center - Universitat Politècnica de València
Miguel Domingo
PRHLT Research Center - Universitat Politècnica de València, ValgrAI - Valencian Graduate School and Research Network for Artificial Intelligence
Francisco Casacuberta
Ad Honorem Professor, PRHLT, Polytechnic University of Valencia
Pattern Recognition, Machine Translation, Machine Learning, Multi-modal Interaction, Video and Image