🤖 AI Summary
This work addresses the lack of a unified and reproducible automated evaluation platform for machine translation and optical character recognition models. The authors propose a Docker-based evaluation framework that enables automatic execution and performance assessment of containerized models. The framework incorporates a clustering methodology driven by statistical significance testing to group models according to their empirical performance. To enhance interpretability, the system integrates a web-based visualization interface and features a modular architecture designed for extensibility. Experimental results demonstrate that the platform consistently and efficiently evaluates multiple models and accurately identifies performance clusters, validating its practical utility and effectiveness across both machine translation and OCR tasks.
📝 Abstract
Comparative evaluation of several systems is a recurrent task in research. It is a key step before deciding which system to use in our work or, once our research has been conducted, when demonstrating the potential of the resulting model. It is also the core task in the evaluation of competitive public challenges. Our proposed software (DEEP) automates both the execution and the scoring of machine translation and optical character recognition models, and it is easily extensible to other tasks. DEEP receives dockerized systems, runs them (extracting information during execution), and assesses their hypotheses against a set of references. With this approach, evaluators gain a better understanding of the performance of each model. Moreover, the software uses a clustering algorithm based on a statistical analysis of the significance of the results yielded by each model according to the evaluation metrics. As a result, evaluators can identify performance clusters among the pool of submissions and better understand the significance of the differences between them. Additionally, we offer a web-based visualization app to ensure that the results can be adequately understood and interpreted. Finally, we present an illustrative use case of DEEP.
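The significance-driven clustering described above can be sketched roughly as follows. This is a minimal illustration, not DEEP's actual implementation: it assumes per-sentence metric scores for each system, uses a paired bootstrap test for significance, and starts a new cluster whenever a system is significantly worse than the current cluster's leader. The function names, the choice of test, and the toy data are all assumptions for the sake of the example.

```python
import random
import statistics

def is_significantly_better(scores_a, scores_b, n_resamples=1000, alpha=0.05, seed=0):
    """Paired bootstrap test: does system A beat system B in at least
    (1 - alpha) of the resampled test sets? scores_* are per-sentence
    metric scores (higher is better), aligned on the same test set."""
    rng = random.Random(seed)
    n = len(scores_a)
    wins = 0
    for _ in range(n_resamples):
        idx = [rng.randrange(n) for _ in range(n)]  # resample sentences with replacement
        if sum(scores_a[i] for i in idx) > sum(scores_b[i] for i in idx):
            wins += 1
    return wins / n_resamples >= 1 - alpha

def cluster_by_significance(systems):
    """Rank systems by mean score, then start a new cluster whenever a
    system is significantly worse than the current cluster's leader.
    `systems` maps system name -> list of per-sentence scores."""
    ranked = sorted(systems, key=lambda name: statistics.mean(systems[name]),
                    reverse=True)
    clusters, current = [], [ranked[0]]
    for name in ranked[1:]:
        if is_significantly_better(systems[current[0]], systems[name]):
            clusters.append(current)  # close the current cluster
            current = [name]          # this system starts a new, weaker cluster
        else:
            current.append(name)      # difference not significant: same cluster
    clusters.append(current)
    return clusters

# Toy per-sentence scores: A and B are statistically indistinguishable
# (identical score multisets), while C is clearly worse on every sentence.
systems = {
    "A": [0.6, 0.4] * 50,
    "B": [0.4, 0.6] * 50,
    "C": [0.1, 0.1] * 50,
}
print(cluster_by_significance(systems))  # two clusters: {A, B} and {C}
```

The one-sided test and the "compare against the cluster leader" rule are simplifications; other designs (e.g. all-pairs testing with multiple-comparison correction) would change the grouping behavior.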