🤖 AI Summary
This work addresses the challenges of large model size and slow inference in Visually-Rich Document (VRD) understanding. We propose the first lightweight knowledge distillation (KD) framework tailored for Document Layout Analysis (DLA) and Document Image Classification (DIC). Methodologically, we systematically investigate cross-architecture (ResNet/ViT/DiT) and multi-capacity backbone distillation, comparing tuned vanilla KD, MSE, and SimKD losses with suitable projection heads across both response- and feature-based distillation. Notably, we explicitly evaluate covariate shift in Document Understanding (DU) tasks and assess robustness via zero-shot layout-aware DocVQA. Experiments demonstrate that well-chosen KD strategies let distilled lightweight students consistently outperform supervised student training, while DLA retains a large mAP knowledge gap that translates unpredictably to downstream robustness in zero-shot DocVQA.
📝 Abstract
This work explores knowledge distillation (KD) for visually-rich document (VRD) applications such as document layout analysis (DLA) and document image classification (DIC). While VRD research depends on increasingly sophisticated and cumbersome models, the field has neglected to study efficiency via model compression. Here, we design a KD experimentation methodology for leaner, performant models on document understanding (DU) tasks that are integral to larger task pipelines. We carefully select KD strategies (response-based, feature-based) for distilling knowledge to and from backbones with different architectures (ResNet, ViT, DiT) and capacities (base, small, tiny). We study what affects the teacher-student knowledge gap and find that some methods (tuned vanilla KD, MSE, SimKD with an apt projector) can consistently outperform supervised student training. Furthermore, we design downstream task setups to evaluate covariate shift and the robustness of distilled DLA models on zero-shot layout-aware document visual question answering (DocVQA). DLA-KD experiments result in a large mAP knowledge gap, which translates unpredictably to downstream robustness, accentuating the need to further explore how to efficiently obtain more semantic document layout awareness.
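To make the two KD families named above concrete, here is a minimal NumPy sketch of the standard losses they refer to: temperature-scaled vanilla KD (response-based, Hinton-style KL on softened logits) and feature-based distillation via MSE after projecting student features into the teacher's dimension. This is an illustrative reconstruction of the generic techniques, not the authors' implementation; the function names and the linear projector are hypothetical.

```python
import numpy as np

def softmax(z, T=1.0):
    # Numerically stable temperature-scaled softmax.
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def vanilla_kd_loss(student_logits, teacher_logits, T=2.0):
    # Response-based KD: KL(teacher || student) on logits softened by
    # temperature T, scaled by T^2 so gradients keep their magnitude.
    p_t = softmax(teacher_logits, T)
    log_p_s = np.log(softmax(student_logits, T) + 1e-12)
    log_p_t = np.log(p_t + 1e-12)
    return (T ** 2) * np.mean(np.sum(p_t * (log_p_t - log_p_s), axis=-1))

def feature_mse_loss(student_feat, teacher_feat, proj):
    # Feature-based KD (SimKD-style matching): map student features into
    # the teacher's feature space with a projector, then take the MSE.
    return np.mean((student_feat @ proj - teacher_feat) ** 2)
```

In practice the projector is needed whenever the student backbone (e.g. a tiny ViT) has a smaller hidden dimension than the teacher; with matched dimensions it can be the identity.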