🤖 AI Summary
Vision Transformers (ViTs) suffer performance degradation in image aesthetic assessment because inputs must be resized to small, fixed resolutions, which destroys compositional structure, fine-grained detail, and the original aspect ratio. To address this, the authors propose Charm, a tokenization method that simultaneously preserves composition, high-resolution detail, native aspect ratio, and multi-scale information, without cropping or geometric distortion, while remaining fully compatible with pretrained ViTs and their learned positional embeddings. Charm keeps selected regions at high resolution while downscaling others, producing short fixed-size token sequences that still carry the essential information. Evaluated across multiple image aesthetic and quality assessment benchmarks, Charm yields up to 8.1% improvement over strong baselines and demonstrates improved generalization, even with a lightweight ViT backbone.
📝 Abstract
The capacity of Vision Transformers (ViTs) to handle variable-sized inputs is often constrained by computational complexity and batch processing limitations. Consequently, ViTs are typically trained on small, fixed-size images obtained through downscaling or cropping. While these methods reduce computational burden, they cause significant information loss, negatively affecting tasks like image aesthetic assessment. We introduce Charm, a novel tokenization approach that simultaneously preserves Composition, High-resolution, Aspect Ratio, and Multi-scale information. Charm prioritizes high-resolution details in specific regions while downscaling others, enabling shorter fixed-size input sequences for ViTs while retaining essential information. Charm is designed to be compatible with pre-trained ViTs and their learned positional embeddings. By providing multi-scale input and introducing variety into the input tokens, Charm improves ViT performance and generalizability for image aesthetic assessment. We avoid cropping or changing the aspect ratio to further preserve information. Extensive experiments demonstrate significant performance improvements (up to 8.1%) on various image aesthetic and quality assessment datasets using a lightweight ViT backbone. Code and pre-trained models are available at https://github.com/FBehrad/Charm.
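The core idea, selectively keeping some patches at high resolution while downscaling the rest into a fixed-length token sequence, can be illustrated with a minimal NumPy sketch. This is not the official Charm implementation (see the linked repository for that); the patch size, the variance-based importance score, and the 2x average-pool downscaling are all illustrative assumptions.

```python
import numpy as np

def charm_like_tokenize(image, patch=32, keep_frac=0.25):
    """Illustrative sketch (NOT the official Charm algorithm):
    split an image into non-overlapping patches, keep the most
    'detailed' patches at full resolution, and 2x-downscale the rest,
    so the sequence length stays fixed while salient regions retain
    high-frequency information."""
    H, W, C = image.shape
    gh, gw = H // patch, W // patch
    # Non-overlapping patch grid -> (num_patches, patch, patch, C).
    patches = (image[:gh * patch, :gw * patch]
               .reshape(gh, patch, gw, patch, C)
               .transpose(0, 2, 1, 3, 4)
               .reshape(gh * gw, patch, patch, C))
    # Proxy "importance": per-patch pixel variance (a stand-in for
    # whatever selection criterion the real method uses).
    scores = patches.reshape(len(patches), -1).var(axis=1)
    keep = set(np.argsort(scores)[::-1][: int(len(patches) * keep_frac)].tolist())
    tokens = []
    for i, p in enumerate(patches):
        if i in keep:
            tokens.append(p.reshape(-1))  # full-resolution token
        else:
            # 2x average pooling, then upsample back so every token
            # has the same dimensionality (low-detail content only).
            low = p.reshape(patch // 2, 2, patch // 2, 2, C).mean(axis=(1, 3))
            tokens.append(np.repeat(np.repeat(low, 2, 0), 2, 1).reshape(-1))
    return np.stack(tokens)  # fixed-length sequence: one token per patch
```

The fixed output dimensionality is what keeps the sketch compatible with a standard ViT patch-embedding layer; the real method additionally handles arbitrary aspect ratios and multi-scale fusion.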