🤖 AI Summary
Existing attention mechanisms struggle to model multimodal, multiscale data in a unified way, because hierarchy and modality are typically incorporated through ad hoc heuristic designs that lack theoretical grounding.
Method: This paper proposes a hierarchical self-attention framework grounded in entropy minimization: the optimal attention form is derived from first principles and intrinsically incorporates hierarchical geometric priors. Multiscale structure is modeled through an explicit mathematical construct, and the resulting hierarchical attention is computed efficiently via dynamic programming, ensuring full compatibility with standard Transformer architectures.
Contribution/Results: The method integrates seamlessly into pretrained models, supporting both zero-shot transfer and end-to-end fine-tuning. Experiments demonstrate significant improvements over state-of-the-art heuristic approaches on multimodal benchmarks, with superior inference efficiency and stronger generalization, all without architectural modifications or additional parameters.
📝 Abstract
Transformers and their attention mechanism have been revolutionary in the field of machine learning. While originally proposed for language data, they quickly found their way to other data modalities with diverse signal geometries, such as images, video, and graphs. Despite this versatility, generalizing the attention mechanism to scenarios where data is presented at different scales, potentially from different modalities, is not straightforward. Attempts to incorporate hierarchy and multi-modality within transformers are largely based on ad hoc heuristics, which do not generalize seamlessly to similar problems with potentially different structures. To address this problem, in this paper we take a fundamentally different approach: we first propose a mathematical construct to represent multi-modal, multi-scale data. We then mathematically derive the neural attention mechanics for the proposed construct from the first principle of entropy minimization. We show that the derived formulation is optimal in the sense of being closest to the standard Softmax attention while incorporating the inductive biases originating from the hierarchical/geometric information of the problem. We further propose an efficient algorithm based on dynamic programming to compute the derived attention mechanism. By incorporating it within transformers, we show that the proposed hierarchical attention mechanism can not only be employed to train transformer models in hierarchical/multi-modal settings from scratch, but can also inject hierarchical information into classical, pre-trained transformer models after training, resulting in more efficient models in a zero-shot manner.
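To make the core idea concrete, the sketch below shows one simple way to stay "close to standard Softmax attention" while injecting a hierarchical prior: an additive log-space bias on the attention scores that penalizes pairs of tokens from different hierarchy groups (e.g. different modalities or scales). This is an illustrative toy, not the paper's derived formulation; the `groups` assignment, the single penalty `gamma`, and the flat two-level hierarchy are all assumptions for the example, and the paper's dynamic-programming computation is not reproduced here.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def hierarchical_attention(Q, K, V, groups, gamma=1.0):
    """Softmax attention with an additive hierarchical bias (illustrative).

    Q, K, V : (n, d) arrays of queries, keys, values.
    groups  : (n,) array; groups[i] is the hierarchy node of token i
              (a hypothetical flat grouping, e.g. one group per modality).
    gamma   : penalty subtracted from scores of cross-group pairs.
    """
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)              # standard scaled dot-product scores
    same = groups[:, None] == groups[None, :]  # True where tokens share a group
    bias = np.where(same, 0.0, -gamma)         # hierarchical prior as a log-space bias
    A = softmax(scores + bias, axis=-1)        # gamma=0 recovers vanilla attention
    return A @ V, A
```

With `gamma=0.0` the bias vanishes and the output coincides with standard Softmax attention, which mirrors the abstract's claim that the derived mechanism stays as close as possible to the classical form while the hierarchical information only reweights attention toward structurally related tokens.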