🤖 AI Summary
This work proposes Activation Prompt (AP), a novel prompting mechanism that extends beyond the input space by perturbing intermediate activation maps within deep neural networks, addressing the performance and efficiency limitations of conventional visual prompting (VP) methods, which are confined to input-level modifications. Operating in the activation space makes transfer adaptation more flexible and effective. The study further uncovers architecture-specific preferences for prompt placement and establishes a theoretical connection between AP and normalization-based tuning strategies. Extensive experiments across diverse architectures, including CNNs and Vision Transformers, and 29 benchmark datasets demonstrate that AP consistently outperforms existing parameter-efficient fine-tuning approaches in accuracy and in efficiency, measured in training time, parameter count, memory footprint, and throughput.
📝 Abstract
Visual prompting (VP) has emerged as a popular method for repurposing pretrained vision models for downstream tasks. Unlike conventional model fine-tuning techniques, VP introduces a universal perturbation directly into the input data to enable task-specific adaptation without modifying model parameters. However, a noticeable performance gap remains between VP and conventional fine-tuning methods, pointing to an unexplored space, in both theory and practice, for understanding and advancing input-level VP to close that gap. Toward this end, we introduce a generalized concept, termed activation prompt (AP), which extends input-level VP by allowing universal perturbations to be applied to activation maps within the intermediate layers of the model. Using AP as an analytical tool to revisit VP, we expose the intrinsic limitations of input-level prompting in both performance and efficiency, revealing why it falls short of AP, which exhibits a model-dependent layer preference. We show that AP is closely related to normalization tuning in convolutional neural networks and vision transformers, although each model type has a distinct layer preference for prompting, and we theoretically explain this preference by analyzing global features across layers. Through extensive experiments on 29 datasets and various model architectures, we provide a comprehensive performance analysis of AP against VP and parameter-efficient fine-tuning baselines. Our results demonstrate AP's superiority in both accuracy and efficiency, considering factors such as time, parameters, memory usage, and throughput.
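The core mechanism contrasted above, a learnable universal perturbation added either to the input (VP) or to an intermediate activation map (AP) while the backbone stays frozen, can be sketched in a few lines. The toy two-layer network, tensor shapes, and prompt names below are illustrative assumptions for exposition, not the paper's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen "pretrained" weights of a hypothetical toy 2-layer MLP.
W1 = rng.standard_normal((8, 4))  # layer 1: 4 -> 8
W2 = rng.standard_normal((3, 8))  # layer 2: 8 -> 3

def forward(x, input_prompt=None, activation_prompt=None):
    """Forward pass with an optional VP (input-level) or AP (activation-level) prompt."""
    if input_prompt is not None:       # VP: perturb the raw input
        x = x + input_prompt
    h = np.maximum(W1 @ x, 0.0)        # intermediate activation map (ReLU)
    if activation_prompt is not None:  # AP: perturb the intermediate activation
        h = h + activation_prompt
    return W2 @ h

x = rng.standard_normal(4)
delta_in = np.zeros(4)    # learnable VP prompt, same shape as the input
delta_act = np.zeros(8)   # learnable AP prompt, same shape as the activation

y_vp = forward(x, input_prompt=delta_in)
y_ap = forward(x, activation_prompt=delta_act)
# With zero-initialized prompts, both paths reproduce the frozen model's output.
assert np.allclose(y_vp, y_ap)
```

In an actual adaptation run, the prompt would be the only trainable tensor, optimized by backpropagating the task loss through the frozen weights; in a framework such as PyTorch, AP can be realized by adding the prompt inside a forward hook registered on the chosen layer.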