Edge-Optimized Multimodal Learning for UAV Video Understanding via BLIP-2

📅 2026-01-13
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the challenge of achieving real-time video understanding with large vision-language models on resource-constrained drone edge devices. To this end, the authors propose a lightweight multimodal task platform that supports multitask video understanding without requiring fine-tuning. The approach integrates BLIP-2, YOLO-World, and YOLOv8-Seg, and introduces three key innovations: a content-aware keyframe sampling mechanism, a deep fusion strategy leveraging YOLO-based perception outputs, and a unified prompt optimization framework for multitask reasoning. Experimental results demonstrate that the proposed system substantially reduces computational overhead while maintaining contextually coherent and accurate predictions, enabling efficient and precise video understanding directly on edge devices for drone applications.
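To make the keyframe sampling idea concrete, below is a minimal Python sketch of content-aware keyframe selection via K-Means clustering, in the spirit of the summary above. The color-histogram descriptor and all function names are illustrative assumptions, not the authors' implementation; any cheap per-frame feature could stand in for the histogram.

```python
# Hedged sketch: content-aware keyframe sampling with K-Means.
# Assumes frames arrive as HxWx3 uint8 numpy arrays.
import numpy as np
from sklearn.cluster import KMeans

def frame_feature(frame: np.ndarray, bins: int = 32) -> np.ndarray:
    """Cheap per-frame descriptor: normalized per-channel color histograms."""
    hists = [np.histogram(frame[..., c], bins=bins, range=(0, 255))[0]
             for c in range(frame.shape[-1])]
    feat = np.concatenate(hists).astype(np.float32)
    return feat / (feat.sum() + 1e-8)

def sample_keyframes(frames: list[np.ndarray], k: int = 8) -> list[int]:
    """Cluster frame descriptors, keep the frame nearest each centroid,
    and return the indices in temporal order so downstream visual
    features can be concatenated along the time axis."""
    feats = np.stack([frame_feature(f) for f in frames])
    k = min(k, len(frames))
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(feats)
    keyframes = []
    for c in range(k):
        members = np.where(km.labels_ == c)[0]
        dists = np.linalg.norm(feats[members] - km.cluster_centers_[c], axis=1)
        keyframes.append(int(members[np.argmin(dists)]))
    return sorted(keyframes)
```

Picking the member closest to each centroid (rather than the centroid itself) guarantees that every selected "frame" is a real frame from the video, while sorting restores temporal order for the concatenation step.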

📝 Abstract
The demand for real-time visual understanding and interaction in complex scenarios is increasingly critical for unmanned aerial vehicles (UAVs). However, a significant challenge arises from the tension between the high computational cost of large vision-language models and the limited computing resources available on UAV edge devices. To address this challenge, this paper proposes a lightweight multimodal task platform based on BLIP-2, integrated with the YOLO-World and YOLOv8-Seg models. This integration extends the multi-task capabilities of BLIP-2 to UAV applications with minimal adaptation and without task-specific fine-tuning on drone data. First, the deep integration of BLIP-2 with the YOLO models allows BLIP-2 to leverage YOLO's precise perceptual results for fundamental tasks such as object detection and instance segmentation, thereby supporting deeper visual understanding and reasoning. Second, a content-aware keyframe sampling mechanism based on K-Means clustering is designed, combining intelligent frame selection with temporal feature concatenation. This equips the lightweight BLIP-2 architecture to handle video-level interactive tasks effectively. Third, a unified prompt optimization scheme for multi-task adaptation is implemented. This scheme injects structured event logs from the YOLO models as contextual information into BLIP-2's input. Combined with output constraints that filter out technical details, this approach guides the model to generate accurate and contextually relevant outputs across tasks.
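The prompt optimization scheme can be illustrated with a short sketch: structured detection "event logs" from the YOLO models are serialized and prepended to BLIP-2's text prompt, together with an output constraint that suppresses technical details. The Event fields, helper names, and prompt wording below are assumptions for illustration; the paper's exact prompt format is not reproduced here.

```python
# Hedged sketch: injecting YOLO event logs into a BLIP-2 text prompt.
from dataclasses import dataclass

@dataclass
class Event:
    frame_idx: int     # index of the sampled keyframe
    label: str         # YOLO-World / YOLOv8-Seg class name
    confidence: float  # detector confidence score
    bbox: tuple        # (x1, y1, x2, y2) in pixels

def build_prompt(events: list[Event], question: str) -> str:
    """Serialize detections as a structured log, then append the user's
    question and an output constraint that filters technical details."""
    log_lines = [
        f"frame {e.frame_idx}: {e.label} "
        f"(conf {e.confidence:.2f}, box {e.bbox})"
        for e in events
    ]
    context = "\n".join(log_lines)
    return (
        "You are analyzing aerial drone footage.\n"
        f"Detected events:\n{context}\n\n"
        f"Question: {question}\n"
        "Answer in plain language describing the scene; do not mention "
        "bounding boxes, confidence scores, or model names.\n"
        "Answer:"
    )

# Usage with hypothetical detections:
events = [Event(0, "person", 0.91, (120, 80, 160, 200)),
          Event(3, "car", 0.87, (300, 150, 420, 260))]
print(build_prompt(events, "What is happening in this video?"))
```

The constraint sentence is what lets the frozen language model consume precise perception outputs while still answering in natural, user-facing language.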
Problem

Research questions and friction points this paper is trying to address.

UAV
edge computing
vision-language models
real-time video understanding
multimodal learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Edge-Optimized Multimodal Learning
BLIP-2
YOLO Integration
Key Frame Sampling
Prompt Optimization
👥 Authors
Yizhan Feng
UR-LIST3N, University of Technology of Troyes, Troyes, France
H. Snoussi
UR-LIST3N, University of Technology of Troyes, Troyes, France
Jing Teng
Institute of Artificial Intelligence, North China Electric Power University, Beijing, China
Jian Liu
Institute of Artificial Intelligence, North China Electric Power University, Beijing, China
Yuyang Wang
Institute of Artificial Intelligence, North China Electric Power University, Beijing, China
A. Cherouat
UR-GAMMA3, University of Technology of Troyes, Troyes, France
Tian Wang
Beijing Normal University
Edge Computing · Internet of Things · Sensor Cloud