🤖 AI Summary
To address the trade-off between the accuracy degradation caused by global image downsampling—which discards fine-grained visual detail—and the communication overhead of transmitting full-resolution images in edge-cloud collaborative inference for vision-language models (VLMs), this paper proposes the first attention- and entropy-driven two-stage collaborative inference framework. On the server side, the VLM's internal visual attention maps are leveraged to localize salient regions, and a minimum-output-token-entropy criterion dynamically determines whether to request high-fidelity local image patches; the client then adaptively uploads only those patches. The method tightly integrates attention-guided region selection, entropy-based confidence estimation, global-local joint inference, and lightweight communication control. Extensive experiments across multiple VLM architectures demonstrate that the approach reduces communication bandwidth by 37%–62% while improving inference accuracy by 1.2–3.8 percentage points over baselines, effectively mitigating the accuracy loss induced by downsampling.
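The attention-guided region selection described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the server has already reduced the VLM's internal visual attention to a single (H, W) grid over image patches (e.g. by averaging cross-attention from output tokens), and it picks the bounding box covering the top-k most-attended cells. The function name `select_roi` and the `keep_ratio` parameter are hypothetical.

```python
import numpy as np

def select_roi(attn_map: np.ndarray, keep_ratio: float = 0.1) -> tuple:
    """Return (y0, x0, y1, x1) bounding the top-`keep_ratio` attended cells.

    `attn_map` is an (H, W) grid of visual attention weights; how this grid
    is obtained from the VLM's layers is an assumption, not specified here.
    """
    flat = attn_map.ravel()
    k = max(1, int(keep_ratio * flat.size))
    thresh = np.partition(flat, -k)[-k]        # k-th largest attention value
    ys, xs = np.where(attn_map >= thresh)      # cells in the salient region
    return int(ys.min()), int(xs.min()), int(ys.max()) + 1, int(xs.max()) + 1

attn = np.zeros((8, 8))
attn[2:4, 5:7] = 1.0                           # a salient 2x2 patch block
print(select_roi(attn, keep_ratio=4 / 64))     # → (2, 5, 4, 7)
```

The edge device would then map this patch-grid box back to pixel coordinates in the original image and crop the RoI at full resolution for upload.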
📝 Abstract
We propose a collaborative edge-to-server inference framework for vision-language models (VLMs) that reduces communication cost while maintaining inference accuracy. In typical deployments, visual data captured at edge devices (clients) is transmitted to the server for VLM inference. However, resizing the original image (the global image) to match the vision encoder's input resolution often discards fine-grained details, leading to accuracy degradation. To overcome this limitation, we design a two-stage framework. In the first stage, the server performs inference on the global image and identifies a region of interest (RoI) using the VLM's internal attention. The min-entropy of the output tokens is then computed as a confidence measure to determine whether retransmission is required. If the min-entropy exceeds a predefined threshold, the server requests the edge device to send a detail-preserving local image of the RoI. The server then refines its inference by jointly leveraging the global and local images. This selective retransmission strategy ensures that only essential visual content is transmitted. Experiments across multiple VLM architectures show that the proposed framework significantly reduces communication cost while maintaining inference accuracy.
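The min-entropy retransmission check can be sketched as below. This is a hedged reading, not the paper's exact definition: it assumes "min-entropy of the output tokens" means the minimum Shannon entropy across the per-token output distributions, so retransmission is triggered only when even the most confident output token is uncertain. The names `token_entropy`, `needs_retransmission`, and the threshold value are illustrative.

```python
import numpy as np

def token_entropy(probs: np.ndarray) -> float:
    """Shannon entropy (in bits) of one output token's distribution."""
    p = probs[probs > 0]
    return float(-(p * np.log2(p)).sum())

def needs_retransmission(token_probs, threshold: float = 1.0) -> bool:
    # Assumed criterion: take the minimum per-token entropy over the
    # stage-1 output; if even the most confident token exceeds the
    # threshold, ask the client for the high-fidelity RoI crop.
    return min(token_entropy(p) for p in token_probs) > threshold

confident = [np.array([0.9, 0.05, 0.05])]   # entropy ≈ 0.57 bits
uncertain = [np.array([0.4, 0.3, 0.3])]     # entropy ≈ 1.57 bits
print(needs_retransmission(confident))      # → False
print(needs_retransmission(uncertain))      # → True
```

When the check fires, the server issues the RoI request; otherwise the stage-1 answer is returned directly, which is what keeps the average upload cost low.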