🤖 AI Summary
Existing image semantic communication systems neglect regional importance disparities, leading to degraded reconstruction quality of critical visual content. To address this, we propose a generative semantic communication framework that first identifies key and non-key image regions. Key regions undergo image-oriented, fine-grained semantic encoding, while non-key regions are modeled as text and compressed—enabling semantic-aware heterogeneous representation. Our approach innovatively incorporates a region-importance-aware mechanism and integrates generative AI, image-text joint modeling, model quantization, and LoRA-based fine-tuning to support lightweight deployment. Experimental results demonstrate significant improvements over baselines: +12.3% SSIM (semantic fidelity) and +9.7% LPIPS (perceptual quality), achieving both efficient transmission and high-fidelity reconstruction.
📝 Abstract
The rapid development of generative artificial intelligence (AI) has introduced significant opportunities for enhancing the efficiency and accuracy of image transmission within semantic communication systems. Despite these advancements, existing methodologies often neglect the difference in importance of different regions of the image, potentially compromising the reconstruction quality of visually critical content. To address this issue, we introduce an innovative generative semantic communication system that refines semantic granularity by segmenting images into key and non-key regions. Key regions, which contain essential visual information, are processed using an image oriented semantic encoder, while non-key regions are efficiently compressed through an image-to-text modeling approach. Additionally, to mitigate the substantial storage and computational demands posed by large AI models, the proposed system employs a lightweight deployment strategy incorporating model quantization and low-rank adaptation fine-tuning techniques, significantly boosting resource utilization without sacrificing performance. Simulation results demonstrate that the proposed system outperforms traditional methods in terms of both semantic fidelity and visual quality, thereby affirming its effectiveness for image transmission tasks.