Pixels Under Pressure: Exploring Fine-Tuning Paradigms for Foundation Models in High-Resolution Medical Imaging

📅 2025-08-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the challenge of fine-tuning diffusion foundation models for high-resolution (512×512) medical image generation. It systematically compares full-parameter fine-tuning against several parameter-efficient fine-tuning (PEFT) methods, providing the first quantitative evaluation at this scale of medical imagery of their impact on generation quality (FID, Vendi score), semantic fidelity, and downstream utility. Experiments demonstrate that certain PEFT approaches significantly improve both realism and diversity under data-limited conditions: classifiers trained exclusively on synthetic images achieve up to a 4.2% accuracy gain when evaluated on real-data classification. The core contribution is uncovering the mapping between fine-tuning paradigms and multidimensional generation-quality metrics in high-resolution medical imaging, and empirically validating the tangible downstream performance gains enabled by high-fidelity synthetic data.

📝 Abstract
Advancements in diffusion-based foundation models have improved text-to-image generation, yet most efforts have been limited to low-resolution settings. As high-resolution image synthesis becomes increasingly essential for various applications, particularly in medical imaging, fine-tuning emerges as a crucial mechanism for adapting these powerful pre-trained models to task-specific requirements and data distributions. In this work, we present a systematic study examining the impact of various fine-tuning techniques on image generation quality when scaling to a high resolution of 512×512 pixels. We benchmark a diverse set of fine-tuning methods, including full fine-tuning strategies and parameter-efficient fine-tuning (PEFT). We dissect how different fine-tuning methods influence key quality metrics, including Fréchet Inception Distance (FID), Vendi score, and prompt-image alignment. We also evaluate the utility of the generated images in a downstream classification task under data-scarce conditions, demonstrating that specific fine-tuning strategies improve both generation fidelity and downstream performance when synthetic images are used for classifier training and evaluation is performed on real images. Our code is available on the project website: https://tehraninasab.github.io/PixelUPressure/.
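The two generation-quality metrics named in the abstract, FID and Vendi score, are both computed from image feature embeddings. As a minimal sketch of what they measure (plain NumPy/SciPy over pre-extracted feature vectors; the paper would extract these features with a pretrained network such as Inception, which is omitted here):

```python
import numpy as np
from scipy.linalg import sqrtm

def fid(mu1, sigma1, mu2, sigma2):
    """Fréchet distance between Gaussians fitted to real vs. synthetic
    feature sets: ||mu1 - mu2||^2 + Tr(S1 + S2 - 2 * (S1 S2)^(1/2))."""
    diff = mu1 - mu2
    covmean = sqrtm(sigma1 @ sigma2)
    if np.iscomplexobj(covmean):       # discard numerical imaginary parts
        covmean = covmean.real
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2 * covmean))

def vendi_score(X):
    """Vendi score of n feature rows: exp of the Shannon entropy of the
    eigenvalues of the (cosine) similarity matrix divided by n."""
    n = X.shape[0]
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    K = Xn @ Xn.T / n
    lam = np.linalg.eigvalsh(K)
    lam = lam[lam > 1e-12]             # drop zero/negative numerical noise
    return float(np.exp(-np.sum(lam * np.log(lam))))
```

Lower FID indicates the synthetic feature distribution is closer to the real one (realism); the Vendi score can be read as an effective number of distinct samples, so it ranges from 1 (all images identical) to n (all images mutually dissimilar), capturing the diversity axis the study also evaluates.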
Problem

Research questions and friction points this paper is trying to address.

Fine-tuning foundation models for high-resolution medical imaging
Evaluating fine-tuning techniques on 512x512 image generation quality
Assessing synthetic image utility in data-scarce classification tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fine-tuning diffusion models for high-resolution medical imaging
Benchmarking full and parameter-efficient fine-tuning methods
Evaluating synthetic images in downstream classification tasks
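The abstract contrasts full fine-tuning with PEFT but does not enumerate the specific PEFT methods benchmarked; low-rank adaptation (LoRA) is a representative example of the family. A minimal sketch of the core idea, with all shapes and values hypothetical: the pretrained weight stays frozen while a small low-rank update is trained.

```python
import numpy as np

rng = np.random.default_rng(0)

d, r, alpha = 64, 4, 8                   # hidden size, LoRA rank, scaling
W = rng.normal(size=(d, d))              # frozen pretrained weight
A = rng.normal(size=(r, d)) * 0.01       # trainable down-projection
B = np.zeros((d, r))                     # trainable up-projection, zero-init

def lora_forward(x):
    # Effective weight is W + (alpha / r) * B @ A, but the update is
    # applied as two small matmuls so W itself is never modified.
    return W @ x + (alpha / r) * (B @ (A @ x))

# Zero-initializing B means the adapted model starts out exactly equal
# to the base model; training only touches the 2*d*r adapter parameters.
x = rng.normal(size=d)
assert np.allclose(lora_forward(x), W @ x)
```

Here only 2·d·r = 512 parameters are trainable versus d² = 4096 for full fine-tuning of this single layer, which is why such methods are attractive in the data-limited medical settings the paper studies.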