Bodhi VLM: Privacy-Alignment Modeling for Hierarchical Visual Representations in Vision Backbones and VLM Encoders via Bottom-Up and Top-Down Feature Search

📅 2026-03-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of achieving interpretable and generalizable privacy budget alignment across hierarchical representations in both vision backbones and vision-language models while injecting noise for privacy preservation. The authors propose the Bodhi VLM framework, which introduces a novel learnable and interpretable privacy alignment mechanism. By integrating bottom-up and top-down strategies to localize sensitive regions, and leveraging NCP and MDAV clustering to identify cross-layer sensitive concepts, the framework generates stable alignment signals through an Expectation-Maximization Privacy Assessment (EMPA) module. Unlike conventional post-hoc auditing approaches, Bodhi VLM demonstrates consistent effectiveness across diverse architectures—including YOLO, DETR, CLIP, and LLaVA—and significantly outperforms baseline methods based on Chi-square, Kullback–Leibler divergence, and Maximum Mean Discrepancy.
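The discrepancy baselines mentioned above (Chi-square, Kullback–Leibler divergence, Maximum Mean Discrepancy) all reduce to comparing an observed perturbation sample against a reference noise distribution. A minimal sketch of what such baselines compute is below; the function names, bin counts, kernel bandwidth, and the "under-noised" scenario are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def histogram_chi2(p_samples, q_samples, bins=30):
    """Chi-square statistic between binned sample counts over a shared range."""
    lo = min(p_samples.min(), q_samples.min())
    hi = max(p_samples.max(), q_samples.max())
    p, _ = np.histogram(p_samples, bins=bins, range=(lo, hi))
    q, _ = np.histogram(q_samples, bins=bins, range=(lo, hi))
    q = q + 1e-9  # avoid division by empty bins
    return float(np.sum((p - q) ** 2 / q))

def histogram_kl(p_samples, q_samples, bins=30):
    """KL divergence between binned, normalized sample distributions."""
    lo = min(p_samples.min(), q_samples.min())
    hi = max(p_samples.max(), q_samples.max())
    p, _ = np.histogram(p_samples, bins=bins, range=(lo, hi))
    q, _ = np.histogram(q_samples, bins=bins, range=(lo, hi))
    p = p / p.sum() + 1e-9
    q = q / q.sum() + 1e-9
    return float(np.sum(p * np.log(p / q)))

def mmd_rbf(x, y, gamma=0.1):
    """Biased MMD^2 estimate with an RBF kernel (diagonal terms included)."""
    def k(a, b):
        d = a[:, None] - b[None, :]
        return np.exp(-gamma * d ** 2)
    return float(k(x, x).mean() + k(y, y).mean() - 2.0 * k(x, y).mean())

# Observed perturbations vs. a Laplace reference with scale c/eps;
# "under_noised" uses half the declared scale to simulate budget violation.
c, eps = 1.0, 0.5
observed = rng.laplace(0.0, c / eps, size=2000)
reference = rng.laplace(0.0, c / eps, size=2000)
under_noised = rng.laplace(0.0, 0.5 * c / eps, size=2000)

print(histogram_kl(observed, reference))      # small: matching distributions
print(histogram_kl(under_noised, reference))  # larger: budget mismatch
```

Each statistic grows as the observed perturbation distribution drifts from the declared reference, which is what makes them usable as (non-interpretable) alignment baselines.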

📝 Abstract
Learning systems that preserve privacy often inject noise into hierarchical visual representations; a central challenge is to model how such perturbations align with a declared privacy budget in a way that is interpretable and applicable across vision backbones and vision-language models (VLMs). We propose Bodhi VLM, a privacy-alignment modeling framework for hierarchical neural representations: it (1) links sensitive concepts to layer-wise grouping via NCP and MDAV-based clustering; (2) locates sensitive feature regions using bottom-up (BUA) and top-down (TDA) strategies over multi-scale representations (e.g., feature pyramids or vision-encoder layers); and (3) uses an Expectation-Maximization Privacy Assessment (EMPA) module to produce an interpretable budget-alignment signal by comparing the fitted sensitive-feature distribution to an evaluator-specified reference (e.g., Laplace or Gaussian with scale c/ε). The output is reference-relative and is not a formal differential-privacy estimator. We formalize BUA/TDA over hierarchical feature structures and validate the framework on object detectors (YOLO, PPDPTS, DETR) and on the visual encoders of VLMs (CLIP, LLaVA, BLIP). BUA and TDA yield comparable deviation trends, and EMPA provides a stable alignment signal under the reported setups. We compare against generic discrepancy baselines (Chi-square, KL divergence, MMD) and task-relevant baselines (MomentReg, NoiseMLE, Wass-1). Results are reported as mean±std over multiple seeds, with confidence intervals in the supplementary materials. This work contributes a learnable, interpretable modeling perspective for privacy-aligned hierarchical representations, rather than a post hoc audit only. Source code: https://github.com/mabo1215/bodhi-vlm.git

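The abstract describes EMPA's output as reference-relative: the fitted sensitive-feature noise distribution is compared against an evaluator-specified reference such as a Laplace with scale c/ε. A minimal sketch of that idea, reduced to a single scale parameter, is below; the function names and the relative-deviation signal are illustrative assumptions, not the paper's EMPA module, which operates over cross-layer concept clusters.

```python
import numpy as np

def fit_laplace_scale(noise):
    """MLE for a Laplace scale: median location, mean absolute deviation."""
    mu = np.median(noise)
    return float(np.mean(np.abs(noise - mu)))

def alignment_deviation(noise, c, eps):
    """Relative deviation of the fitted noise scale from the declared c/eps.

    0 means the observed perturbations match the declared budget;
    larger values indicate under- or over-noising relative to it.
    """
    declared = c / eps
    fitted = fit_laplace_scale(np.asarray(noise))
    return abs(fitted - declared) / declared

rng = np.random.default_rng(1)
c, eps = 1.0, 0.5
well_aligned = rng.laplace(0.0, c / eps, size=5000)
under_noised = rng.laplace(0.0, 0.5 * c / eps, size=5000)  # half the declared scale

print(alignment_deviation(well_aligned, c, eps))  # near 0
print(alignment_deviation(under_noised, c, eps))  # near 0.5
```

As the abstract stresses, such a signal only says how far the observed noise sits from the declared reference; it is not a differential-privacy guarantee.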
Problem

Research questions and friction points this paper is trying to address.

privacy-alignment
hierarchical visual representations
vision-language models
privacy budget
feature perturbation
Innovation

Methods, ideas, or system contributions that make the work stand out.

privacy-alignment modeling
hierarchical visual representations
bottom-up and top-down feature search
EMPA
vision-language models