LIFT: Latent Implicit Functions for Task- and Data-Agnostic Encoding

📅 2025-03-19
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Existing implicit neural representations (INRs) typically rely on global latent vectors, resulting in poor generalization and high computational overhead. To address this, we propose LIFTβ€”a novel framework that, for the first time, integrates meta-learning-driven multi-scale local implicit functions with a hierarchical latent generator, enabling unified, cross-modal, and task-agnostic signal encoding. We further introduce ReLIFT, a variant incorporating residual connections and expressive sinusoidal frequency encoding to jointly accelerate convergence and enhance model capacity. Extensive experiments demonstrate state-of-the-art performance on generative modeling and classification tasks, with significantly reduced computational cost; ReLIFT also exhibits superior efficiency and robustness in single-task signal representation and inverse problems. Key innovations include: (i) parallel local implicit modeling, (ii) hierarchical disentanglement of latent spaces, and (iii) a residual-enhanced, frequency-aware architecture.
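
To make the ReLIFT variant concrete, below is a minimal PyTorch sketch of the two ingredients the summary names: a sinusoidal frequency encoding and a residual MLP block. All module names, layer widths, and the number of frequencies are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn


class FrequencyEncoding(nn.Module):
    """Maps coordinates x to [sin(2^k * pi * x), cos(2^k * pi * x)] features.

    A generic sinusoidal encoding; the paper's exact formulation may differ.
    """

    def __init__(self, in_dim: int, num_frequencies: int = 6):
        super().__init__()
        self.register_buffer("freqs", (2.0 ** torch.arange(num_frequencies)) * torch.pi)
        self.out_dim = in_dim * num_frequencies * 2

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, in_dim); project each coordinate onto every frequency.
        proj = x.unsqueeze(-1) * self.freqs  # (batch, in_dim, F)
        return torch.cat([proj.sin(), proj.cos()], dim=-1).flatten(start_dim=1)


class ResidualBlock(nn.Module):
    """Two linear layers wrapped in a skip connection."""

    def __init__(self, dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # The identity path eases optimization (faster convergence);
        # the MLP path adds capacity, matching the summary's stated goal.
        return h + self.net(h)


# Hypothetical usage: encode 2-D pixel coordinates, predict RGB.
enc = FrequencyEncoding(in_dim=2)
model = nn.Sequential(nn.Linear(enc.out_dim, 128),
                      ResidualBlock(128), ResidualBlock(128),
                      nn.Linear(128, 3))
rgb = model(enc(torch.rand(1024, 2)))  # (1024, 3)
```

Stacking such blocks after the encoding is one plausible way a network could gain depth and high-frequency expressivity without the optimization difficulties of a plain coordinate MLP.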

πŸ“ Abstract
Implicit Neural Representations (INRs) are proving to be a powerful paradigm in unifying task modeling across diverse data domains, offering key advantages such as memory efficiency and resolution independence. Conventional deep learning models are typically modality-dependent, often requiring custom architectures and objectives for different types of signals. However, existing INR frameworks frequently rely on global latent vectors or exhibit computational inefficiencies that limit their broader applicability. We introduce LIFT, a novel, high-performance framework that addresses these challenges by capturing multiscale information through meta-learning. LIFT leverages multiple parallel localized implicit functions alongside a hierarchical latent generator to produce unified latent representations that span local, intermediate, and global features. This architecture facilitates smooth transitions across local regions, enhancing expressivity while maintaining inference efficiency. Additionally, we introduce ReLIFT, an enhanced variant of LIFT that incorporates residual connections and expressive frequency encodings. With this straightforward approach, ReLIFT effectively addresses the convergence-capacity gap found in comparable methods, providing an efficient yet powerful solution to improve capacity and speed up convergence. Empirical results show that LIFT achieves state-of-the-art (SOTA) performance in generative modeling and classification tasks, with notable reductions in computational costs. Moreover, in single-task settings, the streamlined ReLIFT architecture proves effective in signal representations and inverse problem tasks.
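
As a rough illustration of the "multiple parallel localized implicit functions" described above, the sketch below splits a 1-D coordinate domain into overlapping regions, conditions a shared decoder on one latent per region, and blends the per-region outputs with smooth softmax weights so neighboring regions transition smoothly. The blending scheme, region layout, and all hyperparameters are assumptions for illustration; the paper's actual design may differ.

```python
import torch
import torch.nn as nn


class LocalImplicitField(nn.Module):
    """Parallel localized implicit functions with smooth blending (sketch)."""

    def __init__(self, num_regions: int = 8, latent_dim: int = 32, hidden: int = 64):
        super().__init__()
        # One latent per local region (learned here; could come from a generator).
        self.latents = nn.Parameter(torch.randn(num_regions, latent_dim) * 0.01)
        # Region centers evenly spaced on [0, 1]; bandwidth controls overlap.
        self.register_buffer("centers", torch.linspace(0.0, 1.0, num_regions))
        self.bandwidth = 1.0 / num_regions
        # A shared decoder conditioned on (coordinate, local latent).
        self.decoder = nn.Sequential(
            nn.Linear(1 + latent_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1) coordinates in [0, 1].
        # Smooth weights: softmax over negative squared distances to centers,
        # so nearby regions dominate but neighbors still contribute.
        dist2 = (x - self.centers) ** 2                        # (batch, R)
        weights = torch.softmax(-dist2 / self.bandwidth ** 2, dim=-1)
        # Evaluate the decoder for every region in parallel.
        batch, regions = x.shape[0], self.centers.shape[0]
        coords = x.unsqueeze(1).expand(batch, regions, 1)
        lats = self.latents.unsqueeze(0).expand(batch, regions, -1)
        out = self.decoder(torch.cat([coords, lats], dim=-1))  # (batch, R, 1)
        return (weights.unsqueeze(-1) * out).sum(dim=1)        # (batch, 1)
```

Because every region is evaluated in one batched decoder call, the per-region functions run in parallel, and the soft weights give the smooth cross-region transitions the abstract emphasizes.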
Problem

Research questions and friction points this paper is trying to address.

How can task modeling be unified across diverse data domains in a single, modality-agnostic framework, given that conventional deep models require custom architectures and objectives per signal type?
Existing INR frameworks rely on global latent vectors, which generalize poorly and carry high computational overhead.
Comparable methods exhibit a convergence-capacity gap: gains in model capacity come at the cost of slow convergence in generative modeling and classification tasks.
Innovation

Methods, ideas, or system contributions that make the work stand out.

LIFT uses meta-learning to capture multiscale information via parallel localized implicit functions and a hierarchical latent generator (sketched after this list).
ReLIFT extends LIFT with residual connections and expressive sinusoidal frequency encodings to close the convergence-capacity gap.
LIFT reaches SOTA performance on generative modeling and classification while notably reducing computational cost.
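
The hierarchical side of the latent design can be pictured as a generator that fans a single code out into global, intermediate, and per-region local latents for a downstream decoder. This is a hypothetical sketch under assumed dimensions and fan-out structure, not the paper's exact architecture.

```python
import torch
import torch.nn as nn


class HierarchicalLatentGenerator(nn.Module):
    """Expands one code into latents at three scales (hypothetical sketch)."""

    def __init__(self, code_dim: int = 64, latent_dim: int = 32,
                 num_intermediate: int = 4, num_local: int = 16):
        super().__init__()
        self.global_head = nn.Linear(code_dim, latent_dim)
        self.intermediate_head = nn.Linear(code_dim, num_intermediate * latent_dim)
        self.local_head = nn.Linear(code_dim, num_local * latent_dim)
        self.shapes = (num_intermediate, num_local, latent_dim)

    def forward(self, code: torch.Tensor):
        # code: (batch, code_dim) -> latents at global/intermediate/local scales.
        n_mid, n_loc, d = self.shapes
        g = self.global_head(code)                           # (batch, d)
        m = self.intermediate_head(code).view(-1, n_mid, d)  # (batch, n_mid, d)
        l = self.local_head(code).view(-1, n_loc, d)         # (batch, n_loc, d)
        return g, m, l
```

A decoder like the localized field sketched earlier could then consume the local latents region by region while broadcasting the global latent everywhere, which is one plausible reading of "unified latent representations that span local, intermediate, and global features."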