T-MLP: Tailed Multi-Layer Perceptron for Level-of-Detail Signal Representation

📅 2025-08-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
Traditional MLPs lack native multi-scale (level-of-detail, LoD) signal modeling capability, which hinders effective representation of hierarchical signals such as images and 3D shapes. To address this, we propose the Tailed MLP (T-MLP), the first MLP architecture to enable intra-network, multi-level, fine-grained signal reconstruction and supervision by embedding multi-branch output heads, termed "tails," within its hidden layers. The method introduces a hierarchical output structure, layer-wise customized loss functions, and a progressive training strategy, allowing each scale-specific branch to be optimized independently. Experiments demonstrate that T-MLP significantly outperforms state-of-the-art MLPs and neural radiance fields on multi-scale fitting, compression, and reconstruction tasks, achieving superior detail fidelity and generalization while keeping the parameter count controllable.

📝 Abstract
Level-of-detail (LoD) representation is critical for efficiently modeling and transmitting various types of signals, such as images and 3D shapes. In this work, we present a novel neural architecture that supports LoD signal representation. Our architecture is based on an elaborate modification of the widely used Multi-Layer Perceptron (MLP), which inherently operates at a single scale and therefore lacks native support for LoD. Specifically, we introduce the Tailed Multi-Layer Perceptron (T-MLP) that extends the MLP by attaching multiple output branches, also called tails, to its hidden layers, enabling direct supervision at multiple depths. Our loss formulation and training strategy allow each hidden layer to effectively learn a target signal at a specific LoD, thus enabling multi-scale modeling. Extensive experimental results show that our T-MLP outperforms other neural LoD baselines across a variety of signal representation tasks.
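The architecture described above can be sketched in a few lines: a plain MLP whose hidden layers each feed an extra linear "tail" head, so that shallow tails reconstruct a coarse version of the signal and deeper tails reconstruct finer detail, with each tail supervised by its own loss term. The following NumPy sketch is illustrative only; the layer sizes, weighting scheme, and names are assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

class TMLP:
    """Minimal sketch of a Tailed MLP: an MLP with one linear output
    branch ("tail") attached to every hidden layer, yielding one
    prediction per depth (coarse at shallow layers, fine at deep ones).
    Sizes and initialization here are illustrative assumptions."""

    def __init__(self, in_dim, hidden_dim, out_dim, depth):
        self.W = [rng.normal(0, 0.1, (in_dim if i == 0 else hidden_dim, hidden_dim))
                  for i in range(depth)]
        self.b = [np.zeros(hidden_dim) for _ in range(depth)]
        # one tail (output head) per hidden layer
        self.tW = [rng.normal(0, 0.1, (hidden_dim, out_dim)) for _ in range(depth)]
        self.tb = [np.zeros(out_dim) for _ in range(depth)]

    def forward(self, x):
        outs = []
        h = x
        for W, b, tW, tb in zip(self.W, self.b, self.tW, self.tb):
            h = np.maximum(h @ W + b, 0.0)  # hidden layer with ReLU
            outs.append(h @ tW + tb)        # tail output at this depth / LoD
        return outs

def lod_loss(outs, targets, weights=None):
    """Layer-wise supervision: each tail is compared against the target
    signal at its own level of detail; the total loss is a weighted sum
    of per-scale MSEs (the weighting scheme is an assumption)."""
    weights = weights if weights is not None else [1.0] * len(outs)
    return sum(w * np.mean((o - t) ** 2)
               for w, o, t in zip(weights, outs, targets))

# Example: fit an RGB image as a function of 2D coordinates at 4 LoDs.
net = TMLP(in_dim=2, hidden_dim=32, out_dim=3, depth=4)
coords = rng.uniform(-1.0, 1.0, (8, 2))                   # query coordinates
outs = net.forward(coords)                                # 4 outputs, coarse -> fine
targets = [rng.uniform(0, 1, (8, 3)) for _ in range(4)]   # per-LoD ground truth
loss = lod_loss(outs, targets)
```

Because every tail receives direct supervision, each hidden layer is pushed to encode the signal at a specific scale, which is what enables the progressive, branch-by-branch training strategy the abstract describes.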
Problem

Research questions and friction points this paper is trying to address.

Extends MLP for multi-scale signal representation
Enables level-of-detail modeling with multiple output branches
Improves neural LoD representation across various signal tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Tailed MLP with multiple output branches
Direct supervision at multiple hidden layers
Multi-scale loss formulation and training