🤖 AI Summary
This work addresses the scalability limitation of existing feed-forward 3D Gaussian Splatting methods, whose pixel-aligned primitives grow quadratically with output resolution, hindering efficient 4K novel view synthesis. To overcome this bottleneck, we propose LGTM, a framework that introduces per-primitive texture mapping into feed-forward Gaussian Splatting for the first time. By predicting compact geometric primitives and assigning learned textures to them, LGTM decouples geometric complexity from rendering resolution. This design eliminates the need for per-scene optimization and drastically reduces the number of required Gaussians while enabling high-quality 4K novel view synthesis, thereby breaking the resolution scalability barrier in feed-forward Gaussian Splatting.
📝 Abstract
Existing feed-forward 3D Gaussian Splatting methods predict pixel-aligned primitives, leading to a quadratic growth in primitive count as resolution increases. This fundamentally limits their scalability, making high-resolution synthesis such as 4K intractable. We introduce LGTM (Less Gaussians, Texture More), a feed-forward framework that overcomes this resolution scaling barrier. By predicting compact Gaussian primitives coupled with per-primitive textures, LGTM decouples geometric complexity from rendering resolution. This approach enables high-fidelity 4K novel view synthesis without per-scene optimization, a capability previously out of reach for feed-forward methods, all while using significantly fewer Gaussian primitives. Project page: https://yxlao.github.io/lgtm/
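The quadratic-growth argument above can be illustrated with simple arithmetic. The sketch below is not from the paper; it only assumes the stated pixel-aligned setup, where a feed-forward method predicts one Gaussian per pixel per input view, so the primitive count scales with the pixel count, i.e. quadratically in the linear resolution:

```python
# Illustrative arithmetic (assumption: one Gaussian predicted per pixel
# per input view, as in pixel-aligned feed-forward methods).

def pixel_aligned_gaussians(width: int, height: int, num_views: int = 1) -> int:
    """Primitive count for a method that predicts one Gaussian per pixel."""
    return width * height * num_views

n_1080p = pixel_aligned_gaussians(1920, 1080)  # 2,073,600 primitives
n_4k = pixel_aligned_gaussians(3840, 2160)     # 8,294,400 primitives

# Doubling each linear dimension quadruples the primitive count.
print(n_4k / n_1080p)  # → 4.0
```

Decoupling geometry from resolution, as LGTM does via per-primitive textures, means the Gaussian count no longer follows this per-pixel scaling law.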