🤖 AI Summary
In open-vocabulary object navigation, existing end-to-end methods suffer from overfitting on small-scale simulated datasets, resulting in poor generalization and frequent collisions. To address this, we propose a lightweight Transformer architecture that fuses binary target mask encoding with RGB-only input, incorporates a semantic branch to enhance spatial localization, and introduces an entropy-adaptive loss modulation mechanism to dynamically balance imitation learning and reinforcement learning signals. An auxiliary segmentation loss is further integrated to improve representation robustness. Evaluated on the HM3D-OVON benchmark, our method achieves 40.1% success rate and 20.9% SPL, with performance on unseen categories matching that on seen ones. It reduces training samples by 33%, cuts collision frequency by 50%, and operates with only 130M parameters—significantly improving both generalization and navigation safety.
📝 Abstract
Open-vocabulary Object Goal Navigation requires an embodied agent to reach objects described by free-form language, including categories never seen during training. Existing end-to-end policies overfit small simulator datasets, achieving high success on training scenes but failing to generalize and exhibiting unsafe behaviour (frequent collisions). We introduce OVSegDT, a lightweight transformer policy that tackles these issues with two synergistic components. The first is a semantic branch comprising an encoder for the binary target mask and an auxiliary segmentation loss, which grounds the textual goal and provides precise spatial cues. The second is Entropy-Adaptive Loss Modulation, a per-sample scheduler that continuously balances imitation and reinforcement signals according to the policy entropy, eliminating brittle manual phase switches. Together, these additions cut the sample complexity of training by 33% and halve the collision count while keeping inference cost low (130M parameters, RGB-only input). On HM3D-OVON, our model matches performance on unseen categories to that on seen ones and establishes state-of-the-art results (40.1% SR, 20.9% SPL on val unseen) without depth, odometry, or large vision-language models. Code is available at https://github.com/CognitiveAISystems/OVSegDT.