Sampling-Based Motion Planning with Scene Graphs Under Perception Constraints

πŸ“… 2026-03-03
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This work addresses the challenge of enabling high-degree-of-freedom robots to maintain persistent perception of multiple targets while executing complex motion plansβ€”a capability inadequately supported by existing approaches. The authors propose MOPS-PRM, a novel framework that integrates scene graphs with multi-object perception constraints by incorporating a perception-aware cost function into the configuration-space sampling process and embedding it within Probabilistic Roadmap (PRM) construction. This integration implicitly enforces long-horizon perceptual constraints dictated by semantic relevance, spatial relationships, and viewpoint preferences. Experimental results demonstrate that, compared to baseline methods, MOPS-PRM achieves over a 36% increase in the average number of detected objects and approximately a 17% improvement in tracking success rate in both simulated and real-world environments, while maintaining comparable planning times and path lengths.

πŸ“ Abstract
It will be increasingly common for robots to operate in cluttered human-centered environments such as homes, workplaces, and hospitals, where the robot is often tasked with maintaining perception constraints, such as monitoring people or multiple objects, for safety and reliability while executing its task. However, existing perception-aware approaches typically focus on low-degree-of-freedom (DoF) systems or consider only a single object in the context of high-DoF robots. This motivates us to consider the problem of perception-aware motion planning for high-DoF robots that accounts for multi-object monitoring constraints. We employ a scene graph representation of the environment, which offers great potential for incorporating long-horizon task and motion planning thanks to its rich semantic and spatial information. However, it does not capture perception-constrained information, such as the viewpoints the user prefers. To address these challenges, we propose MOPS-PRM, a roadmap-based motion planner that integrates the perception cost of observing multiple objects or humans directly into motion planning for high-DoF robots. The perception cost is embedded in each object as part of a scene graph and used to selectively sample configurations for roadmap construction, implicitly enforcing the perception constraints. Our method is extensively validated in both simulated and real-world experiments, achieving a ~36% improvement in the average number of detected objects and a ~17% better track rate against other perception-constrained baselines, with comparable planning times and path lengths.
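The core idea of the abstract, a per-object perception cost that biases configuration sampling during roadmap construction, can be sketched as follows. This is an illustrative toy in 2D, not the authors' implementation: the sensor model (a fixed visibility radius), the object list, and all function names are assumptions made for the example.

```python
import math
import random

SENSOR_RANGE = 3.0  # assumed visibility radius; a stand-in for a real FOV model

def perception_cost(q, objects):
    """Fraction of monitored objects NOT visible from configuration q.

    In MOPS-PRM this cost would be attached to scene-graph objects and
    reflect semantics and viewpoint preference; here it is just distance.
    """
    unseen = sum(1 for o in objects if math.dist(q, o) > SENSOR_RANGE)
    return unseen / len(objects)

def sample_biased(objects, bounds, rng, max_tries=50):
    """Rejection-sample a configuration, favoring low perception cost."""
    for _ in range(max_tries):
        q = (rng.uniform(*bounds[0]), rng.uniform(*bounds[1]))
        # Accept with probability proportional to perception quality,
        # so low-cost (high-visibility) configurations dominate the roadmap.
        if rng.random() < 1.0 - perception_cost(q, objects):
            return q
    return q  # fall back to the last candidate if all were rejected

def build_prm(objects, bounds, n_nodes=100, connect_radius=2.0, seed=0):
    """Build roadmap nodes via biased sampling; connect nearby pairs."""
    rng = random.Random(seed)
    nodes = [sample_biased(objects, bounds, rng) for _ in range(n_nodes)]
    edges = [(i, j)
             for i in range(n_nodes)
             for j in range(i + 1, n_nodes)
             if math.dist(nodes[i], nodes[j]) <= connect_radius]
    return nodes, edges

if __name__ == "__main__":
    objs = [(2.0, 2.0), (7.0, 7.0)]  # two targets to keep under observation
    nodes, edges = build_prm(objs, bounds=((0, 10), (0, 10)))
    avg = sum(perception_cost(q, objs) for q in nodes) / len(nodes)
    print(f"{len(nodes)} nodes, {len(edges)} edges, avg perception cost {avg:.2f}")
```

Because acceptance probability falls with perception cost, the resulting roadmap concentrates near regions where more monitored objects stay visible, which is the mechanism the abstract describes for implicitly enforcing the perception constraints during planning.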
Problem

Research questions and friction points this paper is trying to address.

motion planning
perception constraints
high-DoF robots
multi-object monitoring
scene graphs
Innovation

Methods, ideas, or system contributions that make the work stand out.

perception-aware motion planning
scene graph
multi-object monitoring
high-DoF robots
sampling-based planning