🤖 AI Summary
This paper addresses the joint optimization of the global crossing number (the total number of edge crossings) and the local crossing number (the maximum number of crossings on any single edge) in graph drawing. It proposes a reinforcement learning–based layout optimization method in which an agent observes vertex coordinates and local crossing states, generates actions via a differentiable stress model, and employs a sparse reward mechanism designed to enforce local crossing constraints, iteratively refining an initial stress-based layout. Compared with conventional force-directed algorithms and classical crossing minimization approaches on several benchmark graph datasets, the method reduces local crossing numbers while keeping global crossing numbers competitive. These results suggest that explicitly modeling local crossing constraints is worthwhile and that reinforcement learning is a suitable framework for this task. The approach advances high-quality, interpretable graph visualization by balancing local and global aesthetic criteria.
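To make the two objectives concrete, the following sketch computes both quantities for a straight-line drawing. This is an illustrative implementation under our own naming conventions, not the paper's code: `crossing_numbers` takes an edge list and a vertex-to-coordinate map and returns the global and local crossing numbers.

```python
from itertools import combinations

def _proper_cross(a, b, c, d):
    """True iff segments a-b and c-d cross at an interior point."""
    def orient(p, q, r):
        v = (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])
        return (v > 0) - (v < 0)
    return (orient(a, b, c) * orient(a, b, d) < 0 and
            orient(c, d, a) * orient(c, d, b) < 0)

def crossing_numbers(edges, pos):
    """Return (global, local) crossing numbers of a straight-line drawing,
    given edges as vertex pairs and pos as a vertex -> (x, y) map."""
    per_edge = {e: 0 for e in edges}
    for e, f in combinations(edges, 2):
        if set(e) & set(f):  # edges sharing an endpoint cannot cross properly
            continue
        if _proper_cross(pos[e[0]], pos[e[1]], pos[f[0]], pos[f[1]]):
            per_edge[e] += 1
            per_edge[f] += 1
    global_cr = sum(per_edge.values()) // 2  # each crossing lies on two edges
    local_cr = max(per_edge.values())
    return global_cr, local_cr
```

For example, K4 drawn on the corners of a unit square with both diagonals has exactly one crossing (the two diagonals), so both the global and the local crossing number are 1.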
📝 Abstract
We present a novel approach to graph drawing based on reinforcement learning for minimizing the global and the local crossing number, that is, the total number of edge crossings and the maximum number of crossings on any single edge, respectively. In our framework, an agent learns how to move a vertex based on a given observation vector in order to optimize its position. The agent receives feedback in the form of local reward signals tied to crossing reduction. To generate an initial layout, we use a stress-based graph-drawing algorithm. We compare our method against force- and stress-based baseline algorithms as well as three established algorithms for global crossing minimization on a suite of benchmark graphs. The experiments show mixed results: our current algorithm is mainly competitive with respect to the local crossing number. We see potential for further development of the approach.
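The sparse, locally tied reward described above can be illustrated with a self-contained toy sketch: a candidate displacement of one vertex is kept only if it earns the reward, i.e. reduces the number of crossings on that vertex's incident edges. The deterministic first-improvement search over a small candidate set stands in for the learned policy, and all names here are our illustrative assumptions, not the paper's implementation.

```python
def _proper_cross(a, b, c, d):
    """True iff segments a-b and c-d cross at an interior point."""
    def orient(p, q, r):
        v = (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])
        return (v > 0) - (v < 0)
    return (orient(a, b, c) * orient(a, b, d) < 0 and
            orient(c, d, a) * orient(c, d, b) < 0)

def local_crossings(v, edges, pos):
    """Crossings on the edges incident to v (v's local crossing state)."""
    inc = [e for e in edges if v in e]
    other = [e for e in edges if v not in e]
    return sum(_proper_cross(pos[e[0]], pos[e[1]], pos[f[0]], pos[f[1]])
               for e in inc for f in other if not set(e) & set(f))

def refine_vertex(v, edges, pos, step=0.2, rounds=10):
    """Greedy stand-in for the learned policy: in each round, try small
    candidate displacements of v and keep the first one that earns the
    sparse reward, i.e. reduces v's local crossing count."""
    deltas = [(dx * step, dy * step)
              for dx in (-3, -1, 1, 3) for dy in (-3, -1, 1, 3)]
    for _ in range(rounds):
        before = local_crossings(v, edges, pos)
        if before == 0:
            break
        x, y = pos[v]
        moved = False
        for dx, dy in deltas:
            pos[v] = (x + dx, y + dy)
            reward = 1.0 if local_crossings(v, edges, pos) < before else 0.0
            if reward > 0:       # sparse reward: keep the move
                moved = True
                break
            pos[v] = (x, y)      # no reward: revert the move
        if not moved:
            break
    return pos
```

On K4 drawn as a unit square with crossing diagonals, refining the vertex at the origin moves it inside the triangle formed by the other three vertices, removing the crossing on its incident edges. The actual method replaces this exhaustive local search with a trained policy acting on an observation vector.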