🤖 AI Summary
This paper addresses the accuracy-efficiency trade-off in modeling handwritten mathematical strokes via curve representation. We propose a stability-analysis framework based on orthogonal polynomial bases, systematically comparing Legendre, Chebyshev, and their Sobolev variants across polynomial orders in terms of condition number, inner-product norm, and discriminative power. Our analysis shows how basis choice jointly affects numerical stability and classification performance. Experiments demonstrate that Sobolev-type bases significantly enhance the robustness of low-order representations and mitigate the ill-conditioning induced by truncation at high orders. Specifically, the Legendre-Sobolev basis achieves 98.2% classification accuracy while reducing parameter count by 37% and computational cost by 41%. To our knowledge, this is the first work to establish a theoretical link between orthogonal-basis properties and discriminative capability for handwritten mathematical symbols, yielding an interpretable, principled criterion for basis selection in lightweight, high-accuracy ink modeling.
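As a concrete illustration of the condition-number comparison (a minimal sketch, not code from the paper; the degrees and the 200-point sample grid are arbitrary demonstration choices), the following compares the conditioning of evaluation matrices for the monomial, Chebyshev, and Legendre bases:

```python
import numpy as np
from numpy.polynomial import chebyshev, legendre

# Compare the conditioning of degree-d evaluation matrices on [-1, 1].
# Each column of a matrix is one basis polynomial evaluated at the
# sample points, so the condition number reflects how stably polynomial
# coefficients in that basis map to values on the curve.
t = np.linspace(-1.0, 1.0, 200)
for d in (5, 10, 15, 20):
    mono = np.vander(t, d + 1, increasing=True)   # 1, t, t^2, ..., t^d
    cheb = chebyshev.chebvander(t, d)             # T_0, ..., T_d
    leg = legendre.legvander(t, d)                # P_0, ..., P_d
    print(f"degree {d:2d}: "
          f"monomial {np.linalg.cond(mono):.2e}  "
          f"Chebyshev {np.linalg.cond(cheb):.2e}  "
          f"Legendre {np.linalg.cond(leg):.2e}")
```

On such a grid the monomial matrix becomes severely ill-conditioned well before degree 20, while the Chebyshev and Legendre matrices stay well-conditioned; this is the kind of numerical-stability effect the summary refers to.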
📝 Abstract
Previous work has made use of a parameterized plane-curve polynomial representation for mathematical handwriting, with the polynomials represented in a Legendre or Legendre-Sobolev graded basis. This provides a compact geometric representation of the digital ink. Preliminary results have also been shown for Chebyshev and Chebyshev-Sobolev bases. This article explores the trade-offs between basis choice and polynomial degree in achieving accurate modeling at low computational cost. To do this, we consider the condition number for polynomial evaluation in these bases, and we bound the norms, induced by the various inner products, of the variation between symbols.
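For context, the Sobolev inner products referred to here have, in the first-order form standard in this line of work, the following shape (a sketch of the usual definition rather than a formula quoted from the article; $\mu$ is a weighting parameter and the interval $[-1,1]$ matches the Legendre setting):

$$
\langle f, g \rangle_S = \int_{-1}^{1} f(t)\,g(t)\,dt \;+\; \mu \int_{-1}^{1} f'(t)\,g'(t)\,dt,
\qquad \|f\|_S = \sqrt{\langle f, f \rangle_S}.
$$

The induced norm $\|f - g\|_S$ then measures the difference between two symbol curves in both position and derivative, which is what makes these norms natural measures of the variation between symbols.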