🤖 AI Summary
Existing diffusion-based handwritten text generation methods suffer from artifacts, limited stylistic diversity, and poor legibility—particularly for rare words and complex calligraphic styles—while also exhibiting training-set memorization and insufficient output diversity. To address these limitations, we propose Dual Orthogonal Guidance: a novel guidance mechanism that performs disentangled control in the latent space via orthogonal projections of positive and negative prompts, coupled with a triangular scheduling strategy that mitigates noise distortion under high guidance weights. This classifier-free approach significantly enhances generation stability and controllability. Extensive evaluations on DiffusionPen and One-DM demonstrate substantial improvements in text clarity and stylistic diversity across multiple benchmarks. Moreover, our method exhibits superior robustness and generalization to out-of-vocabulary words and challenging calligraphic styles, without requiring architectural modifications or additional supervision.
📝 Abstract
Diffusion-based Handwritten Text Generation (HTG) approaches achieve impressive results on frequent, in-vocabulary words observed at training time and on regular styles. However, they are prone to memorizing training samples and often struggle with style variability and generation clarity. In particular, standard diffusion models tend to produce artifacts or distortions that negatively affect the readability of the generated text, especially when the style is hard to reproduce. To tackle these issues, we propose a novel sampling guidance strategy, Dual Orthogonal Guidance (DOG), that leverages an orthogonal projection of a negatively perturbed prompt onto the original positive prompt. This approach helps steer the generation away from artifacts while maintaining the intended content, and encourages more diverse, yet plausible, outputs. Unlike standard Classifier-Free Guidance (CFG), which relies on unconditional predictions and degenerates into noise at high guidance scales, DOG introduces a more stable, disentangled direction in the latent space. To control the strength of the guidance across the denoising process, we apply a triangular schedule: weak at the start and end of denoising, when the process is most sensitive, and strongest in the middle steps. Experimental results on the state-of-the-art DiffusionPen and One-DM models demonstrate that DOG improves both content clarity and style variability, even for out-of-vocabulary words and challenging writing styles.
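The two ideas in the abstract — projecting out the component of the negative-prompt prediction that is orthogonal to the positive one, and ramping the guidance weight up and then down across the denoising trajectory — can be sketched as follows. This is a minimal illustration under our own assumptions: the function names, the exact projection formula, and the linear triangular ramp are hypothetical stand-ins, not the paper's precise equations.

```python
import numpy as np

def triangular_schedule(t, num_steps, w_max):
    """Hypothetical triangular guidance weight: zero at the first and last
    denoising step (where the process is most sensitive) and peaking at
    w_max in the middle steps."""
    mid = (num_steps - 1) / 2.0
    return w_max * (1.0 - abs(t - mid) / mid)

def dog_guidance(eps_pos, eps_neg, w):
    """Sketch of Dual Orthogonal Guidance (assumed formulation): split the
    negative-prompt noise prediction into its projection onto the positive
    prediction and an orthogonal remainder, then push the output away from
    that orthogonal (artifact) direction only."""
    p, n = eps_pos.ravel(), eps_neg.ravel()
    coef = (n @ p) / (p @ p + 1e-8)   # scalar projection coefficient
    orth = eps_neg - coef * eps_pos   # component orthogonal to the positive prompt
    return eps_pos - w * orth         # steer away from artifacts, keep content
```

Because only the orthogonal component is subtracted, the part of the negative prediction that agrees with the positive prompt is left untouched, which is what keeps the intended content intact while suppressing the artifact direction.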