🤖 AI Summary
This study addresses a limitation of general-purpose semantic models (e.g., BERT, CLIP): they fail to capture the font-specific perceptual associations needed for font impression modeling. We propose a spectral embedding method grounded in the co-occurrence structure of impression-word labels. Specifically, we construct a co-occurrence graph over impression tags derived from font datasets and apply spectral embedding to learn low-dimensional semantic vectors, ensuring that semantically similar impression words lie close together in the embedded space. To our knowledge, this is the first work to explicitly incorporate label co-occurrence relationships into font impression representation, which substantially improves the model's ability to encode stylistic perceptual characteristics. Experiments on impression-guided font generation and retrieval demonstrate that our approach outperforms BERT and CLIP baselines in both semantic consistency and generation quality.
📝 Abstract
Different font styles (i.e., font shapes) convey distinct impressions, indicating a close relationship between font shapes and the word tags that describe those impressions. This paper proposes a novel embedding method for impression tags that leverages these shape-impression relationships. For instance, our method assigns similar vectors to impression tags that frequently co-occur in descriptions of the same fonts, even when standard word embedding methods (e.g., BERT and CLIP) assign them very different vectors. This property is particularly useful for impression-based font generation and font retrieval. Technically, we construct a graph whose nodes represent impression tags and whose edges encode their co-occurrence relationships, and then apply spectral embedding to obtain an impression vector for each tag. We compare our method with BERT and CLIP in qualitative and quantitative evaluations, demonstrating that our approach performs better in impression-guided font generation.
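To make the pipeline in the abstract concrete, here is a minimal sketch of spectral embedding over a tag co-occurrence graph: build a weighted adjacency matrix from co-occurrence counts, form the symmetric normalized graph Laplacian, and use its smallest nontrivial eigenvectors as tag coordinates. The tags and counts below are invented for illustration; the paper's actual datasets, graph weighting, and embedding dimension are not specified here.

```python
import numpy as np

# Hypothetical co-occurrence counts between five impression tags
# (symmetric: C[i, j] = number of fonts labeled with both tags i and j).
tags = ["elegant", "formal", "playful", "cute", "bold"]
C = np.array([
    [0, 4, 0, 0, 1],
    [4, 0, 0, 0, 2],
    [0, 0, 0, 5, 1],
    [0, 0, 5, 0, 0],
    [1, 2, 1, 0, 0],
], dtype=float)

# Graph Laplacian L = D - W, where W is the co-occurrence adjacency matrix.
deg = C.sum(axis=1)
L = np.diag(deg) - C

# Symmetric normalized Laplacian: L_sym = D^{-1/2} L D^{-1/2}.
d_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
L_sym = d_inv_sqrt @ L @ d_inv_sqrt

# Eigendecomposition (eigh returns eigenvalues in ascending order).
eigvals, eigvecs = np.linalg.eigh(L_sym)

# Drop the trivial first eigenvector (eigenvalue ~0 for a connected graph);
# the next k eigenvectors give a k-dimensional embedding per tag.
k = 2
embedding = eigvecs[:, 1:k + 1]

# Tags that co-occur often land close together in the embedded space.
for tag, vec in zip(tags, embedding):
    print(f"{tag:8s} {vec}")
```

In this toy graph, "elegant" and "formal" co-occur heavily and end up near each other, while "playful"/"cute" form a separate cluster, which is exactly the property the paper argues BERT- or CLIP-style embeddings do not guarantee for font impressions.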