🤖 AI Summary
This study investigates how self-supervised speech models (S3Ms) compositionally encode phones and their surrounding context within single frame-level representations. Through analyses of phonological feature vectors and subspace orthogonality tests, the authors demonstrate that neighboring phones at different relative positions are embedded in approximately orthogonal subspaces of the frame-level representation. Moreover, the models implicitly learn phone boundaries without explicit supervision. These findings reveal a compositional, orthogonally structured organization of position-dependent phonetic information in learned speech representations, advancing our understanding of how S3Ms model context.
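As an illustrative sketch (the notation below is ours, not the paper's), the claimed structure can be written as a superposition of position-specific phone vectors whose subspaces are approximately orthogonal:

```latex
% Illustrative notation (ours): h_t is a frame-level representation, and
% v_k(.) maps a phone to a vector in the subspace V_k for relative position k.
\[
  \mathbf{h}_t \;\approx\; \mathbf{v}_{-1}(p_{t-1}) \;+\; \mathbf{v}_{0}(p_{t}) \;+\; \mathbf{v}_{+1}(p_{t+1}),
\]
\[
  \langle \mathbf{u}, \mathbf{w} \rangle \;\approx\; 0
  \quad \text{for all } \mathbf{u} \in V_i,\; \mathbf{w} \in V_j,\; i \neq j,
\]
```

i.e., the vectors for the previous, current, and next phones occupy nearly orthogonal directions, so each can be read out of the frame without interference from the others.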
📝 Abstract
Transformer-based self-supervised speech models (S3Ms) are often described as contextualized, yet what this entails remains unclear. Here, we focus on how a single frame-level S3M representation can encode phones and their surrounding context. Prior work has shown that S3Ms represent phones compositionally; for example, phonological feature vectors for voicing, bilabiality, and nasality are superposed in the S3M representation of [m]. We extend this view by proposing that phonological information from a sequence of neighboring phones is also compositionally encoded in a single frame, such that vectors corresponding to the previous, current, and next phones are superposed within a single frame-level representation. We show that this structure has several properties, including orthogonality between the subspaces for different relative positions and the emergence of implicit phonetic boundaries. Together, our findings advance our understanding of context-dependent S3M representations.
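A minimal sketch of how such a subspace orthogonality test could be run, assuming frame-level representations `X` with aligned phone labels for the previous and next positions; the helper `position_subspace`, the mean-based subspace estimate, and the random stand-in data are our illustrative choices, not the paper's exact protocol:

```python
# Sketch: test approximate orthogonality between the subspaces that encode
# the previous vs. next phone in frame-level representations.
# Assumptions (ours): X holds one row per frame; prev_ids / next_ids give
# an integer phone label per frame from a forced alignment.
import numpy as np
from scipy.linalg import subspace_angles

def position_subspace(X, labels, n_components=10):
    """Estimate a linear subspace for one relative position from the
    top principal directions of the per-phone mean representations."""
    means = np.stack([X[labels == p].mean(axis=0) for p in np.unique(labels)])
    means -= means.mean(axis=0)            # center the class means
    _, _, vt = np.linalg.svd(means, full_matrices=False)
    return vt[:n_components].T             # (dim, n_components) orthonormal basis

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 768))           # stand-in for S3M frame representations
prev_ids = rng.integers(0, 40, size=5000)  # stand-in previous-phone labels
next_ids = rng.integers(0, 40, size=5000)  # stand-in next-phone labels

B_prev = position_subspace(X, prev_ids)
B_next = position_subspace(X, next_ids)

# Principal angles near 90 degrees indicate nearly orthogonal subspaces.
angles = np.degrees(subspace_angles(B_prev, B_next))
print(f"smallest principal angle: {angles.min():.1f} degrees")
```

SciPy's `subspace_angles` returns the principal angles between the two column spaces; if even the smallest angle is close to 90°, no direction in one subspace is well aligned with the other, which is the signature of orthogonality the paper reports for different relative positions.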