Dress Well via Fashion Cognitive Learning

📅 2022-08-01
🏛️ British Machine Vision Conference
📈 Citations: 3
Influential: 0
📄 PDF
🤖 AI Summary
Existing fashion compatibility models struggle to incorporate users' physical attributes (e.g., height, body shape) into personalized outfit recommendation. To address this, the paper conducts the first study of fashion cognitive learning, i.e., outfit recommendation conditioned on personal physical information, and introduces an end-to-end Fashion Cognitive Network (FCN). FCN comprises two submodules: an outfit encoder, which uses a convolutional layer to produce a visual-semantic outfit embedding, and a Multi-Label Graph Convolutional Network (ML-GCN), which learns label classifiers via stacked GCN layers, relating outfit embeddings to individuals' appearance features. Extensive experiments on the newly collected O4U dataset provide strong qualitative and quantitative evidence that the framework outperforms alternative methods.
📝 Abstract
Fashion compatibility models enable online retailers to easily obtain a large number of outfit compositions with good quality. However, effective fashion recommendation demands precise service for each customer with a deeper cognition of fashion. In this paper, we conduct the first study on fashion cognitive learning, i.e., fashion recommendation conditioned on personal physical information. To this end, we propose a Fashion Cognitive Network (FCN) to learn the relationships between visual-semantic embeddings of outfit compositions and appearance features of individuals. FCN contains two submodules, namely an outfit encoder and a Multi-label Graph Convolutional Network (ML-GCN). The outfit encoder uses a convolutional layer to encode an outfit into an outfit embedding. The latter module learns label classifiers via stacked GCN layers. We conducted extensive experiments on the newly collected O4U dataset, and the results provide strong qualitative and quantitative evidence that our framework outperforms alternative methods.
Problem

Research questions and friction points this paper is trying to address.

Fashion recommendation requires integrating personal physical information
Learning relationships between outfit embeddings and individuals' appearance features
Developing cognitive fashion models for precise, personalized style suggestions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fashion Cognitive Network learns outfit-person relationships
Outfit encoder uses convolutional layers for embedding
Multi-label GCN learns label classifiers via stacked graph convolution layers
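The ML-GCN component described above can be sketched in a few lines: label embeddings are propagated through stacked graph convolution layers to produce per-label classifiers, which are then scored against the outfit embedding from the encoder. This is a minimal NumPy illustration of the general technique, not the paper's implementation; all dimensions, the random label graph, and the variable names are assumptions for demonstration.

```python
import numpy as np

def normalize_adj(A):
    # Symmetric normalization with self-loops: D^{-1/2} (A + I) D^{-1/2}
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return d_inv_sqrt @ A_hat @ d_inv_sqrt

def gcn_layer(H, A_norm, W):
    # One graph convolution step followed by ReLU
    return np.maximum(A_norm @ H @ W, 0.0)

rng = np.random.default_rng(0)
n_labels, d_in, d_hid, d_out = 5, 8, 16, 32  # toy sizes (assumed)

# Random symmetric label co-occurrence graph (stand-in for real statistics)
A = (rng.random((n_labels, n_labels)) > 0.5).astype(float)
A = np.triu(A, 1)
A = A + A.T
A_norm = normalize_adj(A)

H0 = rng.standard_normal((n_labels, d_in))       # label word embeddings
W1 = rng.standard_normal((d_in, d_hid)) * 0.1
W2 = rng.standard_normal((d_hid, d_out)) * 0.1

# Two stacked GCN layers yield one classifier vector per label
classifiers = gcn_layer(gcn_layer(H0, A_norm, W1), A_norm, W2)

# Outfit embedding (in FCN this would come from the convolutional outfit encoder)
outfit_emb = rng.standard_normal(d_out)
scores = classifiers @ outfit_emb                # one score per label
```

In the full model, `scores` would be passed through a sigmoid and trained with a multi-label loss; the sketch stops at the dot-product scoring to keep the structure visible.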
Kaicheng Pang
School of Fashion and Textiles, The Hong Kong Polytechnic University, Laboratory for Artificial Intelligence in Design
Xingxing Zou
School of Fashion and Textiles, The Hong Kong Polytechnic University
W. Wong
School of Fashion and Textiles, The Hong Kong Polytechnic University, Laboratory for Artificial Intelligence in Design