🤖 AI Summary
Traditional choice-based conjoint analysis relies on linear assumptions, limiting its ability to capture nonlinear structure in user preferences and leading to weaker predictions and biased estimates of attribute contributions. To address this, the authors propose ConjointNet, a neural approach to conjoint analysis composed of two novel architectures that apply representation learning to preference data. Using embedding layers and end-to-end differentiable training, ConjointNet models nonlinear attribute interactions and higher-order preference patterns. Evaluated on two real-world preference datasets, ConjointNet achieves over 5% higher prediction accuracy than traditional linear conjoint estimation techniques while still supporting interpretable analysis of nonlinear feature interactions. The work moves conjoint analysis beyond its long-standing linearity assumption and shows that deep representation learning can deliver both stronger predictive performance and interpretability for consumer preference modeling.
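To make the idea concrete, the sketch below shows the *kind* of embedding-plus-nonlinearity utility model the summary describes: each categorical attribute gets its own embedding table, and a small MLP over the concatenated embeddings yields a scalar utility, so attribute interactions are no longer constrained to be additive. The layer sizes, names, and wiring here are illustrative assumptions, not the actual ConjointNet architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

n_levels = [3, 4, 2]   # levels per categorical product attribute (illustrative)
emb_dim = 5

# One embedding table per attribute (standing in for the embedding layers
# described in the summary; weights are random, not trained).
embeddings = [rng.normal(0, 0.1, size=(k, emb_dim)) for k in n_levels]

# Small MLP mapping concatenated embeddings to a scalar utility; the
# nonlinearity is what lets attribute levels interact.
W1 = rng.normal(0, 0.1, size=(emb_dim * len(n_levels), 16))
W2 = rng.normal(0, 0.1, size=(16, 1))

def utility(levels):
    """Nonlinear utility of a product given one level index per attribute."""
    x = np.concatenate([emb[l] for emb, l in zip(embeddings, levels)])
    h = np.tanh(x @ W1)          # nonlinear hidden layer
    return float(h @ W2)

# Choice probabilities over alternatives via softmax over their utilities.
alts = [(0, 1, 0), (2, 3, 1), (1, 0, 1)]
u = np.array([utility(a) for a in alts])
p = np.exp(u - u.max())
p /= p.sum()
print(np.round(p, 3))
```

In a real model the embeddings and MLP weights would be trained end to end on observed choices; here they are random, so only the forward pass and the shape of the computation are meaningful.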
📝 Abstract
Understanding consumer preferences is essential to product design and to predicting market response to new products. Choice-based conjoint analysis is widely used to model user preferences from their choices in surveys. However, traditional conjoint estimation techniques assume simple linear models. This assumption may lead to limited predictability and inaccurate estimation of product attribute contributions, especially on data with underlying non-linear relationships. In this work, we employ representation learning to efficiently alleviate this issue. We propose ConjointNet, which is composed of two novel neural architectures, to predict user preferences. We demonstrate that the proposed ConjointNet models outperform traditional conjoint estimation techniques on two preference datasets by over 5%, and offer insights into non-linear feature interactions.
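For contrast with the neural approach, the linear baseline the abstract refers to can be sketched as a multinomial-logit conjoint model: each alternative's utility is a linear combination of its attribute indicators (part-worths), and part-worths are estimated by maximizing the likelihood of observed choices. The data here is simulated and the attribute encoding is an assumption for illustration, not the paper's datasets.

```python
import numpy as np

rng = np.random.default_rng(0)

n_attrs = 4      # binary attribute indicators per product (illustrative)
n_tasks = 500    # simulated survey choice tasks
n_alts = 3       # alternatives shown per task

# Ground-truth part-worths used only to simulate survey responses.
true_w = np.array([1.0, -0.5, 0.8, 0.2])

X = rng.integers(0, 2, size=(n_tasks, n_alts, n_attrs)).astype(float)
util = X @ true_w
probs = np.exp(util) / np.exp(util).sum(axis=1, keepdims=True)
y = np.array([rng.choice(n_alts, p=p) for p in probs])

# Estimate part-worths by gradient ascent on the multinomial-logit
# log-likelihood -- the simple linear model the paper argues is too
# restrictive for data with non-linear preference structure.
w = np.zeros(n_attrs)
for _ in range(2000):
    u = X @ w
    p = np.exp(u - u.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    onehot = np.eye(n_alts)[y]
    grad = ((onehot - p)[..., None] * X).sum(axis=(0, 1)) / n_tasks
    w += 0.5 * grad

print(np.round(w, 2))  # roughly recovers true_w, up to sampling noise
```

Because the simulated choices really do follow a linear utility, the estimator recovers the part-worths well; the paper's point is that on real data with non-linear attribute interactions, this linear form misattributes contributions, which is what ConjointNet is designed to avoid.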