🤖 AI Summary
Multimodal classification of home-service requests (text–image pairs) on the Thumbtack platform poses challenges in accuracy, latency, cost, and calibration for production deployment.
Method: We systematically compare embeddings-based softmax models against LLM prompting approaches (zero-shot and few-shot prompts to GPT-4 and Claude) on real user-submitted multimodal service descriptions. We design an end-to-end framework that fuses text and image embeddings and feeds the fused representation to a softmax classifier.
Contribution/Results: The embeddings-based approach outperforms prompting across all dimensions: 49.5% higher accuracy; 81× and 14× lower latency for text and image processing, respectively; a 90% reduction in inference cost; and superior probability calibration, enabling a confidence-aware UX. Offline evaluation and online A/B testing show strong consistency, confirming the embeddings-based method's holistic advantages in precision, efficiency, cost-effectiveness, and reliability, and marking the first empirical demonstration of such superiority in an industrial-scale, fine-grained multimodal classification setting.
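The fusion-then-softmax pipeline summarized above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the encoder outputs are stubbed with random vectors, and all dimensions, the learning rate, and the iteration count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for pre-computed encoder outputs (hypothetical sizes).
n, d_text, d_img, n_classes = 200, 32, 16, 5
text_emb = rng.normal(size=(n, d_text))   # text embeddings
img_emb = rng.normal(size=(n, d_img))     # image embeddings
labels = rng.integers(0, n_classes, size=n)

# Early fusion: concatenate the two modality embeddings per request.
X = np.concatenate([text_emb, img_emb], axis=1)
Y = np.eye(n_classes)[labels]             # one-hot targets

# Multinomial logistic (softmax) classifier trained by gradient descent.
W = np.zeros((X.shape[1], n_classes))
b = np.zeros(n_classes)

def softmax(logits):
    z = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

for _ in range(300):
    grad = (softmax(X @ W + b) - Y) / n   # cross-entropy gradient
    W -= 0.5 * X.T @ grad
    b -= 0.5 * grad.sum(axis=0)

# Class probabilities per request; their magnitudes can serve as
# confidence signals for a confidence-aware UX, as in the study.
probs = softmax(X @ W + b)
```

Because the final layer outputs proper probabilities, a downstream system can threshold on the top-class probability to decide when to auto-route a request versus ask the user to confirm.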
📝 Abstract
Are traditional classification approaches irrelevant in this era of AI hype? We show that there are multiclass classification problems where predictive models holistically outperform LLM prompt-based frameworks. Given text and images from home-service project descriptions provided by Thumbtack customers, we build embeddings-based softmax models that predict the professional category (e.g., handyman, bathroom remodeling) associated with each problem description. We then compare against prompts that ask state-of-the-art LLMs to solve the same problem. We find that the embeddings approach outperforms the best LLM prompts in terms of accuracy, calibration, latency, and financial cost. In particular, the embeddings approach has 49.5% higher accuracy than the prompting approach, and its superiority is consistent across text-only, image-only, and text-image problem descriptions. Furthermore, it yields well-calibrated probabilities, which we later use as confidence signals to provide a contextualized user experience during deployment. By contrast, prompting scores are largely uninformative. Finally, the embeddings approach is 14 and 81 times faster than prompting at processing images and text, respectively, while under realistic deployment assumptions it can be up to 10 times cheaper. Based on these results, we deployed a variation of the embeddings approach, and through A/B testing we observed performance consistent with our offline analysis. Our study shows that for multiclass classification problems that can leverage proprietary datasets, an embeddings-based approach may yield unequivocally better results. Hence, scientists, practitioners, engineers, and business leaders can use our study to go beyond the hype and consider appropriate predictive models for their classification use cases.