📝 Abstract
This paper presents a systematic, cost-aware evaluation of large language models (LLMs) for receipt-item categorisation within a production-oriented classification framework. We compare four instruction-tuned models available through AWS Bedrock: Claude 3.7 Sonnet, Claude 4 Sonnet, Mixtral 8x7B Instruct, and Mistral 7B Instruct. The study has two aims: (1) to assess performance across accuracy, response stability, and token-level cost, and (2) to determine which prompting strategy, zero-shot or few-shot, is preferable with respect to both classification accuracy and incurred cost. Our experiments demonstrate that Claude 3.7 Sonnet achieves the most favourable balance between classification accuracy and cost efficiency.