Analysis of LLM Performance on AWS Bedrock: Receipt-item Categorisation Case Study

📅 2026-04-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses receipt-item classification by systematically evaluating the trade-offs among accuracy, response stability, and inference cost for four commercial large language models (Claude 3.7 Sonnet, Claude 4 Sonnet, Mixtral 8x7B Instruct, and Mistral 7B Instruct) deployed on AWS Bedrock in a production environment. It presents, for the first time within a real-world, production-oriented framework, a multidimensional comparative assessment of zero-shot and few-shot prompting strategies. The results show that Claude 3.7 Sonnet achieves the best balance between classification performance and token-based inference cost, providing a cost-effective model-selection rationale for practical deployment. The study also highlights the critical influence of the prompting strategy on overall cost efficiency, underscoring its importance in operational decision-making.
📝 Abstract
This paper presents a systematic, cost-aware evaluation of large language models (LLMs) for receipt-item categorisation within a production-oriented classification framework. We compare four instruction-tuned models available through AWS Bedrock: Claude 3.7 Sonnet, Claude 4 Sonnet, Mixtral 8x7B Instruct, and Mistral 7B Instruct. The aim of the study was (1) to assess performance across accuracy, response stability, and token-level cost, and (2) to investigate which prompting method, zero-shot or few-shot, is most appropriate in terms of both accuracy and incurred cost. Our experiments demonstrate that Claude 3.7 Sonnet achieves the most favourable balance between classification accuracy and cost efficiency.
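The abstract's token-level cost comparison can be sketched as a simple calculation: each inference call costs the input-token count times the input price plus the output-token count times the output price, so few-shot prompts, which prepend example items to every request, raise the per-call cost. The price figures and token counts below are illustrative assumptions for the sketch, not the paper's measurements or actual AWS Bedrock rates.

```python
def request_cost(input_tokens: int, output_tokens: int,
                 price_in_per_1k: float, price_out_per_1k: float) -> float:
    """Cost of one inference call from token counts and per-1K-token prices."""
    return (input_tokens / 1000) * price_in_per_1k \
         + (output_tokens / 1000) * price_out_per_1k

# Hypothetical prices and token counts for one receipt-item classification call.
# Few-shot prompting adds example tokens to every request, inflating input cost.
zero_shot = request_cost(120, 15, price_in_per_1k=0.003, price_out_per_1k=0.015)
few_shot = request_cost(560, 15, price_in_per_1k=0.003, price_out_per_1k=0.015)
print(f"zero-shot: ${zero_shot:.6f} per call, few-shot: ${few_shot:.6f} per call")
```

Under these placeholder numbers the few-shot call costs roughly three times the zero-shot call for the same output, which is why the paper weighs prompting strategy alongside raw accuracy.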
Problem

Research questions and friction points this paper is trying to address.

LLM evaluation
receipt-item categorisation
cost-aware analysis
prompting methods
AWS Bedrock
Innovation

Methods, ideas, or system contributions that make the work stand out.

cost-aware evaluation
receipt-item categorisation
large language models
prompting strategies
AWS Bedrock
Gabby Sanchez
RMIT University, Melbourne, Australia
Sneha Oommen
RMIT University, Melbourne, Australia
Cassandra T. Britto
RMIT University, Melbourne, Australia
Di Wang
RMIT University, Melbourne, Australia
Jung-De Chiou
RMIT University, Melbourne, Australia
Maria Spichkova
School of Computing Technologies, RMIT University, Australia
Software Engineering · Human Aspects of Software Engineering · AI for SE