🤖 AI Summary
This study addresses the automation of financial reporting, focusing on the comparative suitability of open-source versus commercial large language models (LLMs) for analytical summarization and commentary generation, particularly with respect to data compliance, engineering overhead, and system reliability. Method: Adopting a design science research methodology, we develop a multi-agent workflow integrating locally deployed open-source LLMs (e.g., Llama 3) with cloud-based commercial APIs (e.g., GPT-4o), augmented by domain-specific prompt templates, output validation mechanisms, and a fault-tolerant architecture tailored to financial semantics. Contribution/Results: Empirical evaluation reveals that open-source LLMs require substantial engineering effort, including fine-tuning, retrieval augmentation, and output calibration, to achieve professional-grade accuracy and operational stability. In contrast, commercial LLMs deliver superior fluency and out-of-the-box performance but introduce material risks of data leakage and vendor lock-in. The study yields actionable, domain-adapted design principles and empirically grounded guidelines for LLM selection and systematic deployment in regulated verticals.
📝 Abstract
Context: Large Language Models (LLMs) enable the automation of complex natural language processing tasks across domains, but research on domain-specific applications such as finance remains limited. Objectives: This study explored open-source and commercial LLMs for financial report analysis and commentary generation, focusing on the software engineering challenges of implementation. Methods: Using the Design Science Research methodology, an exploratory case study iteratively designed and evaluated two LLM-based systems: one built on locally deployed open-source models in a multi-agent workflow, the other on the commercial GPT-4o. Both were assessed through expert evaluation of real-world financial reporting use cases. Results: LLMs demonstrated strong potential for automating financial reporting tasks, but integration presented significant challenges. Iterative development revealed issues in prompt design, contextual dependency, and implementation trade-offs. Cloud-based models offered superior fluency and usability but raised data privacy and external dependency concerns, whereas local open-source models provided better data control and compliance but required substantially more engineering effort to reach comparable reliability and usability. Conclusion: LLMs show strong potential for financial reporting automation, but successful integration requires careful attention to architecture, prompt design, and system reliability. Implementation success depends on addressing domain-specific challenges through tailored validation mechanisms and engineering strategies that balance accuracy, control, and compliance.