🤖 AI Summary
Large language models (LLMs) exhibit poorly understood capabilities in cryptography—a domain demanding rigorous mathematical reasoning and formal analysis—largely due to the absence of high-quality, domain-specific evaluation benchmarks.
Method: We introduce CryptoQA, the first large-scale, expert-curated question-answering dataset for cryptography, comprising 2.1 million question-answer pairs extracted from authoritative academic literature, enriched with structured contextual metadata. We propose a multidimensional evaluation framework assessing LLMs across factual accuracy, reverse derivation, formal proof generation, and citation consistency, grounded in expert-annotated gold-standard baselines.
Contribution/Results: Benchmarking 15 state-of-the-art LLMs reveals pervasive deficiencies in precise mathematical analysis and deductive reasoning. Fine-tuning on CryptoQA yields substantial performance gains across all dimensions, demonstrating its effectiveness in enhancing cryptographic reasoning and its broader generalizability to formal scientific domains.
📝 Abstract
Large language models (LLMs) excel at many general-purpose natural language processing tasks. However, their ability to perform deep reasoning and mathematical analysis, particularly on complex tasks such as those required in cryptography, remains poorly understood, largely due to the lack of suitable data for evaluation and training. To address this gap, we present CryptoQA, the first large-scale question-answering (QA) dataset specifically designed for cryptography. CryptoQA contains over two million QA pairs drawn from curated academic sources, together with contextual metadata that can be used both to test the cryptographic capabilities of LLMs and to train new LLMs on cryptographic tasks. We benchmark 15 state-of-the-art LLMs on CryptoQA, evaluating their factual accuracy, mathematical reasoning, consistency, referencing, backward reasoning, and robustness to adversarial samples. In addition to quantitative metrics, we provide expert reviews that qualitatively assess model outputs and establish a gold-standard baseline. Our results reveal significant performance deficits, particularly on tasks that require formal reasoning and precise mathematical knowledge, underscoring the urgent need for LLM assistants tailored to cryptography research and development. Finally, we demonstrate that fine-tuning on CryptoQA substantially improves LLM performance on cryptographic tasks.
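To make the dataset and evaluation setup concrete, the sketch below shows what a CryptoQA-style QA record with contextual metadata and a simple factual-accuracy metric might look like. This is a minimal illustration under assumed field names (`question`, `answer`, `source`, `topic`) and an exact-match metric; the paper's actual schema and multidimensional framework are richer than this.

```python
from dataclasses import dataclass

# Hypothetical record layout for a CryptoQA-style QA pair; field names
# are illustrative, not taken from the actual dataset.
@dataclass
class QAPair:
    question: str
    answer: str
    source: str   # citation for the originating academic source
    topic: str    # contextual metadata, e.g. "hash functions"

def exact_match_accuracy(pairs, predict):
    """Fraction of questions answered exactly (case-insensitive).

    A deliberately simple stand-in for the factual-accuracy dimension;
    it does not cover reasoning, proofs, or citation consistency.
    """
    if not pairs:
        return 0.0
    correct = sum(
        1 for p in pairs
        if predict(p.question).strip().lower() == p.answer.strip().lower()
    )
    return correct / len(pairs)

pairs = [
    QAPair("What does AES stand for?", "Advanced Encryption Standard",
           "NIST FIPS 197", "symmetric ciphers"),
    QAPair("How many bits does a SHA-256 digest contain?", "256",
           "NIST FIPS 180-4", "hash functions"),
]

# A trivial stand-in "model" that knows exactly one answer.
toy_model = lambda q: "Advanced Encryption Standard" if "AES" in q else "unknown"
print(exact_match_accuracy(pairs, toy_model))  # 0.5
```

In practice, exact match would be one of several automatic metrics, complemented by expert review for open-ended answers such as proofs.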