ProToken: Token-Level Attribution for Federated Large Language Models

📅 2026-01-27
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work addresses the challenge of tracing the client origin of individual tokens in federated large language model deployments, a critical barrier to debugging, detecting malicious behavior, and ensuring fair incentive mechanisms. To this end, the authors propose ProToken, the first method enabling token-level, fine-grained client attribution under strict privacy constraints. Leveraging the signal concentration property of deeper Transformer layers, ProToken identifies key layers and applies gradient-based relevance weighting to pinpoint the neuron activations most influential to token generation. Evaluated across four large language models and four domains (16 configurations in total), ProToken achieves an average attribution accuracy of 98% and maintains high precision as the number of clients scales, effectively balancing traceability with privacy preservation.


๐Ÿ“ Abstract
Federated Learning (FL) enables collaborative training of Large Language Models (LLMs) across distributed data sources while preserving privacy. However, when federated LLMs are deployed in critical applications, it remains unclear which client(s) contributed to specific generated responses, hindering debugging, malicious client identification, fair reward allocation, and trust verification. We present ProToken, a novel Provenance methodology for Token-level attribution in federated LLMs that addresses client attribution during autoregressive text generation while maintaining FL privacy constraints. ProToken leverages two key insights to enable provenance at each token: (1) transformer architectures concentrate task-specific signals in later blocks, enabling strategic layer selection for computational tractability, and (2) gradient-based relevance weighting filters out irrelevant neural activations, focusing attribution on neurons that directly influence token generation. We evaluate ProToken across 16 configurations spanning four LLM architectures (Gemma, Llama, Qwen, SmolLM) and four domains (medical, financial, mathematical, coding). ProToken achieves 98% average attribution accuracy in correctly localizing the responsible client(s), and maintains high accuracy as the number of clients scales, validating its practical viability for real-world deployment settings.
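The abstract's second insight (gradient-based relevance weighting over activations at selected late layers) can be sketched generically. The snippet below is a minimal illustration, not the authors' actual algorithm: the activation-times-gradient relevance (Grad-CAM-style), the per-client signature vectors, and the cosine-similarity matching step are all illustrative assumptions.

```python
import numpy as np

def relevance_weighted_signature(activations, gradients):
    """Hypothetical relevance: activation x gradient, keeping only neurons
    whose gradient (w.r.t. the generated token's logit) is positive, i.e.
    neurons that push toward generating that token."""
    relevance = activations * np.clip(gradients, 0.0, None)
    norm = np.linalg.norm(relevance)
    return relevance / norm if norm > 0 else relevance

def attribute_token(token_signature, client_signatures):
    """Attribute a token to the client whose (unit-norm) relevance signature
    has the highest cosine similarity with the token's signature."""
    scores = {c: float(token_signature @ sig)
              for c, sig in client_signatures.items()}
    best = max(scores, key=scores.get)
    return best, scores

# Toy example: late-layer activations/gradients for one generated token,
# matched against two hypothetical per-client signatures.
token_sig = relevance_weighted_signature(
    np.array([1.0, 2.0, 0.5]), np.array([0.5, 1.0, -2.0]))
clients = {
    "client_A": relevance_weighted_signature(np.array([1.0, 2.0, 0.0]), np.ones(3)),
    "client_B": relevance_weighted_signature(np.array([0.0, 0.0, 1.0]), np.ones(3)),
}
best, scores = attribute_token(token_sig, clients)  # token resembles client_A
```

Restricting this computation to a few late transformer blocks (insight 1) keeps the signature vectors small, which is what makes per-token attribution tractable during autoregressive decoding.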
Problem

Research questions and friction points this paper is trying to address.

Federated Learning
Large Language Models
Token-level Attribution
Client Provenance
Privacy-Preserving AI
Innovation

Methods, ideas, or system contributions that make the work stand out.

token-level attribution
federated learning
provenance
large language models
gradient-based relevance