Networked Information Aggregation via Machine Learning

📅 2025-07-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses efficient information aggregation in distributed learning over directed acyclic graphs (DAGs), where agents possess only local features and predictions from parent nodes, aiming to approximate the globally optimal model. Method: We propose a topology-ordered collaborative training framework that integrates distributed learning with sequential modeling principles, supporting both linear and general hypothesis-class regression. Contribution/Results: We establish that graph depth—measured by the longest path in the DAG—is fundamental to aggregation capacity: shallow topologies (e.g., star graphs) exhibit inherent lower bounds on estimation error, whereas sufficiently deep paths enable local learning errors to converge to those of the full-feature centralized model. Experiments across diverse DAG structures empirically validate aggregation efficacy. Crucially, this work provides the first quantitative characterization linking graph depth to learnability in structured distributed learning, yielding both theoretical foundations and practical design principles for topology-aware distributed systems.

📝 Abstract
We study a distributed learning problem in which learning agents are embedded in a directed acyclic graph (DAG). There is a fixed and arbitrary distribution over feature/label pairs, and each agent or vertex in the graph is able to directly observe only a subset of the features -- potentially a different subset for every agent. The agents learn sequentially in some order consistent with a topological sort of the DAG, committing to a model mapping observations to predictions of the real-valued label. Each agent observes the predictions of their parents in the DAG, and trains their model using both the features of the instance that they directly observe, and the predictions of their parents as additional features. We ask when this process is sufficient to achieve *information aggregation*, in the sense that some agent in the DAG is able to learn a model whose error is competitive with the best model that could have been learned (in some hypothesis class) with direct access to *all* features, despite the fact that no single agent in the network has such access. We give upper and lower bounds for this problem for both linear and general hypothesis classes. Our results identify the *depth* of the DAG as the key parameter: information aggregation can occur over sufficiently long paths in the DAG, assuming that all of the relevant features are well represented along the path. Conversely, there are distributions over which information aggregation cannot occur even in the linear case, and even in arbitrarily large DAGs that lack sufficient depth (such as a hub-and-spokes topology in which the spoke vertices collectively see all the features). We complement our theoretical results with a comprehensive set of experiments.
Problem

Research questions and friction points this paper is trying to address.

Distributed learning in DAGs with partial feature access
Achieving information aggregation across networked agents
Bounds for linear and general hypothesis classes
Innovation

Methods, ideas, or system contributions that make the work stand out.

Distributed learning in DAG-structured networks
Agents use parent predictions as features
Depth of DAG enables information aggregation