

This paper studies probabilistic rates of convergence for consensus+innovations-type algorithms in random, generic networks. For each node, we find a lower bound and a family of upper bounds on the large deviations rate function, thus enabling the computation of the exponential convergence rates for the events of interest on the iterates. Relevant applications include error exponents in distributed hypothesis testing, rates of convergence of beliefs in social learning, and inaccuracy rates in distributed estimation. The bounds on the rate function have a very particular form at each node: they are constructed as the convex envelope between the rate function of the hypothetical fusion center and the rate function corresponding to a certain topological mode of the node's presence. We further show tightness of the discovered bounds for several cases, such as pendant nodes and regular networks, thus establishing the first proof of the large deviations principle for consensus+innovations and social learning in random networks. © 2023 IEEE.
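The abstract refers to consensus+innovations iterates, in which each node mixes its neighbors' current estimates (consensus) and injects its own fresh observation (innovation). A minimal sketch of this class of recursions is given below; the function name, the choice of weight matrix `W`, and the decaying step size `1/(t+1)` are illustrative assumptions, not the exact recursion analyzed in the paper.

```python
import numpy as np

def consensus_innovations(W, observations):
    """Sketch of a consensus+innovations recursion (illustrative, not
    the paper's exact algorithm):

        x(t+1) = W x(t) + (1/(t+1)) * (y(t) - x(t))

    W            : (n, n) row-stochastic weight matrix (consensus step);
                   W[i, j] > 0 only if node j is a neighbor of node i.
    observations : iterable of length-n arrays y(t), one local
                   observation per node per iteration (innovation step).
    """
    x = np.zeros(W.shape[0])  # each node starts from a zero estimate
    for t, y in enumerate(observations):
        # consensus term W @ x averages neighbors' iterates;
        # the innovation term pulls toward the new observation,
        # with a step size decaying like 1/(t+1)
        x = W @ x + (y - x) / (t + 1)
    return x
```

Under standard conditions (connected network on average, i.i.d. observations with finite mean), iterates of this type converge to the network-wide average of the observation means; the paper's contribution is to quantify the exponential rate at which rare deviations from that limit decay at each individual node.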
| Engineering controlled terms: | Inference engines; Learning algorithms |
|---|---|
| Engineering uncontrolled terms | Convergence; Convex analysis; Distributed inference; Inaccuracy rate; Inference algorithm; Large deviations; Network topology; Rate functions; Social learning; Upper bound |
| Engineering main heading: | Network topology |
| Funding sponsor | Funding number | Acronym |
|---|---|---|
| Horizon 2020 Framework Programme | 957337 | H2020 |
This work was supported in part by the European Union’s Horizon 2020 Research and Innovation Program under Grant 957337.
Bajović, D., Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA, United States.
© Copyright 2024 Elsevier B.V., All rights reserved.