Robust Algorithmic Recourse Under Model Multiplicity With Probabilistic Guarantees

Submitted by admin on Wed, 10/23/2024 - 01:52

There is emerging interest in generating robust algorithmic recourse that remains valid even if the model is updated or changed slightly. Toward finding robust algorithmic recourse (or counterfactual explanations), the existing literature often assumes that the original model $m$ and the new model $M$ are close in parameter space, i.e., $\|\text{Params}(M) - \text{Params}(m)\| < \Delta$. However, models can change significantly in parameter space with little to no change in their predictions or accuracy on the given dataset.
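For a linear scorer, such a parameter-space bound yields a simple worst-case validity check via the Cauchy-Schwarz inequality. The sketch below is our own illustration of that idea (the linear-model assumption and all names are ours, not the paper's):

```python
import numpy as np

# Illustrative sketch (our construction, not the paper's method): for a
# linear scorer f_theta(x) = theta @ x, any model with
# ||theta' - theta|| < delta satisfies, by Cauchy-Schwarz,
#     f_theta'(x) >= theta @ x - delta * ||x||.
# A recourse point x is "parameter-robust" if that worst case stays positive.
def is_param_robust(theta, x, delta):
    worst_case_score = theta @ x - delta * np.linalg.norm(x)
    return worst_case_score > 0

theta = np.array([1.0, -0.5])       # hypothetical model parameters
x = np.array([2.0, 1.0])            # candidate recourse point
print(is_param_robust(theta, x, delta=0.1))  # True: survives small shifts
print(is_param_robust(theta, x, delta=2.0))  # False: large shifts can flip it
```

The check is conservative in exactly the way the abstract criticizes: a model can move far in parameter space while leaving predictions unchanged, so this bound can reject recourse that is in fact robust.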

Straggler-Resilient Differentially Private Decentralized Learning

We consider the straggler problem in decentralized learning over a logical ring while preserving user data privacy. Specifically, we extend the recently proposed framework of differential privacy (DP) amplification by decentralization, due to Cyffers and Bellet, to account for overall training latency, comprising both computation and communication latency.
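As a toy picture of the setting (our own sketch, not the paper's protocol or the Cyffers-Bellet scheme), one can imagine a model token walking the logical ring, folding in a noisy local update at each node while per-node delays accumulate into the training latency:

```python
import random

random.seed(1)

# Toy sketch (ours, not the paper's protocol): a model token walks a
# logical ring; each node folds in a noisy local update (Gaussian noise
# standing in for DP), and per-node computation plus communication delays
# accumulate into the overall latency. Nodes drawing large delays are the
# stragglers that dominate it.
def ring_round(n_nodes=8, noise_std=0.1):
    token, latency = 0.0, 0.0
    for _ in range(n_nodes):
        local_update = 1.0                          # stand-in local gradient
        token += local_update + random.gauss(0.0, noise_std)
        latency += random.expovariate(1.0) + 0.05   # comp delay + comm delay
    return token / n_nodes, latency

avg_update, total_latency = ring_round()
print(avg_update, total_latency)
```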

Summary Statistic Privacy in Data Sharing

We study a setting where a data holder wishes to share data with a receiver without revealing certain summary statistics of the data distribution (e.g., mean, standard deviation). The data holder achieves this by passing the data through a randomization mechanism. We propose summary statistic privacy, a metric that quantifies the privacy risk of such a mechanism as the worst-case probability of an adversary guessing the distributional secret to within some threshold.
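A guessing-probability metric of this kind can be estimated by Monte Carlo. The toy instantiation below is ours (the shared-offset mechanism and the mean-as-secret setup are illustrative assumptions, not the paper's construction):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy instantiation of the metric (our mechanism, not the paper's): the
# secret is the distribution mean; the mechanism releases the data shifted
# by a single random Gaussian offset; the adversary guesses the released
# empirical mean. Risk ~ P(|guess - secret| <= tolerance).
def attack_success_rate(noise_std, tolerance=0.05, n=100, trials=4000):
    hits = 0
    for _ in range(trials):
        data = rng.normal(0.0, 1.0, size=n)           # true secret mean: 0
        released = data + rng.normal(0.0, noise_std)  # one shared offset
        hits += abs(released.mean()) <= tolerance
    return hits / trials

weak, strong = attack_success_rate(0.01), attack_success_rate(5.0)
print(weak, strong)  # stronger noise drives the guessing probability down
```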

On the Fundamental Limit of Distributed Learning With Interchangeable Constrained Statistics

In popular federated learning scenarios, distributed nodes often represent and exchange information through functions or statistics of their data, with communication constrained by the dimensionality of the transmitted information. This paper investigates the fundamental limits of distributed parameter estimation and model training under such constraints. Specifically, we assume that each node observes a sequence of i.i.d. sampled data and communicates statistics of the observed data subject to dimensionality constraints.
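To make the dimensionality constraint concrete, here is a minimal sketch under assumptions of our own (Gaussian mean estimation, a round-robin coordinate assignment); it is not the paper's scheme:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy sketch of the setting (ours, not the paper's scheme): the unknown
# parameter is the mean of d-dimensional Gaussian data, but each node may
# transmit only a k-dimensional statistic. Nodes round-robin over
# coordinate blocks and report the sample mean of their assigned block.
d, k = 8, 2
theta = np.linspace(0.0, 1.0, d)      # unknown parameter to estimate
n_nodes, n_samples = 40, 200

sums, counts = np.zeros(d), np.zeros(d)
for i in range(n_nodes):
    data = rng.normal(theta, 1.0, size=(n_samples, d))
    coords = np.arange(i * k, (i + 1) * k) % d   # this node's k coordinates
    sums[coords] += data[:, coords].mean(axis=0) # k numbers on the wire
    counts[coords] += 1

theta_hat = sums / counts
print(np.abs(theta_hat - theta).max())  # small estimation error
```

Each message carries only k = 2 numbers regardless of d, which is the kind of constraint whose fundamental cost the paper analyzes.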

Information-Theoretic Tools to Understand Distributed Source Coding in Neuroscience

This paper brings together topics from two of Berger's main contributions to information theory: distributed source coding and living information theory. Our goal is to understand which information-theoretic techniques can help explain a distributed source coding strategy used by the natural world. Toward this goal, we study the encoding of an animal's location by the grid cells in its brain.
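Grid cells are often described as a residue code: each module reports position modulo its own spatial period, and the modules jointly resolve a much larger range. The sketch below is our illustrative caricature of that idea (periods and decoding are ours, not the paper's model):

```python
from math import prod

# Toy sketch of a grid-cell-like residue code (our illustration): each
# grid-cell module reports the animal's position modulo its own spatial
# period; jointly, the modules pin down the position over a much larger
# range, as in the Chinese remainder theorem.
periods = [3, 4, 5]                  # pairwise-coprime periods; range = 60

def encode(position):
    return [position % p for p in periods]

def decode(residues):
    # brute-force CRT: the unique position in [0, 60) matching all residues
    for pos in range(prod(periods)):
        if all(pos % p == r for p, r in zip(periods, residues)):
            return pos

print(decode(encode(37)))  # recovers 37 from the residues [1, 1, 2]
```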

Secure Source Coding Resilient Against Compromised Users via an Access Structure

Consider a source and multiple users who observe independent and identically distributed (i.i.d.) copies of correlated Gaussian random variables. The source wishes to compress its observations and store the result in a public database such that (i) authorized sets of users can reconstruct the source within a certain distortion level, and (ii) information leakage to non-authorized sets of colluding users is minimized. In other words, recovery of the source is restricted to a predefined access structure.
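A classical example of restricting recovery to an access structure is Shamir secret sharing over a threshold structure: any t of n users are authorized, any fewer learn nothing. This sketch is the textbook scheme, offered as context only; it is not the paper's Gaussian construction:

```python
import random

random.seed(0)
P = 2**61 - 1  # prime modulus for the finite field

# Classic Shamir secret sharing (context for access structures, not the
# paper's scheme): the secret is the constant term of a random degree-(t-1)
# polynomial; user i holds the evaluation at i. Any t users interpolate the
# polynomial at 0; fewer than t see uniformly random shares.
def share(secret, n, t):
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(i, sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def reconstruct(shares):
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P          # Lagrange numerator at 0
                den = den * (xi - xj) % P      # Lagrange denominator
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = share(12345, n=5, t=3)
print(reconstruct(shares[:3]))  # any 3 users recover the secret: 12345
```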

Neural Distributed Source Coding

We consider the Distributed Source Coding (DSC) problem, in which an input must be encoded without access to correlated side information that is available only to the decoder. Remarkably, Slepian and Wolf showed in 1973 that an encoder without access to the side information can asymptotically achieve the same compression rate as when the side information is available to it. This seminal result was later extended to lossy compression of distributed sources by Wyner, Ziv, Berger, and Tung.
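The classical constructive route to this result is syndrome (binning) coding. Here is a standard toy instance of that idea, included for context rather than taken from the paper:

```python
import numpy as np

# Toy instance of Wyner's syndrome trick behind these results (a textbook
# example, not the paper's method): x is 7 bits and the decoder's side
# information y differs from x in at most one position. The encoder sends
# only the 3-bit Hamming syndrome of x instead of all 7 bits; the decoder
# uses y plus the syndrome to recover x exactly.
H = np.array([[1, 0, 1, 0, 1, 0, 1],     # Hamming(7,4) parity-check matrix:
              [0, 1, 1, 0, 0, 1, 1],     # column j holds the binary
              [0, 0, 0, 1, 1, 1, 1]])    # expansion of j+1

def encode(x):                            # 7 bits in, 3 syndrome bits out
    return H @ x % 2

def decode(y, syndrome):
    s = (H @ y + syndrome) % 2            # syndrome of the error x XOR y
    if s.any():
        pos = int(s[0] + 2 * s[1] + 4 * s[2]) - 1   # error position
        y = y.copy()
        y[pos] ^= 1
    return y

x = np.array([1, 0, 1, 1, 0, 0, 1])
y = x.copy(); y[4] ^= 1                   # side info: one bit flipped
print(np.array_equal(decode(y, encode(x)), x))  # True: x recovered
```

Three transmitted bits replace seven, which is the flavor of rate saving that Slepian-Wolf guarantees asymptotically for general correlated sources.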

Controlled Privacy Leakage Propagation Throughout Overlapping Grouped Learning

Federated Learning (FL) is the standard protocol for collaborative learning. In FL, multiple workers jointly train a shared model, exchanging model updates computed on their data while keeping the raw data itself local. Since workers naturally form groups based on common interests and privacy policies, we are motivated to extend standard FL to a setting with multiple, potentially overlapping groups.
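The overlap is the interesting part: a worker in two groups contributes to, and leaks into, both group models. A minimal sketch of grouped averaging under our own toy setup (scalar updates, hypothetical worker names; not the paper's protocol):

```python
# Toy sketch (ours, not the paper's protocol): workers belong to possibly
# overlapping groups, and each group averages updates only from its own
# members, so a worker sitting in two groups contributes to both models.
groups = {"A": ["w1", "w2"], "B": ["w2", "w3", "w4"]}   # w2 overlaps
updates = {"w1": 1.0, "w2": 2.0, "w3": 3.0, "w4": 4.0}  # scalar stand-ins

group_models = {g: sum(updates[w] for w in members) / len(members)
                for g, members in groups.items()}
print(group_models)  # {'A': 1.5, 'B': 3.0}
```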

Long-Term Fairness in Sequential Multi-Agent Selection With Positive Reinforcement

While much of the rapidly growing literature on fair decision-making focuses on metrics for one-shot decisions, recent work has raised the intriguing possibility of designing sequential decision-making to positively impact long-term social fairness. In selection processes such as college admissions or hiring, biasing slightly towards applicants from under-represented groups is hypothesized to provide positive feedback that increases the pool of under-represented applicants in future selection rounds, thus enhancing fairness in the long term.
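The hypothesized feedback loop can be made concrete with a deliberately simple dynamical sketch (our illustration, with made-up parameters; not the paper's model):

```python
# Toy simulation (ours, not the paper's model): each round, a fraction p of
# applicants belongs to the under-represented group. A bias toward that
# group raises its selection share, and selections feed back into the next
# round's applicant pool.
def simulate(bias, rounds=30, p=0.1, feedback=0.5):
    for _ in range(rounds):
        selected_share = min(1.0, p * (1 + bias))   # bias > 0 favors group
        p += feedback * (selected_share - p)        # pool drifts toward it
    return p

print(simulate(bias=0.0))  # no bias: the pool fraction stays at 0.1
print(simulate(bias=0.3))  # slight bias compounds across rounds
```

Even a small per-round bias compounds multiplicatively, which is the long-term effect the paper studies.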

Information Velocity of Cascaded Gaussian Channels With Feedback

We consider a line network of nodes connected by additive white Gaussian noise channels and equipped with local feedback, and we study the velocity at which information spreads over this network. For the transmission of a data packet, we give an explicit positive lower bound on the velocity for any packet size. Furthermore, we consider streaming, that is, the transmission of data packets generated at a given average arrival rate. We show that a positive velocity exists as long as the arrival rate is below the individual Gaussian channel capacity, and we provide an explicit lower bound.
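The threshold in the last claim is the familiar point-to-point quantity. As a back-of-the-envelope check (ours, with a hypothetical SNR), the arrival rate in bits per channel use is compared against the per-hop Gaussian capacity:

```python
import math

# Streaming with positive velocity requires the arrival rate (bits per
# channel use) to stay below the per-hop Gaussian channel capacity
#     C = 0.5 * log2(1 + SNR).
def gaussian_capacity(snr):
    return 0.5 * math.log2(1 + snr)

C = gaussian_capacity(7.0)          # hypothetical SNR of 7 gives C = 1.5
print(C, 1.2 < C, 1.8 < C)          # rate 1.2 can stream; rate 1.8 cannot
```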