A Coherence Parameter Characterizing Generative Compressed Sensing With Fourier Measurements

In Bora et al. (2017), a mathematical framework was developed for compressed sensing guarantees in the setting where the measurement matrix is Gaussian and the signal structure is the range of a generative neural network (GNN). The problem of compressed sensing with GNNs has since been extensively analyzed when the measurement matrix and/or network weights follow a subgaussian distribution.
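
To make the setting concrete, the sketch below (a minimal illustration, not the paper's theory or guarantees) recovers a signal lying in the range of a small random-weight ReLU generator from subsampled-Fourier measurements by running gradient descent on the latent variable, i.e., minimizing $\|A G(z) - y\|^2$. The toy generator, all dimensions, and the optimizer settings are assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, m = 128, 8, 40                        # signal dimension, latent dimension, # measurements

# Toy generator G(z) = W2 relu(W1 z) with fixed random weights (a stand-in for a trained GNN).
W1 = rng.standard_normal((64, k)) / np.sqrt(k)
W2 = rng.standard_normal((n, 64)) / np.sqrt(64)
G = lambda z: W2 @ np.maximum(W1 @ z, 0.0)

# Subsampled unitary DFT rows as the Fourier measurement operator A.
F = np.fft.fft(np.eye(n)) / np.sqrt(n)
A = F[rng.choice(n, size=m, replace=False)]

x_true = G(rng.standard_normal(k))
y = A @ x_true                              # noiseless Fourier measurements

def descend(z, iters=3000, step=1e-3):
    """Gradient descent on ||A G(z) - y||^2 over the latent variable z."""
    for _ in range(iters):
        h = W1 @ z
        x = W2 @ np.maximum(h, 0.0)
        grad_x = 2.0 * np.real(A.conj().T @ (A @ x - y))        # gradient w.r.t. x
        z = z - step * (W1.T @ ((h > 0) * (W2.T @ grad_x)))     # chain rule through G
    return z

# The latent problem is non-convex, so keep the best of a few random restarts.
best = min((descend(rng.standard_normal(k)) for _ in range(5)),
           key=lambda z: np.linalg.norm(A @ G(z) - y))
print("relative recovery error:", np.linalg.norm(G(best) - x_true) / np.linalg.norm(x_true))
```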

Deep Model-Based Architectures for Inverse Problems Under Mismatched Priors

There is a growing interest in deep model-based architectures (DMBAs) for solving imaging inverse problems by combining physical measurement models and learned image priors specified using convolutional neural nets (CNNs). For example, well-known frameworks for systematically designing DMBAs include plug-and-play priors (PnP), deep unfolding (DU), and deep equilibrium models (DEQ).
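
As a concrete reference point, here is a minimal sketch in the PnP spirit: a proximal-gradient loop for $y = Ax + \text{noise}$ in which the proximal step is replaced by a call to a generic denoiser. The soft-thresholding "denoiser", the Gaussian measurement matrix, and all parameters are illustrative stand-ins, not any particular DMBA from the literature.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 256, 128
A = rng.standard_normal((m, n)) / np.sqrt(m)         # assumed Gaussian measurement model
x_true = np.zeros(n)
x_true[rng.choice(n, size=10, replace=False)] = 1.0  # sparse ground-truth signal
y = A @ x_true + 0.01 * rng.standard_normal(m)

def denoiser(v, tau=0.02):
    """Placeholder prior: soft-thresholding; a trained CNN denoiser would go here."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

x = np.zeros(n)
step = 0.9 / np.linalg.norm(A, 2) ** 2               # step size below 1/L for the data term
for _ in range(300):
    grad = A.T @ (A @ x - y)                         # gradient of the data-fidelity term
    x = denoiser(x - step * grad)                    # proximal step replaced by the denoiser
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```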

Symmetric Private Information Retrieval at the Private Information Retrieval Rate

We consider the problem of symmetric private information retrieval (SPIR) with user-side common randomness. In SPIR, a user retrieves a message out of $K$ messages from $N$ non-colluding and replicated databases in such a way that no single database knows the retrieved message index (user privacy), and the user learns nothing beyond the retrieved message (database privacy), i.e., the privacy constraint between the user and the databases is symmetric.
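
For intuition, the toy below implements the classical two-database, one-bit-message SPIR scheme in which the databases hold shared (database-side) common randomness. It is meant only to illustrate the two privacy constraints; it is not the user-side common-randomness scheme studied in this work.

```python
import secrets

K = 8                                            # number of one-bit messages
W = [secrets.randbits(1) for _ in range(K)]      # contents, replicated at both databases
i = 5                                            # index the user wants to retrieve

# User privacy: each database sees a query that is uniformly random on its own.
S1 = [secrets.randbits(1) for _ in range(K)]     # random subset for database 1
S2 = S1.copy()
S2[i] ^= 1                                       # same subset with index i toggled, for database 2

# Database privacy: both databases mask their one-bit answer with shared randomness r,
# so each answer alone reveals nothing about the other messages.
r = secrets.randbits(1)
answer1 = r ^ (sum(W[j] for j in range(K) if S1[j]) % 2)
answer2 = r ^ (sum(W[j] for j in range(K) if S2[j]) % 2)

retrieved = answer1 ^ answer2                    # masks and common terms cancel, leaving W[i]
assert retrieved == W[i]
print("retrieved bit:", retrieved)
```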

Inventing Codes for Channels With Active Feedback via Deep Learning

Designing reliable codes for channels with feedback, which has significant theoretical and practical importance, is one of the long-standing open problems in coding theory. While there are numerous prior works on analytical codes for channels with feedback, the majority of them focus on channels with noiseless output feedback, where the optimal coding scheme is still unknown. For channels with noisy feedback, deriving analytical codes becomes even more challenging, and much less is known.

Denoising Generalized Expectation-Consistent Approximation for MR Image Recovery

To solve inverse problems, plug-and-play (PnP) methods replace the proximal step in a convex optimization algorithm with a call to an application-specific denoiser, often implemented using a deep neural network (DNN). Although such methods yield accurate solutions, they can be improved. For example, denoisers are usually designed/trained to remove white Gaussian noise, but the denoiser input error in PnP algorithms is usually far from white or Gaussian.
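
The following toy check (illustrative assumptions only; it is not this paper's MRI setup or algorithm) makes the last point tangible: with low-frequency Fourier sampling of a piecewise-constant signal, the error entering the "denoiser" in a plain gradient-iteration loop is strongly correlated across samples, i.e., far from white.

```python
import numpy as np

n = 128
x_true = np.repeat(np.array([0.0, 1.0, -0.5, 2.0]), n // 4)   # piecewise-constant signal
F = np.fft.fft(np.eye(n)) / np.sqrt(n)                        # unitary DFT
low = np.r_[0:16, 113:128]                                    # conjugate-symmetric low-frequency samples
A = F[low]
y = A @ x_true                                                # noiseless "k-space" measurements

x = np.real(A.conj().T @ y)                                   # zero-filled starting point
for _ in range(50):
    v = x + np.real(A.conj().T @ (y - A @ x))                 # v is what a denoiser would be given
    x = v                                                     # identity "denoiser" for this check

err = v - x_true
lag1 = np.corrcoef(err[:-1], err[1:])[0, 1]
print(f"lag-1 autocorrelation of the denoiser-input error: {lag1:.2f} (white noise would give ~0)")
```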

Straggler-Resilient Federated Learning: Leveraging the Interplay Between Statistical Accuracy and System Heterogeneity

Federated learning is a novel paradigm that involves learning from data samples distributed across a large network of clients while the data remains local. It is, however, known that federated learning is prone to multiple system challenges including system heterogeneity where clients have different computation and communication capabilities. Such heterogeneity in clients' computation speed has a negative effect on the scalability of federated learning algorithms and causes significant slow-down in their runtime due to slow devices (stragglers).

Balanced Nonadaptive Redundancy Scheduling

Distributed computing systems implement redundancy to reduce job completion time and its variability. Despite a large body of work on redundancy in distributed computing, the analytical performance evaluation of redundancy techniques in queuing systems remains an open problem. In this work, we take a step toward analyzing the performance of scheduling policies in systems with redundancy. In particular, we study the pattern of shared servers among replicas of different jobs.
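
As a small illustration of what a "pattern of shared servers among replicas" looks like, the snippet below places d replicas of each job on consecutive servers in a cyclic pattern (an assumed placement, not this paper's policy) and tabulates how many servers each pair of jobs shares.

```python
from collections import Counter
from itertools import combinations

N, d = 8, 3                                           # N servers and N jobs, d replicas per job
placement = {j: {(j + k) % N for k in range(d)} for j in range(N)}  # cyclic replica placement

overlap = Counter(len(placement[a] & placement[b]) for a, b in combinations(placement, 2))
print("job pairs grouped by number of shared servers:", dict(sorted(overlap.items())))
```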

Efficient Randomized Subspace Embeddings for Distributed Optimization Under a Communication Budget

We study first-order optimization algorithms under the constraint that the descent direction is quantized using a pre-specified budget of $R$ bits per dimension, where $R \in (0,\infty)$. We propose computationally efficient optimization algorithms with convergence rates matching the information-theoretic performance lower bounds for: (i) Smooth and Strongly-Convex objectives with access to an Exact Gradient oracle, as well as (ii) General Convex and Non-Smooth objectives with access to a Noisy Subgradient oracle.
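
The sketch below shows the basic constraint in action on a strongly convex quadratic: each gradient is passed through a simple uniform scalar quantizer at a few bits per dimension before the descent step is taken. The quantizer, the per-step scale (which would itself need to be communicated), and all problem parameters are assumptions; the paper's randomized subspace embeddings are not implemented here.

```python
import numpy as np

def quantize(v, bits, scale):
    """Uniform scalar quantizer: clip to [-scale, scale] and keep 2**bits levels per coordinate."""
    levels = 2 ** bits
    scale = max(scale, 1e-12)                          # guard against an exactly-zero gradient
    idx = np.round((np.clip(v, -scale, scale) + scale) / (2 * scale) * (levels - 1))
    return idx / (levels - 1) * (2 * scale) - scale

rng = np.random.default_rng(3)
n = 50
M = rng.standard_normal((n, n))
Q = M @ M.T / n + np.eye(n)                            # strongly convex quadratic objective
b = rng.standard_normal(n)
x_star = np.linalg.solve(Q, b)                         # unconstrained optimum, for reference

x = np.zeros(n)
step = 1.0 / np.linalg.norm(Q, 2)
for _ in range(800):
    grad = Q @ x - b
    # Only the quantized direction (plus one scalar scale) would be communicated per step.
    x = x - step * quantize(grad, bits=4, scale=np.abs(grad).max())

print("final distance to the optimum:", np.linalg.norm(x - x_star))
```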

Successive Approximation Coding for Distributed Matrix Multiplication

Coded distributed computing was recently introduced to mitigate the effect of stragglers on distributed computing systems. This paper combines ideas of approximate and coded computing to further accelerate computation. We propose successive approximation coding (SAC) techniques that realize a tradeoff between accuracy and speed, allowing the distributed computing system to produce approximations that increase in accuracy over time. If a sufficient number of compute nodes finish their tasks, SAC exactly recovers the desired computation.
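
One hedged way to picture the accuracy/speed tradeoff (not the paper's SAC construction) is to decompose the matrix into additive layers ordered by importance, hand one layer to each worker, and accumulate results as workers return: the running sum becomes an increasingly accurate approximation of the product and is exact once every layer arrives. The SVD-based layering and all sizes below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((60, 40)) @ rng.standard_normal((40, 40))   # matrix to be multiplied
x = rng.standard_normal(40)
exact = A @ x

# Additive layers ordered by importance: A equals the sum of its rank-one SVD terms.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
layers = [s[k] * np.outer(U[:, k], Vt[k]) for k in range(len(s))]

partial = np.zeros_like(exact)
for k, layer in enumerate(layers, start=1):            # workers "return" in importance order
    partial += layer @ x                               # each worker contributes one layer's product
    if k in (1, 5, 20, len(layers)):
        err = np.linalg.norm(partial - exact) / np.linalg.norm(exact)
        print(f"after {k:2d} worker(s): relative error {err:.2e}")
```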

Peer-to-Peer Variational Federated Learning Over Arbitrary Graphs

This paper proposes a federated supervised learning framework over a general peer-to-peer network with agents that act in a variational Bayesian fashion. The proposed framework consists of local agents, each of which keeps a local “posterior probability distribution” over the parameters of a global model; the posterior is updated locally over time via two subroutines: 1) variational model training given (a batch of) local labeled data, and 2) asynchronous communication and model aggregation with the 1-hop neighbors.
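
To illustrate the two subroutines, the sketch below (all modeling choices are assumptions, not the authors' algorithm) has each agent maintain a Gaussian "posterior" over a scalar parameter, perform a conjugate local update from a fresh batch of data, and then average natural parameters with its 1-hop neighbors on a ring graph.

```python
import numpy as np

rng = np.random.default_rng(5)
theta_true, noise_var, agents = 2.0, 1.0, 5
# Ring graph; each agent's 1-hop neighborhood includes itself.
neighbors = {i: [(i - 1) % agents, i, (i + 1) % agents] for i in range(agents)}

mean = np.zeros(agents)                   # every agent starts from the prior N(0, 1)
prec = np.ones(agents)
for _ in range(20):                       # rounds of local training + neighbor aggregation
    # 1) Local update: exact conjugate Gaussian posterior update from a fresh local batch.
    for i in range(agents):
        batch = theta_true + np.sqrt(noise_var) * rng.standard_normal(10)
        new_prec = prec[i] + len(batch) / noise_var
        mean[i] = (prec[i] * mean[i] + batch.sum() / noise_var) / new_prec
        prec[i] = new_prec
    # 2) Aggregation with 1-hop neighbors: average the natural parameters (prec, prec * mean).
    nat1 = np.array([np.mean([prec[j] * mean[j] for j in neighbors[i]]) for i in range(agents)])
    nat2 = np.array([np.mean([prec[j] for j in neighbors[i]]) for i in range(agents)])
    mean, prec = nat1 / nat2, nat2

print("per-agent posterior means:", np.round(mean, 3))
```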