Theoretical Perspectives on Deep Learning Methods in Inverse Problems

In recent years, there have been significant advances in the use of deep learning methods in inverse problems such as denoising, compressive sensing, inpainting, and super-resolution. While this line of work has predominantly been driven by practical algorithms and experiments, it has also given rise to a variety of intriguing theoretical problems. In this paper, we survey some of the prominent theoretical developments in this area, focusing in particular on generative priors, untrained neural network priors, and unfolding algorithms.
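
As a concrete instance of the generative-prior approach surveyed here, the sketch below (a minimal NumPy illustration; the toy random two-layer "generator", the dimensions, and the step size are all chosen for exposition, not taken from the paper) recovers a signal from compressive measurements $y = Ax$ by gradient descent on the latent code: minimize $\|A G(z) - y\|_2^2$ over $z$ and output $G(z)$.

```python
import numpy as np

rng = np.random.default_rng(0)
k, d, m = 5, 50, 20   # latent dim, signal dim, number of measurements (illustrative)

# Toy untrained "generator": a random two-layer tanh network.
W1 = rng.normal(size=(32, k)) / np.sqrt(k)
W2 = rng.normal(size=(d, 32)) / np.sqrt(32)

def G(z):
    return W2 @ np.tanh(W1 @ z)

A = rng.normal(size=(m, d)) / np.sqrt(m)   # random Gaussian measurement matrix
x_true = G(rng.normal(size=k))             # signal in the range of the generator
y = A @ x_true                             # noiseless compressive measurements

# Recover by gradient descent on z for the loss 0.5 * ||A G(z) - y||^2.
z = rng.normal(size=k)
for _ in range(2000):
    h = np.tanh(W1 @ z)
    r = A @ (W2 @ h) - y                             # residual
    grad = W1.T @ ((1 - h**2) * (W2.T @ (A.T @ r)))  # chain rule through G
    z -= 0.1 * grad

print("relative error:", np.linalg.norm(G(z) - x_true) / np.linalg.norm(x_true))
```

The optimization is nonconvex, so the descent may stop at a local minimum; part of the theory surveyed above concerns when such recovery provably succeeds.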

Local Decoding in Distributed Compression

A recent result shows that lossless compression of a single source $X^{n}$ is achievable with a strong locality property: any $X_{i}$ can be decoded from a constant number of compressed bits, with a probability of error that vanishes in $n$. By contrast, we show that for two separately encoded sources $(X^{n},Y^{n})$, lossless compression and strong locality are generally not simultaneously achievable. Specifically, we show that for the class of “confusable” sources, strong locality cannot be achieved whenever one of the sources is compressed below its entropy.
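
One plausible way to formalize the strong locality property (the local decoders $\phi_i$, probe sets $S_i$, and constant $c$ are notation introduced here for illustration):

$$
|S_i| \le c \quad \text{for all } i, \qquad \max_{1 \le i \le n} \Pr\big[\phi_i(C_{S_i}) \neq X_i\big] \longrightarrow 0 \quad \text{as } n \to \infty,
$$

where $C_{S_i}$ denotes the compressed bits indexed by $S_i$. The negative result above says that no such decoders exist for confusable $(X^{n},Y^{n})$ once one of the sources is compressed below its entropy.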

Tail Redundancy and its Characterization of Compression of Memoryless Sources

We formalize the tail redundancy of a collection ${\mathcal{P}}$ of distributions over a countably infinite alphabet, and show that this fundamental quantity characterizes the asymptotic per-symbol minimax redundancy of universally compressing sequences generated i.i.d. from ${\mathcal{P}}$. Contrary to the worst-case formulations of universal compression, finite single-letter minimax (average-case) redundancy of ${\mathcal{P}}$ does not automatically imply that the expected minimax redundancy of describing length-$n$ strings sampled i.i.d. from ${\mathcal{P}}$ grows sublinearly in $n$.
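
For reference, the average-case minimax redundancy of describing length-$n$ strings is the standard quantity (notation introduced here; $p^{n}$ denotes the $n$-fold product of $p$, and $q$ ranges over distributions on length-$n$ strings):

$$
R_n({\mathcal{P}}) \;=\; \inf_{q}\, \sup_{p \in {\mathcal{P}}}\, \mathbb{E}_{p}\!\left[\log \frac{p^{n}(X^{n})}{q(X^{n})}\right],
$$

and the per-symbol redundancy is $R_n({\mathcal{P}})/n$, whose asymptotics the paper characterizes in terms of the tail redundancy of ${\mathcal{P}}$.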

Compressing Multisets With Large Alphabets

Current methods that compress multisets at an optimal rate have computational complexity that scales linearly with the alphabet size, making them too slow to be practical in many real-world settings. We show how to convert a compression algorithm for sequences into one for multisets, in exchange for an additional complexity term that is quasi-linear in the sequence length. This allows us to compress multisets of exchangeable symbols at an optimal rate, with computational complexity decoupled from the alphabet size.
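
The rate saving at stake is the log-count of distinct orderings of the multiset, i.e., what an optimal multiset code saves relative to an optimal sequence code. A small sketch (the function name and example are illustrative) computes this saving in bits:

```python
from math import lgamma, log
from collections import Counter

def multiset_saving_bits(symbols):
    """Bits saved by coding the multiset instead of the ordered sequence:
    log2 of the number of distinct orderings, n! / prod_a(m_a!),
    where m_a is the multiplicity of symbol a."""
    n = len(symbols)
    counts = Counter(symbols)
    log_orderings = lgamma(n + 1) - sum(lgamma(m + 1) for m in counts.values())
    return log_orderings / log(2)  # nats -> bits

print(multiset_saving_bits("abracadabra"))  # ~16.3 bits for these 11 symbols
```

The conversion described above realizes this saving while paying only a quasi-linear-in-$n$ additional computational cost, independent of the alphabet size.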

Strategic Successive Refinement With Interdependent Decoders Cost Functions

In decentralized and decision-oriented communication paradigms, autonomous devices strategically implement information compression policies. In this work, we study a strategic communication game between an encoder and two decoders. An i.i.d. information source, observed by the encoder, is transmitted to the decoders via two perfect links, one reaching the first decoder only and the other reaching both decoders, as in the successive refinement setup.
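
One way to write such a game down (a Stackelberg formulation with cost functions $c_e, c_1, c_2$ and decoder actions $V_1, V_2$; this notation is introduced here for illustration and may differ from the paper's): the encoder commits to a compression strategy $\sigma$, anticipating that each decoder best-responds, and "interdependent" means that each decoder's cost depends on both decoders' actions:

$$
\min_{\sigma}\ \mathbb{E}\big[c_e(X, V_1, V_2)\big] \qquad \text{s.t.} \qquad V_k \in \arg\min_{v_k}\ \mathbb{E}\big[c_k(X, v_k, V_{-k})\big], \quad k \in \{1,2\},
$$

where $V_{-k}$ denotes the other decoder's action.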

Universal Gaussian Quantization With Side-Information Using Polar Lattices

We consider universal quantization with side information for Gaussian observations, where the side information is a noisy version of the sender's observation with noise variance unknown to the sender. In this paper, we propose a universally rate-optimal and practical quantization scheme for all values of the unknown noise variance.
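
For orientation, in the quadratic-Gaussian case with side information $Y = X + N$, $N \sim \mathcal{N}(0, \sigma_N^2)$, the benchmark such a universal scheme must approach for every value of the unknown $\sigma_N^2$ is the Wyner-Ziv rate-distortion function (standard notation, stated here for context):

$$
R_{\mathrm{WZ}}(D) \;=\; \frac{1}{2}\log^{+}\frac{\sigma_{X \mid Y}^{2}}{D}, \qquad \sigma_{X \mid Y}^{2} \;=\; \frac{\sigma_X^2\,\sigma_N^2}{\sigma_X^2 + \sigma_N^2},
$$

where $\sigma_X^2$ is the source variance and $\log^{+} t = \max\{\log t, 0\}$.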