Understanding GANs in the LQG Setting: Formulation, Generalization and Stability

Generative Adversarial Networks (GANs) have become a popular method for learning a probability model from data. In this paper, we provide an understanding of basic issues surrounding GANs, including their formulation, generalization, and stability, on a simple LQG benchmark where the generator is Linear, the discriminator is Quadratic, and the data has a high-dimensional Gaussian distribution.
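
A minimal numerical sketch of this benchmark may help: under the Wasserstein-2 loss considered in this line of work, the optimal linear generator for Gaussian data reduces to PCA of the data covariance. The dimensions, spectrum, and variable names below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Hedged sketch of the LQG benchmark: data x ~ N(0, Sigma), a linear
# generator G(z) = A z with z ~ N(0, I_k). Under the Wasserstein-2 loss
# the optimal linear generator performs PCA, so we recover A from the
# top-k eigenvectors of the empirical covariance.
rng = np.random.default_rng(0)
d, k, n = 10, 3, 100_000            # illustrative sizes

# Ground-truth covariance with a decaying spectrum (illustrative choice).
eigvals = np.linspace(5.0, 0.5, d)
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
Sigma = Q @ np.diag(eigvals) @ Q.T

X = rng.multivariate_normal(np.zeros(d), Sigma, size=n)
S_hat = np.cov(X, rowvar=False)      # empirical covariance

# Top-k eigendecomposition -> linear generator A = U_k diag(sqrt(lam_k)).
lam, U = np.linalg.eigh(S_hat)
idx = np.argsort(lam)[::-1][:k]
A = U[:, idx] @ np.diag(np.sqrt(lam[idx]))

# The generated distribution G(z) is N(0, A A^T): a rank-k PCA model.
print("rank of generated covariance:", np.linalg.matrix_rank(A @ A.T))
```

Running this yields a generator whose covariance A Aᵀ is the best rank-k approximation of the empirical data covariance.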

The Information Bottleneck Problem and its Applications in Machine Learning

Inference capabilities of machine learning (ML) systems have skyrocketed in recent years, and they now play a pivotal role in various aspects of society. The goal in statistical learning is to use data to obtain simple algorithms for predicting a random variable Y from a correlated observation X. Since the dimension of X is typically huge, computationally feasible solutions should summarize it into a lower-dimensional feature vector T, from which Y is predicted.
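
For reference, the standard information bottleneck Lagrangian that formalizes this compression-relevance tradeoff can be written as follows (the notation is the conventional one, assumed here rather than quoted from the paper):

```latex
% Information bottleneck Lagrangian: compress X into a representation T
% that stays informative about Y; \beta trades compression for relevance.
\min_{p(t \mid x)} \; I(X;T) - \beta \, I(T;Y)
\quad \text{subject to the Markov chain } Y \leftrightarrow X \leftrightarrow T .
```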

MaxiMin Active Learning in Overparameterized Model Classes

Generating labeled training datasets has become a major bottleneck in Machine Learning (ML) pipelines. Active ML aims to address this issue by designing learning algorithms that automatically and adaptively select the most informative examples for labeling so that human time is not wasted labeling irrelevant, redundant, or trivial examples. This paper proposes a new approach to active ML with nonparametric or overparameterized models such as kernel methods and neural networks.
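
One plausible reading of a maximin selection rule for minimum-norm interpolating models is sketched below; the kernel, the binary-label setting, and the exact scoring rule are illustrative assumptions and may differ from the paper's precise criterion.

```python
import numpy as np

def rbf(A, B, gamma=1.0):
    # Gaussian kernel matrix between row-stacked point sets A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def minnorm_sqnorm(X, y, reg=1e-8):
    # Squared RKHS norm of the minimum-norm interpolant of (X, y):
    # ||h||^2 = y^T K^{-1} y (small ridge term for numerical stability).
    K = rbf(X, X) + reg * np.eye(len(X))
    return float(y @ np.linalg.solve(K, y))

def maximin_query(X_lab, y_lab, X_pool):
    # For each candidate x and hypothesized label y in {-1, +1}, measure
    # the norm of the interpolant of the augmented set; query the point
    # whose easier labeling (the min over y) still forces the largest
    # norm (the max over candidates).
    scores = []
    for x in X_pool:
        Xa = np.vstack([X_lab, x])
        norms = [minnorm_sqnorm(Xa, np.append(y_lab, y)) for y in (-1.0, 1.0)]
        scores.append(min(norms))
    return int(np.argmax(scores))
```

The intuition behind the maximin structure: query the point for which even the most plausible labeling still requires a complex interpolant, i.e., where the current model is genuinely undecided.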

Expression of Fractals Through Neural Network Functions

To help understand the underlying mechanisms of neural networks (NNs), several groups have studied the number of linear regions ℓ of the piecewise linear (PwL) functions generated by deep neural networks (DNNs). In particular, they showed that ℓ can grow exponentially with the number of network parameters p, a property often used to explain the advantages of deep over shallow NNs.
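
A classic worked instance of this exponential growth (the sawtooth construction, in the spirit of Telgarsky's depth-separation argument; not code from the paper) composes a two-unit ReLU "hat" network with itself n times, producing 2^n linear pieces from only O(n) parameters:

```python
import numpy as np

# The two-ReLU "hat" network h(x) = 2*relu(x) - 4*relu(x - 0.5) is the
# tent map on [0, 1]; composing it n times yields a PwL function with
# 2^n linear pieces, while parameters grow only linearly in n.
relu = lambda t: np.maximum(t, 0.0)
hat = lambda t: 2 * relu(t) - 4 * relu(t - 0.5)

x = np.linspace(0.0, 1.0, 1_000_001)
y = x.copy()
for n in range(1, 5):
    y = hat(y)
    slopes = np.diff(y) / np.diff(x)
    # Count sign changes of the slope; the tent composition alternates
    # slope sign at every breakpoint, so this approximates the piece count.
    changes = int((np.sign(slopes[:-1]) != np.sign(slopes[1:])).sum())
    print(f"depth n={n}: ~{changes + 1} linear pieces (theory: {2**n})")
```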

Physical Layer Communication via Deep Learning

Reliable digital communication is a primary workhorse of the modern information age. The disciplines of communication, coding, and information theory drive this innovation by designing efficient codes that allow transmissions to be robustly and efficiently decoded. Progress toward near-optimal codes has been driven by individual human ingenuity, and breakthroughs have been, befittingly, sporadic and spread over several decades. Deep learning, by contrast, has become part of daily life, with many of its successes coming precisely in settings that lack a (mathematical) generative model.
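
As a generic illustration of what learning the physical layer can look like (an autoencoder-style sketch in the spirit of learned codes; the message set size, block length, and architecture below are assumptions, not the systems studied in the paper):

```python
import torch
import torch.nn as nn

# Generic "autoencoder" view of the physical layer: an encoder maps one
# of M messages to n channel uses, an AWGN channel adds noise, and a
# decoder classifies the received vector back to a message.
M, n = 16, 7          # 16 messages over 7 real channel uses (assumed sizes)

class AutoencoderComm(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(M, 32), nn.ReLU(), nn.Linear(32, n))
        self.dec = nn.Sequential(nn.Linear(n, 32), nn.ReLU(), nn.Linear(32, M))

    def forward(self, msg_onehot, noise_std=0.1):
        x = self.enc(msg_onehot)
        # Average-power normalization, a common stand-in for a power constraint.
        x = x / x.pow(2).mean(dim=1, keepdim=True).sqrt()
        y = x + noise_std * torch.randn_like(x)      # AWGN channel
        return self.dec(y)                           # logits over messages

model = AutoencoderComm()
msgs = torch.eye(M)[torch.randint(0, M, (8,))]       # batch of one-hot messages
logits = model(msgs)
print(logits.shape)  # torch.Size([8, 16])
```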

Extracting Robust and Accurate Features via a Robust Information Bottleneck

We propose a novel strategy for extracting features in supervised learning that can be used to construct a classifier that is more robust to small perturbations in the input space. Our method builds upon the idea of the information bottleneck by introducing an additional penalty term that encourages the Fisher information of the extracted features, when parametrized by the inputs, to be small. We present two formulations in which the relevance of the features to the output labels is measured using either mutual information or the minimum mean-squared error (MMSE).
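
A hedged sketch of such an objective, with symbols chosen here for illustration rather than quoted from the paper: starting from the information bottleneck Lagrangian and adding a Fisher-information penalty on the feature map gives

```latex
% Illustrative robust-IB-style objective (symbols assumed, not quoted):
% compression I(X;T), relevance I(T;Y), and a Fisher-information penalty
% \Phi(T \mid X) that discourages sensitivity of T to input perturbations.
\min_{p(t \mid x)} \; I(X;T) - \beta \, I(T;Y) + \gamma \, \Phi(T \mid X),
\qquad
\Phi(T \mid X) = \mathbb{E}\left[ \left\| \nabla_x \log p(T \mid x) \right\|^2 \right] .
```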

Functional Error Correction for Robust Neural Networks

When neural networks (NeuralNets) are implemented in hardware, their weights need to be stored in memory devices. As noise accumulates in the stored weights, the NeuralNet's performance degrades. This paper studies how to use error-correcting codes (ECCs) to protect the weights. Unlike classic error correction in data storage, the objective here is to optimize the NeuralNet's performance after error correction, rather than to minimize the uncorrectable bit error rate (UBER) of the protected bits.
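
The distinction can be made concrete with a toy experiment (a generic sketch; the quantization scheme and model below are assumptions, not the paper's setup): flipping a weight's most significant bit degrades the output far more than flipping its least significant bit, so a functional objective allocates protection unequally across bit positions.

```python
import numpy as np

# Toy illustration: a bit flip in a stored weight's most significant bit
# perturbs the network output far more than a flip in its least
# significant bit, so functional error correction spends redundancy
# unequally across bit positions.
rng = np.random.default_rng(0)
w = rng.standard_normal(256).astype(np.float32)   # toy "weights"
x = rng.standard_normal(256).astype(np.float32)   # toy input
clean = float(w @ x)                              # toy "network output"

BITS, SCALE = 8, 4.0

def quantize(v):
    # Uniform signed quantization to BITS bits over [-SCALE, SCALE).
    q = np.round(v / SCALE * 2 ** (BITS - 1))
    return np.clip(q, -2 ** (BITS - 1), 2 ** (BITS - 1) - 1).astype(np.int8)

def dequantize(q):
    return q.astype(np.float32) * SCALE / 2 ** (BITS - 1)

q = quantize(w)
flip = rng.choice(len(q), size=8, replace=False)  # same 8 weights each time
for bit in (0, 3, 7):                             # LSB, middle bit, MSB
    qf = q.copy().view(np.uint8)                  # two's-complement bytes
    qf[flip] ^= np.uint8(1 << bit)                # flip one bit position
    noisy = float(dequantize(qf.view(np.int8)) @ x)
    print(f"bit {bit}: |output change| = {abs(noisy - clean):.4f}")
```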