An Information-Theoretic Approach to Unsupervised Feature Selection for High-Dimensional Data

In this paper, we propose an information-theoretic approach to designing functional representations that extract the hidden common structure shared by a set of random variables. The main idea is to measure the common information between the random variables by Watanabe’s total correlation, and then to find hidden attributes of these random variables such that conditioning on these attributes reduces the common information the most.
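
For reference, Watanabe’s total correlation of random variables X_1, ..., X_n is the excess of the marginal entropies over the joint entropy,

    C(X_1, \dots, X_n) = \sum_{i=1}^{n} H(X_i) - H(X_1, \dots, X_n),

and the selection criterion sketched above can be read (in our paraphrase, not the paper’s notation) as seeking attributes U that maximize the drop from C(X_1, \dots, X_n) to the conditional total correlation C(X_1, \dots, X_n \mid U).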

Energy-Reliability Limits in Nanoscale Feedforward Neural Networks and Formulas

Due to energy-efficiency requirements, computational systems are now being implemented using noisy nanoscale semiconductor devices whose reliability depends on the energy consumed. We study circuit-level energy-reliability limits for deep feedforward neural networks (multilayer perceptrons) built using such devices, and en route also establish the same limits for formulas (Boolean tree-structured circuits).
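
As a toy illustration of the energy-reliability trade-off (the device model here is our assumption, not the paper’s): suppose each gate in a tree-structured formula flips its output with a probability that decays exponentially in the energy it consumes. A short Monte-Carlo sketch in Python:

    import math
    import random

    def noisy_nand(a, b, eps):
        # 2-input NAND whose output flips with probability eps.
        out = not (a and b)
        return (not out) if random.random() < eps else out

    def noisy_formula(bits, eps):
        # Evaluate a balanced binary tree of noisy NANDs
        # (len(bits) must be a power of two).
        layer = list(bits)
        while len(layer) > 1:
            layer = [noisy_nand(layer[i], layer[i + 1], eps)
                     for i in range(0, len(layer), 2)]
        return layer[0]

    def reliability(bits, energy, trials=20000):
        eps = math.exp(-energy)           # assumed model: error decays with energy
        truth = noisy_formula(bits, 0.0)  # noiseless reference output
        hits = sum(noisy_formula(bits, eps) == truth for _ in range(trials))
        return hits / trials

    for e in (1.0, 2.0, 4.0):
        print(f"energy {e}: reliability {reliability([True, False, True, True], e):.3f}")

Raising per-gate energy drives eps down and the formula’s end-to-end reliability up; the paper characterizes the fundamental limits of this trade-off.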

PacGAN: The Power of Two Samples in Generative Adversarial Networks

Generative adversarial networks (GANs) are innovative techniques for learning generative models of complex data distributions from samples. Despite remarkable improvements in generating realistic images, a major shortcoming is that, in practice, they tend to produce samples with little diversity, even when trained on diverse datasets. This phenomenon, known as mode collapse, has been the main focus of several recent advances in GANs.
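
PacGAN’s “packing” modifies the discriminator to judge m samples jointly rather than one at a time, so a mode-collapsed generator, whose packed samples look suspiciously alike, becomes easier to detect. A minimal NumPy sketch of the packing step (the names here are ours):

    import numpy as np

    def pack(batch, m):
        # Concatenate groups of m consecutive samples into single inputs,
        # so a discriminator reading the rows judges m samples jointly.
        n, d = batch.shape
        assert n % m == 0, "batch size must be divisible by the packing degree"
        return batch.reshape(n // m, m * d)

    fake = np.random.randn(6, 2)   # six 2-dimensional samples
    print(pack(fake, m=2).shape)   # (3, 4): three packed inputs

Apart from the wider input, the discriminator architecture is otherwise unchanged.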

A Fourier-Based Approach to Generalization and Optimization in Deep Learning

The success of deep neural networks stems from their ability to generalize well on real data; however, it has been observed that neural networks can easily overfit randomly-generated labels. This observation raises the following question: why do gradient methods succeed in finding generalizable solutions for neural networks when solutions with poor generalization behavior also exist?
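
The memorization experiment behind this observation is easy to reproduce in miniature; here is a hedged sketch using scikit-learn, in which an over-parameterized MLP trained with a gradient method on purely random labels will typically reach near-perfect training accuracy:

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 20))      # random inputs
    y = rng.integers(0, 2, size=200)    # labels with no structure at all

    clf = MLPClassifier(hidden_layer_sizes=(512,), max_iter=2000, random_state=0)
    clf.fit(X, y)
    print("train accuracy on random labels:", clf.score(X, y))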

Sample Compression, Support Vectors, and Generalization in Deep Learning

Even though Deep Neural Networks (DNNs) are widely celebrated for their practical performance, they possess many intriguing properties related to depth that are difficult to explain both theoretically and intuitively. Understanding how weights in deep networks coordinate across layers to form useful learners has proven challenging, in part because the repeated composition of nonlinearities has resisted analysis. This paper presents a reparameterization of DNNs as a linear function of a feature map that is locally independent of the weights.
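
To make the idea concrete on the smallest case we can check by hand (a two-layer ReLU network; the paper treats general depth), note that once the activation pattern is frozen, the network output is linear in a feature map that depends on the weights only through that locally constant pattern:

    import numpy as np

    rng = np.random.default_rng(0)
    d, h = 5, 8
    W = rng.normal(size=(h, d))    # first-layer weights
    v = rng.normal(size=h)         # second-layer weights
    x = rng.normal(size=d)

    f = v @ np.maximum(W @ x, 0.0)           # ordinary forward pass

    a = (W @ x > 0).astype(float)            # activation pattern (locally constant)
    phi = np.outer(a, x).ravel()             # feature map, masked by the pattern
    theta = (v[:, None] * W).ravel()         # flattened effective weights
    print(np.allclose(f, theta @ phi))       # True: f(x) = <theta, phi(x)>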

Learning-Based Coded Computation

Recent advances have shown the potential for coded computation to impart resilience against slowdowns and failures that occur in distributed computing systems. However, existing coded computation approaches are either unable to support non-linear computations, or can only support a limited subset of non-linear computations while requiring high resource overhead. In this work, we propose a learning-based coded computation framework to overcome the challenges of performing coded computation for general non-linear functions.
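
For linear functions the classical construction already works, because the code commutes with the computation; the framework described above replaces the hand-designed encoder and decoder in the sketch below with learned neural networks so that non-linear functions can be handled as well:

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.normal(size=(4, 3))

    def f(x):
        return A @ x                      # a linear computation to protect

    x1, x2 = rng.normal(size=3), rng.normal(size=3)
    parity = x1 + x2                      # hand-designed encoder (linear case)

    # If the worker computing f(x2) straggles or fails, its output can be
    # decoded from the other results: f(x2) = f(parity) - f(x1).
    print(np.allclose(f(parity) - f(x1), f(x2)))  # True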

Solving Inverse Problems via Auto-Encoders

Compressed sensing (CS) concerns the recovery of a structured signal from its under-determined linear measurements. Starting from sparsity, recovery methods have steadily moved towards exploiting more complex structures. Emerging machine learning tools such as neural-network-based generative models can learn general complex structures from training data, which makes them potentially powerful tools for designing CS algorithms.
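
A common recipe for using a learned generative model G in CS is to search its latent space for a code z whose decoded signal matches the measurements, i.e., to descend on ||A G(z) - y||^2 over z. A minimal NumPy sketch (G here is a fixed random linear map standing in for a trained auto-encoder’s decoder, purely to keep the sketch self-contained):

    import numpy as np

    rng = np.random.default_rng(0)
    n, m, k = 100, 30, 5                       # signal dim, measurements, latent dim

    G = rng.normal(size=(n, k))                # stand-in for a trained decoder
    A = rng.normal(size=(m, n)) / np.sqrt(m)   # measurement matrix (m < n)

    z_true = rng.normal(size=k)
    y = A @ (G @ z_true)                       # under-determined measurements

    M = A @ G
    lr = 1.0 / np.linalg.norm(M, 2) ** 2       # safe step size for gradient descent
    z = np.zeros(k)
    for _ in range(2000):
        z -= lr * M.T @ (M @ z - y)            # descend on 0.5 * ||A G(z) - y||^2

    err = np.linalg.norm(G @ z - G @ z_true) / np.linalg.norm(G @ z_true)
    print(f"relative recovery error: {err:.2e}")

Even though the measurements are under-determined in the signal (m < n), they over-determine the low-dimensional latent code, which is what makes recovery possible.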