
Explicit and Implicit Inductive Bias in Deep Learning
Presenter(s)
Nathan Srebro
Toyota Technological Institute at Chicago

ITW 2020, Riva del Garda, Italy
Tutorial


Abstract

Inductive bias (reflecting prior knowledge or assumptions) lies at the core of every learning system and is essential for enabling learning and generalization, from both a statistical and a computational perspective. What is the inductive bias that drives deep learning? A simplistic answer is that we learn functions representable by a given architecture. But this answer is insufficient, both computationally (learning even modestly sized neural networks is intractable) and statistically (modern architectures are too large to ensure generalization). In this tutorial we will explore these considerations: how training humongous, even infinite, deep networks can nevertheless ensure generalization, what function spaces such infinite networks might correspond to, and how the inductive bias is tightly tied to the local search procedures used to train deep networks.
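
The following toy sketch (an illustration under assumed data and dimensions, not material from the tutorial itself) shows one concrete sense in which the inductive bias is tied to the local search procedure: in an overparameterized linear model, infinitely many parameter vectors fit the data perfectly, yet plain gradient descent started at zero converges to the minimum-Euclidean-norm interpolating solution.

import numpy as np

# Hypothetical example: implicit bias of gradient descent in an
# overparameterized linear regression (more parameters than examples).
rng = np.random.default_rng(0)
n, d = 10, 100                              # 10 examples, 100 parameters
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)

w = np.zeros(d)                             # zero initialization matters here
lr = 1.0 / np.linalg.norm(X, 2) ** 2        # safe step size for the squared loss
for _ in range(5000):                       # gradient descent on 0.5*||Xw - y||^2
    w -= lr * X.T @ (X @ w - y)

w_min_norm = X.T @ np.linalg.solve(X @ X.T, y)   # closed-form min-norm interpolant
print("training loss:", 0.5 * np.sum((X @ w - y) ** 2))
print("distance to min-norm solution:", np.linalg.norm(w - w_min_norm))

Among all zero-loss solutions, the optimizer selects a particular one; the norm being (implicitly) minimized, rather than the architecture's raw expressive power, is what controls generalization in such settings.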