"Learn to Compress & Compress to Learn" Workshop @ ISIT 2025
We are excited to announce the second edition of our workshop, now titled "Learn to Compress & Compress to Learn", at ISIT 2025. The workshop will be held on Thursday, June 26, 2025 (full day) in Ann Arbor, Michigan.
Abstract
The rapid growth of global data has intensified the need for efficient compression, with deep learning techniques such as VAEs, GANs, diffusion models, and implicit neural representations reshaping source coding. While learning-based neural compression outperforms traditional codecs across multiple modalities, challenges remain in computational efficiency, theoretical limits, and distributed settings. At the same time, compression has become a powerful tool for advancing broader learning objectives, including representation learning and model efficiency, playing a key role in training and generalization for large-scale foundation models. Techniques such as knowledge distillation, model pruning, and quantization share common challenges with compression, highlighting the symbiotic relationship between these seemingly distant concepts. The intersection of learning, compression, and information theory offers exciting new avenues for advancing both practical compression techniques and our understanding of deep learning dynamics.
This workshop aims to unite experts from machine learning, computer science, and information theory to delve into the dual themes of learning-based compression and using compression as a tool for learning tasks.
Invited Talks
The program will feature invited talks from:
* (University of Pennsylvania)
* (University of Cambridge)
* (Apple)
* (Texas A&M University)
* (Chan Zuckerberg Initiative)
Call for Papers
We invite researchers from related fields to submit their latest work to the workshop. All accepted papers will be presented as posters during the poster session. Some papers will also be selected for spotlight presentations.
Topics of interest include, but are not limited to:
- "Learn to Compress" – Advancing Compression with Learning
  - Learning-Based Data Compression: New techniques for compressing data (e.g., images, video, audio), model weights, and emerging modalities (e.g., 3D content and AR/VR applications).
  - Efficiency for Large-Scale Foundation Models: Accelerating training and inference for large-scale foundation models, particularly in distributed and resource-constrained settings.
  - Theoretical Foundations of Neural Compression: Fundamental limits (e.g., rate-distortion bounds), distortion/perceptual/realism metrics, distributed compression, compression without quantization (e.g., channel simulation, relative entropy coding), and stochastic/probabilistic coding techniques.
- "Compress to Learn" – Leveraging Principles of Compression to Improve Learning
  - Compression as a Tool for Learning: Leveraging principles of compression and source coding to understand and improve learning and generalization.
  - Compression as a Proxy for Learning: Understanding the information-theoretic role of compression in tasks like unsupervised learning, representation learning, and semantic understanding.
  - Interplay of Algorithmic Information Theory and Source Coding: Exploring connections between Algorithmic Information Theory concepts (e.g., Kolmogorov complexity, Solomonoff induction) and emerging source coding methods.
Submissions are due March 14. For more details, visit our .
We look forward to seeing you in Ann Arbor this June!
Important Dates
- Paper submission deadline: March 14, 2025 (11:59 PM Anywhere on Earth, AoE)
- Decision notification: April 18, 2025
- Camera-ready paper deadline: May 1, 2025
- Workshop date: June 26, 2025
Organizing Committee
* (NYU)
* (University of Cambridge / Imperial College London)
* (Imperial College London)
* (۱)