Machine Learning and Compression Workshop @ NeurIPS 2024

The workshop solicits original research at the intersection of machine learning, data/model compression, and, more broadly, information theory. We look forward to your participation and contributions!

Machine learning and compression have been described as "two sides of the same coin," and the exponential growth of data generated across diverse domains underscores the need for improved compression as well as efficient AI systems. Leveraging deep generative models, recent machine learning-based methods have set new benchmarks for compressing images, videos, and audio. Despite these advances, many open problems remain, such as computational efficiency, performance guarantees, and channel simulation. Parallel advances in large-scale foundation models have further spurred research in efficient AI techniques such as model compression and distillation. This workshop aims to bring together researchers in machine learning, data/model compression, and information theory. It will focus on enhancing compression techniques, accelerating large model training and inference, exploring theoretical limits, and integrating information-theoretic principles to improve learning and generalization. By bridging these disciplines, we seek to catalyze the next generation of scalable, efficient information-processing systems.

Topics of interest include, but are not limited to:

  • Improvements in learning-based techniques for compressing data, model weights, implicit/learned representations of signals, and emerging data modalities.
  • Accelerating training and inference for large foundation models, potentially in distributed settings.
  • Theoretical understanding of neural compression methods, including but not limited to fundamental information-theoretic limits, perceptual/realism metrics, distributed compression, and compression without quantization.
  • Understanding/improving learning and generalization via compression and information-theoretic principles.
  • Information-theoretic aspects of unsupervised learning and representation learning.

Submissions are due September 30, 2024. For more details (including speakers, panelists, program, and call for papers), visit our workshop website: https://neuralcompression.github.io/workshop24

The submission link will be available soon. We look forward to seeing you in Vancouver this December!

Important Dates:

  • Submission deadline: Sept 30, 2024

  • Notification date: Oct 9, 2024

  • Workshop date: Dec 14 or Dec 15 (TBD), 2024

Organizing Committee

  • Yibo Yang (UC Irvine)
  • Karen Ullrich (Meta AI)
  • Justus Will (UC Irvine)
  • Ezgi Ozyilkan (NYU)
  • Elza Erkip (NYU)
  • Stephan Mandt (UC Irvine)

More Information

  • Event date: Dec 14 or Dec 15 (TBD), 2024
  • Event location: Vancouver, Canada
  • Event type: In-Person
  • Call for papers deadline: Sep 30, 2024
  • Contact: Ezgi Ozyilkan