"Learn to Compress & Compress to Learn" Workshop @ ISIT 2025
We are excited to announce the second edition of our workshop, now titled "Learn to Compress & Compress to Learn", at ISIT 2025. The workshop will be held on Thursday, June 26, 2025 (full day) in Ann Arbor, Michigan, USA.
Abstract
The rapid growth of global data has intensified the need for efficient compression, with deep learning techniques like VAEs, GANs, diffusion models, and implicit neural representations reshaping source coding. While learning-based neural compression outperforms traditional codecs across multiple modalities, challenges remain in computational efficiency, theoretical limits, and distributed settings. At the same time, compression has become a powerful tool for advancing broader learning objectives, including representation learning and model efficiency, playing a key role in training and generalization for large-scale foundation models. Techniques like knowledge distillation, model pruning, and quantization share common challenges with compression, highlighting the symbiotic relationship between these seemingly distant concepts. The intersection of learning, compression, and information theory offers exciting new avenues for advancing both practical compression techniques and our understanding of deep learning dynamics.
This workshop aims to unite experts from machine learning, computer science, and information theory to delve into the dual themes of learning-based compression and using compression as a tool for learning tasks.
Invited Talks
The program will feature invited talks from:
* (University of Pennsylvania)
* (University of Cambridge)
* (Apple)
* (Texas A&M University)
* (Chan Zuckerberg Initiative)
Call for Papers
We invite researchers from related fields to submit their latest work to the workshop. All accepted papers will be presented as posters during the poster session. Some papers will also be selected for spotlight presentations.
Topics of interest include, but are not limited to:
"Learn to Compress": Advancing Compression with Learning
- Learning-Based Data Compression: New techniques for compressing data (e.g., images, video, audio), model weights, and emerging modalities (e.g., 3D content and AR/VR applications).
- Efficiency for Large-Scale Foundation Models: Accelerating training and inference for large-scale foundation models, particularly in distributed and resource-constrained settings.
- Theoretical Foundations of Neural Compression: Fundamental limits (e.g., rate-distortion bounds), distortion/perceptual/realism metrics, distributed compression, compression without quantization (e.g., channel simulation, relative entropy coding), and stochastic/probabilistic coding techniques.

"Compress to Learn": Leveraging Principles of Compression to Improve Learning
- Compression as a Tool for Learning: Leveraging principles of compression and source coding to understand and improve learning and generalization.
- Compression as a Proxy for Learning: Understanding the information-theoretic role of compression in tasks like unsupervised learning, representation learning, and semantic understanding.
- Interplay of Algorithmic Information Theory and Source Coding: Exploring connections between Algorithmic Information Theory concepts (e.g., Kolmogorov complexity, Solomonoff induction) and emerging source coding methods.
Submissions are due March 14, 2025. For more details, visit our workshop website.
We look forward to seeing you in Ann Arbor this June!
Important Dates
- Paper submission deadline: March 14, 2025 (11:59 PM Anywhere on Earth, AoE)
- Decision notification: April 18, 2025
- Camera-ready paper deadline: May 1, 2025
- Workshop date: June 26, 2025
Organizing Committee
* (NYU)
* (University of Cambridge / Imperial College London)
* (Imperial College London)
* ( )