Language Modeling Is Compression


Abstract

The established correspondence between predictive ability and compression forms the cornerstone of this research. Since language models exhibit strong predictive qualities, we expect them to also excel at compression. This paper evaluates the efficacy of language models as compressors and assesses how they measure up against other prominent predictors. Furthermore, we examine the constraints of these models and explore the potential benefits of reframing the AI problem from a compression standpoint, as opposed to a purely predictive one.
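To make the prediction-compression link concrete, the following minimal sketch (not from the paper) illustrates the standard information-theoretic identity the abstract relies on: an arithmetic coder driven by a predictive model emits roughly -log2 p(x_t | x_<t) bits per symbol, so a model's cumulative log-loss on a sequence is its achievable compressed size. The function names and the toy model below are hypothetical, purely for illustration.

```python
import math

def ideal_code_length_bits(sequence, prob_model):
    """Total bits an arithmetic coder would need under `prob_model`.

    Each symbol costs -log2 p(symbol | context), so the sum equals the
    model's log-loss on the sequence: better prediction, smaller output.
    """
    total = 0.0
    context = []
    for symbol in sequence:
        p = prob_model(context, symbol)  # model's predicted probability
        total += -math.log2(p)
        context.append(symbol)
    return total

def toy_model(context, symbol):
    # Fixed two-symbol distribution; a real language model would
    # condition on `context`.
    return {"a": 0.9, "b": 0.1}[symbol]

if __name__ == "__main__":
    seq = list("aaaaaaaaab")
    bits = ideal_code_length_bits(seq, toy_model)
    print(f"{bits:.2f} bits for {len(seq)} symbols")  # ~4.69 bits
```

A stronger predictor assigns higher probability to the observed symbols, which directly lowers this bit count; that equivalence is what licenses treating language models as compressors.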

Authors

Grégoire Delétang, Anian Ruoss, Paul-Ambroise Duquenne*, Elliot Catt, Tim Genewein, Christopher Mattern, Matthew Aitchison, Laurent Orseau, Marcus Hutter, Joel Veness

* External author

Venue

ICLR 2024