CSC 24412
CENTRO DE SIMULACION COMPUTACIONAL PARA APLICACIONES TECNOLOGICAS
Executing Unit (UE)
Book chapters
Title:
Representation Learning and Information Bottleneck
Author(s):
PABLO PIANTANIDA; LEONARDO REY VEGA
Book:
Information-Theoretic Methods in Data Science
Publisher:
Cambridge University Press
References:
Place: Cambridge; Year: 2020; pp. 330-358
Abstract:
A grand challenge in representation learning is the development of computational algorithms that learn the different explanatory factors of variation behind high-dimensional data. Representation models (usually referred to as encoders) are often optimized for performance on the training data, whereas the real objective is to generalize well to other (unseen) data. The first part of this chapter provides an overview of, and an introduction to, fundamental concepts in statistical learning theory and the Information Bottleneck principle. It serves as a mathematical basis for the technical results given in the second part, in which an upper bound on the generalization gap associated with the cross-entropy risk is given. When this bound, scaled by a suitable multiplier, is minimized jointly with the empirical cross-entropy risk, the problem is equivalent to optimizing the Information Bottleneck objective with respect to the empirical data distribution. This result provides an interesting connection between mutual information and generalization, and helps to explain why noise injection during the training phase can improve the generalization ability of encoder models and enforce invariances in the resulting representations.
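For reference, the Information Bottleneck objective mentioned above is commonly stated as the following Lagrangian (this is the standard formulation of Tishby, Pereira, and Bialek; the chapter's notation and multiplier convention may differ):

    \min_{p(t \mid x)} \; I(X; T) - \beta \, I(T; Y), \qquad \beta > 0,

where X denotes the input, Y the target variable, T the representation induced by a (possibly stochastic) encoder p(t|x), and I(\cdot\,;\cdot) is the mutual information. In these terms, the equivalence described in the abstract amounts, schematically, to minimizing the empirical cross-entropy risk plus a suitable multiple of I(X; T), with the information quantities evaluated under the empirical data distribution.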