CSC   24412
CENTRO DE SIMULACION COMPUTACIONAL PARA APLICACIONES TECNOLOGICAS
Executing Unit - UE
Articles
Title:
Compression-based regularization with an application to multitask learning
Author(s):
PABLO PIANTANIDA; MATÍAS VERA; LEONARDO REY VEGA
Journal:
IEEE JOURNAL OF SELECTED TOPICS IN SIGNAL PROCESSING
Publisher:
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
References:
Year: 2018, vol. 12, pp. 1063-1076
ISSN:
1932-4553
Abstract:
This paper investigates, on information-theoretic grounds, a learning problem based on the principle that any regularity in a given dataset can be exploited to extract compact features from the data, i.e., using fewer bits than needed to fully describe the data itself, in order to build meaningful representations of relevant content (multiple labels). We begin by studying a multi-task learning (MTL) problem from the point of view of the misclassification probability averaged over the tasks, linking it with the popular cross-entropy criterion. Our approach allows an information-theoretic formulation of the MTL problem as a supervised learning framework in which the prediction models for several related tasks are learned jointly from common representations to achieve better generalization performance. More precisely, our formulation of the MTL problem can be interpreted as an information bottleneck problem with side information at the decoder. Based on this formulation, we present an iterative algorithm for computing the optimal trade-offs and study some of its convergence properties. An important feature of this algorithm is that it provides a natural safeguard against overfitting, because it minimizes the average risk while taking into account a penalization induced by the model complexity. Remarkably, empirical results illustrate that there exists an optimal information rate minimizing the excess risk, which depends on the nature and the amount of available training data. Applications to hierarchical text categorization and distributional word clusters are also investigated, extending previous works.
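
The iterative algorithm mentioned in the abstract computes the optimal trade-offs of an information bottleneck (IB) problem with side information at the decoder. As a rough illustration of this family of alternating-update schemes only, the sketch below implements the classic IB self-consistent equations (Tishby et al.), without the side-information component; the function ib_iterations, the toy distribution, and all parameter values are hypothetical, not the paper's actual method.

# A minimal sketch, assuming the classic iterative information bottleneck
# (IB) updates of Tishby et al.; the paper's algorithm is an IB variant
# with side information at the decoder, which this toy version does NOT
# model. Names and parameter choices here are illustrative only.
import numpy as np

def ib_iterations(p_xy, n_clusters, beta, n_iter=200, seed=0):
    """Alternating IB updates for a joint distribution p_xy over (X, Y).

    beta trades compression I(X;T) against relevance I(T;Y); returns
    the encoder p(t|x), the marginal p(t), and the decoder p(y|t).
    """
    rng = np.random.default_rng(seed)
    n_x, _ = p_xy.shape
    eps = 1e-12
    p_x = p_xy.sum(axis=1)                           # marginal p(x)
    p_y_given_x = p_xy / (p_x[:, None] + eps)        # conditional p(y|x)

    # Start from a random soft assignment p(t|x).
    p_t_given_x = rng.random((n_x, n_clusters))
    p_t_given_x /= p_t_given_x.sum(axis=1, keepdims=True)

    for _ in range(n_iter):
        p_t = p_x @ p_t_given_x                      # p(t) = sum_x p(x) p(t|x)
        # Decoder update: p(y|t) = sum_x p(x) p(t|x) p(y|x) / p(t).
        p_y_given_t = (p_t_given_x * p_x[:, None]).T @ p_y_given_x
        p_y_given_t /= (p_t[:, None] + eps)
        # KL(p(y|x) || p(y|t)) for every pair (x, t).
        kl = (p_y_given_x[:, None, :]
              * (np.log(p_y_given_x[:, None, :] + eps)
                 - np.log(p_y_given_t[None, :, :] + eps))).sum(axis=2)
        # Self-consistent encoder update: p(t|x) proportional to
        # p(t) * exp(-beta * KL).
        logits = np.log(p_t[None, :] + eps) - beta * kl
        logits -= logits.max(axis=1, keepdims=True)  # numerical stability
        p_t_given_x = np.exp(logits)
        p_t_given_x /= p_t_given_x.sum(axis=1, keepdims=True)

    return p_t_given_x, p_t, p_y_given_t

# Toy usage: a random joint distribution over 8 inputs and 3 labels.
p = np.random.default_rng(1).random((8, 3))
p /= p.sum()
enc, marg, dec = ib_iterations(p, n_clusters=2, beta=5.0)

Larger values of beta favor relevance I(T;Y) over compression I(X;T), which loosely mirrors the trade-off between the information rate and the excess risk that the abstract describes.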