Title:
Self-stabilizing Learning Rules in Neural Models driven by Objective Functions
Author(s):
ECHEVESTE, RODRIGO; GROS, CLAUDIUS
Meeting:
Conference; Bernstein Conference 2013; 2013
Abstract:
A large number of neuronal models accounting for intrinsic plasticity and associative synaptic learning mechanisms have been proposed in the past, successfully performing tasks such as principal component analysis or the processing of natural images. These models, however, usually require either the addition of a weight decay term to the Hebbian learning rule or an extra weight-vector normalization step to avoid unbounded weight growth. In the present work, learning rules for a neuronal model are derived from two objective functions. On the one hand, the neuron's firing bias is adjusted by minimizing the Kullback-Leibler divergence with respect to an exponential output distribution. On the other hand, learning rules for the synaptic weights are obtained by minimizing a Fisher information that measures the sensitivity of the input distribution with respect to the growth of the synaptic weights. In this way, we obtain rules that both account for Hebbian/anti-Hebbian learning and stabilize the system, avoiding unbounded weight growth. As a by-product of the derivation, a sliding threshold, similar to the one found in BCM models, appears in the learning rules. As a first application of these rules, the single-neuron case is studied in the context of principal component analysis and linear discrimination. We observe that the weight vector aligns with the principal component when the input distribution has a single direction of maximal variance but that, when presented with two directions of equal variance, the neuron tends to pick the one with the more negative kurtosis. In particular, this property allows the neuron to linearly separate bimodal inputs. Robustness to large input dimensions (~1000 inputs) is also studied; we observe that the neuron is still able to find the principal component of the distribution under these conditions.
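The derived rules themselves are not reproduced in this abstract, but the stabilization mechanism it describes (Hebbian learning above a sliding threshold, anti-Hebbian below it, with no decay term or normalization step) can be illustrated with a classic BCM-style rule. The Python sketch below is illustrative only and is not the rule derived in the paper; the cluster centers, learning rates, and the running-average estimate of the threshold are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Bimodal 2-D input: two noisy clusters, so the projection along the
# axis separating them has negative kurtosis. Centers are assumptions.
centers = np.array([[1.0, 1.0], [1.0, -1.0]])
labels = rng.integers(0, 2, size=20000)
X = centers[labels] + 0.05 * rng.normal(size=(20000, 2))

w = 0.1 * rng.normal(size=2)    # synaptic weights; no normalization step
theta = 1.0                     # sliding threshold, tracks <y^2>
eta_w, eta_theta = 0.005, 0.05  # learning rates (threshold slides faster)

# Illustrative BCM-style (Bienenstock-Cooper-Munro) updates,
# not the Fisher-information rule derived in the paper.
for x in X:
    y = w @ x                            # linear neuron response
    w += eta_w * y * (y - theta) * x     # Hebbian above theta, anti-Hebbian below
    theta += eta_theta * (y**2 - theta)  # sliding threshold bounds weight growth

# One cluster should end up above threshold and the other near zero,
# i.e. the neuron linearly separates the bimodal input.
for c in centers:
    print("response to cluster at", c, "->", w @ c)
```

In the derived rules the self-limiting behavior falls out of the Fisher-information objective rather than being imposed by hand; the qualitative ingredient illustrated here, a threshold that slides with the running output statistics, is the one the abstract highlights.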