A pruned higher-order network for knowledge extraction

Laurent Bougrain 1
1 CORTEX - Neuromimetic intelligence
INRIA Lorraine, LORIA - Laboratoire Lorrain de Recherche en Informatique et ses Applications
Abstract: Usually, the learning stage of a neural network leads to a single model, but a complex problem cannot always be solved adequately by one global system. On the other hand, several systems, each specialized in a subspace, have difficulty dealing with situations located at the boundary between two classes. This article presents a new adaptive architecture based on higher-order computation that adjusts a general model to each pattern and uses a pruning algorithm to improve generalization and extract knowledge. We use one small multi-layer perceptron to predict each weight of the model from the current pattern (one estimator per weight). This architecture introduces a biologically inspired higher-order computation, similar to the modulation of a synapse between two neurons by a third neuron. The general model can then be smaller, more adaptive and more informative.
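
To make the architecture concrete, here is a minimal Python sketch (not the author's code): each weight of a toy single-neuron "general model" is predicted from the current input pattern by its own small multi-layer perceptron, and a simple magnitude threshold stands in for the paper's pruning algorithm. All sizes, names and the threshold are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

def tiny_mlp(x, W1, b1, W2, b2):
    # One-hidden-layer estimator that outputs a single scalar weight value.
    h = np.tanh(x @ W1 + b1)
    return float(h @ W2 + b2)

n_inputs, n_hidden = 4, 3   # illustrative sizes, not taken from the paper
# One estimator (small MLP) per weight of the general model.
estimators = [
    dict(W1=rng.normal(size=(n_inputs, n_hidden)), b1=np.zeros(n_hidden),
         W2=rng.normal(size=n_hidden), b2=0.0)
    for _ in range(n_inputs)
]

def higher_order_forward(x, prune_threshold=0.05):
    # Each weight of the general model is predicted from the current pattern,
    # then small weights are zeroed as a crude stand-in for the pruning step.
    w = np.array([tiny_mlp(x, **e) for e in estimators])
    w[np.abs(w) < prune_threshold] = 0.0
    return np.tanh(w @ x), w   # toy "general model": a single modulated neuron

x = rng.normal(size=n_inputs)          # a toy input pattern
y, weights = higher_order_forward(x)
print("pattern-specific weights:", weights)
print("output:", y)

The sketch shows only the forward, pattern-dependent weight computation; how the estimators are trained and which pruning criterion is applied are described in the paper itself.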
Document type:
Conference paper
International Joint Conference on Neural Networks - IJCNN'02, May 2002, Honolulu, Hawaii, USA, 4 p, 2002

https://hal.inria.fr/inria-00099439
Contributor: Publications Loria
Submitted on: Tuesday, September 26, 2006 - 09:06:32
Last modified on: Thursday, January 11, 2018 - 06:19:48

Identifiers

  • HAL Id : inria-00099439, version 1

Citation

Laurent Bougrain. A pruned higher-order network for knowledge extraction. International Joint Conference on Neural Networks - IJCNN'02, May 2002, Honolulu, Hawaii, USA, 4 p, 2002. 〈inria-00099439〉
