A Representational MDL Framework for Improving Learning Power of Neural Network Formalisms

Abstract: The minimum description length (MDL) principle is a well-known solution to the overfitting problem, in particular for artificial neural networks (ANNs). Its extension, the representational MDL (RMDL) principle, takes into account that models in machine learning are always constructed within some representation. In this paper, the optimization of ANN formalisms as information representations using the RMDL principle is considered. A novel type of ANN is proposed by extending linear recurrent ANNs with nonlinear "synapse-to-synapse" connections. In contrast to classical ANNs, most elementary functions are representable with these networks, which makes them easy to learn from training datasets with the developed method of ANN architecture optimization. The methodology for comparing the quality of different representations is illustrated by applying the developed method to time series prediction and robot control.
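The MDL principle the abstract builds on scores a candidate model by the total cost of encoding the model itself plus the data given the model, so richer models must "pay" for their extra parameters. The following is a minimal illustrative sketch of two-part MDL model selection, not the paper's method; the toy polynomial setup, the fixed per-parameter bit cost, and the Gaussian residual code are all assumptions made for the example.

```python
import numpy as np

def description_length(y, y_pred, n_params, param_bits=32):
    """Two-part MDL code length in bits: model cost + data-given-model cost.

    Model cost: n_params parameters at a fixed precision (param_bits each) —
    an illustrative simplification of a real parameter code.
    Data cost: bits to encode residuals under a Gaussian fitted to them.
    """
    residuals = y - y_pred
    sigma = residuals.std() + 1e-12
    # Differential entropy of N(0, sigma^2) in bits, summed over all points.
    data_bits = 0.5 * len(y) * np.log2(2 * np.pi * np.e * sigma ** 2)
    return n_params * param_bits + data_bits

# Toy model selection: noisy linear data, polynomial fits of rising degree.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
y = 2.0 * x + 0.5 + rng.normal(0.0, 0.1, x.size)

best_degree = min(
    range(1, 7),
    key=lambda d: description_length(
        y, np.polyval(np.polyfit(x, y, d), x), n_params=d + 1
    ),
)
print(best_degree)
```

Higher-degree polynomials shrink the residuals only slightly here, so the saving in data bits never repays the extra parameter bits, and MDL selects the low-degree model; the RMDL extension discussed in the paper additionally accounts for the cost of the representation (the formalism) in which such models are expressed.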
Document type: Conference papers

Cited literature: 22 references

https://hal.inria.fr/hal-01521390
Contributor: Hal Ifip
Submitted on: Thursday, May 11, 2017 - 5:10:15 PM
Last modification on: Thursday, March 5, 2020 - 5:41:35 PM
Long-term archiving on: Saturday, August 12, 2017 - 1:48:11 PM

File

978-3-642-33409-2_8_Chapter.pd... (files produced by the author(s))

Licence

Distributed under a Creative Commons Attribution 4.0 International License

Citation

Alexey Potapov, Maxim Peterson. A Representational MDL Framework for Improving Learning Power of Neural Network Formalisms. 8th International Conference on Artificial Intelligence Applications and Innovations (AIAI), Sep 2012, Halkidiki, Greece. pp.68-77, ⟨10.1007/978-3-642-33409-2_8⟩. ⟨hal-01521390⟩
