Master thesis

Overfitting using the MDL principle

Abstract : This thesis investigates the experimental results obtained in the paper "An experimental and theoretical comparison of model selection methods" by Kearns, Mansour, Ng, and Ron, published in volume 27 of Machine Learning, 1997 (referred to as [KMNR97]). These model selection methods are used to find a 'good' model for observed data consisting of pairs (x, y); often the x-data is regarded as given and the model is a function that maps each x-value to a y-value. We concentrate on the overfitting behavior of the Two-Part Minimum Description Length (henceforth MDL) principle, and also consider Guaranteed Risk Minimization (GRM), Predictive MDL, and Cross Validation. We verify and extend the experiments published in [KMNR97]. The analysis of these experiments led to a second series of experiments, which tested under which conditions the conjecture holds that removing bad models from the model class can actually decrease performance. We conclude the following:
1. Using small model classes may lead to overfitting if the sample is corrupted by noise. Using the given x-data to code models is one way to work with small model classes for small sample sizes. This phenomenon affects not only Two-Part MDL, as suggested by the experimental results in [KMNR97], but also GRM. It is difficult to design a general learning algorithm that is guaranteed not to overfit the data; however, it became clear that, when we use Two-Part MDL or GRM, it is wise to keep the model class sample-independent in order to avoid overfitting.
2. Predictive MDL does not overfit on the interval function selection problem. It is non-parametric, its performance does not depend on the size of the model class, and it overfits only slightly, and only for small sample sizes.
3. The Two-Part MDL principle performs well at selecting a model class, and thus the number of parameters, but does not necessarily provide good estimates of the parameter values, i.e., an actual model. It is better to use Two-Part MDL only to estimate the number of parameters; within the selected model class, Maximum Likelihood can then be used to find the best model with the number of parameters suggested by Two-Part MDL.
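Conclusion 3 describes a two-stage procedure: use a two-part MDL score to pick the number of parameters, then fit the model itself by Maximum Likelihood within the selected class. The sketch below is a hypothetical illustration of that idea (not code from the thesis), using polynomial degree selection under an assumed Gaussian noise model, where the two-part code length is the standard (k/2) log n parameter cost plus the negative log-likelihood of the data given the fitted model.

```python
import numpy as np

def two_part_mdl_degree(x, y, max_degree=8):
    """Select a polynomial degree by a two-part MDL score, then return
    the Maximum Likelihood (least-squares) fit at that degree.

    Two-part code length (in nats, Gaussian noise assumption):
        L(model) + L(data | model)
        ~= 0.5 * k * log(n) + 0.5 * n * log(RSS / n)
    where k is the number of parameters and RSS the residual sum of squares.
    """
    n = len(x)
    best = None
    for d in range(max_degree + 1):
        coeffs = np.polyfit(x, y, d)              # ML fit within this class
        rss = np.sum((y - np.polyval(coeffs, x)) ** 2)
        k = d + 1                                 # number of parameters
        score = 0.5 * k * np.log(n) + 0.5 * n * np.log(rss / n)
        if best is None or score < best[0]:
            best = (score, d, coeffs)
    return best[1], best[2]

# Demo: quadratic signal corrupted by noise; MDL should recover degree 2.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 200)
y = 1 + 2 * x - 3 * x**2 + rng.normal(0, 0.3, 200)
degree, coeffs = two_part_mdl_degree(x, y)
```

Note that MDL is used only to choose the class (the degree); the coefficients returned are the least-squares, i.e. Maximum Likelihood, estimates within that class, matching the division of labor suggested in the conclusion.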
Document type :
Master thesis

Contributor : Jakob Verbeek
Submitted on : Wednesday, February 16, 2011 - 4:50:00 PM
Last modification on : Monday, September 25, 2017 - 10:08:04 AM
Long-term archiving on : Tuesday, May 17, 2011 - 2:40:37 AM




  • HAL Id : inria-00321524, version 1


Jakob Verbeek. Overfitting using the MDL principle. Machine Learning [cs.LG]. 1998. ⟨inria-00321524⟩


