PAC Learning under Helpful Distributions

Abstract: A PAC teaching model under helpful distributions is proposed, which brings the classical ideas of teaching models into the PAC setting: a polynomial-sized teaching set is associated with each target concept; the criterion of success is PAC identification; an additional parameter, namely the inverse of the minimum probability assigned to any example in the teaching set, is associated with each distribution; and the running time of the learning algorithm takes this new parameter into account. An Occam's razor theorem and its converse are proved. Some classical classes of Boolean functions, such as decision lists and DNF and CNF formulas, are proved learnable in this model. Comparisons with other teaching models are made: learnability in the Goldman and Mathias model implies PAC learnability under helpful distributions. Note that decision lists and DNF formulas are not known to be learnable in the Goldman and Mathias model. A new simple PAC model, where "simple" refers to Kolmogorov complexity, is introduced. We show that most learnability results obtained within previously defined simple PAC models can be easily derived from more general results in our model.
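
As a reading aid, here is a minimal formal sketch of the extra parameter described in the abstract; the notation (target concept f, teaching set T(f), distribution D, parameter alpha_D) is introduced only for illustration and is not taken verbatim from the paper:

    % Illustrative notation, not verbatim from the paper.
    % A helpful distribution gives positive weight to every teaching example:
    %   D(x) > 0 for all x in T(f).
    % The additional parameter is the inverse of the smallest such weight:
    \[ \alpha_D \;=\; \frac{1}{\min_{x \in T(f)} D(x)} \]
    % PAC identification, with this parameter folded into the time bound:
    \[ \Pr\bigl[\mathrm{err}_D(h, f) \le \varepsilon\bigr] \;\ge\; 1 - \delta
       \quad\text{within time}\quad
       \mathrm{poly}\bigl(1/\varepsilon,\ 1/\delta,\ |f|,\ \alpha_D\bigr). \]
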
Document type:
Journal article
RAIRO - Theoretical Informatics and Applications (RAIRO: ITA), EDP Sciences, 2001, 35 (2), pp.129--148

https://hal.inria.fr/inria-00538888
Contributor: Rémi Gilleron
Submitted on: Tuesday, 23 November 2010 - 14:48:42
Last modified on: Tuesday, 24 April 2018 - 13:53:00

Identifiers

  • HAL Id: inria-00538888, version 1

Citation

François Denis, Rémi Gilleron. PAC Learning under Helpful Distributions. RAIRO - Theoretical Informatics and Applications (RAIRO: ITA), EDP Sciences, 2001, 35 (2), pp.129--148. 〈inria-00538888〉
