Mixability in Statistical Learning

Abstract: Statistical learning and sequential prediction are two different but related formalisms to study the quality of predictions. Mapping out their relations and transferring ideas is an active area of investigation. We provide another piece of the puzzle by showing that an important concept in sequential prediction, the mixability of a loss, has a natural counterpart in the statistical setting, which we call stochastic mixability. Just as ordinary mixability characterizes fast rates for the worst-case regret in sequential prediction, stochastic mixability characterizes fast rates in statistical learning. We show that, in the special case of log-loss, stochastic mixability reduces to a well-known (but usually unnamed) martingale condition, which is used in existing convergence theorems for minimum description length and Bayesian inference. In the case of 0/1-loss, it reduces to the margin condition of Mammen and Tsybakov, and in the case that the model under consideration contains all possible predictors, it is equivalent to ordinary mixability.
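The two notions contrasted in the abstract can be sketched as follows. The notation is a plausible reconstruction from the standard sequential-prediction literature, not copied from the paper itself; in particular the choice of symbols (loss ℓ, model F, risk minimizer f*, learning rate η) is an assumption:

```latex
% Ordinary (eta-)mixability of a loss ell: for every distribution pi
% over predictions there is a single prediction a whose loss never
% exceeds the eta-mixed average loss, for every outcome y.
\forall \pi \;\exists a \;\forall y:\quad
  \ell(a, y) \;\le\; -\tfrac{1}{\eta}
  \log \mathbb{E}_{f \sim \pi}\!\left[ e^{-\eta\, \ell(f, y)} \right]

% Stochastic (eta-)mixability of (ell, F, P): relative to the risk
% minimizer f^* in the model F, the exponentiated negative excess loss
% has expectation at most one under the data distribution P.
\forall f \in \mathcal{F}:\quad
  \mathbb{E}_{Z \sim P}\!\left[ e^{-\eta\,(\ell(f, Z) - \ell(f^{*}, Z))} \right] \;\le\; 1
```

For log-loss the second display is the martingale-type condition the abstract alludes to, with the exponential moment bounded by one uniformly over the model.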
Document type: Conference paper
Advances in Neural Information Processing Systems 25 (NIPS 2012), Dec 2012, Lake Tahoe, United States. 2012

Cited literature: 25 references

Contributor: Tim Van Erven <>
Submitted on: Wednesday, November 28, 2012 - 12:14:28
Last modified on: Thursday, January 11, 2018 - 06:22:14
Long-term archived on: Saturday, December 17, 2016 - 16:21:24


Files produced by the author(s)


  • HAL Id : hal-00758202, version 1



Tim Van Erven, Peter D. Grünwald, Mark D. Reid, Robert C. Williamson. Mixability in Statistical Learning. Advances in Neural Information Processing Systems 25 (NIPS 2012), Dec 2012, Lake Tahoe, United States. 2012. 〈hal-00758202〉


