Conference papers

Mixability in Statistical Learning

Abstract: Statistical learning and sequential prediction are two different but related formalisms to study the quality of predictions. Mapping out their relations and transferring ideas is an active area of investigation. We provide another piece of the puzzle by showing that an important concept in sequential prediction, the mixability of a loss, has a natural counterpart in the statistical setting, which we call stochastic mixability. Just as ordinary mixability characterizes fast rates for the worst-case regret in sequential prediction, stochastic mixability characterizes fast rates in statistical learning. We show that, in the special case of log-loss, stochastic mixability reduces to a well-known (but usually unnamed) martingale condition, which is used in existing convergence theorems for minimum description length and Bayesian inference. In the case of 0/1-loss, it reduces to the margin condition of Mammen and Tsybakov, and in the case that the model under consideration contains all possible predictors, it is equivalent to ordinary mixability.
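
As a reading aid for this record, here is an informal LaTeX sketch of the condition the abstract refers to. The definition of stochastic mixability below is reconstructed from this line of work; the symbols (loss ℓ, model F, data distribution P, rate parameter η, and risk minimizer f*) are assumed notation, not quoted verbatim from the paper.

% Hedged sketch: the stochastic mixability condition as commonly stated in
% this line of work. Notation (\ell, \mathcal{F}, P, \eta, f^*) is assumed,
% not quoted from the paper itself.
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
A triple $(\ell, \mathcal{F}, P)$ is \emph{$\eta$-stochastically mixable}
for some $\eta > 0$ if there exists $f^* \in \mathcal{F}$ such that
\begin{equation*}
  \mathbb{E}_{Z \sim P}\!\Bigl[\, e^{\eta\bigl(\ell(f^*, Z) - \ell(f, Z)\bigr)} \Bigr] \le 1
  \quad \text{for all } f \in \mathcal{F}.
\end{equation*}
By Jensen's inequality this forces $f^*$ to minimize the expected loss over
$\mathcal{F}$. For log-loss, $\ell(f, z) = -\log p_f(z)$, taking $\eta = 1$
gives
\begin{equation*}
  \mathbb{E}_{Z \sim P}\!\left[ \frac{p_f(Z)}{p_{f^*}(Z)} \right] \le 1
  \quad \text{for all } f \in \mathcal{F},
\end{equation*}
which is the martingale-type condition from MDL and Bayesian convergence
proofs that the abstract mentions.
\end{document}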

Cited literature: 25 references

https://hal.inria.fr/hal-00758202
Contributor: Tim van Erven
Submitted on: Wednesday, November 28, 2012 - 12:14:28 PM
Last modification on: Wednesday, September 16, 2020 - 5:06:55 PM
Long-term archiving on: Saturday, December 17, 2016 - 4:21:24 PM

File

stochmix.pdf
Files produced by the author(s)

Identifiers

  • HAL Id: hal-00758202, version 1

Citation

Tim van Erven, Peter D. Grünwald, Mark D. Reid, Robert C. Williamson. Mixability in Statistical Learning. Advances in Neural Information Processing Systems 25 (NIPS 2012), Dec 2012, Lake Tahoe, United States. ⟨hal-00758202⟩
