Journal article in Applied Stochastic Models in Business and Industry, Year: 2003

Bounds on the Risk for M-SVMs

Yann Guermeur
  • Role: Author
  • PersonId: 830806
André Elisseeff
  • Role: Author
Dominique Zelus
  • Role: Author

Abstract

Vapnik's statistical learning theory has mainly been developed for two types of problems: pattern recognition (computation of dichotomies) and regression (estimation of real-valued functions). Only in recent years has multi-class discriminant analysis been studied independently. Extending several standard results, among which a famous theorem by Bartlett, we have derived distribution-free uniform strong laws of large numbers devoted to multi-class large margin discriminant models. The capacity measure appearing in the confidence interval, a covering number, has been bounded from above in terms of a new generalized VC dimension. In this paper, the aforementioned theorems are applied to the architecture shared by all the multi-class SVMs proposed so far, which provides us with a simple theoretical framework to study them, compare their performance and design new machines.
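For orientation, bounds of the kind referred to in the abstract follow the familiar pattern of margin-based guaranteed risk bounds: an empirical margin risk term plus a confidence interval controlled by a covering number of the function class. The lines below are only a minimal illustrative sketch of that generic shape; the paper's actual multi-class statement, margin definition and constants are not reproduced here, and all symbols (R, R_gamma^emp, N, F, gamma, m, delta) are generic placeholders.

% Illustrative sketch only: generic margin-based risk bound with a
% covering-number capacity term (not the paper's multi-class theorem).
\[
  \Pr\!\left\{ \forall f \in \mathcal{F}:\;
    R(f) \;\le\; R_{\gamma}^{\mathrm{emp}}(f)
    \;+\; \sqrt{\frac{2}{m}\left(
      \ln \mathcal{N}\!\left(\tfrac{\gamma}{2}, \mathcal{F}, 2m\right)
      + \ln \tfrac{2}{\delta}\right)}
  \right\} \;\ge\; 1 - \delta,
\]
where $R(f)$ is the risk of $f$, $R_{\gamma}^{\mathrm{emp}}(f)$ its empirical margin risk at margin $\gamma$ on an $m$-sample, and $\mathcal{N}(\epsilon, \mathcal{F}, 2m)$ a covering number of the class $\mathcal{F}$. In the paper, the covering number is in turn bounded from above in terms of a generalized VC dimension suited to the multi-class margin setting.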
No file deposited

Dates and versions

inria-00099587, version 1 (26-09-2006)

Identifiers

  • HAL Id: inria-00099587, version 1

Cite

Yann Guermeur, André Elisseeff, Dominique Zelus. Bounds on the Risk for M-SVMs. Applied Stochastic Models in Business and Industry, 2003. ⟨inria-00099587⟩
