An information theoretic approach to finding word groups for text classification

Abstract: This thesis concerns finding the 'optimal' number of (non-overlapping) word groups for text classification. We present a method that selects which words to cluster into word groups, and how many such word groups to use, on the basis of a set of pre-classified texts. The method performs a greedy search through the space of possible word groups. The criterion used to navigate this space is based on mutual information and is known as the Jensen-Shannon divergence. The criterion used to decide how many word groups to use is based on Rissanen's Minimum Description Length (MDL) principle. We present empirical results indicating that the proposed method performs well at its task. The prediction model used is based on the Naive Bayes model, and the data set used for the experiments is a subset of the 20 Newsgroups data set.
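The greedy clustering criterion mentioned in the abstract can be illustrated with a short sketch. The code below is not the thesis's implementation; it is a minimal, hypothetical example of the weighted Jensen-Shannon divergence between the class-conditional distributions of two words, which is the kind of quantity such a greedy merge criterion compares (merging two words whose class distributions are similar incurs a small divergence cost).

```python
import numpy as np

def jensen_shannon(p, q, w1=0.5, w2=0.5):
    """Weighted Jensen-Shannon divergence (in bits) between two
    discrete distributions p and q over the classes:
        JS(p, q) = H(w1*p + w2*q) - w1*H(p) - w2*H(q),
    where H is the Shannon entropy. The weights w1, w2 would
    typically be the relative frequencies of the two words."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    m = w1 * p + w2 * q  # class distribution of the merged word group

    def entropy(d):
        d = d[d > 0]  # convention: 0 * log 0 = 0
        return -np.sum(d * np.log2(d))

    return entropy(m) - w1 * entropy(p) - w2 * entropy(q)
```

With equal weights the divergence is 0 when the two distributions are identical and reaches 1 bit when they have disjoint support, so a greedy procedure would repeatedly merge the pair of word groups with the smallest weighted divergence.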
Document type:
Student theses -- Hal-inria+
Machine Learning [cs.LG]. 2000
Contributor: Jakob Verbeek <>
Submitted on: Wednesday, 16 February 2011 - 17:00:56
Last modified on: Monday, 25 September 2017 - 10:08:04
Document(s) archived on: Tuesday, 17 May 2011 - 02:38:35


Files produced by the author(s)


  • HAL Id : inria-00321519, version 1


Jakob Verbeek. An information theoretic approach to finding word groups for text classification. Machine Learning [cs.LG]. 2000. 〈inria-00321519〉


