Collaborative Filtering as a Multi-Armed Bandit

Frédéric Guillou (1, 2, 3, *), Romaric Gaudel (2, 3), Philippe Preux (2, 3)
* Corresponding author
3: SEQUEL - Sequential Learning
Inria Lille - Nord Europe, CRIStAL - Centre de Recherche en Informatique, Signal et Automatique de Lille (UMR 9189)
Abstract: Recommender Systems (RS) aim at suggesting to users one or several items in which they might have interest. Based on the feedback they receive from the user, these systems have to adapt their model in order to improve future recommendations. The repetition of these steps defines the RS as a sequential process. This sequential aspect raises an exploration-exploitation dilemma, which is surprisingly rarely taken into account for RS without contextual information. In this paper we present an explore-exploit collaborative filtering RS, based on Matrix Factorization and bandit algorithms. Using experiments on artificial and real datasets, we show the importance and practicability of using sequential approaches to perform recommendation. We also study the impact of the model update on both the quality and the computation time of the recommendation procedure.
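
The abstract does not specify the bandit policy used, but the sequential explore-exploit loop it describes can be illustrated with a minimal sketch: an ε-greedy choice between exploiting a factorized rating model (user factors `U`, item factors `V`) and exploring an unrated item, followed by an online model update. The ε-greedy policy, the function names, and the `observe_rating` / `update_factors` hooks are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def epsilon_greedy_recommend(U, V, user, rated, epsilon=0.1, rng=None):
    """Recommend one item to `user`: exploit the current factor model
    with probability 1 - epsilon, otherwise explore a random unrated item."""
    rng = rng or np.random.default_rng()
    candidates = [i for i in range(V.shape[0]) if i not in rated]
    if rng.random() < epsilon:
        return int(rng.choice(candidates))          # explore
    scores = V[candidates] @ U[user]                # predicted ratings
    return candidates[int(np.argmax(scores))]       # exploit

def run_bandit_cf(U, V, n_users, n_steps, observe_rating, update_factors):
    """Sequential recommendation loop: recommend, observe feedback, update.
    `observe_rating` stands in for the environment (the user's rating) and
    `update_factors` for an incremental matrix-factorization update,
    e.g. one stochastic-gradient step on the observed entry."""
    rated = {u: set() for u in range(n_users)}
    for t in range(n_steps):
        u = t % n_users                             # users arrive in round-robin
        i = epsilon_greedy_recommend(U, V, u, rated[u])
        r = observe_rating(u, i)                    # feedback from the user
        rated[u].add(i)
        U, V = update_factors(U, V, u, i, r)        # refresh the model online
    return U, V
```

Exploiting too greedily risks locking onto a poor model learned from few ratings, while exploring too much wastes recommendations; the ε parameter in this sketch is the simplest way to trade the two off.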
Document type: Conference paper
NIPS'15 Workshop: Machine Learning for eCommerce, Dec 2015, Montréal, Canada.
Contributor: Romaric Gaudel
Submitted on: Thursday, January 14, 2016 - 15:33:09
Last modified on: Thursday, January 11, 2018 - 06:27:32
Document(s) archived on: Friday, November 11, 2016 - 06:07:44


Files produced by the author(s)


  • HAL Id: hal-01256254, version 1


Frédéric Guillou, Romaric Gaudel, Philippe Preux. Collaborative Filtering as a Multi-Armed Bandit. NIPS'15 Workshop: Machine Learning for eCommerce, Dec 2015, Montréal, Canada. 〈hal-01256254〉


