A Non-negative Tensor Factorization Model for Selectional Preference Induction

Tim Van de Cruys
ALPAGE (Analyse Linguistique Profonde à Grande Echelle; Large-scale deep linguistic processing), Inria Paris-Rocquencourt, Université Paris Diderot - Paris 7
Abstract: Distributional similarity methods have proven to be a valuable tool for inducing semantic similarity. Until now, most algorithms have used two-way co-occurrence data to compute the meaning of words. Co-occurrence frequencies, however, need not be pairwise: one can easily imagine situations where it is desirable to investigate co-occurrence frequencies across three modes and beyond. This paper investigates tensor factorization methods to build a model of three-way co-occurrences. The approach is applied to the problem of selectional preference induction and evaluated automatically in a pseudo-disambiguation task. The results show that tensor factorization, and non-negative tensor factorization in particular, is a promising tool for NLP.
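To make the abstract's core idea concrete, the sketch below factorizes a three-way non-negative tensor (e.g. verb × subject × object co-occurrence counts) into three non-negative factor matrices via a CP decomposition with multiplicative updates. This is a minimal illustration under stated assumptions, not the paper's exact algorithm; the function names (`ntf`, `unfold`, `khatri_rao`) and the update scheme are generic choices, not taken from the paper.

```python
import numpy as np

def unfold(X, mode):
    # Mode-n unfolding: bring axis `mode` to the front, flatten the rest.
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def khatri_rao(U, V):
    # Column-wise Kronecker product: (I*J) x r, rows ordered to match unfold().
    r = U.shape[1]
    return (U[:, None, :] * V[None, :, :]).reshape(-1, r)

def ntf(X, rank, n_iter=500, eps=1e-9, seed=0):
    """Non-negative CP decomposition of a 3-way tensor X via
    multiplicative updates (Lee-Seung-style, generalized to tensors)."""
    rng = np.random.default_rng(seed)
    factors = [rng.random((dim, rank)) for dim in X.shape]
    for _ in range(n_iter):
        for mode in range(3):
            others = [factors[m] for m in range(3) if m != mode]
            kr = khatri_rao(others[0], others[1])
            numer = unfold(X, mode) @ kr
            denom = factors[mode] @ ((others[0].T @ others[0]) *
                                     (others[1].T @ others[1]))
            # Multiplicative update keeps all entries non-negative.
            factors[mode] *= numer / (denom + eps)
    return factors
```

Usage: given factor matrices `A, B, C = ntf(X, rank)`, the tensor is approximated as `np.einsum('ir,jr,kr->ijk', A, B, C)`; each rank-one component can then be read as a soft cluster of, e.g., verbs together with their preferred subjects and objects.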
Document type:
Journal article
Natural Language Engineering, Cambridge University Press (CUP), 2010, 16 (4), pp.417-437. 〈10.1017/S1351324910000148〉

https://hal.inria.fr/inria-00546045
Contributor: Tim Van de Cruys
Submitted on: Monday, December 13, 2010 - 15:24:09
Last modified on: Thursday, November 15, 2018 - 20:27:26

Citation

Tim Van de Cruys. A Non-negative Tensor Factorization Model for Selectional Preference Induction. Natural Language Engineering, Cambridge University Press (CUP), 2010, 16 (4), pp.417-437. 〈10.1017/S1351324910000148〉. 〈inria-00546045〉
