
A Non-negative Tensor Factorization Model for Selectional Preference Induction

Tim van de Cruys 1
1 ALPAGE (Analyse Linguistique Profonde à Grande Echelle / Large-scale deep linguistic processing), Inria Paris-Rocquencourt; UPD7 - Université Paris Diderot - Paris 7
Abstract : Distributional similarity methods have proven to be a valuable tool for the induction of semantic similarity. Until now, most algorithms have used two-way co-occurrence data to compute the meaning of words. Co-occurrence frequencies, however, need not be pairwise: one can easily imagine situations where it is desirable to investigate co-occurrences across three modes and beyond. This paper investigates tensor factorization methods to build a model of three-way co-occurrences. The approach is applied to the problem of selectional preference induction and evaluated automatically in a pseudo-disambiguation task. The results show that tensor factorization, and non-negative tensor factorization in particular, is a promising tool for NLP.
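The three-way model the abstract describes can be illustrated with a non-negative CP (PARAFAC) decomposition: a verb × subject × object count tensor is approximated by three non-negative factor matrices. The sketch below is a minimal toy implementation using multiplicative updates, not the paper's actual code; the function names, rank, and toy tensor are invented for the example.

```python
import numpy as np

def khatri_rao(U, V):
    """Column-wise Kronecker product of U (m x R) and V (n x R) -> (m*n) x R."""
    m, R = U.shape
    n, _ = V.shape
    return (U[:, None, :] * V[None, :, :]).reshape(m * n, R)

def nntf(X, rank, n_iter=200, eps=1e-9, seed=0):
    """Non-negative CP decomposition of a 3-way tensor X (I x J x K) via
    multiplicative updates.  Returns non-negative factors A, B, C with
    X[i, j, k] ~= sum_r A[i, r] * B[j, r] * C[k, r]."""
    rng = np.random.default_rng(seed)
    I, J, K = X.shape
    A = rng.random((I, rank))
    B = rng.random((J, rank))
    C = rng.random((K, rank))
    X1 = X.reshape(I, J * K)                     # mode-1 unfolding
    X2 = np.moveaxis(X, 1, 0).reshape(J, I * K)  # mode-2 unfolding
    X3 = np.moveaxis(X, 2, 0).reshape(K, I * J)  # mode-3 unfolding
    for _ in range(n_iter):
        M = khatri_rao(B, C)
        A *= (X1 @ M) / (A @ (M.T @ M) + eps)
        M = khatri_rao(A, C)
        B *= (X2 @ M) / (B @ (M.T @ M) + eps)
        M = khatri_rao(A, B)
        C *= (X3 @ M) / (C @ (M.T @ M) + eps)
    return A, B, C
```

In the selectional-preference setting, the rows of the factor matrices can then be read as soft memberships of verbs, subjects, and objects in latent semantic dimensions; the multiplicative updates keep all entries non-negative, which is what makes the dimensions interpretable.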
Document type :
Journal articles
Contributor : Tim van de Cruys
Submitted on : Monday, December 13, 2010 - 3:24:09 PM
Last modification on : Wednesday, March 17, 2021 - 10:02:02 AM

Tim van de Cruys. A Non-negative Tensor Factorization Model for Selectional Preference Induction. Natural Language Engineering, Cambridge University Press (CUP), 2010, 16 (4), pp.417-437. ⟨10.1017/S1351324910000148⟩. ⟨inria-00546045⟩
