A Non-negative Tensor Factorization Model for Selectional Preference Induction

Tim van de Cruys
ALPAGE (Analyse Linguistique Profonde à Grande Echelle; Large-scale Deep Linguistic Processing), Inria Paris-Rocquencourt and Université Paris Diderot - Paris 7 (UPD7)
Abstract: Distributional similarity methods have proven to be a valuable tool for the induction of semantic similarity. To date, most algorithms have used two-way co-occurrence data to compute the meaning of words. Co-occurrence frequencies, however, need not be pairwise: one can easily imagine situations where it is desirable to investigate co-occurrence frequencies of three modes and beyond. This paper investigates tensor factorization methods for building a model of three-way co-occurrences. The approach is applied to the problem of selectional preference induction and evaluated automatically in a pseudo-disambiguation task. The results show that tensor factorization, and non-negative tensor factorization in particular, is a promising tool for NLP.
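To make the three-way setup concrete, the sketch below builds a small verb-by-subject-by-object count tensor and factorizes it with a non-negative CP (PARAFAC) decomposition driven by multiplicative updates. It is a minimal illustration only: the toy counts, the chosen rank, and the plain multiplicative-update scheme are assumptions made for exposition, not the implementation described in the paper.

import numpy as np

def khatri_rao(B, C):
    # Column-wise Kronecker product of B (J x R) and C (K x R), giving a (J*K x R) matrix.
    J, R = B.shape
    K, _ = C.shape
    return (B[:, None, :] * C[None, :, :]).reshape(J * K, R)

def unfold(X, mode):
    # Mode-n matricization of a three-way tensor X.
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def nn_parafac(X, rank, n_iter=200, eps=1e-9, seed=0):
    # Non-negative CP factorization: X is approximated by a sum of `rank`
    # outer products a_r o b_r o c_r, with all factor entries kept >= 0
    # by multiplicative updates.
    rng = np.random.default_rng(seed)
    factors = [rng.random((dim, rank)) for dim in X.shape]
    for _ in range(n_iter):
        for mode in range(3):
            others = [factors[m] for m in range(3) if m != mode]
            Z = khatri_rao(others[0], others[1])
            Xn = unfold(X, mode)
            numer = Xn @ Z
            denom = factors[mode] @ ((others[0].T @ others[0]) * (others[1].T @ others[1]))
            factors[mode] *= numer / (denom + eps)
    return factors

# Toy verb x subject x object counts; the vocabulary and values are invented for illustration.
X = np.zeros((2, 3, 3))
X[0, 0, 1] = 5.0   # e.g. (eat, man, apple)
X[0, 1, 2] = 3.0   # e.g. (eat, dog, bone)
X[1, 0, 0] = 4.0   # e.g. (read, man, book)
verbs, subjects, objects = nn_parafac(X, rank=2)
print(verbs.shape, subjects.shape, objects.shape)   # (2, 2) (3, 2) (3, 2)

The non-negativity constraint keeps each latent dimension additive, so a dimension can be read as a cluster of verbs together with their preferred subjects and objects, which is what makes this kind of factorization attractive for selectional preference induction.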
Document type: Journal article

https://hal.inria.fr/inria-00546045

Citation

Tim van de Cruys. A Non-negative Tensor Factorization Model for Selectional Preference Induction. Natural Language Engineering, Cambridge University Press (CUP), 2010, 16 (4), pp.417-437. ⟨10.1017/S1351324910000148⟩. ⟨inria-00546045⟩
