Journal article, Natural Language Engineering, 2010

A Non-negative Tensor Factorization Model for Selectional Preference Induction

Abstract

Distributional similarity methods have proven to be a valuable tool for the induction of semantic similarity. Until now, most algorithms have used two-way co-occurrence data to compute the meaning of words. Co-occurrence frequencies, however, need not be pairwise. One can easily imagine situations where it is desirable to investigate co-occurrence frequencies of three modes and beyond. This paper investigates tensor factorization methods to build a model of three-way co-occurrences. The approach is applied to the problem of selectional preference induction and automatically evaluated in a pseudo-disambiguation task. The results show that tensor factorization, and non-negative tensor factorization in particular, is a promising tool for NLP.
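The paper's own model and update rules are not reproduced on this page; purely as an illustration of the technique the abstract names, the following is a minimal numpy sketch of non-negative CP (PARAFAC) decomposition with multiplicative updates, the standard form of non-negative tensor factorization for a three-way tensor such as a (verb, subject, object) co-occurrence count tensor. All function names and parameters here are illustrative assumptions, not the author's implementation.

```python
import numpy as np

def khatri_rao(B, C):
    """Column-wise Kronecker (Khatri-Rao) product: (J*K) x R."""
    R = B.shape[1]
    return np.einsum('jr,kr->jkr', B, C).reshape(-1, R)

def unfold(X, mode):
    """Mode-n unfolding of a 3-way tensor (row-major column ordering)."""
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def ntf(X, rank, n_iter=500, eps=1e-9, seed=0):
    """Non-negative CP decomposition of a 3-way non-negative tensor X
    via Lee-Seung-style multiplicative updates (Frobenius loss)."""
    rng = np.random.default_rng(seed)
    # Strictly positive random init keeps every update well defined.
    factors = [rng.random((dim, rank)) + 0.1 for dim in X.shape]
    for _ in range(n_iter):
        for mode in range(3):
            A = factors[mode]
            P, Q = [factors[m] for m in range(3) if m != mode]
            KR = khatri_rao(P, Q)                      # matches unfold()'s ordering
            numer = unfold(X, mode) @ KR
            denom = A @ ((P.T @ P) * (Q.T @ Q)) + eps  # Gram-matrix shortcut
            factors[mode] = A * numer / denom          # multiplicative update
    return factors

def reconstruct(factors):
    """Rebuild the tensor from its rank-R factor matrices."""
    A, B, C = factors
    return np.einsum('ir,jr,kr->ijk', A, B, C)
```

Because the updates only ever multiply by non-negative ratios, the factors stay non-negative throughout, which is what makes the resulting latent dimensions interpretable as soft clusters of, e.g., verbs and their preferred argument combinations.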

Dates and versions

inria-00546045, version 1 (13-12-2010)

Cite

Tim van de Cruys. A Non-negative Tensor Factorization Model for Selectional Preference Induction. Natural Language Engineering, 2010, 16 (4), pp.417-437. ⟨10.1017/S1351324910000148⟩. ⟨inria-00546045⟩