Comparing Word Representations for Implicit Discourse Relation Classification

Chloé Braud 1,*, Pascal Denis 2
* Corresponding author
1 ALPAGE - Analyse Linguistique Profonde à Grande Echelle (Large-scale deep linguistic processing)
Inria Paris-Rocquencourt, UPD7 - Université Paris Diderot - Paris 7
2 MAGNET - Machine Learning in Information Networks
Inria Lille - Nord Europe, CRIStAL - Centre de Recherche en Informatique, Signal et Automatique de Lille - UMR 9189
Abstract: This paper presents a detailed comparative framework for assessing the usefulness of unsupervised word representations for identifying so-called implicit discourse relations. Specifically, we compare standard one-hot word pair representations against low-dimensional ones based on Brown clusters and word embeddings. We also consider various word vector combination schemes for deriving discourse segment representations from word vectors, and compare representations based either on all words or limited to head words. Our main finding is that denser representations systematically outperform sparser ones and achieve state-of-the-art performance or better without the need for additional hand-crafted features.
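
To illustrate the two feature families contrasted in the abstract, here is a minimal Python sketch (using NumPy). The toy random vectors, token lists, and function names are illustrative assumptions of this note, not the authors' implementation; they only show how a dense segment-pair representation can be derived by averaging and concatenating word vectors, next to a sparse one-hot word-pair baseline.

    import numpy as np

    # Toy, randomly initialised embeddings stand in for the pre-trained Brown
    # clusters / word embeddings used in the paper (illustration only).
    rng = np.random.default_rng(0)
    vocab = ["the", "meeting", "was", "cancelled", "it", "rained", "heavily"]
    dim = 5
    embeddings = {w: rng.normal(size=dim) for w in vocab}

    def segment_vector(tokens, combine="mean"):
        # Combine the word vectors of one discourse segment into a single vector.
        vecs = np.array([embeddings[t] for t in tokens if t in embeddings])
        return vecs.mean(axis=0) if combine == "mean" else vecs.sum(axis=0)

    # Two arguments of an implicit discourse relation (no explicit connective).
    arg1 = ["the", "meeting", "was", "cancelled"]
    arg2 = ["it", "rained", "heavily"]

    # Dense pair representation: concatenation of the two segment vectors.
    dense_pair = np.concatenate([segment_vector(arg1), segment_vector(arg2)])

    # Sparse baseline: one-hot word-pair features (cross product of the tokens),
    # whose dimensionality grows quadratically with the vocabulary.
    word_pairs = {(w1, w2) for w1 in arg1 for w2 in arg2}

    print(dense_pair.shape)   # (10,), i.e. 2 * dim
    print(len(word_pairs))    # 12 active pair features for this example

The contrast in output sizes is the point: the dense representation has a fixed, small dimensionality regardless of vocabulary, while the word-pair feature space grows with every new token combination.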

https://hal.inria.fr/hal-01185927

Identifiers

  • HAL Id : hal-01185927, version 1

Citation

Chloé Braud, Pascal Denis. Comparing Word Representations for Implicit Discourse Relation Classification. Empirical Methods in Natural Language Processing (EMNLP 2015), Sep 2015, Lisbon, Portugal. ⟨hal-01185927⟩
