Conference paper

Domain adaptation for sequence labeling using hidden Markov models

Edouard Grave (1, 2), Guillaume Obozinski (3), Francis Bach (1, 2)
2 SIERRA - Statistical Machine Learning and Parsimony, DI-ENS - Département d'informatique de l'ENS Paris, Inria Paris-Rocquencourt, CNRS - Centre National de la Recherche Scientifique (UMR 8548)
Abstract: Most natural language processing systems based on machine learning are not robust to domain shift. For example, a state-of-the-art syntactic dependency parser trained on Wall Street Journal sentences loses more than ten absolute points of performance when tested on textual data from the Web. An efficient way to make these methods more robust to domain shift is to first learn a word representation from large amounts of unlabeled data drawn from both domains, and then use this representation as features in a supervised learning algorithm. In this paper, we propose to use hidden Markov models to learn word representations for part-of-speech tagging. In particular, we study the influence of using data from the source domain, the target domain, or both to learn the representation, as well as different ways of representing words with an HMM.
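The recipe described in the abstract — train an HMM on unlabeled text, then represent each token by its posterior distribution over hidden states and feed those vectors to a supervised tagger — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the 2-state, 3-word setup, the probability values, and the function name are all hypothetical (the paper's models are trained with EM on large unlabeled corpora).

```python
# Toy sketch (hypothetical numbers): a 2-state HMM over a 3-word vocabulary.
pi = [0.6, 0.4]                      # initial state probabilities
A = [[0.7, 0.3],                     # A[i][j] = p(next state j | state i)
     [0.4, 0.6]]
B = [[0.5, 0.4, 0.1],                # B[i][w] = p(word w | state i)
     [0.1, 0.3, 0.6]]

def state_posteriors(obs):
    """Forward-backward: p(state at position t | whole sentence), per t."""
    T, K = len(obs), len(pi)
    alpha = [[0.0] * K for _ in range(T)]
    beta = [[1.0] * K for _ in range(T)]
    # Forward pass.
    for i in range(K):
        alpha[0][i] = pi[i] * B[i][obs[0]]
    for t in range(1, T):
        for j in range(K):
            alpha[t][j] = sum(alpha[t - 1][i] * A[i][j] for i in range(K)) * B[j][obs[t]]
    # Backward pass.
    for t in range(T - 2, -1, -1):
        for i in range(K):
            beta[t][i] = sum(A[i][j] * B[j][obs[t + 1]] * beta[t + 1][j] for j in range(K))
    # Normalized posteriors: one K-dimensional feature vector per token.
    reps = []
    for t in range(T):
        g = [alpha[t][i] * beta[t][i] for i in range(K)]
        z = sum(g)
        reps.append([x / z for x in g])
    return reps

sentence = [0, 2, 1, 2]                        # token ids of an unlabeled sentence
token_features = state_posteriors(sentence)    # one posterior vector per token
```

Each row of `token_features` sums to one and can be used directly as a feature vector for a downstream part-of-speech classifier; because the HMM is trained on unlabeled text from both domains, the same features are available at test time on the target domain.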
Contributor: Edouard Grave
Submitted on: Friday, December 13, 2013 - 2:11:27 PM
Last modification on: Thursday, March 17, 2022 - 10:08:43 AM
Long-term archiving on: Tuesday, March 18, 2014 - 12:35:35 PM




  • HAL Id : hal-00918371, version 1
  • arXiv: 1312.4092


Edouard Grave, Guillaume Obozinski, Francis Bach. Domain adaptation for sequence labeling using hidden Markov models. New Directions in Transfer and Multi-Task: Learning Across Domains and Tasks (NIPS Workshop), Dec 2013, Lake Tahoe, United States. ⟨hal-00918371⟩
