Conference papers

Comparing Statistical and Neural Models for Learning Sound Correspondences

Abstract: Cognate prediction and proto-form reconstruction are key tasks in computational historical linguistics that rely on the regularity of sound change. These tasks are closely related to machine translation, yet methods from that field have barely been applied to historical linguistics. In this paper, we therefore investigate the learnability of sound correspondences between a proto-language and its daughter languages for two machine-translation-inspired models, one statistical and one neural. We first carry out experiments on plausible artificial languages, without noise, to study the effect of each parameter on the algorithms' respective performance under near-perfect conditions. We then study real languages, namely Latin, Italian and Spanish, to see whether those performances generalise. We show that both model types manage to learn sound changes despite data scarcity, although the best-performing model type depends on several parameters, such as the size of the training data, its ambiguity, and the prediction direction.
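The regularity of sound change that the abstract relies on can be made concrete with a toy counting sketch. This is not the paper's statistical or neural model; it only shows that, given hand-aligned cognate pairs, position-wise character mismatches already surface recurring correspondences (e.g. Latin intervocalic t, c, p voicing to Spanish d, g, b). The cognate pairs and their alignments below are illustrative assumptions chosen to have equal length.

```python
from collections import Counter

# Toy, hand-picked Latin -> Spanish cognate pairs of equal length
# (orthographic approximations; illustrative only, not the paper's data).
pairs = [
    ("vita",   "vida"),    # intervocalic t -> d
    ("amica",  "amiga"),   # intervocalic c -> g
    ("capra",  "cabra"),   # intervocalic p -> b
    ("securu", "seguro"),  # c -> g, final u -> o
]

# Count position-wise character correspondences where the two
# languages disagree; regular changes show up as repeated pairs.
corr = Counter()
for lat, spa in pairs:
    for a, b in zip(lat, spa):
        if a != b:
            corr[(a, b)] += 1

print(corr.most_common())
# The c -> g correspondence recurs across pairs, hinting at a regular rule.
```

Real cognates rarely align one character to one character, which is why the paper turns to machine-translation-style models that learn alignments and context-dependent correspondences jointly rather than assuming them.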

Contributor: Clémentine Fourrier
Submitted on: Thursday, April 2, 2020 - 4:11:07 PM
Last modification on: Wednesday, June 8, 2022 - 12:50:06 PM




  • HAL Id: hal-02529929, version 1



Clémentine Fourrier, Benoît Sagot. Comparing Statistical and Neural Models for Learning Sound Correspondences. LT4HALA 2020: First Workshop on Language Technologies for Historical and Ancient Languages, May 2020, Marseille, France. ⟨hal-02529929⟩


