"LazImpa": Lazy and Impatient neural agents learn to communicate efficiently - Inria - Institut national de recherche en sciences et technologies du numérique Accéder directement au contenu
Conference paper. Year: 2020

"LazImpa": Lazy and Impatient neural agents learn to communicate efficiently

Abstract

Previous work has shown that artificial neural agents naturally develop surprisingly inefficient codes. This is illustrated by a referential game in which a speaker and a listener neural network optimize accurate transmission over a discrete channel: the emergent messages fail to achieve an optimal length. Furthermore, frequent messages tend to be longer than infrequent ones, a pattern contrary to the Zipf Law of Abbreviation (ZLA) observed in all natural languages. Here, we show that near-optimal and ZLA-compatible messages can emerge, but only if both the speaker and the listener are modified. We hence introduce a new communication system, "LazImpa", where the speaker is made increasingly lazy, i.e., avoids long messages, and the listener impatient, i.e., seeks to guess the intended content as soon as possible.
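The two modifications described in the abstract can be sketched in a toy form: a length penalty on the speaker that grows as its accuracy improves ("increasingly lazy"), and a listener loss averaged over every prefix of the message, since the impatient listener guesses after each symbol. This is an illustrative sketch only, not the paper's actual implementation; the function names `lazy_cost` and `impatient_loss` and the weight `alpha` are our own labels.

```python
# Toy sketch of the LazImpa objective (illustrative; the paper's agents
# are recurrent speaker/listener networks trained end to end).

def lazy_cost(message_len, accuracy, alpha=0.1):
    """Length penalty scaled by current accuracy, so pressure against
    long messages only kicks in once communication is accurate."""
    return alpha * accuracy * message_len

def impatient_loss(per_prefix_losses):
    """The impatient listener makes a guess after every symbol, so its
    loss averages the losses computed on all message prefixes."""
    return sum(per_prefix_losses) / len(per_prefix_losses)

# Example: a 4-symbol message whose per-prefix losses shrink as more
# symbols arrive; total objective adds the laziness penalty.
prefix_losses = [2.0, 1.0, 0.5, 0.25]
total = impatient_loss(prefix_losses) + lazy_cost(len(prefix_losses), accuracy=0.9)
```

Because the per-prefix average rewards early correct guesses and the length penalty rewards short messages, the combined pressure pushes frequent inputs toward short codes, in line with ZLA.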
Main file: 2010.01878.pdf (2.31 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-03070404, version 1 (15-12-2020)


Cite as

Mathieu Rita, Rahma Chaabouni, Emmanuel Dupoux. "LazImpa": Lazy and Impatient neural agents learn to communicate efficiently. CoNLL 2020 - The SIGNLL Conference on Computational Natural Language Learning, Nov 2020, Virtual, France. ⟨hal-03070404⟩