Deep learning languages: a key fundamental shift from probabilities to weights?

François Coste 1
1 Dyliss - Dynamics, Logics and Inference for biological Systems and Sequences, Inria Rennes – Bretagne Atlantique, IRISA-D7 - Gestion des Données et de la Connaissance (Data and Knowledge Management)
Abstract: Recent successes in language modeling, notably with deep learning methods, coincide with a shift from probabilistic to weighted representations. We raise here the question of the importance of this evolution, in light of the practical limitations of a classical and simple probabilistic modeling approach for the classification of protein sequences, and in relation to the need for principled methods to learn non-probabilistic models.
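
To make the distinction raised in the abstract concrete, here is a minimal sketch, not taken from the paper: the single-symbol model, its random weights and the example sequence are invented purely for illustration. It contrasts a probabilistic scoring of a protein sequence, where per-symbol scores are normalized to sum to 1, with a weighted scoring that uses the same unnormalized real-valued weights directly.

    import math
    import random

    ALPHABET = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard amino acids

    # Toy per-symbol parameters, drawn at random for illustration only.
    random.seed(0)
    raw_weights = {a: random.uniform(-1.0, 1.0) for a in ALPHABET}

    # Probabilistic view: normalize the weights into a distribution over
    # symbols (softmax), so the symbol scores sum to 1 and a sequence
    # receives a log-probability.
    log_z = math.log(sum(math.exp(w) for w in raw_weights.values()))
    log_probs = {a: w - log_z for a, w in raw_weights.items()}

    def log_probability(seq):
        """Log-probability of a sequence under the position-independent model."""
        return sum(log_probs[a] for a in seq)

    def weighted_score(seq):
        """Unnormalized score: the sum of per-symbol weights, no normalization."""
        return sum(raw_weights[a] for a in seq)

    seq = "MKTAYIAK"  # hypothetical protein fragment
    print("log P(seq) =", round(log_probability(seq), 3))
    print("score(seq) =", round(weighted_score(seq), 3))

The two scores differ only by the normalization constant: the probabilistic model is constrained to define a distribution, whereas the weighted model is free to assign arbitrary real-valued scores to sequences.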

https://hal.inria.fr/hal-02235207

Identifiers

  • HAL Id: hal-02235207, version 1
  • arXiv: 1908.00785

Citation

François Coste. Deep learning languages: a key fundamental shift from probabilities to weights?. 2019. ⟨hal-02235207⟩
