Generative grammar, neural networks, and the implementational mapping problem: Response to Pater

Ewan Dunbar 1, 2, 3 
Abstract : On the same list that ranked Syntactic Structures as the number one most influential twentieth-century work in cognitive science, the number two work cited is David Marr's Vision (Marr, 1982). Marr proposed that the only way to achieve a complete understanding of complex information processing systems like those found in the brain is to simultaneously analyze the system at three different levels of analysis: a computational level, where one formally defines the problem that the system is solving, specifying which inputs must map to which outputs; as well as an algorithmicrepresentational level, where one spells out a method for arriving at an output, given a particular input-a formal hypothesis about how the system encodes information, and how it manipulates it; spelled out at an implementational level, which details in what sense the physical system in question can actually be seen as carrying out the proposed algorithm. Generative grammar standardly proposes to analyse language cognition at the first two levels, leaving the problem of how such a system would actually be implemented in the brain open-leaving open, thus, the nature of the link between the algorithmic-representational theory and the physical implementation. Specifying and evaluting any such link is a difficult problem in itself-call it the implementational mapping problem. Pater's proposal for closer interdisciplinary integration between generative grammar and connectionist research will immediately run up against this problem too. Neural network models are not brains, or even brain models, but they are complex systems whose behaviour cannot by understood without some amount of analysis. There is an increasing recognition that we lack good methods for understanding how a given neural network model works-for giving a highlevel characterization of how the system encodes and manipulates information-and a recognition that, for several reasons, this is an untenable state of affairs. 
For the time being, this opacity places serious limits on what we might hope to achieve through a closer integration of connectionist and formal linguistic theories. While we may be able to tell whether, and how well, a network model learns some linguistic phenomenon, our tools for understanding what a given network has learned remain very limited. Feedforward neural networks can in principle approximate any function to arbitrary precision; thus, in principle, they can implement any grammar we can imagine. This has the potential to make neural networks extremely useful tools for modelling learning in generative grammar, but their opacity prevents us from arriving at satisfactory answers to basic questions. Has the network arrived at a solution which is (a notational variant of) a grammar made possible under a specific linguistic theory T? Has it arrived at an approximation to such a grammar? Has it arrived at some alternate, unforeseen solution that is nevertheless possible under T, or does its solution fall outside the bounds of T? Can all possible parameter settings learnable in a given network be seen as licit grammars in T, or as approximations to such grammars? Conversely, can all licit grammars in T be represented or approximated with a given network architecture? In order for the network to learn solutions within the scope of T, does it require special evidence, inaccessible to the child, or are such grammars always learnable from realistic data? Before we can use neural networks to help answer these fundamental questions about the learnability of grammars, we need to seriously address the problem of how to construct and evaluate a mapping between a grammar defined in a linguistic formalism, on the one hand, and a neural network model, on the other, which may (or may not) encode linguistic knowledge in exactly the way predicted by that grammar.
Document type: Journal articles

https://hal.archives-ouvertes.fr/hal-02274522
Contributor: Ewan Dunbar
Submitted on: Wednesday, June 22, 2022 - 3:43:04 PM
Last modification on: Sunday, June 26, 2022 - 5:35:24 AM

File

Pater_commentary.pdf
Files produced by the author(s)

Citation

Ewan Dunbar. Generative grammar, neural networks, and the implementational mapping problem: Response to Pater. Language, Linguistic Society of America, 2019, 95 (1), pp.e87-e98. ⟨10.1353/lan.2019.0013⟩. ⟨hal-02274522⟩
