Prevalence and recoverability of syntactic parameters in sparse distributed memories
Abstract
We propose a new method, based on sparse distributed memory, for studying dependence relations between syntactic parameters in the Principles and Parameters model of Syntax. By storing data of syntactic structures of world languages in a Kanerva network and checking recoverability of corrupted data from the network, we identify two different effects: an overall underlying relation between the prevalence of parameters across languages and their degree of recoverability, and a finer effect that makes some parameters more easily recoverable beyond what their prevalence would indicate. The latter can be seen as an indication of the existence of dependence relations, through which a given parameter can be determined using the remaining uncorrupted data.
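For concreteness, here is a minimal sketch (in Python with NumPy) of the kind of experiment the abstract describes: binary vectors of syntactic parameter values are stored autoassociatively in a Kanerva sparse distributed memory (address = data), a single parameter is then corrupted, and the memory's read-out is checked against the original value. The `SDM` class, the dimensions, the activation radius, and the random placeholder vectors below are illustrative assumptions, not the paper's actual code or dataset.

```python
import numpy as np

rng = np.random.default_rng(0)

class SDM:
    """Minimal Kanerva sparse distributed memory, used autoassociatively."""

    def __init__(self, dim, n_locations, radius, rng):
        self.dim = dim
        self.radius = radius
        # Hard address locations: random binary vectors of length `dim`.
        self.addresses = rng.integers(0, 2, size=(n_locations, dim))
        # Bipolar counters, one row of `dim` counters per hard location.
        self.counters = np.zeros((n_locations, dim), dtype=int)

    def _active(self, address):
        # Activate every hard location within Hamming distance `radius`.
        dists = np.count_nonzero(self.addresses != address, axis=1)
        return dists <= self.radius

    def write(self, address, data):
        # Add +1 for each 1-bit and -1 for each 0-bit at active locations.
        self.counters[self._active(address)] += 2 * data - 1

    def read(self, address):
        # Sum counters over active locations and threshold at zero.
        totals = self.counters[self._active(address)].sum(axis=0)
        return (totals > 0).astype(int)

# Placeholder data: one binary vector of `n_params` syntactic parameter
# values per language (random here; real linguistic data in the paper).
n_langs, n_params = 100, 21
languages = rng.integers(0, 2, size=(n_langs, n_params))

sdm = SDM(dim=n_params, n_locations=1000, radius=6, rng=rng)
for v in languages:
    sdm.write(v, v)              # store autoassociatively: address == data

# Corrupt one parameter of one language and test its recoverability.
corrupted = languages[0].copy()
corrupted[3] ^= 1                # flip parameter 3
recalled = sdm.read(corrupted)
print("parameter 3 recovered:", recalled[3] == languages[0][3])
```

Repeating this corruption-and-recall test over many languages and parameters yields a recoverability score for each parameter, which the paper compares against that parameter's prevalence across languages.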
Additional Information
© Springer International Publishing AG 2017. This work was performed in the last author's Mathematical and Computational Linguistics lab and CS101/Ma191 class at Caltech. The last author was partially supported by NSF grants DMS-1201512 and PHY-1205440.
Attached Files
Name | Size
---|---
Submitted - 1510.06342.pdf (md5:809b8a271165a8bedab1c98a4ceb9aa8) | 457.4 kB
Additional details
- Eprint ID
- 79010
- DOI
- 10.1007/978-3-319-68445-1_31
- Resolver ID
- CaltechAUTHORS:20170712-110411817
- NSF
- DMS-1201512
- NSF
- PHY-1205440
- Created
- 2017-07-12 (from EPrint's datestamp field)
- Updated
- 2021-11-15 (from EPrint's last_modified field)
- Series Name
- Lecture Notes in Computer Science
- Series Volume or Issue Number
- 10589