We derive the well-studied subregular classes of formal languages, which computationally characterize natural language typology, purely from the perspective of algorithmic learning problems.
We comment on non-human animals' ability to learn syntactic versus phonological dependencies in pattern-learning experiments.
We provide a computational overview of vowel harmony, describing necessary and sufficient conditions on its phonotactics, processes, and learning.
We comment on mathematical fallacies present in artificial grammar learning experiments and suggest how to integrate psycholinguistic and mathematical results.
We analyze the expressivity of a variety of recurrent encoder-decoder networks, showing that they are limited to learning subsequential functions, and connect RNNs with attention mechanisms to a class of deterministic two-way transducers.
This chapter examines the brief but vibrant history of learnability in phonology.
I provide a vector space characterization of the Star-Free and Locally Threshold Testable classes of formal languages over arbitrary data structures.
We describe a partial order on the space of model-theoretic constraints and a learning algorithm for constraint inference.
We caution against confusing ignorance of biases with the absence of biases in machine learning and linguistics, especially for neural networks.