I’m interested in the structure of words and sentences, and how they are represented in the brain.
I take a ‘non-lexicalist’ approach, which means that I think of structure above and below the word level (syntax and morphology) as the same mechanism. I also don’t think of the language system as being organized around words, or any kind of unified representation that groups together meaning, syntax, and form. This is more than a difference of terminology; the non-lexicalist approach has important implications for how we think about language production and comprehension. Even something as ‘simple’ as lexical retrieval can be reanalyzed, both representationally and algorithmically – what is lexical retrieval, if ‘words’ are not stored triads of meaning, syntax, and form? How and when would we expect to see effects of syntactic structure building, if words themselves can be syntactically complex in the same way as phrases? This kind of approach opens the door to many other interesting empirical questions that have yet to be explored.
This has led to several different projects, described below:
A non-lexicalist model of language production
In my current work with Ellen Lau, I am developing a ‘non-lexicalist’ model of language production. Many existing models of language production and comprehension rely on lexicalist theories of syntax, which assume that structures above and below the word level (syntax and morphology) are fundamentally different in kind, and that lexical items necessarily bundle form, meaning, and syntax. A large body of language data has been used to argue against these lexicalist approaches in syntactic theories such as Distributed Morphology and Nanosyntax, but these conclusions have not been extended to psycholinguistic or neurolinguistic models. To move away from lexicalism in models of language production, it is not enough to simply update the syntactic representations; to be fully compatible with the non-lexicalist approach, it is also necessary to reconsider the algorithms involved in language production. The model that we propose does not rely on a ‘lemma’ representation; instead, lexical knowledge is represented as mappings between separate representations of meaning, syntax, and form. The model also emphasizes the role of cognitive control mechanisms in linearizing speech, and it generates predictions for aphasia and other acquired language disorders. By moving away from lexicalist assumptions, this model aligns better with contemporary syntactic theory and provides better cross-linguistic coverage. Email me for more details!
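To make the representational contrast concrete, here is a minimal sketch in Python. It is purely illustrative, not the actual model: the entries, the category-neutral root `ROOT_23`, and the `retrieve` helper are all invented for the example. The point is only that a lexicalist ‘lemma’ bundles meaning, syntax, and form in one object, whereas the non-lexicalist alternative keeps the three levels separate and links them with mappings.

```python
# Lexicalist-style lemma: one unified entry bundling all three levels
lemma_cat = {"meaning": "FELINE", "syntax": {"category": "N"}, "form": "/kaet/"}

# Non-lexicalist alternative: independent representations plus mappings.
# Names and structure here are illustrative assumptions, not the model itself.
meanings = {"FELINE"}
syntax_nodes = {"ROOT_23": {"category": None}}  # roots are category-neutral
forms = {"/kaet/"}

meaning_to_syntax = {"FELINE": "ROOT_23"}
syntax_to_form = {("ROOT_23", "N"): "/kaet/"}  # form depends on syntactic context


def retrieve(meaning, category):
    """'Lexical retrieval' becomes chained lookups across separate mappings."""
    root = meaning_to_syntax[meaning]
    return syntax_to_form[(root, category)]


print(retrieve("FELINE", "N"))  # /kaet/
```

On this picture, there is no single stored object to ‘retrieve’; what looks like lexical retrieval is the composition of several independent mappings, which is why the algorithmic story has to change along with the representations.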
Modeling verb inflection deficits in non-fluent aphasia
Along with Naomi Feldman, I have also been using the non-lexicalist approach to reanalyze models of aphasia, testing our hypotheses using neural networks. Lexicalist models such as Pinker & Ullman’s dual-route model of past-tense inflection argue that non-fluent aphasia involves a deficit to the grammar (because people with non-fluent aphasia tend to exhibit a greater deficit for regular inflections), while fluent aphasia involves a deficit to the lexicon (because people with fluent aphasia tend to exhibit a greater deficit for irregular inflections). Under the non-lexicalist approach, this distinction no longer holds: regular and irregular verbs involve the same syntactic configuration in the past tense, but differ in the mapping between syntactic units and form. The distribution of deficits observed in non-fluent aphasia (and the observed cross-linguistic variability in inflection deficits) may instead emerge from differences in frequency distributions and in the complexity of the transformations involved. To test this prediction, I trained a recurrent neural network on English past-tense inflection and lesioned it to simulate non-fluent aphasia. The lesioned model reproduced the pattern of deficits observed for English speakers with non-fluent aphasia. See our CogSci proceedings paper for more details!
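The lesioning step in this kind of simulation is conceptually simple; a common approach (and the one sketched here, with NumPy, as an assumption rather than a description of our exact implementation) is to sever a random fraction of a trained network’s connections by zeroing weights. The `lesion` function and the severity value below are illustrative.

```python
import numpy as np


def lesion(weights: np.ndarray, severity: float,
           rng: np.random.Generator) -> np.ndarray:
    """Simulate a lesion by zeroing a random fraction of connections.

    severity: fraction of weights severed (0.0 = intact, 1.0 = fully lesioned).
    Returns a new weight matrix; the original is left untouched.
    """
    keep_mask = rng.random(weights.shape) >= severity
    return weights * keep_mask


rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8))          # stands in for one trained weight matrix
W_lesioned = lesion(W, severity=0.5, rng=rng)

# Roughly half the connections are now zero; surviving weights are unchanged.
print(float(np.mean(W_lesioned == 0)))
```

Varying `severity` then lets one compare the model’s accuracy on regular versus irregular forms at different lesion sizes, which is how a graded deficit pattern can emerge without positing separate grammar and lexicon routes.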
Zero morphology in on-line sentence processing
My undergraduate honors thesis, advised by Masaya Yoshida, explored how categorially ambiguous words (such as “visit” or “lock”, which can be used as either nouns or verbs) are processed. One account suggests that without overt morphology, both versions of “visit” should be processed in the same way; another suggests that one form of “visit” is derived from the other through ‘zero derivation’ (the addition of a phonologically null suffix), so the derived form should be processed more slowly than the base form due to its greater morphosyntactic complexity. The study used eye tracking while reading to identify the processing costs associated with these words when placed into unambiguous and minimally different sentential contexts (“John expected to visit when the doctor called” / “John expected the visit when the doctor called”). The experiment found evidence for the second account: the covert morphosyntactic structure of zero-derived words incurs a reading-time slowdown, independent of base and surface category. This project raises many interesting questions about word recognition and the kinds of morphological composition that occur in on-line sentence processing. Are words stored and used as “chunks” of syntax? Or does the parser engage in complete morphological decomposition when a word is encountered? How much of the morphological structure is available to an active (and sometimes messy) parser? See our CUNY poster, or contact me for more details.