My main research community is natural language processing (NLP), which lies at the intersection of computational linguistics and AI. NLP research is organized through the Association for Computational Linguistics and sister organizations, and is mostly published in the ACL Anthology.
My research centers on the linguistic foundations of NLP. I am most interested in computational, corpus-based ways of modeling how meaning is constructed (non)compositionally through language.
An extended research overview covers highlights of my published research to date.
Some topics I am currently excited about:
Linguistic Structure: How can general-purpose syntactic and semantic structures that represent compositionality be applied to corpora (design, treebanking, parsing, evaluation)? Frameworks: UD, CCG, AMR, UCCA, FrameNet. (Further details in this research overview from 2016.)
Adpositions: How do languages of the world convey relational meanings through grammatical connectives such as prepositions/postpositions? How can we describe these meanings in corpora and computational models? Framework: SNACS.
Metalanguage: How do people talk explicitly about language in domains such as linguistics, language learning, and law? How can we leverage such metalanguage in NLP?
Uncertainty, Rarity, and Noise: How can long-tail linguistic patterns/usages be identified, modeled, and evaluated? How can we model imperfections in linguistic data and in algorithmic predictions for improved analysis, learning, and interpretation?
For further details, see my recent publications or reach out to students in my lab.