### Dissertation

• Nathan Schneider (2014).

### Refereed

• Jakob Prange, Nathan Schneider, and Lingpeng Kong (2022). Linguistic frameworks go toe-to-toe at neuro-symbolic language modeling. NAACL. [paper] [arXiv] We examine the extent to which, in principle, different syntactic and semantic graph representations can complement and improve neural language modeling. Specifically, by conditioning on a subgraph encapsulating the locally relevant sentence history, can a model make better next-word predictions than a pretrained sequential language model alone? With an ensemble setup consisting of GPT-2 and ground-truth graphs from one of 7 different formalisms, we find that the graph information indeed improves perplexity and other metrics. Moreover, this architecture provides a new way to compare different frameworks of linguistic representation. In our oracle graph setup, training and evaluating on English WSJ, semantic constituency structures prove most useful to language modeling performance—outpacing syntactic constituency structures as well as syntactic and semantic dependency structures. @InProceedings{semanticlm, address = {Seattle, Washington}, title = {Linguistic frameworks go toe-to-toe at neuro-symbolic language modeling}, booktitle = {Proc. of {NAACL}}, url = {https://arxiv.org/abs/2112.07874}, author = {Prange, Jakob and Schneider, Nathan and Kong, Lingpeng}, month = jul, year = {2022}, publisher = {Association for Computational Linguistics} }
• Tahira Naseem, Austin Blodgett, Sadhana Kumaravel, Tim O’Gorman, Young-Suk Lee, Jeffrey Flanigan, Ramón Fernandez Astudillo, Radu Florian, Salim Roukos, and Nathan Schneider (2022). DocAMR: Multi-sentence AMR representation and evaluation. NAACL. [paper] [arXiv] Despite extensive research on parsing of English sentences into Abstract Meaning Representation (AMR) graphs, which are compared to gold graphs via the Smatch metric, full-document parsing into a unified graph representation lacks well-defined representation and evaluation. Taking advantage of a super-sentential level of coreference annotation from previous work, we introduce a simple algorithm for deriving a unified graph representation, avoiding the pitfalls of information loss from over-merging and lack of coherence from under-merging. Next, we describe improvements to the Smatch metric to make it tractable for comparing document-level graphs and use it to re-evaluate the best published document-level AMR parser. We also present a pipeline approach combining the top-performing AMR parser and coreference resolution systems, providing a strong baseline for future research. @InProceedings{docamr, address = {Seattle, Washington}, title = {{DocAMR}: Multi-sentence {AMR} representation and evaluation}, booktitle = {Proc. of {NAACL}}, url = {https://arxiv.org/abs/2112.08513}, author = {Naseem, Tahira and Blodgett, Austin and Kumaravel, Sadhana and O'Gorman, Tim and Lee, Young-Suk and Flanigan, Jeffrey and {Fernandez Astudillo}, Ram\'{o}n and Florian, Radu and Roukos, Salim and Schneider, Nathan}, month = jul, year = {2022}, publisher = {Association for Computational Linguistics} }
• Tatsuya Aoyama and Nathan Schneider (2022). Probe-less probing of BERT’s layer-wise linguistic knowledge with masked word prediction. NAACL Student Research Workshop. [paper] The current study quantitatively (and qualitatively, for illustrative purposes) analyzes BERT’s layer-wise masked word prediction on an English corpus, and finds (1) that the layer-wise localization of linguistic knowledge primarily shown in probing studies is replicated in a behavior-based design and (2) that syntactic and semantic information is encoded at different layers for words of different syntactic categories. Hypothesizing that the above results are correlated with the number of likely potential candidates of the masked word prediction, we also investigate how the results differ for tokens within multiword expressions. @InProceedings{bertpos, address = {Seattle, Washington}, title = {Probe-Less Probing of {BERT}'s Layer-Wise Linguistic Knowledge with Masked Word Prediction}, booktitle = {Proc. of the {NAACL} Student Research Workshop}, author = {Aoyama, Tatsuya and Schneider, Nathan}, month = jul, year = {2022}, publisher = {Association for Computational Linguistics} }
• Luke Gessler, Austin Blodgett, Joseph Ledford, and Nathan Schneider (2022). Xposition: An online multilingual database of adpositional semantics. LREC. [paper] [website] We present Xposition, an online platform for documenting adpositional semantics across languages in terms of supersenses (Schneider et al., 2018). More than just a lexical database, Xposition houses annotation guidelines, structured lexicographic documentation, and annotated corpora. Guidelines and documentation are stored as wiki pages for ease of editing, and described elements (supersenses, adpositions, etc.) are hyperlinked for ease of browsing. We describe how the platform structures information; its current contents across several languages; and aspects of the design of the web application that supports it, with special attention to how it supports datasets and standards that evolve over time. @InProceedings{xposition, address = {Marseille, France}, title = {{Xposition}: An Online Multilingual Database of Adpositional Semantics}, booktitle = {Proc. of {LREC}}, author = {Gessler, Luke and Blodgett, Austin and Ledford, Joseph and Schneider, Nathan}, month = jun, year = {2022}, publisher = {{ELRA}} }
• Aryaman Arora, Nitin Venkateswaran, and Nathan Schneider (2022). MASALA: Modelling and analysing the semantics of adpositions in linguistic annotation of Hindi. LREC. [paper] [arXiv] @InProceedings{hisnacs, address = {Marseille, France}, title = {{MASALA}: Modelling and Analysing the Semantics of Adpositions in Linguistic Annotation of {H}indi}, booktitle = {Proc. of {LREC}}, url = {https://arxiv.org/abs/2205.03955}, author = {Arora, Aryaman and Venkateswaran, Nitin and Schneider, Nathan}, month = jun, year = {2022}, publisher = {{ELRA}} }
• Yang Janet Liu, Jena D. Hwang, Nathan Schneider, and Vivek Srikumar (2022). Putting context in SNACS: A 5-way classification of adpositional pragmatic markers. Linguistic Annotation Workshop. [paper] The SNACS framework provides a network of semantic labels called supersenses for annotating adpositional semantics in corpora. In this work, we consider English prepositions (and prepositional phrases) that are chiefly pragmatic, contributing extra-propositional contextual information such as speaker attitudes and discourse structure. We introduce a preliminary taxonomy of pragmatic meanings to supplement the semantic SNACS supersenses, with guidelines for the annotation of coherence connectives, commentary markers, and topic and focus markers. We also examine annotation disagreements, delve into the trickiest boundary cases, and offer a discussion of future improvements. @InProceedings{snacscontext, address = {Marseille, France}, title = {Putting Context in {SNACS}: A 5-Way Classification of Adpositional Pragmatic Markers}, booktitle = {Proc. of the 16th Linguistic Annotation Workshop}, author = {Liu, Yang Janet and Hwang, Jena D. and Schneider, Nathan and Srikumar, Vivek}, month = jun, year = {2022}, publisher = {{ELRA}} }
• Shira Wein, Wai Ching Leung, Yifu Mu, and Nathan Schneider (2022). Effect of source language on AMR structure. Linguistic Annotation Workshop. [paper] The Abstract Meaning Representation (AMR) annotation schema was originally designed for English. But the formalism has since been adapted for annotation in a variety of languages. Meanwhile, cross-lingual parsers have been developed to derive English AMR representations for sentences from other languages—implicitly assuming that English AMR can approximate an interlingua. In this work, we investigate the similarity of AMR annotations in parallel data and how much the language matters in terms of the graph structure. We set out to quantify the effect of sentence language on the structure of the parsed AMR. As a case study, we take parallel AMR annotations from Mandarin Chinese and English AMRs, and replace all Chinese concepts with equivalent English tokens. We then compare the two graphs via the Smatch metric as a measure of structural similarity. We find that source language has a dramatic impact on AMR structure, with Smatch scores below 50% between English and Chinese graphs in our sample—an important reference point for interpreting Smatch scores in cross-lingual AMR parsing. @InProceedings{zh2enamr, address = {Marseille, France}, title = {Effect of Source Language on {AMR} Structure}, booktitle = {Proc. of the 16th Linguistic Annotation Workshop}, author = {Wein, Shira and Leung, Wai Ching and Mu, Yifu and Schneider, Nathan}, month = jun, year = {2022}, publisher = {{ELRA}} }
• Nathan Schneider and Amir Zeldes (2021). Mischievous nominal constructions in Universal Dependencies. Universal Dependencies Workshop. [paper] [arXiv] [slides] While the highly multilingual Universal Dependencies (UD) project provides extensive guidelines for clausal structure as well as structure within canonical nominal phrases, a standard treatment is lacking for many “mischievous” nominal phenomena that break the mold. As a result, numerous inconsistencies within and across corpora can be found, even in languages with extensive UD treebanking work, such as English. This paper surveys the kinds of mischievous nominal expressions attested in English UD corpora and proposes solutions primarily with English in mind, but which may offer paths to solutions for a variety of UD languages. @inproceedings{mncsUD, title = "Mischievous Nominal Constructions in {U}niversal {D}ependencies", author = "Schneider, Nathan and Zeldes, Amir", booktitle = "Proceedings of the Fifth Workshop on Universal Dependencies (UDW, SyntaxFest 2021)", month = dec, year = "2021", address = "Sofia, Bulgaria", url = "https://aclanthology.org/2021.udw-1.14", pages = "160--172", publisher = "Association for Computational Linguistics" }
• Luke Gessler and Nathan Schneider (2021). BERT has uncommon sense: similarity ranking for word sense BERTology. BlackboxNLP. [paper] [arXiv] [poster] An important question concerning contextualized word embedding (CWE) models like BERT is how well they can represent different word senses, especially those in the long tail of uncommon senses. Rather than build a WSD system as in previous work, we investigate contextualized embedding neighborhoods directly, formulating a query-by-example nearest neighbor retrieval task and examining ranking performance for words and senses in different frequency bands. In an evaluation on two English sense-annotated corpora, we find that several popular CWE models all outperform a random baseline even for proportionally rare senses, without explicit sense supervision. However, performance varies considerably even among models with similar architectures and pretraining regimes, with especially large differences for rare word senses, revealing that CWE models are not all created equal when it comes to approximating word senses in their native representations. @InProceedings{bertsenserank, author = {Gessler, Luke and Schneider, Nathan}, title = {{BERT} has Uncommon Sense: Similarity Ranking for Word Sense {BERT}ology}, booktitle = {Proceedings of the Fourth {BlackboxNLP} Workshop on Analyzing and Interpreting Neural Networks for {NLP}}, month = nov, year = {2021}, address = {Online and Punta Cana, Dominican Republic}, url = {https://aclanthology.org/2021.blackboxnlp-1.43}, pages = {539--547}, publisher = {Association for Computational Linguistics} }
• Emma Manning and Nathan Schneider (2021). Referenceless parsing-based evaluation of AMR-to-English generation. Eval4NLP. [paper] [slides] Honorable Mention paper. Reference-based automatic evaluation metrics are notoriously limited for NLG due to their inability to fully capture the range of possible outputs. We examine a referenceless alternative: evaluating the adequacy of English sentences generated from Abstract Meaning Representation (AMR) graphs by parsing into AMR and comparing the parse directly to the input. We find that the errors introduced by automatic AMR parsing substantially limit the effectiveness of this approach, but a manual editing study indicates that as parsing improves, parsing-based evaluation has the potential to outperform most reference-based metrics. @InProceedings{amrgenevalsmatch, author = {Manning, Emma and Schneider, Nathan}, title = {Referenceless Parsing-Based Evaluation of {AMR}-to-{E}nglish Generation}, booktitle = {Proceedings of the 2nd Workshop on Evaluation and Comparison of {NLP} Systems}, month = nov, year = {2021}, address = {Online and Punta Cana, Dominican Republic}, url = {https://aclanthology.org/2021.eval4nlp-1.12}, pages = {114--122}, publisher = {Association for Computational Linguistics} }
• Zhuxin Wang, Jakob Prange, and Nathan Schneider (2021). Subcategorizing adverbials in Universal Conceptual Cognitive Annotation. LAW-DMR. [paper] [poster] Universal Conceptual Cognitive Annotation (UCCA) is a semantic annotation scheme that organizes texts into coarse predicate-argument structure, offering a broad coverage of semantic phenomena. At the same time, there is still need for a finer-grained treatment of many of the categories. The Adverbial category is of special interest, as it covers a wide range of fundamentally different meanings such as negation, causation, aspect, and event quantification. In this paper we introduce a refinement annotation scheme for UCCA’s Adverbial category, showing that UCCA Adverbials can indeed be subcategorized into at least 7 semantic types, and doing so can help clarify and disambiguate the otherwise coarse-grained labels. We provide a preliminary set of annotation guidelines, as well as pilot annotation experiments with high inter-annotator agreement, confirming the validity of the scheme. @InProceedings{uccaadv, author = {Wang, Zhuxin and Prange, Jakob and Schneider, Nathan}, title = {Subcategorizing Adverbials in {U}niversal {C}onceptual {C}ognitive {A}nnotation}, booktitle = {Proceedings of the Joint 15th Linguistic Annotation Workshop ({LAW}) and 3rd Designing Meaning Representations ({DMR}) Workshop}, month = nov, year = {2021}, address = {Online and Punta Cana, Dominican Republic}, url = {https://aclanthology.org/2021.law-1.10}, pages = {96--105}, publisher = {Association for Computational Linguistics} }
• Shira Wein and Nathan Schneider (2021). Classifying divergences in cross-lingual AMR pairs. LAW-DMR. [paper] [poster] Translation divergences are varied and widespread, challenging approaches that rely on parallel text. To annotate translation divergences, we propose a schema grounded in the Abstract Meaning Representation (AMR), a sentence-level semantic framework instantiated for a number of languages. By comparing parallel AMR graphs, we can identify specific points of divergence. Each divergence is labeled with both a type and a cause. We release a small corpus of annotated English-Spanish data, and analyze the annotations in our corpus. @InProceedings{amrdivergence, author = {Wein, Shira and Schneider, Nathan}, title = {Classifying Divergences in Cross-lingual {AMR} Pairs}, booktitle = {Proceedings of the Joint 15th Linguistic Annotation Workshop ({LAW}) and 3rd Designing Meaning Representations ({DMR}) Workshop}, month = nov, year = {2021}, address = {Online and Punta Cana, Dominican Republic}, url = {https://aclanthology.org/2021.law-1.6}, pages = {56--65}, publisher = {Association for Computational Linguistics} }
• Taelin Karidi, Yichu Zhou, Nathan Schneider, Omri Abend, and Vivek Srikumar (2021). Putting words in BERT’s mouth: navigating contextualized vector spaces with pseudowords. EMNLP. [paper] [arXiv] [slides] [data and code] Also presented at BlackboxNLP 2021. We present a method for exploring regions around individual points in a contextualized vector space (particularly, BERT space), as a way to investigate how these regions correspond to word senses. By inducing a contextualized “pseudoword” vector as a stand-in for a static embedding in the input layer, and then performing masked prediction of a word in the sentence, we are able to investigate the geometry of the BERT-space in a controlled manner around individual instances. Using our method on a set of carefully constructed sentences targeting ambiguous English words, we find substantial regularity in the contextualized space, with regions that correspond to distinct word senses; but between these regions there are occasionally “sense voids”—regions that do not correspond to any intelligible sense. @inproceedings{mapp, address = {Online and Punta Cana, Dominican Republic}, title = {Putting words in {BERT}'s mouth: navigating contextualized vector spaces with pseudowords}, url = {https://aclanthology.org/2021.emnlp-main.806}, booktitle = {Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing}, author = {Karidi, Taelin and Zhou, Yichu and Schneider, Nathan and Abend, Omri and Srikumar, Vivek}, month = nov, year = {2021}, pages = {10300--10313}, publisher = {Association for Computational Linguistics} }
• Michael Kranzlein, Nelson F. Liu, and Nathan Schneider (2021). Making heads and tails of models with marginal calibration for sparse tagsets. Findings of EMNLP. [paper] [arXiv] [slides] [code] Also presented at Eval4NLP 2021 and BlackboxNLP 2021. For interpreting the behavior of a probabilistic model, it is useful to measure a model’s calibration—the extent to which the model produces reliable confidence scores. We address the open problem of calibration for tagging models with sparse tagsets, and recommend strategies to measure and reduce calibration error in such models. We show that several post-hoc recalibration techniques all reduce calibration error across the marginal distribution for two existing sequence taggers. Moreover, we propose tag frequency grouping as a way to measure calibration error in different frequency bands. Further, recalibrating each group separately promotes a more equitable reduction of calibration error across the tag frequency spectrum. @inproceedings{calibration, address = {Online and Punta Cana, Dominican Republic}, title = {Making heads \emph{and} tails of models with marginal calibration for sparse tagsets}, url = {https://aclanthology.org/2021.findings-emnlp.423}, booktitle = {Findings of the Association for Computational Linguistics: {EMNLP} 2021}, author = {Kranzlein, Michael and Liu, Nelson F. and Schneider, Nathan}, month = nov, year = {2021}, pages = {4919--4928}, publisher = {Association for Computational Linguistics} }
• Nelson F. Liu, Daniel Hershcovich, Michael Kranzlein, and Nathan Schneider (2021). Lexical semantic recognition. MWE. [paper] [supplement] [arXiv] [slides] [code] In lexical semantics, full-sentence segmentation and segment labeling of various phenomena are generally treated separately, despite their interdependence. We hypothesize that a unified lexical semantic recognition task is an effective way to encapsulate previously disparate styles of annotation, including multiword expression identification / classification and supersense tagging. Using the STREUSLE corpus, we train a neural CRF sequence tagger and evaluate its performance along various axes of annotation. As the label set generalizes that of previous tasks (PARSEME, DiMSUM), we additionally evaluate how well the model generalizes to those test sets, finding that it approaches or surpasses existing models despite training only on STREUSLE. Our work also establishes baseline models and evaluation metrics for integrated and accurate modeling of lexical semantics, facilitating future work in this area. @inproceedings{lsr, title = "Lexical Semantic Recognition", author = "Liu, Nelson F. and Hershcovich, Daniel and Kranzlein, Michael and Schneider, Nathan", booktitle = "Proceedings of the 17th Workshop on Multiword Expressions (MWE 2021)", month = aug, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.mwe-1.6", doi = "10.18653/v1/2021.mwe-1.6", pages = "49--56" }
• Austin Blodgett and Nathan Schneider (2021). Probabilistic, structure-aware algorithms for improved variety, accuracy, and coverage of AMR alignments. ACL-IJCNLP. [paper] [arXiv] [slides] [data and code] We present algorithms for aligning components of Abstract Meaning Representation (AMR) graphs to spans in English sentences. We leverage unsupervised learning in combination with heuristics, taking the best of both worlds from previous AMR aligners. Our unsupervised models, however, are more sensitive to graph substructures, without requiring a separate syntactic parse. Our approach covers a wider variety of AMR substructures than previously considered, achieves higher coverage of nodes and edges, and does so with higher accuracy. We will release the aligner tool, as well as our alignment datasets, for use in research on AMR parsing, generation, and evaluation. @inproceedings{leamr, title = "Probabilistic, Structure-Aware Algorithms for Improved Variety, Accuracy, and Coverage of {AMR} Alignments", author = "Blodgett, Austin and Schneider, Nathan", booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)", month = aug, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.acl-long.257", doi = "10.18653/v1/2021.acl-long.257", pages = "3310--3321" }
• Emma Manning, Nathan Schneider, and Amir Zeldes (2021). A balanced and broadly targeted computational linguistics curriculum. Teaching NLP Workshop. [paper] [poster] This paper describes the primarily-graduate computational linguistics and NLP curriculum at Georgetown University, a U.S. research university that has seen significant growth in these areas in recent years. We discuss the principles behind our curriculum choices, including recognizing the various academic backgrounds and goals of our students; teaching a variety of skills with an emphasis on working directly with data; encouraging collaboration and interdisciplinary work; and including languages beyond English. We reflect on challenges we have encountered, such as the difficulty of teaching programming skills alongside NLP fundamentals, and discuss areas for future growth. @inproceedings{clcurric, title = "A Balanced and Broadly Targeted Computational Linguistics Curriculum", author = "Manning, Emma and Schneider, Nathan and Zeldes, Amir", booktitle = "Proceedings of the Fifth Workshop on Teaching NLP", month = jun, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.teachingnlp-1.11", doi = "10.18653/v1/2021.teachingnlp-1.11", pages = "65--69" }
• Jakob Prange and Nathan Schneider (2021). Draw mir a sheep: a supersense-based analysis of German case and adposition semantics. KI - Künstliche Intelligenz 35(3):291–306. Special Issue on NLP and Semantics (Daniel Hershcovich and Lucia Donatelli, editors). [paper] [publisher] [data] Adpositions and case markers are ubiquitous in natural language and express a wide range of meaning relations that can be of crucial relevance for many NLP and AI tasks. However, capturing their semantics in a comprehensive yet concise, as well as cross-linguistically applicable way has remained a challenge over the years. To address this, we adapt the largely language-agnostic SNACS framework to German, defining language-specific criteria for identifying adpositional expressions and piloting a supersense-annotated German corpus. We compare our approach with prior work on both German and multilingual adposition semantics, and discuss our empirical findings in the context of potential applications. @article{desnacs, author = {Prange, Jakob and Schneider, Nathan}, title = {Draw \emph{mir} a sheep: a supersense-based analysis of {G}erman case and adposition semantics}, journal = {{KI} - K\"{u}nstliche Intelligenz}, volume = {35}, number = {3}, pages = {291--306}, year = {2021}, url = {https://doi.org/10.1007/s13218-021-00712-y} }
• Jakob Prange, Nathan Schneider, and Vivek Srikumar (2021). Supertagging the long tail with tree-structured decoding of complex categories. TACL 9(March):243−260. Presented at EACL 2021. [paper] [publisher] [arXiv] [video] [code] Also presented at SCiL 2021 ([extended abstract], [publisher]). Although current CCG supertaggers achieve high accuracy on the standard WSJ test set, few systems make use of the categories’ internal structure that will drive the syntactic derivation during parsing. The tagset is traditionally truncated, discarding the many rare and complex category types in the long tail. However, supertags are themselves trees. Rather than give up on rare tags, we investigate constructive models that account for their internal structure, including novel methods for tree-structured prediction. Our best tagger is capable of recovering a sizeable fraction of the long-tail supertags and even generates CCG categories that have never been seen in training, while approximating the prior state of the art in overall tag accuracy with fewer parameters. We further investigate how well different approaches generalize to out-of-domain evaluation sets. @article{cxvccg, author = {Prange, Jakob and Schneider, Nathan and Srikumar, Vivek}, title = {Supertagging the Long Tail with Tree-Structured Decoding of Complex Categories}, journal = {Transactions of the Association for Computational Linguistics}, volume = {9}, month = mar, year = {2021}, publisher = {MIT Press}, url = {https://doi.org/10.1162/tacl_a_00364}, pages = {243--260} }
• Emma Manning, Shira Wein, and Nathan Schneider (2020). A human evaluation of AMR-to-English generation systems. COLING. [paper] [arXiv] [slides] [video] Also presented at EvalNLGEval 2020 ([extended abstract]). Most current state-of-the-art systems for generating English text from Abstract Meaning Representation (AMR) have been evaluated only using automated metrics, such as BLEU, which are known to be problematic for natural language generation. In this work, we present the results of a new human evaluation which collects fluency and adequacy scores, as well as categorization of error types, for several recent AMR generation systems. We discuss the relative quality of these systems and how our results compare to those of automatic metrics, finding that while the metrics are mostly successful in ranking systems overall, collecting human judgments allows for more nuanced comparisons. We also analyze common errors made by these systems. @InProceedings{amrgenhumaneval, author = {Manning, Emma and Wein, Shira and Schneider, Nathan}, title = {A human evaluation of {AMR}-to-{E}nglish generation systems}, booktitle = {Proceedings of the 28th International Conference on Computational Linguistics}, month = dec, year = {2020}, address = {Barcelona, Spain (Online)}, publisher = {International Committee for Computational Linguistics}, url = {https://www.aclweb.org/anthology/2020.coling-main.420}, pages = {4773--4786} }
• Daniel Hershcovich, Nathan Schneider, Dotan Dvir, Jakob Prange, Miryam de Lhoneux, and Omri Abend (2020). Comparison by conversion: reverse-engineering UCCA from syntax and lexical semantics. COLING. [paper] [arXiv] [poster] [code] Building robust natural language understanding systems will require a clear characterization of whether and how various linguistic meaning representations complement each other. To perform a systematic comparative analysis, we evaluate the mapping between meaning representations from different frameworks using two complementary methods: (i) a rule-based converter, and (ii) a supervised delexicalized parser that parses to one framework using only information from the other as features. We apply these methods to convert the STREUSLE corpus (with syntactic and lexical semantic annotations) to UCCA (a graph-structured full-sentence meaning representation). Both methods yield surprisingly accurate target representations, close to fully supervised UCCA parser quality—indicating that UCCA annotations are partially redundant with STREUSLE annotations. Despite this substantial convergence between frameworks, we find several important areas of divergence. @InProceedings{streusle2ucca, author = {Hershcovich, Daniel and Schneider, Nathan and Dvir, Dotan and Prange, Jakob and de Lhoneux, Miryam and Abend, Omri}, title = {Comparison by Conversion: Reverse-Engineering {UCCA} from Syntax and Lexical Semantics}, booktitle = {Proceedings of the 28th International Conference on Computational Linguistics}, month = dec, year = {2020}, address = {Barcelona, Spain (Online)}, publisher = {International Committee for Computational Linguistics}, url = {https://www.aclweb.org/anthology/2020.coling-main.264}, pages = {2947--2966} }
• Jena D. Hwang, Hanwool Choe, Na-Rae Han, and Nathan Schneider (2020). K-SNACS: annotating Korean adposition semantics. DMR. [paper] [slides] [video] [data] While many languages use adpositions to encode semantic relationships between content words in a sentence (e.g., agentivity or temporality), the details of how adpositions work vary widely across languages with respect to both form and meaning. In this paper, we empirically adapt the SNACS framework (Schneider et al., 2018) to Korean, a language that is typologically distant from English—the language SNACS was based on. We apply the SNACS framework to annotate the highly popular novella The Little Prince with semantic supersense labels over all Korean postpositions. Thus, we introduce the first broad-coverage corpus annotated with Korean postposition semantics and provide a detailed analysis of the corpus with an apples-to-apples comparison between Korean and English annotations. @InProceedings{kosnacs, author = {Hwang, Jena D. and Choe, Hanwool and Han, Na-Rae and Schneider, Nathan}, title = {{K-SNACS}: Annotating {K}orean Adposition Semantics}, booktitle = {Proceedings of the Second International Workshop on Designing Meaning Representations}, month = dec, year = {2020}, address = {Barcelona, Spain (Online)}, publisher = {International Committee for Computational Linguistics}, url = {https://www.aclweb.org/anthology/2020.dmr-1.6}, pages = {53--66} }
• Michael Kranzlein, Emma Manning, Siyao Peng, Shira Wein, Aryaman Arora, and Nathan Schneider (2020). PASTRIE: a corpus of prepositions annotated with supersense tags in Reddit international English. Linguistic Annotation Workshop. [paper] [slides] [video] [data] We present the Prepositions Annotated with Supersense Tags in Reddit International English (“PASTRIE”) corpus, a new dataset containing manually annotated preposition supersenses of English data from presumed speakers of four L1s: English, French, German, and Spanish. The annotations are comprehensive, covering all preposition types and tokens in the sample. Along with the corpus, we provide analysis of distributional patterns across the included L1s and a discussion of the influence of L1s on L2 preposition choice. @InProceedings{pastrie, author = {Kranzlein, Michael and Manning, Emma and Peng, Siyao and Wein, Shira and Arora, Aryaman and Schneider, Nathan}, title = {{PASTRIE}: A Corpus of Prepositions Annotated with Supersense Tags in {R}eddit International {E}nglish}, booktitle = {Proceedings of the 14th Linguistic Annotation Workshop}, month = dec, year = {2020}, address = {Barcelona, Spain (Online)}, publisher = {International Committee for Computational Linguistics}, pages = {105--116}, url = {https://www.aclweb.org/anthology/2020.law-1.10} }
• Luke Gessler, Shira Wein, and Nathan Schneider (2020). Supersense and sensibility: proxy tasks for semantic annotation of prepositions. Linguistic Annotation Workshop. [paper] [slides] [video] Prepositional supersense annotation is time-consuming and requires expert training. Here, we present two sensible methods for obtaining prepositional supersense annotations indirectly by eliciting surface substitution and similarity judgments. Four pilot studies suggest that both methods have potential for producing prepositional supersense annotations that are comparable in quality to expert annotations. @InProceedings{pcrowdsourcing-pilot, author = {Gessler, Luke and Wein, Shira and Schneider, Nathan}, title = {Supersense and Sensibility: Proxy Tasks for Semantic Annotation of Prepositions}, booktitle = {Proceedings of the 14th Linguistic Annotation Workshop}, month = dec, year = {2020}, address = {Barcelona, Spain (Online)}, publisher = {International Committee for Computational Linguistics}, pages = {117--126}, url = {https://www.aclweb.org/anthology/2020.law-1.11} }
• Jena D. Hwang, Nathan Schneider, and Vivek Srikumar (2020). Sprucing up supersenses: untangling the semantic clusters of accompaniment and purpose. Linguistic Annotation Workshop. [paper] [slides] [video] We reevaluate an existing adpositional annotation scheme with respect to two thorny semantic domains: accompaniment and purpose. ‘Accompaniment’ broadly speaking includes two entities situated together or participating in the same event, while ‘purpose’ broadly speaking covers the desired outcome of an action, the intended use or evaluated use of an entity, and more. We argue the policy in the SNACS scheme for English should be recalibrated with respect to these clusters of interrelated meanings without adding complexity to the overall scheme. Our analysis highlights tradeoffs in lumping vs. splitting decisions as well as the flexibility afforded by the construal analysis. @InProceedings{accomp-purpose, author = {Hwang, Jena D. and Schneider, Nathan and Srikumar, Vivek}, title = {Sprucing up Supersenses: Untangling the Semantic Clusters of Accompaniment and Purpose}, booktitle = {Proceedings of the 14th Linguistic Annotation Workshop}, month = dec, year = {2020}, address = {Barcelona, Spain (Online)}, publisher = {International Committee for Computational Linguistics}, pages = {127--137}, url = {https://www.aclweb.org/anthology/2020.law-1.12} }
• Aryaman Arora and Nathan Schneider (2020). SNACS annotation of case markers and adpositions in Hindi. SIGTYP. [extended abstract] [video] The use of specific case markers and adpositions for particular semantic roles is idiosyncratic to every language. This poses problems in many natural language processing tasks such as machine translation and semantic role labelling. Models for these tasks rely on human-annotated corpora as training data. There is a lack of corpora in South Asian languages for such tasks. Even Hindi, despite being a resource-rich language, is limited in available labelled data. This extended abstract presents the in-progress annotation of case markers and adpositions in a Hindi corpus, employing the cross-lingual scheme proposed by Schneider et al. (2017), Semantic Network of Adposition and Case Supersenses (SNACS). The SNACS guidelines we developed also apply to Urdu. We hope to finalize this corpus and develop NLP tools making use of the dataset, as well as promote NLP for typologically similar South Asian languages.
• Sean Trott, Tiago Timponi Torrent, Nancy Chang, and Nathan Schneider (2020). (Re)construing meaning in NLP. ACL. [paper] [arXiv] [video] Also presented at UnImplicit 2021. Human speakers have an extensive toolkit of ways to express themselves. In this paper, we engage with an idea largely absent from discussions of meaning in natural language understanding—namely, that the way something is expressed reflects different ways of conceptualizing or construing the information being conveyed. We first define this phenomenon more precisely, drawing on considerable prior work in theoretical cognitive semantics and psycholinguistics. We then survey some dimensions of construed meaning and show how insights from construal could inform theoretical and practical work in NLP. @InProceedings{construal, author = {Trott, Sean and Timponi Torrent, Tiago and Chang, Nancy and Schneider, Nathan}, title = {(Re)construing Meaning in {NLP}}, booktitle = {Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics}, month = jul, year = {2020}, address = {Online}, publisher = {Association for Computational Linguistics}, url = {https://www.aclweb.org/anthology/2020.acl-main.462}, pages = {5170--5184} }
• Aryaman Arora, Luke Gessler, and Nathan Schneider (2020). Supervised grapheme-to-phoneme conversion of orthographic schwas in Hindi and Punjabi. ACL. [paper] [arXiv] [video] [code] Hindi grapheme-to-phoneme (G2P) conversion is mostly trivial, with one exception: whether a schwa represented in the orthography is pronounced or unpronounced (deleted). Previous work has attempted to predict schwa deletion in a rule-based fashion using prosodic or phonetic analysis. We present the first statistical schwa deletion classifier for Hindi, which relies solely on the orthography as the input and outperforms previous approaches. We trained our model on a newly-compiled pronunciation lexicon extracted from various online dictionaries. Our best Hindi model achieves state of the art performance, and also achieves good performance on a closely related language, Punjabi, without modification. @InProceedings{schwadeletion, author = {Arora, Aryaman and Gessler, Luke and Schneider, Nathan}, title = {Supervised Grapheme-to-Phoneme Conversion of Orthographic Schwas in {H}indi and {P}unjabi}, booktitle = {Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics}, month = jul, year = {2020}, address = {Online}, publisher = {Association for Computational Linguistics}, url = {https://www.aclweb.org/anthology/2020.acl-main.696}, pages = {7791--7795} }
• Jakob Prange, Nathan Schneider, and Omri Abend (2019). Made for each other: Broad-coverage semantic structures meet preposition supersenses. CoNLL. [paper] [arXiv] [poster] [data] [code] Universal Conceptual Cognitive Annotation (UCCA; Abend and Rappoport, 2013) is a typologically-informed, broad-coverage semantic annotation scheme that describes coarse-grained predicate-argument structure but currently lacks semantic roles. We argue that lexicon-free annotation of the semantic roles marked by prepositions, as formulated by Schneider et al. (2018), is complementary and suitable for integration within UCCA. We show empirically for English that the schemes, though annotated independently, are compatible and can be combined in a single semantic graph. A comparison of several approaches to parsing the integrated representation lays the groundwork for future research on this task. @InProceedings{uccasnacs, author = {Prange, Jakob and Schneider, Nathan and Abend, Omri}, title = {Made for Each Other: Broad-coverage Semantic Structures Meet Preposition Supersenses}, booktitle = {Proceedings of the 23rd Conference on Computational Natural Language Learning}, month = nov, year = {2019}, address = {Hong Kong, China}, publisher = {Association for Computational Linguistics}, url = {https://www.aclweb.org/anthology/K19-1017}, pages = {174--185} }
• Jakob Prange, Nathan Schneider, and Omri Abend (2019). Semantically constrained multilayer annotation: the case of coreference. DMR. [paper] [arXiv] [slides] [data] We propose a coreference annotation scheme as a layer on top of the Universal Conceptual Cognitive Annotation foundational layer, treating units in predicate-argument structure as a basis for entity and event mentions. We argue that this allows coreference annotators to sidestep some of the challenges faced in other schemes, which do not enforce consistency with predicate-argument structure and vary widely in what kinds of mentions they annotate and how. The proposed approach is examined with a pilot annotation study and compared with annotations from other schemes. @InProceedings{uccacoref, author = {Prange, Jakob and Schneider, Nathan and Abend, Omri}, title = {Semantically Constrained Multilayer Annotation: The Case of Coreference}, booktitle = {Proceedings of the First International Workshop on Designing Meaning Representations}, month = aug, year = {2019}, address = {Florence, Italy}, publisher = {Association for Computational Linguistics}, url = {https://www.aclweb.org/anthology/W19-3319}, pages = {164--176} }
• Adi Shalev, Jena D. Hwang, Nathan Schneider, Vivek Srikumar, Omri Abend, and Ari Rappoport (2019). Preparing SNACS for subjects and objects. DMR. [paper] [poster] [data] Research on adpositions and possessives in multiple languages has led to a small inventory of general-purpose meaning classes that disambiguate tokens. Importantly, that work has argued for a principled separation of the semantic role in a scene from the function coded by morphosyntax. Here, we ask whether this approach can be generalized beyond adpositions and possessives to cover all scene participants—including subjects and objects—directly, without reference to a frame lexicon. We present new guidelines for English and the results of an interannotator agreement study. @InProceedings{subjobjsnacs, author = {Shalev, Adi and Hwang, Jena D. and Schneider, Nathan and Srikumar, Vivek and Abend, Omri and Rappoport, Ari}, title = {Preparing {SNACS} for Subjects and Objects}, booktitle = {Proceedings of the First International Workshop on Designing Meaning Representations}, month = aug, year = {2019}, address = {Florence, Italy}, publisher = {Association for Computational Linguistics}, url = {https://www.aclweb.org/anthology/W19-3316}, pages = {141--147} }
• Austin Blodgett and Nathan Schneider (2019). An improved approach for semantic graph composition with CCG. IWCS. [paper] [arXiv] This paper builds on previous work using Combinatory Categorial Grammar (CCG) to derive a transparent syntax-semantics interface for Abstract Meaning Representation (AMR) parsing. We define new semantics for the CCG combinators that is better suited to deriving AMR graphs. In particular, we define relation-wise alternatives for the application and composition combinators: these require that the two constituents being combined overlap in one AMR relation. We also provide a new semantics for type raising, which is necessary for certain constructions. Using these mechanisms, we suggest an analysis of eventive nouns, which present a challenge for deriving AMR graphs. Our theoretical analysis will facilitate future work on robust and transparent AMR parsing using CCG. @InProceedings{ccgamr, author = {Blodgett, Austin and Schneider, Nathan}, title = {An improved approach for semantic graph composition with {CCG}}, booktitle = {Proceedings of the 13th International Conference on Computational Semantics}, month = may, year = {2019}, address = {Gothenburg, Sweden}, publisher = {Association for Computational Linguistics}, pages = {55--70}, url = {https://www.aclweb.org/anthology/W19-0405} }
• Yilun Zhu, Yang Liu, Siyao Peng, Austin Blodgett, Yushi Zhao, and Nathan Schneider (2019). Adpositional supersenses for Mandarin Chinese. SCiL. [extended abstract] [publisher] [arXiv] @InProceedings{zhu-scil-19, author = {Zhu, Yilun and Liu, Yang and Peng, Siyao and Blodgett, Austin and Zhao, Yushi and Schneider, Nathan}, title = {Adpositional Supersenses for {M}andarin {C}hinese}, booktitle = {Proceedings of the Society for Computation in Linguistics}, volume = {2}, month = jan, year = {2019}, address = {New York, New York, {USA}}, pages = {334--337} }
• Lucia Donatelli, Michael Regan, William Croft, and Nathan Schneider (2019). Tense and aspect semantics for sentential AMR. SCiL. [extended abstract] [publisher] @InProceedings{donatelli-scil-19, author = {Donatelli, Lucia and Regan, Michael and Croft, William and Schneider, Nathan}, title = {Tense and Aspect Semantics for Sentential {AMR}}, booktitle = {Proceedings of the Society for Computation in Linguistics}, volume = {2}, month = jan, year = {2019}, address = {New York, New York, {USA}}, pages = {346--348} }
• Nathan Schneider, Jena D. Hwang, Vivek Srikumar, Jakob Prange, Austin Blodgett, Sarah R. Moeller, Aviram Stern, Adi Bitan, and Omri Abend (2018). Comprehensive supersense disambiguation of English prepositions and possessives. ACL. [paper] [supplement] [slides] [data and code] Semantic relations are often signaled with prepositional or possessive marking—but extreme polysemy bedevils their analysis and automatic interpretation. We introduce a new annotation scheme, corpus, and task for the disambiguation of prepositions and possessives in English. Unlike previous approaches, our annotations are comprehensive with respect to types and tokens of these markers; use broadly applicable supersense classes rather than fine-grained dictionary definitions; unite prepositions and possessives under the same class inventory; and distinguish between a marker’s lexical contribution and the role it marks in the context of a predicate or scene. Strong interannotator agreement rates, as well as encouraging disambiguation results with established supervised methods, speak to the viability of the scheme and task. @InProceedings{pssdisambig, author = {Schneider, Nathan and Hwang, Jena D. and Srikumar, Vivek and Prange, Jakob and Blodgett, Austin and Moeller, Sarah R. and Stern, Aviram and Bitan, Adi and Abend, Omri}, title = {Comprehensive supersense disambiguation of {E}nglish prepositions and possessives}, booktitle = {Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics}, month = jul, year = {2018}, address = {Melbourne, Australia}, publisher = {Association for Computational Linguistics}, }
• Hannah Rohde, Alexander Johnson, Nathan Schneider, and Bonnie Webber (2018). Discourse coherence: concurrent explicit and implicit relations. ACL. [paper] [B&W plots] [data prompts] Theories of discourse coherence posit relations between discourse segments as a key feature of coherent text. Our prior work suggests that multiple discourse relations can be simultaneously operative between two segments for reasons not predicted by the literature. Here we test how this joint presence can lead participants to endorse seemingly divergent conjunctions (e.g., but and so) to express the link they see between two segments. These apparent divergences are not symptomatic of participant naivety or bias, but arise reliably from the concurrent availability of multiple relations between segments – some available through explicit signals and some via inference. We believe that these new results can both inform future progress in theoretical work on discourse coherence and lead to higher levels of performance in discourse parsing. @InProceedings{disadv-acl, author = {Rohde, Hannah and Johnson, Alexander and Schneider, Nathan and Webber, Bonnie}, title = {Discourse Coherence: Concurrent Explicit and Implicit Relations}, booktitle = {Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics}, month = jul, year = {2018}, address = {Melbourne, Australia}, publisher = {Association for Computational Linguistics}, }
• Ida Szubert, Adam Lopez, and Nathan Schneider (2018). A structured syntax-semantics interface for English-AMR alignment. NAACL-HLT. [paper] [slides] [data and code] Abstract Meaning Representation (AMR) annotations are often assumed to closely mirror dependency syntax, but AMR explicitly does not require this, and the assumption has never been tested. To test it, we devise an expressive framework to align AMR graphs to dependency graphs, which we use to annotate 200 AMRs. Our annotation explains how 97% of AMR edges are evoked by words or syntax. Previously existing AMR alignment frameworks did not allow for mapping AMR onto syntax, and as a consequence they explained at most 23%. While we find that there are indeed many cases where AMR annotations closely mirror syntax, there are also pervasive differences. We use our annotations to test a baseline AMR-to-syntax aligner, finding that this task is more difficult than AMR-to-string alignment; and to pinpoint errors in an AMR parser. We make our data and code freely available for further research on AMR parsing and generation, and the relationship of AMR to syntax. @InProceedings{amr2dep, author = {Szubert, Ida and Lopez, Adam and Schneider, Nathan}, title = {A Structured Syntax-Semantics Interface for {E}nglish-{AMR} Alignment}, booktitle = {Proceedings of the 16th Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies}, month = jun, year = {2018}, address = {New Orleans, Louisiana, USA}, publisher = {Association for Computational Linguistics} }
• Yijia Liu, Yi Zhu, Wanxiang Che, Bing Qin, Nathan Schneider, and Noah A. Smith (2018). Parsing tweets into Universal Dependencies. NAACL-HLT. [paper] [arXiv] [poster] [data] [software] We study the problem of analyzing tweets with Universal Dependencies (UD; Nivre et al., 2016). We extend the UD guidelines to cover special constructions in tweets that affect tokenization, part-of-speech tagging, and labeled dependencies. Using the extended guidelines, we create a new tweet treebank for English (Tweebank v2) that is four times larger than the (unlabeled) Tweebank v1 introduced by Kong et al. (2014). We characterize the disagreements between our annotators and show that it is challenging to deliver consistent annotation due to ambiguity in understanding and explaining tweets. Nonetheless, using the new treebank, we build a pipeline system to parse raw tweets into UD. To overcome annotation noise without sacrificing computational efficiency, we propose a new method to distill an ensemble of 20 transition-based parsers into a single one. Our parser achieves an improvement of 2.2 in LAS over the un-ensembled baseline and outperforms parsers that are state-of-the-art on other treebanks in both accuracy and speed. @InProceedings{twparse2, author = {Liu, Yijia and Zhu, Yi and Che, Wanxiang and Qin, Bing and Schneider, Nathan and Smith, Noah A.}, title = {Parsing Tweets into {U}niversal {D}ependencies}, booktitle = {Proceedings of the 16th Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies}, month = jun, year = {2018}, address = {New Orleans, Louisiana, USA}, publisher = {Association for Computational Linguistics} }
• Austin Blodgett and Nathan Schneider (2018). Semantic supersenses for English possessives. LREC. [paper] [slides] We adapt an approach to annotating the semantics of adpositions to also include English possessives, showing that the supersense inventory of Schneider et al. (2017) works for the genitive ’s clitic and possessive pronouns as well as prepositional of. By comprehensively annotating such possessives in an English corpus of web reviews, we demonstrate that the existing supersense categories are readily applicable to possessives. Our corpus will facilitate empirical study of the semantics of the genitive alternation and the development of semantic disambiguation systems. @inproceedings{gensuper, address = {Miyazaki, Japan}, title = {Semantic Supersenses for {E}nglish Possessives}, booktitle = {Proceedings of the Eleventh International Conference on Language Resources and Evaluation}, publisher = {{ELRA}}, author = {Blodgett, Austin and Schneider, Nathan}, editor = {Calzolari, Nicoletta and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Hasida, Koiti and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H\'{e}l\`{e}ne and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios and Tokunaga, Takenobu}, month = may, year = {2018}, pages = {1529--1534} }
• Claire Bonial, Bianca Badarau, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Tim O’Gorman, Martha Palmer, and Nathan Schneider (2018). Abstract Meaning Representation of constructions: the more we include, the better the representation. LREC. [paper] [slides] We describe the expansion of the Abstract Meaning Representation (AMR) project to provide coverage for the annotation of certain types of constructions. Past AMR annotations generally followed a practice of assigning the semantic roles associated with an individual lexical item, as opposed to a flexible pattern or template of multiple lexical items, which characterizes constructions such as ‘The X-er, The Y-er’ (exemplified in the title). Furthermore, a goal of AMR is to provide consistent semantic representation despite language-specific syntactic idiosyncrasies. Thus, representing the meanings associated with fully syntactic patterns required a novel annotation approach. As one strategy in our approach, we expanded the AMR lexicon of predicate senses, or semantic ‘rolesets,’ to include entries for a growing set of constructions. Despite the challenging practical and theoretical questions encountered, the additions and updates to AMR annotation described here ensure more comprehensive semantic representations capturing both lexical and constructional meaning.
@inproceedings{amrcxns, address = {Miyazaki, Japan}, title = {{A}bstract {M}eaning {R}epresentation of Constructions: The More We Include, the Better the Representation}, booktitle = {Proceedings of the Eleventh International Conference on Language Resources and Evaluation}, publisher = {{ELRA}}, author = {Bonial, Claire and Badarau, Bianca and Griffitt, Kira and Hermjakob, Ulf and Knight, Kevin and {O'Gorman}, Tim and Palmer, Martha and Schneider, Nathan}, editor = {Calzolari, Nicoletta and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Hasida, Koiti and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H\'{e}l\`{e}ne and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios and Tokunaga, Takenobu}, month = may, year = {2018}, pages = {1677--1684} }
• Lucia Donatelli, Michael Regan, William Croft, and Nathan Schneider (2018). Annotation of tense and aspect semantics for sentential AMR. LAW-MWE-CxG. [paper] [slides] Although English grammar encodes a number of semantic contrasts with tense and aspect marking, these semantics are currently ignored by Abstract Meaning Representation (AMR) annotations. This paper extends sentence-level AMR to include a coarse-grained treatment of tense and aspect semantics. The proposed framework augments the representation of finite predications to include a four-way temporal distinction (event time before, up to, at, or after speech time) and several aspectual distinctions (including static vs. dynamic, habitual vs. episodic, and telic vs. atelic). This will enable AMR to be used for NLP tasks and applications that require sophisticated reasoning about time and event structure. @inproceedings{amrtenseaspect, address = {Santa Fe, New Mexico, USA}, title = {Annotation of Tense and Aspect Semantics for Sentential {AMR}}, booktitle = {Proceedings of the Joint Workshop on Linguistic Annotation, Multiword Expressions and Constructions}, author = {Donatelli, Lucia and Regan, Michael and Croft, William and Schneider, Nathan}, month = aug, year = {2018}, publisher = {Association for Computational Linguistics} }
• Carlos Ramisch, Silvio Ricardo Cordeiro, Agata Savary, Veronika Vincze, Verginica Barbu Mititelu, Archna Bhatia, Maja Buljan, Marie Candito, Polona Gantar, Voula Giouli, Tunga Güngör, Abdelati Hawwari, Uxoa Iñurrieta, Jolanta Kovalevskaitė, Simon Krek, Timm Lichte, Chaya Liebeskind, Johanna Monti, Carla Parra Escartín, Behrang QasemiZadeh, Renata Ramisch, Nathan Schneider, Ivelina Stoyanova, Ashwini Vaidya, and Abigail Walsh (2018). Edition 1.1 of the PARSEME Shared Task on Automatic Identification of Verbal Multiword Expressions. LAW-MWE-CxG. [paper] [slides] This paper describes the PARSEME Shared Task 1.1 on automatic identification of verbal multiword expressions. We present the annotation methodology, focusing on changes from last year’s shared task. Novel aspects include enhanced annotation guidelines, additional annotated data for most languages, corpora for some new languages, and new evaluation settings. Corpora were created for 20 languages, which are also briefly discussed. We report organizational principles behind the shared task and the evaluation metrics employed for ranking. The 17 participating systems, their methods and obtained results are also presented and analysed. 
@InProceedings{parseme1.1, author = {Ramisch, Carlos and Cordeiro, Silvio Ricardo and Savary, Agata and Vincze, Veronika and Barbu Mititelu, Verginica and Bhatia, Archna and Buljan, Maja and Candito, Marie and Gantar, Polona and Giouli, Voula and G{\"{u}}ng{\"{o}}r, Tunga and Hawwari, Abdelati and I{\~{n}}urrieta, Uxoa and Kovalevskait{\.{e}}, Jolanta and Krek, Simon and Lichte, Timm and Liebeskind, Chaya and Monti, Johanna and Parra Escart{\'{i}}n, Carla and QasemiZadeh, Behrang and Ramisch, Renata and Schneider, Nathan and Stoyanova, Ivelina and Vaidya, Ashwini and Walsh, Abigail}, title = {Edition 1.1 of the {PARSEME} {S}hared {T}ask on {A}utomatic {I}dentification of {V}erbal {M}ultiword {E}xpressions}, booktitle = {Proceedings of the Joint Workshop on Linguistic Annotation, Multiword Expressions and Constructions (LAW-MWE-CxG 2018)}, month = aug, year = {2018}, address = {Santa Fe, New Mexico, USA}, publisher = {Association for Computational Linguistics} }
• Abigail Walsh, Claire Bonial, Kristina Geeraert, John P. McCrae, Nathan Schneider, and Clarissa Somers (2018). Constructing an annotated corpus of verbal MWEs for English. LAW-MWE-CxG. [paper] [poster] This paper describes the construction and annotation of a corpus of verbal MWEs for English, as part of the PARSEME Shared Task 1.1 on automatic identification of verbal MWEs. The criteria for corpus selection, the categories of MWEs used, and the training process are discussed, along with the particular issues that led to revisions in edition 1.1 of the annotation guidelines. Finally, an overview of the characteristics of the final annotated corpus is presented, as well as some discussion on inter-annotator agreement. @inproceedings{envmwe, address = {Santa Fe, New Mexico, USA}, title = {Constructing an Annotated Corpus of Verbal {MWE}s for {E}nglish}, booktitle = {Proceedings of the Joint Workshop on Linguistic Annotation, Multiword Expressions and Constructions}, author = {Walsh, Abigail and Bonial, Claire and Geeraert, Kristina and McCrae, John P. and Schneider, Nathan and Somers, Clarissa}, month = aug, year = {2018}, publisher = {Association for Computational Linguistics} }
• Bonnie Webber, Hannah Rohde, Anna Dickinson, Annie Louis, and Nathan Schneider (2018). Hidden AND in plain sight? Implicit and explicit relations cooperate in the construction of meaning in discourse. GURT. [extended abstract] [slides] Prior work on coherence assumes discourse relations are either signaled explicitly (via discourse connectives) or left implicit. We investigate to what extent multiple discourse relations are present concurrently. A conjunction completion survey task, where conjunctions were elicited before adverbials, suggests redundancy and inference are crucial to meaning construction in discourse.
• Bonnie Webber, Hannah Rohde, Anna Dickinson, Annie Louis, and Nathan Schneider (2018). Explicit discourse connectives / implicit discourse relations. SCiL. [extended abstract] [slides] [publisher] @InProceedings{disadv-scil-18, author = {Webber, Bonnie and Rohde, Hannah and Dickinson, Anna and Louis, Annie and Schneider, Nathan}, title = {Explicit discourse connectives / implicit discourse relations}, booktitle = {Proceedings of the Society for Computation in Linguistics}, volume = {1}, month = jan, year = {2018}, address = {Salt Lake City, Utah, {USA}}, pages = {229--230} }
• Hannah Rohde, Anna Dickinson, Nathan Schneider, Annie Louis, and Bonnie Webber (2017). Exploring substitutability through discourse adverbials and multiple judgments. IWCS. [paper] [slides] In his systematic analysis of discourse connectives, Knott (1996) introduced the notion of substitutability and the conditions under which one connective (e.g., when) can substitute for another (e.g., if) to express the same meaning. Knott only uses examples which he constructed and judged himself. This paper describes a new multi-judgment study on naturally occurring passages, on which substitutability claims can be tested. While some of our findings support Knott’s claims, other pairs of connectives that Knott predicts to be exclusive are in fact judged to substitute felicitously for one another. These findings show that discourse adverbials in the immediate context play a role in connective choice. @InProceedings{disadv, author = {Rohde, Hannah and Dickinson, Anna and Schneider, Nathan and Louis, Annie and Webber, Bonnie}, title = {Exploring substitutability through discourse adverbials and multiple judgments}, booktitle = {Proceedings of the 12th International Conference on Computational Semantics}, month = sep, year = {2017}, address = {Montpellier, France}, publisher = {Association for Computational Linguistics} }
• Jena D. Hwang, Archna Bhatia, Na-Rae Han, Tim O’Gorman, Vivek Srikumar, and Nathan Schneider (2017). Double trouble: the problem of construal in semantic annotation of adpositions. *SEM. [paper] [slides] Also presented at SCiL 2018 ([extended abstract], [publisher]). We consider the semantics of prepositions, revisiting a broad-coverage annotation scheme used for annotating all 4,250 preposition tokens in a 55,000 word corpus of English. Attempts to apply the scheme to adpositions and case markers in other languages, as well as some problematic cases in English, have led us to reconsider the assumption that a preposition’s lexical contribution is equivalent to the role/relation that it mediates. Our proposal is to embrace the potential for construal in adposition use, expressing such phenomena directly at the token level to manage complexity and avoid sense proliferation. We suggest a framework to represent both the scene role and the adposition’s lexical function so they can be annotated at scale—supporting automatic, statistical processing of domain-general language—and sketch how this representation would allow for a simpler inventory of labels. @InProceedings{hwang-17, author = {Hwang, Jena D. and Bhatia, Archna and Han, {Na-Rae} and {O'Gorman}, Tim and Srikumar, Vivek and Schneider, Nathan}, title = {Double trouble: the problem of construal in semantic annotation of adpositions}, booktitle = {Proceedings of the 6th Joint Conference on Lexical and Computational Semantics}, pages = {178--188}, month = aug, year = {2017}, address = {Vancouver, British Columbia, Canada}, publisher = {Association for Computational Linguistics}, }
• Jena D. Hwang, Archna Bhatia, Na-Rae Han, Tim O’Gorman, Vivek Srikumar, and Nathan Schneider (2017). Coping with construals in broad-coverage semantic annotation of adpositions. AAAI Spring Symposium on Construction Grammar and NLU. [paper] [arXiv] [slides] [poster] We consider the semantics of prepositions, revisiting a broad-coverage annotation scheme used for annotating all 4,250 preposition tokens in a 55,000 word corpus of English. Attempts to apply the scheme to adpositions and case markers in other languages, as well as some problematic cases in English, have led us to reconsider the assumption that a preposition’s lexical contribution is equivalent to the role/relation that it mediates. Our proposal is to embrace the potential for construal in adposition use, expressing such phenomena directly at the token level to manage complexity and avoid sense proliferation. We suggest a framework to represent both the scene role and the adposition’s lexical function so they can be annotated at scale—supporting automatic, statistical processing of domain-general language—and sketch how this representation would inform a constructional analysis. @article{hwang-17-2, title = {Coping with construals in broad-coverage semantic annotation of adpositions}, url = {http://arxiv.org/abs/1703.03771}, journal = {{arXiv}:1703.03771 [{cs.CL}]}, author = {Hwang, Jena D. and Bhatia, Archna and Han, {Na-Rae} and {O'Gorman}, Tim and Srikumar, Vivek and Schneider, Nathan}, month = mar, year = {2017}, note = {{arXiv}: 1703.03771} }
• Nathan Schneider and Chuck Wooters (2017). The NLTK FrameNet API: Designing for discoverability with a rich linguistic resource. EMNLP demo. [paper] [arXiv] [poster] A new Python API, integrated within the NLTK suite, offers access to the FrameNet 1.7 lexical database. The lexicon (structured in terms of frames) as well as annotated sentences can be processed programmatically, or browsed with human-readable displays via the interactive Python prompt. @InProceedings{schneider-17, author = {Schneider, Nathan and Wooters, Chuck}, title = {The {NLTK} {FrameNet} {API}: Designing for discoverability with a rich linguistic resource}, booktitle = {Proceedings of the Conference on Empirical Methods in Natural Language Processing: System Demonstrations}, month = sep, year = {2017}, address = {Copenhagen, Denmark}, publisher = {Association for Computational Linguistics}, } [software]
• Nathan Schneider, Jena D. Hwang, Vivek Srikumar, Meredith Green, Abhijit Suresh, Kathryn Conger, Tim O’Gorman, and Martha Palmer (2016). A corpus of preposition supersenses. Linguistic Annotation Workshop. [paper] [poster] We present the first corpus annotated with preposition supersenses, unlexicalized categories for semantic functions that can be marked by English prepositions (Schneider et al., 2015). The preposition supersenses are organized hierarchically and designed to facilitate comprehensive manual annotation. Our dataset is publicly released on the web. @InProceedings{psstcorpus, author = {Schneider, Nathan and Hwang, Jena D. and Srikumar, Vivek and Green, Meredith and Suresh, Abhijit and Conger, Kathryn and O'Gorman, Tim and Palmer, Martha}, title = {A Corpus of Preposition Supersenses}, booktitle = {Proceedings of the 10th Linguistic Annotation Workshop}, month = aug, year = {2016}, address = {Berlin, Germany}, pages = {99--109}, publisher = {Association for Computational Linguistics}, url = {http://aclweb.org/anthology/W16-1712} } [data]
• Nathan Schneider, Dirk Hovy, Anders Johannsen, and Marine Carpuat (2016). SemEval-2016 Task 10: Detecting Minimal Semantic Units and their Meanings (DiMSUM). SemEval. [paper] This task combines the labeling of multiword expressions and supersenses (coarse-grained classes) in an explicit, yet broad-coverage paradigm for lexical semantics. Nine systems participated; the best scored 57.7% F1 in a multi-domain evaluation setting, indicating that the task remains largely unresolved. An error analysis reveals that a large number of instances in the data set are either hard cases, which no systems get right, or easy cases, which all systems correctly solve. @inproceedings{dimsum-16, address = {San Diego, California, {USA}}, title = {\mbox{{SemEval}-2016} {T}ask~10: {D}etecting {M}inimal {S}emantic {U}nits and their {M}eanings ({DiMSUM})}, booktitle = {Proceedings of {SemEval}}, author = {Schneider, Nathan and Hovy, Dirk and Johannsen, Anders and Carpuat, Marine}, month = jun, year = {2016} }
• Nora Hollenstein, Nathan Schneider, and Bonnie Webber (2016). Inconsistency detection in semantic annotation. LREC. [paper] [slides] Inconsistencies are part of any manually annotated corpus. Automatically finding these inconsistencies and correcting them (even manually) can increase the quality of the data. Past research has focused mainly on detecting inconsistency in syntactic annotation. This work explores new approaches to detecting inconsistency in semantic annotation. Two ranking methods are presented in this paper: a discrepancy ranking and an entropy ranking. Those methods are then tested and evaluated on multiple corpora annotated with multiword expressions and supersense labels. The results show considerable improvements in detecting inconsistency candidates over a random baseline. Possible applications of methods for inconsistency detection are improving the annotation procedure and guidelines, as well as correcting errors in completed annotations. @inproceedings{annoinconsistency, address = {Portoro{\v z}, Slovenia}, title = {Inconsistency Detection in Semantic Annotation}, booktitle = {Proceedings of the Tenth International Conference on Language Resources and Evaluation}, publisher = {{ELRA}}, author = {Hollenstein, Nora and Schneider, Nathan and Webber, Bonnie}, editor = {Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios}, month = may, year = {2016}, pages = {3986--3990} }[data and software]
• Meghana Kshirsagar, Sam Thomson, Nathan Schneider, Jaime Carbonell, Noah A. Smith, and Chris Dyer (2015). Frame-semantic role labeling with heterogeneous annotations. ACL-IJCNLP. [paper] [slides] We consider the task of identifying and labeling the semantic arguments of a predicate that evokes a FrameNet frame. This task is challenging because there are only a few thousand fully annotated sentences for supervised training. Our approach augments an existing model with features derived from FrameNet and PropBank and with partially annotated exemplars from FrameNet. We observe a 4% absolute increase in F1 versus the original model. @InProceedings{heterosrl, author = {Kshirsagar, Meghana and Thomson, Sam and Schneider, Nathan and Carbonell, Jaime and Smith, Noah A. and Dyer, Chris}, title = {Frame-Semantic Role Labeling with Heterogeneous Annotations}, booktitle = {Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing}, month = jul, year = {2015}, address = {Beijing, China}, publisher = {Association for Computational Linguistics} }[software]
• Lizhen Qu, Gabriela Ferraro, Liyuan Zhou, Weiwei Hou, Nathan Schneider, and Timothy Baldwin (2015). Big data small data, in domain out-of domain, known word unknown word: the impact of word representations on sequence labelling tasks. CoNLL. [paper] Word embeddings — distributed word representations that can be learned from unlabelled data — have been shown to have high utility in many natural language processing applications. In this paper, we perform an extrinsic evaluation of four popular word embedding methods in the context of four sequence labelling tasks: part-of-speech tagging, syntactic chunking, named entity recognition, and multiword expression identification. A particular focus of the paper is analysing the effects of task-based updating of word representations. We show that when using word embeddings as features, as few as several hundred training instances are sufficient to achieve competitive results, and that word embeddings lead to improvements over out-of-vocabulary words and also out of domain. Perhaps more surprisingly, our results indicate there is little difference between the different word embedding methods, and that simple Brown clusters are often competitive with word embeddings across all tasks we consider. @InProceedings{wordrepseq, author = {Qu, Lizhen and Ferraro, Gabriela and Zhou, Liyuan and Hou, Weiwei and Schneider, Nathan and Baldwin, Timothy}, title = {Big Data Small Data, In Domain Out-of Domain, Known Word Unknown Word: The Impact of Word Representations on Sequence Labelling Tasks}, booktitle = {Proceedings of the Nineteenth Conference on Computational Natural Language Learning}, month = jul, year = {2015}, address = {Beijing, China}, publisher = {Association for Computational Linguistics}, pages = {83--93}, url = {http://www.aclweb.org/anthology/K15-1009} }
• Nathan Schneider (2015). Struggling with English prepositional verbs.
• Nathan Schneider and Noah A. Smith (2015). A corpus and model integrating multiword expressions and supersenses. NAACL-HLT. [paper] [slides] This paper introduces a task of identifying and semantically classifying lexical expressions in running text. We investigate the online reviews genre, adding semantic supersense annotations to a 55,000 word English corpus that was previously annotated for multiword expressions. The noun and verb supersenses apply to full lexical expressions, whether single- or multiword. We then present a sequence tagging model that jointly infers lexical expressions and their supersenses. Results show that even with our relatively small training corpus in a noisy domain, the joint task can be performed to attain 70% class labeling F1. @InProceedings{sst, author = {Schneider, Nathan and Smith, Noah A.}, title = {A Corpus and Model Integrating Multiword Expressions and Supersenses}, booktitle = {Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies}, month = {May--June}, year = {2015}, address = {Denver, Colorado, USA}, publisher = {Association for Computational Linguistics}, pages = {1537--1547}, url = {http://www.aclweb.org/anthology/N15-1177} }[data and software]
• Nathan Schneider, Vivek Srikumar, Jena D. Hwang, and Martha Palmer (2015). A hierarchy with, of, and for preposition supersenses. Linguistic Annotation Workshop. [paper] [poster] English prepositions are extremely frequent and extraordinarily polysemous. In some usages they contribute information about spatial, temporal, or causal roles/relations; in other cases they are institutionalized, somewhat arbitrarily, as case markers licensed by a particular governing verb, verb class, or syntactic construction. To facilitate automatic disambiguation, we propose a general-purpose, broad-coverage taxonomy of preposition functions that we call supersenses: these are coarse and unlexicalized so as to be tractable for efficient manual annotation, yet capture crucial semantic distinctions. Our resource, including extensive documentation of the supersenses, many example sentences, and mappings to other lexical resources, will be publicly released. @InProceedings{pssts, author = {Schneider, Nathan and Srikumar, Vivek and Hwang, Jena D. and Palmer, Martha}, title = {A Hierarchy with, of, and for Preposition Supersenses}, booktitle = {Proceedings of the 9th Linguistic Annotation Workshop}, month = jun, year = {2015}, address = {Denver, Colorado, USA}, publisher = {Association for Computational Linguistics}, pages = {112--123}, url = {http://www.aclweb.org/anthology/W15-1612} } [resource]
• Lingpeng Kong, Nathan Schneider, Swabha Swayamdipta, Archna Bhatia, Chris Dyer, and Noah A. Smith (2014). A dependency parser for tweets. EMNLP. [paper] [slides] [video] We describe a new dependency parser for English tweets, TweeboParser. The parser builds on several contributions: new syntactic annotations for a corpus of tweets (Tweebank), with conventions informed by the domain; adaptations to a statistical parsing algorithm; and a new approach to exploiting out-of-domain Penn Treebank data. Our experiments show that the parser achieves over 80% unlabeled attachment accuracy on our new, high-quality test set and measure the benefit of our contributions. Our dataset and parser can be found at http://cs.cmu.edu/~ark/TweetNLP. @inproceedings{twparse, address = {Doha, Qatar}, title = {A Dependency Parser for Tweets}, booktitle = {Proceedings of the Conference on Empirical Methods in Natural Language Processing}, publisher = {Association for Computational Linguistics}, author = {Kong, Lingpeng and Schneider, Nathan and Swayamdipta, Swabha and Bhatia, Archna and Dyer, Chris and Smith, Noah A.}, month = oct, year = {2014}, pages = {1001--1012}, url = {http://www.aclweb.org/anthology/D14-1108} }
• Sam Thomson, Brendan O’Connor, Jeffrey Flanigan, David Bamman, Jesse Dodge, Swabha Swayamdipta, Nathan Schneider, Chris Dyer, and Noah A. Smith (2014). CMU: Arc-Factored, Discriminative Semantic Dependency Parsing. SemEval. [paper] [poster] We present an arc-factored statistical model for semantic dependency parsing, as defined by the SemEval 2014 Shared Task 8 on Broad-Coverage Semantic Dependency Parsing. Our entry in the open track placed second in the competition. @inproceedings{thomson-semeval-14, author = {Thomson, Sam and O'Connor, Brendan and Flanigan, Jeffrey and Bamman, David and Dodge, Jesse and Swayamdipta, Swabha and Schneider, Nathan and Dyer, Chris and Smith, Noah A.}, title = {{CMU}: Arc-Factored, Discriminative Semantic Dependency Parsing}, booktitle = {Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014)}, month = aug, year = {2014}, address = {Dublin, Ireland}, publisher = {Association for Computational Linguistics and Dublin City University}, pages = {176--180}, url = {http://www.aclweb.org/anthology/S14-2027} }
• Nathan Schneider, Emily Danchik, Chris Dyer, and Noah A. Smith (2014). Discriminative lexical semantic segmentation with gaps: running the MWE gamut. TACL 2(April):193−206. Presented at ACL 2014. [paper] [publisher] [poster] We present a novel representation, evaluation measure, and supervised models for the task of identifying the multiword expressions (MWEs) in a sentence, resulting in a lexical semantic segmentation. Our approach generalizes a standard chunking representation to encode MWEs containing gaps, thereby enabling efficient sequence tagging algorithms for feature-rich discriminative models. Experiments on a new dataset of English web text offer the first linguistically-driven evaluation of MWE identification with truly heterogeneous expression types. Our statistical sequence model greatly outperforms a lookup-based segmentation procedure, achieving nearly 60% F1 for MWE identification. @article{mwe, title = {Discriminative lexical semantic segmentation with gaps: running the {MWE} gamut}, volume = {2}, journal = {Transactions of the Association for Computational Linguistics}, author = {Schneider, Nathan and Danchik, Emily and Dyer, Chris and Smith, Noah A.}, month = apr, year = {2014}, pages = {193--206}, url = {http://doi.org/10.1162/tacl_a_00176} }
• There was a bug in the evaluation script affecting the Exact Match scores: cases where an MWE occurs within the gap of another MWE were not counted correctly. Here is the corrected version of table 3:
• The description of the link-based evaluation measure in section 3.2 overstates the relationship between this measure and the MUC criterion for coreference resolution. They are very similar but not identical: an example for which they give different results is explained in footnote 3 on p. 132 of my thesis.
[data and software]
• Dipanjan Das, Desai Chen, André F. T. Martins, Nathan Schneider, and Noah A. Smith (2014). Frame-semantic parsing. Computational Linguistics 40(1):9–56. [paper] [publisher] Frame semantics (Fillmore 1982) is a linguistic theory that has been instantiated for English in the FrameNet lexicon (Fillmore, Johnson, and Petruck 2003). We solve the problem of frame-semantic parsing using a two-stage statistical model that takes lexical targets (i.e., content words and phrases) in their sentential context and predicts frame-semantic structures. Given a target in context, the first stage disambiguates it to a semantic frame. This model employs latent variables and semi-supervised learning to improve frame disambiguation for targets unseen at training time. The second stage finds the target’s locally expressed semantic arguments. A fast exact dual decomposition algorithm collectively predicts all the arguments of a frame at once in order to respect declaratively stated linguistic constraints at inference time, resulting in qualitatively better structures than naïve local predictors. Both components are feature-based and discriminatively trained on a small set of annotated frame-semantic parses. On a benchmark dataset, the approach, along with a heuristic identifier of frame-evoking targets, outperforms prior state of the art by significant margins. Additionally, we present experiments on a much larger recent dataset and have released our accurate frame-semantic parser as open-source software. @article{cl2014, title = {Frame-semantic parsing}, volume = {40}, number = {1}, journal = {Computational Linguistics}, author = {Das, Dipanjan and Chen, Desai and Martins, Andr\'{e} F. T. and Schneider, Nathan and Smith, Noah A.}, month = mar, year = {2014}, pages = {9--56}, url = {http://www.mitpressjournals.org/doi/abs/10.1162/COLI_a_00163} } Meghana Kshirsagar, a Ph.D. student at Carnegie Mellon, pointed out an error in the reported results.
For the experimental setting with gold frames, tabulated in rows 7 and 8 of Table 8, at inference time SEMAFOR was including gold spans along with the candidate set of automatic spans to be considered for argument identification, thus artificially inflating the precision, recall, and F1. The revised numbers are:
Naive decoding: Precision=0.78650 Recall=0.72848 Fscore=0.75638 (row 7)
Beam search decoding: Precision=0.80395 Recall=0.72839 Fscore=0.76431 (row 8)
Thus, results have the same trend, but the absolute numbers are lower. This problem also affects the numbers in Table 9, keeping the trends the same. None of the other results in the article are affected by this error.
• Archna Bhatia, Chu-Cheng Lin, Nathan Schneider, Yulia Tsvetkov, Fatima Talib Al-Raisi, Laleh Roostapour, Jordan Bender, Abhimanu Kumar, Lori Levin, Mandy Simons, and Chris Dyer (2014). Automatic classification of communicative functions of definiteness. COLING. [paper] [poster] Definiteness expresses a constellation of semantic, pragmatic, and discourse properties—the communicative functions—of an NP. We present a supervised classifier for English NPs that uses lexical, morphological, and syntactic features to predict an NP’s communicative function in terms of a language-universal classification scheme. Our classifiers establish strong baselines for future work in this neglected area of computational semantic analysis. In addition, analysis of the features and learned parameters in the model provides insight into the grammaticalization of definiteness in English, not all of which is obvious a priori. @inproceedings{definiteness, address = {Dublin, Ireland}, title = {Automatic Classification of Communicative Functions of Definiteness}, booktitle = {Proceedings of {COLING} 2014, the 25th International Conference on Computational Linguistics}, author = {Bhatia, Archna and Lin, {Chu-Cheng} and Schneider, Nathan and Tsvetkov, Yulia and Talib Al-Raisi, Fatima and Roostapour, Laleh and Bender, Jordan and Kumar, Abhimanu and Levin, Lori and Simons, Mandy and Dyer, Chris}, month = aug, year = {2014}, publisher = {Dublin City University and Association for Computational Linguistics}, pages = {1059--1070}, url = {http://www.aclweb.org/anthology/C14-1100} }
• Nathan Schneider, Spencer Onuffer, Nora Kazour, Emily Danchik, Michael T. Mordowanec, Henrietta Conrad, and Noah A. Smith (2014). Comprehensive annotation of multiword expressions in a social web corpus. LREC. [paper] [slides] Multiword expressions (MWEs) are quite frequent in languages such as English, but their diversity, the scarcity of individual MWE types, and contextual ambiguity have presented obstacles to corpus-based studies and NLP systems addressing them as a class. Here we advocate for a comprehensive annotation approach: proceeding sentence by sentence, our annotators manually group tokens into MWEs according to guidelines that cover a broad range of multiword phenomena. Under this scheme, we have fully annotated an English web corpus for multiword expressions, including those containing gaps. @inproceedings{mwecorpus, address = {Reykjav\'{i}k, Iceland}, title = {Comprehensive Annotation of Multiword Expressions in a Social Web Corpus}, booktitle = {Proceedings of the Ninth International Conference on Language Resources and Evaluation}, publisher = {{ELRA}}, author = {Schneider, Nathan and Onuffer, Spencer and Kazour, Nora and Danchik, Emily and Mordowanec, Michael T. and Conrad, Henrietta and Smith, Noah A.}, editor = {Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios}, month = may, year = {2014}, pages = {455--461}, url = {http://www.lrec-conf.org/proceedings/lrec2014/pdf/521_Paper.pdf} } [data]
• Yulia Tsvetkov, Nathan Schneider, Dirk Hovy, Archna Bhatia, Manaal Faruqui, and Chris Dyer (2014). Augmenting English adjective senses with supersenses. LREC. [paper] We develop a supersense taxonomy for adjectives, based on that of GermaNet, and apply it to English adjectives in WordNet using human annotation and supervised classification. Results show that accuracy for automatic adjective type classification is high, but synsets are considerably more difficult to classify, even for trained human annotators. We release the manually annotated data, the classifier, and the induced supersense labeling of 12,304 WordNet adjective synsets. @inproceedings{adjs, address = {Reykjav\'{i}k, Iceland}, title = {Augmenting {E}nglish Adjective Senses with Supersenses}, booktitle = {Proceedings of the Ninth International Conference on Language Resources and Evaluation}, publisher = {{ELRA}}, author = {Tsvetkov, Yulia and Schneider, Nathan and Hovy, Dirk and Bhatia, Archna and Faruqui, Manaal and Dyer, Chris}, editor = {Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios}, month = may, year = {2014}, pages = {4359--4365}, url = {http://www.lrec-conf.org/proceedings/lrec2014/pdf/1096_Paper.pdf} } [data] [software]
• Meghana Kshirsagar, Nathan Schneider, and Chris Dyer (2014). Leveraging heterogeneous data sources for relational semantic parsing. Workshop on Semantic Parsing. [extended abstract] [poster] A number of semantic annotation efforts have produced a variety of annotated corpora, capturing various aspects of semantic knowledge in different formalisms. Due to the cost of these annotation efforts and the relatively small amount of semantically annotated corpora, we argue it is advantageous to be able to leverage as much annotated data as possible. This work presents a preliminary exploration of the opportunities and challenges of learning semantic parsers from heterogeneous semantic annotation sources. We primarily focus on two semantic resources, FrameNet and PropBank, with the goal of improving frame-semantic parsing. Our analysis of the two data sources highlights the benefits that can be reaped by combining information across them.
• Jeffrey Flanigan, Samuel Thomson, David Bamman, Jesse Dodge, Manaal Faruqui, Brendan O’Connor, Nathan Schneider, Swabha Swayamdipta, Chris Dyer, and Noah A. Smith (2014). Graph-based algorithms for semantic parsing.
• Michael T. Mordowanec, Nathan Schneider, Chris Dyer, and Noah A. Smith (2014). Simplified dependency annotations with GFL-Web. ACL demo. [paper] [poster] We present GFL-Web, a web-based interface for syntactic dependency annotation with the lightweight FUDG/GFL formalism. Syntactic attachments are specified in GFL notation and visualized as a graph. A one-day pilot of this workflow with 26 annotators established that even novices were, with a bit of training, able to rapidly annotate the syntax of English Twitter messages. The open-source tool is easily installed and configured; it is available at: https://github.com/Mordeaux/gfl_web @inproceedings{gflweb, address = {Baltimore, Maryland, USA}, title = {Simplified dependency annotations with {GFL-Web}}, booktitle = {Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations}, publisher = {Association for Computational Linguistics}, author = {Mordowanec, Michael T. and Schneider, Nathan and Dyer, Chris and Smith, Noah A.}, month = jun, year = {2014}, pages = {121--126}, url = {http://www.aclweb.org/anthology/P14-5021} } [software]
• Nathan Schneider, Brendan O’Connor, Naomi Saphra, David Bamman, Manaal Faruqui, Noah A. Smith, Chris Dyer, and Jason Baldridge (2013). A framework for (under)specifying dependency syntax without overloading annotators. Linguistic Annotation Workshop. [paper] [extended version] We introduce a framework for lightweight dependency syntax annotation. Our formalism builds upon the typical representation for unlabeled dependencies, permitting a simple notation and annotation workflow. Moreover, the formalism encourages annotators to underspecify parts of the syntax if doing so would streamline the annotation process. We demonstrate the efficacy of this annotation on three languages and develop algorithms to evaluate and compare underspecified annotations. @InProceedings{fudg, author = {Schneider, Nathan and O'Connor, Brendan and Saphra, Naomi and Bamman, David and Faruqui, Manaal and Smith, Noah A. and Dyer, Chris and Baldridge, Jason}, title = {A Framework for (Under)specifying Dependency Syntax without Overloading Annotators}, booktitle = {Proceedings of the 7th Linguistic Annotation Workshop & Interoperability with Discourse}, month = aug, year = {2013}, address = {Sofia, Bulgaria}, publisher = {Association for Computational Linguistics}, url = {http://www.aclweb.org/anthology/W13-2307} } [data and software]
• Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider (2013). Abstract Meaning Representation for sembanking. Linguistic Annotation Workshop. [paper] [website] We describe Abstract Meaning Representation (AMR), a semantic representation language in which we are writing down the meanings of thousands of English sentences. We hope that a sembank of simple, whole-sentence semantic structures will spur new work in statistical natural language understanding and generation, like the Penn Treebank encouraged work on statistical parsing. This paper gives an overview of AMR and tools associated with it. @InProceedings{amr, author = {Banarescu, Laura and Bonial, Claire and Cai, Shu and Georgescu, Madalina and Griffitt, Kira and Hermjakob, Ulf and Knight, Kevin and Koehn, Philipp and Palmer, Martha and Schneider, Nathan}, title = {{A}bstract {M}eaning {R}epresentation for Sembanking}, booktitle = {Proceedings of the 7th Linguistic Annotation Workshop & Interoperability with Discourse}, month = aug, year = {2013}, address = {Sofia, Bulgaria}, publisher = {Association for Computational Linguistics}, url = {http://www.aclweb.org/anthology/W13-2322} }
• Nathan Schneider, Behrang Mohit, Chris Dyer, Kemal Oflazer, and Noah A. Smith (2013). Supersense tagging for Arabic: the MT-in-the-middle attack. NAACL-HLT. [paper] [slides] We consider the task of tagging Arabic nouns with WordNet supersenses. Three approaches are evaluated. The first uses an expert-crafted but limited-coverage lexicon, Arabic WordNet, and heuristics. The second uses unsupervised sequence modeling. The third and most successful approach uses machine translation to translate the Arabic into English, which is automatically tagged with English supersenses, the results of which are then projected back into Arabic. Analysis shows gains and remaining obstacles in four Wikipedia topical domains. @InProceedings{arabic-sst-mt, author = {Schneider, Nathan and Mohit, Behrang and Dyer, Chris and Oflazer, Kemal and Smith, Noah A.}, title = {Supersense Tagging for {A}rabic: the {MT}-in-the-Middle Attack}, booktitle = {Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies}, month = jun, year = {2013}, address = {Atlanta, Georgia, USA}, publisher = {Association for Computational Linguistics}, pages = {661--667}, url = {http://www.aclweb.org/anthology/N13-1076} }
• Olutobi Owoputi, Brendan O’Connor, Chris Dyer, Kevin Gimpel, Nathan Schneider, and Noah A. Smith (2013). Improved part-of-speech tagging for online conversational text with word clusters. NAACL-HLT. [paper] [summary slide] [poster] We consider the problem of part-of-speech tagging for informal, online conversational text. We systematically evaluate the use of large-scale unsupervised word clustering and new lexical features to improve tagging accuracy. With these features, our system achieves state-of-the-art tagging results on both Twitter and IRC POS tagging tasks; Twitter tagging is improved from 90% to 93% accuracy (more than 3% absolute). Qualitative analysis of these word clusters yields insights about NLP and linguistic phenomena in this genre. Additionally, we contribute the first POS annotation guidelines for such text and release a new dataset of English language tweets annotated using these guidelines. Tagging software, annotation guidelines, and large-scale word clusters are available at: http://cs.cmu.edu/~ark/TweetNLP
This paper describes release 0.3 of the “CMU Twitter Part-of-Speech Tagger” and annotated data.
@InProceedings{owoputi-twitter-pos, author = {Owoputi, Olutobi and O'Connor, Brendan and Dyer, Chris and Gimpel, Kevin and Schneider, Nathan and Smith, Noah A.}, title = {Improved Part-of-Speech Tagging for Online Conversational Text with Word Clusters}, booktitle = {Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies}, month = jun, year = {2013}, address = {Atlanta, Georgia, USA}, publisher = {Association for Computational Linguistics}, pages = {380--390}, url = {http://www.aclweb.org/anthology/N13-1039} } [data and software]
• Yulia Tsvetkov, Naama Twitto, Nathan Schneider, Noam Ordan, Manaal Faruqui, Victor Chahuneau, Shuly Wintner, and Chris Dyer (2013). Identifying the L1 of non-native writers: the CMU-Haifa system. BEA workshop. [paper] [poster] We show that it is possible to learn to identify, with high accuracy, the native language of English test takers from the content of the essays they write. Our method uses standard text classification techniques based on multiclass logistic regression, combining individually weak indicators to predict the most probable native language from a set of 11 possibilities. We describe the various features used for classification, as well as the settings of the classifier that yielded the highest accuracy. @InProceedings{tsvetkov-l1id, author = {Yulia Tsvetkov and Naama Twitto and Nathan Schneider and Noam Ordan and Manaal Faruqui and Victor Chahuneau and Shuly Wintner and Chris Dyer}, title = {Identifying the {L1} of non-native writers: the {CMU}-{H}aifa system}, booktitle = {Proceedings of the Eighth Workshop on Innovative Use of {NLP} for Building Educational Applications}, month = jun, year = {2013}, address = {Atlanta, Georgia, USA}, publisher = {Association for Computational Linguistics}, pages = {279--287}, url = {http://www.aclweb.org/anthology/W13-1736} }
• Nathan Schneider, Chris Dyer, and Noah A. Smith (2013). Exploiting and expanding corpus resources for frame-semantic parsing.
• Nathan Schneider, Behrang Mohit, Kemal Oflazer, and Noah A. Smith (2012). Coarse lexical semantic annotation with supersenses: an Arabic case study. ACL. [paper] [poster] “Lightweight” semantic annotation of text calls for a simple representation, ideally without requiring a semantic lexicon to achieve good coverage in the language and domain. In this paper, we repurpose WordNet’s supersense tags for annotation, developing specific guidelines for nominal expressions and applying them to Arabic Wikipedia articles in four topical domains. The resulting corpus has high coverage and was completed quickly with reasonable inter-annotator agreement. @InProceedings{arabic-sst-annotation, author = {Schneider, Nathan and Mohit, Behrang and Oflazer, Kemal and Smith, Noah A.}, title = {Coarse Lexical Semantic Annotation with Supersenses: An {A}rabic Case Study}, booktitle = {Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics}, month = jul, year = {2012}, address = {Jeju Island, Korea}, publisher = {Association for Computational Linguistics}, pages = {253--258}, url = {http://www.aclweb.org/anthology/P12-2050} } [data]
• Behrang Mohit, Nathan Schneider, Rishav Bhowmick, Kemal Oflazer, and Noah A. Smith (2012). Recall-oriented learning of named entities in Arabic Wikipedia. EACL. [paper] [supplement] We consider the problem of NER in Arabic Wikipedia, a semisupervised domain adaptation setting for which we have no labeled training data in the target domain. To facilitate evaluation, we obtain annotations for articles in four topical groups, allowing annotators to identify domain-specific entity types in addition to standard categories. Standard supervised learning on newswire text leads to poor target-domain recall. We train a sequence model and show that a simple modification to the online learner—a loss function encouraging it to “arrogantly” favor recall over precision—substantially improves recall and F1. We then adapt our model with self-training on unlabeled target-domain data; enforcing the same recall-oriented bias in the self-training stage yields marginal gains. @InProceedings{mohit-arabic-ner, author = {Mohit, Behrang and Schneider, Nathan and Bhowmick, Rishav and Oflazer, Kemal and Smith, Noah A.}, title = {Recall-Oriented Learning of Named Entities in {A}rabic {W}ikipedia}, booktitle = {Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics}, month = apr, year = {2012}, address = {Avignon, France}, publisher = {Association for Computational Linguistics}, pages = {162--173}, url = {http://www.aclweb.org/anthology/E12-1017} } [data and software]
• Kevin Gimpel, Nathan Schneider, Brendan O’Connor, Dipanjan Das, Daniel Mills, Jacob Eisenstein, Michael Heilman, Dani Yogatama, Jeffrey Flanigan, and Noah A. Smith (2011). Part-of-speech tagging for Twitter: annotation, features, and experiments. ACL-HLT. [paper] [slides] We address the problem of part-of-speech tagging for English data from the popular microblogging service Twitter. We develop a tagset, annotate data, develop features, and report tagging results nearing 90% accuracy. The data and tools have been made available to the research community with the goal of enabling richer text analysis of Twitter and related social media data sets. @inproceedings{gimpel-twitter-pos, author = {Gimpel, Kevin and Schneider, Nathan and O'Connor, Brendan and Das, Dipanjan and Mills, Daniel and Eisenstein, Jacob and Heilman, Michael and Yogatama, Dani and Flanigan, Jeffrey and Smith, Noah A.}, title = {Part-of-Speech Tagging for {T}witter: Annotation, Features, and Experiments}, booktitle = {Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies}, month = jun, year = {2011}, address = {Portland, Oregon, USA}, publisher = {Association for Computational Linguistics}, pages = {42--47}, url = {http://www.aclweb.org/anthology/P11-2008} } [data and software] [press]
• Desai Chen, Nathan Schneider, Dipanjan Das, and Noah A. Smith (2010). SEMAFOR: Frame argument resolution with log-linear models. SemEval. [paper] [slides] This paper describes the SEMAFOR system’s performance in the SemEval 2010 task on linking events and their participants in discourse. Our entry is based upon SEMAFOR 1.0 (Das et al., 2010), a frame-semantic probabilistic parser built from log-linear models. The extended system models null instantiations, including non-local argument reference. Performance is evaluated on the task data with and without gold-standard overt arguments. In both settings, it fares the best of the submitted systems with respect to recall and F1. @inproceedings{chen-schneider-das-smith-10, author = {Desai Chen and Nathan Schneider and Dipanjan Das and Noah A. Smith}, title = {{SEMAFOR}: Frame Argument Resolution with Log-Linear Models}, booktitle = {Proceedings of the Fifth International Workshop on Semantic Evaluation (SemEval-2010)}, month = jul, year = {2010}, address = {Uppsala, Sweden}, publisher = {Association for Computational Linguistics}, pages = {264--267}, url = {http://www.aclweb.org/anthology/S10-1059} }
• Dipanjan Das, Nathan Schneider, Desai Chen, and Noah A. Smith (2010). Probabilistic frame-semantic parsing. NAACL-HLT. [paper] [slides] This paper contributes a formalization of frame-semantic parsing as a structure prediction problem and describes an implemented parser that transforms an English sentence into a frame-semantic representation. It finds words that evoke FrameNet frames, selects frames for them, and locates the arguments for each frame. The system uses two feature-based, discriminative probabilistic (log-linear) models, one with latent variables to permit disambiguation of new predicate words. The parser is demonstrated to significantly outperform previously published results. @inproceedings{das-schneider-chen-smith-10, author = {Dipanjan Das and Nathan Schneider and Desai Chen and Noah A. Smith}, title = {Probabilistic Frame-Semantic Parsing}, booktitle = {Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics}, month = jun, year = {2010}, address = {Los Angeles, California}, publisher = {Association for Computational Linguistics}, pages = {948--956}, url = {http://www.aclweb.org/anthology/N10-1138} } [software]
• Nathan Schneider (2010). English morphology in construction grammar.
• Nathan Schneider (2010). Computational cognitive morphosemantics: modeling morphological compositionality in Hebrew verbs with Embodied Construction Grammar. [paper] This paper brings together the theoretical framework of construction grammar and studies of verbs in Modern Hebrew to furnish an analysis integrating the form and meaning components of morphological structure. In doing so, this work employs and extends Embodied Construction Grammar (ECG; Bergen and Chang 2005), a computational formalism developed to study grammar from a cognitive linguistic perspective. In developing a formal analysis of Hebrew verbs, I adapt ECG—until now a lexical/syntactic/semantic formalism—to account for the compositionality of morphological constructions, accommodating idiosyncrasy while encoding generalizations at multiple levels. Similar to syntactic constructions, morpheme constructions are related in an inheritance network, and can be productively composed to form words. With the expanded version of ECG, constructions can readily encode nonconcatenative root-and-pattern morphology and associated (compositional or noncompositional) semantics, cleanly integrated with syntactic constructions. This formal, cognitive study should pave the way for computational models of morphological learning and processing in Hebrew and other languages.

### Tutorials

• Omri Abend, Dotan Dvir, Daniel Hershcovich, Jakob Prange, and Nathan Schneider (2020). Cross-lingual semantic representation for NLP with UCCA. COLING tutorial. [abstract] [materials] [video] This is an introductory tutorial to UCCA (Universal Conceptual Cognitive Annotation), a cross-linguistically applicable framework for semantic representation, with corpora annotated in English, German and French, and ongoing annotation in Russian and Hebrew. UCCA builds on extensive typological work and supports rapid annotation. The tutorial provides a detailed introduction to the UCCA annotation guidelines, design philosophy and the available resources; and a comparison to other meaning representations. It also surveys the existing parsing work, including the findings of three recent shared tasks, in SemEval and CoNLL, that addressed UCCA parsing. Finally, the tutorial presents recent applications and extensions to the scheme, demonstrating its value for natural language processing in a range of languages and domains. @InProceedings{ucca-tutorial, author = {Abend, Omri and Dvir, Dotan and Hershcovich, Daniel and Prange, Jakob and Schneider, Nathan}, title = {Cross-lingual Semantic Representation for {NLP} with {UCCA}}, booktitle = {Proceedings of the 28th International Conference on Computational Linguistics: Tutorial Abstracts}, month = dec, year = {2020}, address = {Barcelona, Spain (Online)}, publisher = {International Committee for Computational Linguistics}, pages = {1--9}, url = {https://www.aclweb.org/anthology/2020.coling-tutorials.1} }
• Nathan Schneider (2017). Universal Dependencies for English. Unpublished. [slides]
• Nathan Schneider, Jeffrey Flanigan, and Tim O’Gorman (2015). The logic of AMR: practical, unified, graph-based sentence semantics for NLP. NAACL-HLT tutorial. [abstract] [materials] The Abstract Meaning Representation formalism is rapidly emerging as an important practical form of structured sentence semantics which, thanks to the availability of large-scale annotated corpora, has potential as a convergence point for NLP research. This tutorial unmasks the design philosophy, data creation process, and existing algorithms for AMR semantics. It is intended for anyone interested in working with AMR data, including parsing text into AMRs, generating text from AMRs, and applying AMRs to tasks such as machine translation and summarization. The goals of this tutorial are twofold. First, it will describe the nature and design principles behind the representation, and demonstrate that it can be practical for annotation. In Part I: The AMR Formalism, participants will be coached in the basics of annotation so that, when working with AMR data in the future, they will appreciate the benefits and limitations of the process by which it was created. Second, the tutorial will survey the state of the art for computation with AMRs. Part II: Algorithms and Applications will focus on the task of parsing English text into AMR graphs, which requires algorithms for alignment, for structured prediction, and for statistical learning. The tutorial will also address recently developed graph grammar formalisms, and future applications such as AMR-based machine translation and summarization. @InProceedings{amr-tutorial, author = {Schneider, Nathan and Flanigan, Jeffrey and O'Gorman, Tim}, title = {The Logic of {AMR}: Practical, Unified, Graph-Based Sentence Semantics for {NLP}}, booktitle = {Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Tutorial Abstracts}, month = {May--June}, year = {2015}, address = {Denver, Colorado, USA}, publisher = {Association for Computational Linguistics}, pages = {4--5}, url = {http://www.aclweb.org/anthology/N15-4003} }
• Collin Baker, Nathan Schneider, Miriam R. L. Petruck, and Michael Ellsworth (2015). Getting the roles right: using FrameNet in NLP. NAACL-HLT tutorial. [abstract] The FrameNet lexical database (Fillmore & Baker 2010, Ruppenhofer et al. 2006, http://framenet.icsi.berkeley.edu) covers roughly 13,000 lexical units (word senses) for the core English lexicon, associating them with roughly 1,200 fully defined semantic frames; these frames and their roles cover the majority of event types in everyday, non-specialist text, and they are documented with 200,000 manually annotated examples. This tutorial will teach attendees what they need to know to start using the FrameNet lexical database as part of an NLP system. We will cover the basics of Frame Semantics, explain how the database was created, introduce the Python API and the state of the art in automatic frame semantic role labeling systems; and we will discuss FrameNet collaboration with commercial partners. Time permitting, we will present new research on frames and annotation of locative relations, as well as corresponding metaphorical uses, along with information about how frame semantic roles can aid the interpretation of metaphors. @InProceedings{fn-tutorial, author = {Baker, Collin and Schneider, Nathan and Petruck, Miriam R. L. and Ellsworth, Michael}, title = {Getting the Roles Right: Using {FrameNet} in {NLP}}, booktitle = {Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Tutorial Abstracts}, month = {May--June}, year = {2015}, address = {Denver, Colorado, USA}, publisher = {Association for Computational Linguistics}, pages = {10--12}, url = {http://www.aclweb.org/anthology/N15-4006} }

### Reviews & Position Papers

• Nathan Schneider (2015). What I’ve learned about annotating informal text (and why you shouldn’t take my word for it). Linguistic Annotation Workshop. [paper] In conjunction with this year’s LAW theme, “Syntactic Annotation of Non-canonical Language” (NCL), I have been asked to weigh in on several important questions faced by anyone wishing to create annotated resources of NCLs. My experience with syntactic annotation of non-canonical language falls under an effort undertaken at Carnegie Mellon University with the aim of building an NLP pipeline for syntactic analysis of Twitter text. We designed a linguistically-grounded annotation scheme, applied it to tweets, and then trained statistical analyzers—first for part-of-speech (POS) tags (Gimpel et al., 2011; Owoputi et al., 2012), then for parses (Schneider et al., 2013; Kong et al., 2014). I will review some of the salient points from this work in addressing the broader questions about annotation methodology. @inproceedings{law2015-opinion, title = {What I've learned about annotating informal text (and why you shouldn't take my word for it)}, booktitle = {Proceedings of the 9th Linguistic Annotation Workshop}, author = {Schneider, Nathan}, month = jun, year = {2015}, address = {Denver, Colorado, USA}, publisher = {Association for Computational Linguistics}, pages = {152--157}, url = {http://www.aclweb.org/anthology/W15-1618} }
• Nathan Schneider and Reut Tsarfaty (June 2013). Design Patterns in Fluid Construction Grammar, Luc Steels (editor). Computational Linguistics. [paper] @article{schneider-dpfcg-13, title = {{\em Design {P}atterns in {F}luid {C}onstruction {G}rammar}, {Luc Steels} (editor)}, volume = {39}, number = {2}, journal = {Computational Linguistics}, author = {Schneider, Nathan and Tsarfaty, Reut}, month = jun, year = {2013}, pages = {447--453} } [publisher link]

### Annotation Manuals

• Nathan Schneider, Jena D. Hwang, Archna Bhatia, Vivek Srikumar, Na-Rae Han, Tim O’Gorman, Sarah R. Moeller, Omri Abend, Adi Shalev, Austin Blodgett, and Jakob Prange (1 April 2020). Adposition and Case Supersenses v2.5: Guidelines for English. arXiv:1704.02134 [cs]. [paper] This document offers a detailed linguistic description of SNACS (Semantic Network of Adposition and Case Supersenses; Schneider et al., 2018), an inventory of 50 semantic labels (“supersenses”) that characterize the use of adpositions and case markers at a somewhat coarse level of granularity, as demonstrated in the STREUSLE corpus (https://github.com/nert-gu/streusle/; version 4.3 tracks guidelines version 2.5). Though the SNACS inventory aspires to be universal, this document is specific to English; documentation for other languages will be published separately. Version 2 is a revision of the supersense inventory proposed for English by Schneider et al. (2015, 2016) (henceforth “v1”), which in turn was based on previous schemes. The present inventory was developed after extensive review of the v1 corpus annotations for English, plus previously unanalyzed genitive case possessives (Blodgett and Schneider, 2018), as well as consideration of adposition and case phenomena in Hebrew, Hindi, Korean, and German. Hwang et al. (2017) present the theoretical underpinnings of the v2 scheme. Schneider et al. (2018) summarize the scheme, its application to English corpus data, and an automatic disambiguation task. @article{snacs-guidelines, title = {Adposition and {C}ase {S}upersenses v2.5: {G}uidelines for {E}nglish}, url = {http://arxiv.org/abs/1704.02134}, journal = {{arXiv}:1704.02134 [cs]}, author = {Schneider, Nathan and Hwang, Jena D. and Bhatia, Archna and Srikumar, Vivek and Han, {Na-Rae} and {O'Gorman}, Tim and Moeller, Sarah R. and Abend, Omri and Shalev, Adi and Blodgett, Austin and Prange, Jakob}, month = apr, year = {2020} }
• Omri Abend, Nathan Schneider, Dotan Dvir, Jakob Prange, and Ari Rappoport (31 December 2020). UCCA’s Foundational Layer: Annotation guidelines v2.1. arXiv:2012.15810 [cs]. [paper] This is the annotation manual for Universal Conceptual Cognitive Annotation (UCCA; Abend and Rappoport, 2013), specifically the Foundational Layer. UCCA is a graph-based semantic annotation scheme based on typological linguistic principles. It has been applied to several languages; for ease of exposition these guidelines give examples mainly in English. New annotators may wish to start with the tutorial on the UCCA framework (Abend et al., 2020). Further resources are available at the project homepage: https://universalconceptualcognitiveannotation.github.io @article{ucca-guidelines, title = {{UCCA's} {F}oundational {L}ayer: {A}nnotation guidelines v2.1}, url = {http://arxiv.org/abs/2012.15810}, journal = {{arXiv}:2012.15810 [cs]}, author = {Abend, Omri and Schneider, Nathan and Dvir, Dotan and Prange, Jakob and Rappoport, Ari}, month = dec, year = {2020} }
• Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider (1 May 2019). Abstract Meaning Representation (AMR) 1.2.6 specification. Webpage. English annotation guidelines for the Abstract Meaning Representation. Updated from the version published with the initial work on AMR (Banarescu et al., 2013). @misc{amr-guidelines, author = {Banarescu, Laura and Bonial, Claire and Cai, Shu and Georgescu, Madalina and Griffitt, Kira and Hermjakob, Ulf and Knight, Kevin and Koehn, Philipp and Palmer, Martha and Schneider, Nathan}, title = {{A}bstract {M}eaning {R}epresentation ({AMR}) 1.2.6 specification}, url = {https://github.com/amrisi/amr-guidelines/blob/master/amr.md}, month = may, year = {2019} }

### Reports & Presentations

• Nathan Schneider (9 October 2019). The Ins and Outs of Preposition Semantics: Challenges in Comprehensive Corpus Annotation and Automatic Disambiguation. Invited talk, University of Washington Computer Science. [video] In most linguistic meaning representations that are used in NLP, prepositions fly under the radar. I will argue that they should instead be put front and center given their crucial status as linkers of meaning—whether for spatial and temporal relations, for predicate-driven roles, or in special constructions. To that end, we have sought to characterize and disambiguate semantic functions expressed by prepositions and possessives in English (Schneider et al., ACL 2018; https://github.com/nertnlp/streusle/), and similar markers in other languages (ongoing work on Korean, Hebrew, German, and Mandarin Chinese). This approach can be broadened to provide language-neutral, lexicon-free semantic relations in structured meaning representation parsing (Prange et al., CoNLL 2019; Shalev et al., DMR 2019).
• Julia Hockett, Yaguang Liu, Yifang Wei, Lisa Singh, and Nathan Schneider (2018). Detecting and using buzz from newspapers to understand patterns of movement. Poster at IEEE BigData. [extended abstract] Meaningful leading indicators of mass movement are difficult to discover given the dearth of available data about involuntary movement. As a first step, we propose analyzing whether we can use the changing dynamics of newspaper content as one possible indirect indicator of such displacement. Specifically, we explore whether news media buzz correlates with patterns of migration in Iraq. We consider different methods for detecting buzz and empirically evaluate them on a corpus of 1.4 million articles.
• Nathan Schneider and Omri Abend (31 January 2016). Towards a dataset for evaluating multiword predicate interpretation in context. PARSEME STSM Report. [paper] Many multiword expressions (in the sense of Baldwin & Kim, 2010) are verbal predicates taking one or more arguments—including light verb constructions, other verb-noun constructions, verb-particle constructions, and prepositional verbs. We frame a lexical interpretation task for such multiword predicates (MWPs): given a sentence containing an MWP, the task is to predict other predicates (single or multiword) that are entailed. Preliminary steps have been taken toward developing an evaluation dataset via crowdsourcing.
• Olutobi Owoputi, Brendan O’Connor, Chris Dyer, Kevin Gimpel, and Nathan Schneider (September 2012). Part-of-speech tagging for Twitter: word clusters and other advances. Technical Report CMU-ML-12-107. [paper] We present improvements to a Twitter part-of-speech tagger, making use of several new features and large-scale word clustering. With these changes, the tagging accuracy increased from 89.2% to 92.8% and the tagging speed is 40 times faster. In addition, we expanded our Twitter tokenizer to support a broader range of Unicode characters, emoticons, and URLs. Finally, we annotate and evaluate on a new tweet dataset, DailyTweet547, that is more statistically representative of English-language Twitter as a whole. The new tagger is released as TweetNLP version 0.3, along with the new annotated data and large-scale word clusters at http://cs.cmu.edu/~ark/TweetNLP. @techreport{owoputi-twitter-pos-12-tr, author = {Olutobi Owoputi and Brendan O'Connor and Chris Dyer and Kevin Gimpel and Nathan Schneider}, institution = {Carnegie Mellon University}, address = {Pittsburgh, Pennsylvania}, type = {Technical Report}, number = {CMU-ML-12-107}, title = {Part-of-Speech Tagging for {T}witter: Word Clusters and Other Advances}, year = {2012}, month = sep, url = {http://cs.cmu.edu/~ark/TweetNLP/owoputi+etal.tr12.pdf} } [data and software]
• Nathan Schneider (5 October 2011). Casting a wider ’Net: NLP for the Social Web. [slides] Natural language text dominates the information available on the Web. Yet the language of online expression often differs substantially, in both style and substance, from the language found in more traditional sources such as news. Making natural language processing techniques robust to this sort of variation is thus important for applications to behave intelligently when presented with Web text. This talk presents new research applying two sequence prediction tasks—part-of-speech tagging and named entity detection—to text from online social media platforms (Twitter and Wikipedia). For both tasks, we adapt standard forms of annotation to better suit the linguistic and topical characteristics of the data. We also propose techniques to elicit more accurate statistical taggers, including linguistic features inspired by the domain (for part-of-speech tagging of Twitter messages) as well as modifications to the learning algorithm (for named entity detection in Arabic Wikipedia).
• Nathan Schneider, Rebecca Hwa, Philip Gianfortoni, Dipanjan Das, Michael Heilman, Alan W. Black, Frederick L. Crabbe, and Noah A. Smith (July 2010). Visualizing topical quotations over time to understand news discourse. Technical Report CMU-LTI-10-013. [paper] We present the Pictor browser, a visualization designed to facilitate the analysis of quotations about user-specified topics in large collections of news text. Pictor focuses on quotations because they are a major vehicle of communication in the news genre. It extracts quotes from articles that match a user’s text query, and groups these quotes into “threads” that illustrate the development of subtopics over time. It allows users to rapidly explore the space of relevant quotes by viewing their content and speakers, to examine the contexts in which quotes appear, and to tune how threads are constructed. We offer two case studies demonstrating how Pictor can support a richer understanding of news events. @techreport{schneider-pictor-10-tr, author = {Nathan Schneider and Rebecca Hwa and Philip Gianfortoni and Dipanjan Das and Michael Heilman and Alan W. Black and Frederick L. Crabbe and Noah A. Smith}, institution = {Carnegie Mellon University}, address = {Pittsburgh, Pennsylvania}, type = {Technical Report}, number = {CMU-LTI-10-013}, title = {Visualizing Topical Quotations Over Time to Understand News Discourse}, year = {2010}, month = jul, url = {http://www.cs.cmu.edu/~nasmith/papers/schneider+etal.tr10.pdf} }
• Dipanjan Das, Nathan Schneider, Desai Chen, and Noah A. Smith (April 2010). SEMAFOR 1.0: A probabilistic frame-semantic parser. Technical Report CMU-LTI-10-001. [paper] An elaboration on (Das et al., 2010), this report formalizes frame-semantic parsing as a structure prediction problem and describes an implemented parser that transforms an English sentence into a frame-semantic representation. SEMAFOR 1.0 finds words that evoke FrameNet frames, selects frames for them, and locates the arguments for each frame. The system uses two feature-based, discriminative probabilistic (log-linear) models, one with latent variables to permit disambiguation of new predicate words. The parser is demonstrated to significantly outperform previously published results and is released for public use. @techreport{das-schneider-chen-smith-10-tr, author = {Dipanjan Das and Nathan Schneider and Desai Chen and Noah A. Smith}, institution = {Carnegie Mellon University}, address = {Pittsburgh, Pennsylvania}, type = {Technical Report}, number = {CMU-LTI-10-001}, title = {{SEMAFOR} 1.0: A Probabilistic Frame-Semantic Parser}, year = {2010}, month = apr, url = {http://cs.cmu.edu/~ark/SEMAFOR/das+schneider+chen+smith.tr10.pdf} } [software]
• Reza Bosagh Zadeh and Nathan Schneider (December 2008). Unsupervised approaches to sequence tagging, morphology induction, and lexical resource acquisition. LS2 course literature review. [paper] [slides] We consider unsupervised approaches to three types of problems involving the prediction of natural language information at or below the level of words: sequence labeling (including part-of-speech tagging); decomposition (morphological analysis and segmentation); and lexical resource acquisition (building dictionaries to encode linguistic knowledge about words within and across languages). We highlight the strengths and weaknesses of these approaches, including the extent of labeled data/resources assumed as input, the robustness of modeling techniques to linguistic variation, and the semantic richness of the output relative to the input.