Aggregating and analysing crowdsourced annotations for NLP (AnnoNLP)

Silviu Paun, Dirk Hovy

Workshop Description

Background

Crowdsourcing, whether through microwork platforms or through Games with a Purpose, is increasingly used as an alternative to traditional expert annotation, achieving comparable annotation quality at lower cost and offering greater scalability. The NLP community has enthusiastically adopted crowdsourcing to support work on tasks such as coreference resolution, sentiment analysis, textual entailment, named entity recognition, word similarity, word sense disambiguation, and many others. This interest has also resulted in the organization of a number of workshops at ACL and elsewhere, from as early as “The People’s Web meets NLP” in 2009. These days, general-purpose research on crowdsourcing can be presented at HCOMP or CrowdML, but the need for workshops focused specifically on the use of crowdsourcing in NLP remains.

In particular, NLP-specific methods are typically required for aggregating the interpretations provided by the annotators. Most existing work on aggregation methods rests on a common set of assumptions: 1) the true classes are independent of one another, 2) the set of classes the coders can choose from is fixed across the annotated items, and 3) each item has exactly one true class. For many NLP tasks, however, these assumptions are not entirely appropriate. Sequence labelling tasks (e.g., NER, tagging) have an implicit inter-label dependence (e.g., Nguyen et al., 2017). In other tasks, such as coreference, the labels the coders can choose from are not fixed but depend on the mentions in each document (Passonneau, 2004; Paun et al., 2018). Furthermore, in many NLP tasks the data items can have more than one interpretation (e.g., Poesio and Artstein, 2005; Passonneau et al., 2012; Plank et al., 2014). Such cases of ambiguity also affect the reliability of existing gold standard datasets, which are often labelled with a single interpretation even though expert disagreement is a well-known issue. This latter point motivates research on alternative, complementary evaluation methods, as well as the development of multi-label datasets.

More broadly, the proposed workshop aims to bring together researchers interested in methods for aggregating and analysing crowdsourced data for NLP-specific tasks that relax the aforementioned assumptions. We also invite work on ambiguous, subjective, or complex annotation tasks, which have received less attention in the literature.
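
To make assumption (3) concrete, the following is a minimal, hypothetical sketch (not part of the proposal): a plain majority-vote aggregator that forces a single label per item, contrasted with keeping the full label distribution so that ambiguous items remain visible. The toy annotations dictionary and the POS/NEG labels are purely illustrative.

    from collections import Counter

    # Toy crowdsourced annotations: item id -> labels assigned by different coders.
    # "item2" is deliberately ambiguous: the coders split evenly between two labels.
    annotations = {
        "item1": ["POS", "POS", "NEG", "POS"],
        "item2": ["POS", "NEG", "NEG", "POS"],
    }

    def majority_vote(labels):
        """Standard aggregation: keep only the most frequent label.
        Ties are broken arbitrarily, so ambiguity is silently discarded."""
        return Counter(labels).most_common(1)[0][0]

    def label_distribution(labels):
        """Alternative: keep the full distribution over labels, so items
        with more than one plausible interpretation stay visible."""
        counts = Counter(labels)
        total = sum(counts.values())
        return {label: count / total for label, count in counts.items()}

    for item, labels in annotations.items():
        print(item, majority_vote(labels), label_distribution(labels))

For "item2" the majority vote silently commits to one of the two equally supported labels, whereas the distribution preserves the disagreement, which is exactly the kind of signal the multi-label and evaluation work mentioned above seeks to retain.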

Objectives

Although there is a large body of work analysing crowdsourced data, whether probabilistic (models of annotation) or traditional (majority-voting aggregation, agreement statistics), comparatively little of it has been devoted to NLP tasks. NLP data often violate the assumptions made by most existing models, opening the path to new research. The aim of the proposed workshop is to bring together the community of researchers interested in this area.

Topics

Topics of interest include but are not limited to the following:

Workshop Organizers

Silviu Paun, Queen Mary University of London, s.paun@qmul.ac.uk
His research interests and areas of expertise include probabilistic models of annotation, with a particular focus on coreference; generative models of text, such as topic models, applied to short texts; and parameter estimation techniques, particularly variational inference.
Dirk Hovy, Bocconi University, mail@dirkhovy.com
Dirk is an associate professor at Bocconi University in Milan. His research focuses on the intersection of social science and statistical NLP, i.e., how social dimensions influence language and, in turn, NLP models. He is also interested in algorithmic fairness and bias, and works on incorporating human factors into models, including in annotation. He is the author of the annotation aggregation tool MACE, and was an organizer of five *ACL workshops and a SemEval task, as well as local chair for EMNLP 2017.

Invited Speakers

Jordan Boyd-Graber, University of Maryland (confirmed)

Proposed Programme Committee

(Some of the members still have to confirm)

  • Omar Alonso, Microsoft (confirmed)
  • Beata Beigman Klebanov, Princeton (confirmed)
  • Bob Carpenter, Columbia University (confirmed)
  • Jon Chamberlain, University of Essex (confirmed)
  • Anca Dumitrache, Vrije Universiteit Amsterdam (confirmed)
  • Paul Felt, IBM (confirmed)
  • Udo Kruschwitz, University of Essex (confirmed)
  • Matthew Lease, University of Texas at Austin (confirmed)
  • Massimo Poesio, Queen Mary University of London (confirmed)
  • Vikas C Raykar, IBM (confirmed)
  • Edwin Simpson, Technische Universität Darmstadt (confirmed)
  • Yudian Zheng, Twitter (confirmed)
  • Rebecca Passonneau, Penn State University
  • Gabriella Kazai, Lumi
  • Chris Callison-Burch, University of Pennsylvania
  • Matteo Venanzi, Microsoft

References