
Evaluating NLP Models via Contrast Sets

Comments

Zachary Lipton: Before the media blitz & retweet party get out of control, this idea exists, has been published, has a name, and a clearer justification. It is called ***Counterfactually-Augmented Data*** and here's the published paper (spotlight at #ICLR2020). https://arxiv.org/abs/1909.12434

9 replies, 364 likes


Matt Gardner: Evaluating NLP Models via Contrast Sets New work that is a collaboration between 26 people at 10 institutions (!) https://arxiv.org/abs/2004.02709 Trying to tag everyone at the top of the thread, here it goes:

11 replies, 358 likes


Noah Smith: new work by @nlpmattg of @ai2_allennlp, with a cast of dozens: contrast sets https://arxiv.org/abs/2004.02709

0 replies, 34 likes


John Platt: Adding local perturbations to NLP test sets highlights the fragility of some newer models.

0 replies, 6 likes


lazary: @jxmorris12 Looks really interesting! It reminds me of the recent "minimal pair" literature, which makes minimal changes to examples that *do* change the meaning and then evaluates on them. https://arxiv.org/pdf/1909.12434.pdf by @dkaushik96 et al. https://arxiv.org/pdf/2004.02709.pdf by @nlpmattg et al.

1 reply, 1 like


Content

Found on Apr 07 2020 at https://arxiv.org/pdf/2004.02709.pdf

PDF content of a computer science paper: Evaluating NLP Models via Contrast Sets
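The commenters above describe the paper's core recipe: take existing test examples, write small perturbations that change the gold label, and check whether a model's predictions track those changes. As a hedged illustration (not code from the paper), the Python sketch below scores a hypothetical classifier on such contrast bundles with a consistency-style metric; `model_predict`, the example sentences, and the bundle format are all invented for the sketch.

```python
# Minimal sketch of contrast-set-style scoring: each original test example is
# paired with small perturbations that flip the gold label, and a model gets
# credit for a bundle only if it labels every example in it correctly.
# Everything here (model_predict, sentences, bundle format) is hypothetical.

def model_predict(text: str) -> str:
    # Toy stand-in for a trained sentiment classifier: flags a few negative cue words.
    negative_cues = ("witless", "dull", "boring")
    return "negative" if any(cue in text.lower() for cue in negative_cues) else "positive"

# One hypothetical sentiment bundle: an original example plus a minimally
# edited contrast whose gold label flips.
contrast_bundles = [
    {
        "original": ("A gorgeous, witty film.", "positive"),
        "contrasts": [("A gorgeous-looking but witless film.", "negative")],
    },
]

def contrast_consistency(bundles) -> float:
    # Fraction of bundles where the original AND all contrasts are labeled correctly.
    correct = 0
    for bundle in bundles:
        examples = [bundle["original"], *bundle["contrasts"]]
        if all(model_predict(text) == gold for text, gold in examples):
            correct += 1
    return correct / len(bundles)

print(f"contrast consistency: {contrast_consistency(contrast_bundles):.2f}")
```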