
Adversarial NLI: A New Benchmark for Natural Language Understanding

Comments

Douwe Kiela: Excited (in my 1st tweet ever!) to announce Adversarial NLI: a new large-scale benchmark dataset for NLU, and a challenge to the community. Great job by @EasonNie, together with @adinamwilliams @em_dinan @mohitban47 and @jaseweston. https://arxiv.org/abs/1910.14599

5 replies, 317 likes


Gary Marcus: "A growing body of evidence shows that state-of-the-art models learn to exploit spurious statistical patterns in datasets ... instead of learning meaning in the flexible and generalizable way that humans do" -- Adversarial NLI: A New Benchmark for NLU https://arxiv.org/abs/1910.14599

9 replies, 210 likes


Douwe Kiela: Just updated the ANLI paper with the #acl2020nlp camera ready: https://arxiv.org/abs/1910.14599v2. Lots of extra stuff: more analysis on the value of dynamic adversarial data collection, details on annotators and more discussion. (1/2) https://t.co/vzWXkUhlQa

1 reply, 81 likes


Mohit Bansal: Exciting work by @EasonNie (+@adinamwilliams @em_dinan @jaseweston @douwekiela)! Adversarial NLI, a large dataset collected via a multi-round adversarial (weakness-finding) human-&-model-in-the-loop process; allows moving/lifelong-learning target for NLU😀 https://arxiv.org/abs/1910.14599 https://t.co/eoZpeLIsRq

1 reply, 48 likes
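The multi-round adversarial human-and-model-in-the-loop process that Mohit Bansal's tweet describes can be sketched roughly as follows. This is a minimal illustrative sketch, not code from the ANLI paper or its codebase; the names (`collect_round`, `model`, `annotator`) and the retry budget are all hypothetical.

```python
# Hypothetical sketch of one round of adversarial (weakness-finding)
# human-and-model-in-the-loop data collection. An annotator repeatedly
# tries to write a hypothesis for a context that the current model
# misclassifies; fooling examples are kept for the next round's
# training set. Illustrative only; details differ from the real protocol
# (which also includes human verification of collected examples).

def collect_round(model, annotator, contexts, max_tries=5):
    """Return examples from this round that the current model gets wrong."""
    adversarial_examples = []
    for context in contexts:
        for _ in range(max_tries):
            # Annotator writes a hypothesis aiming at a target label
            # they believe the model will get wrong.
            hypothesis, gold_label = annotator(context)
            predicted = model(context, hypothesis)
            if predicted != gold_label:
                # Model fooled: keep the example. In the real protocol a
                # separate human pass verifies the gold label first.
                adversarial_examples.append((context, hypothesis, gold_label))
                break
    return adversarial_examples
```

Retraining the model on the accumulated adversarial data after each round is what gives the "moving/lifelong-learning target" the tweet mentions: each new round targets the weaknesses of the strengthened model.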


Kyunghyun Cho: ahahahaha "there is something rotten in the state of the art"

0 replies, 22 likes


Grady Booch: I feel a chill in the air. https://t.co/eHDcSdjlnX

2 replies, 20 likes


Jerome Pesenti: Why we should be skeptical of claims of near-human performance in NLP based on traditional benchmarks https://thegradient.pub/nlps-clever-hans-moment-has-arrived/ by @benbenhh. NLP made huge progress in the past two years, but human-like pretensions will require new evaluations. My bet is on https://arxiv.org/pdf/1910.14599.pdf

1 reply, 18 likes


Matt Gardner: Glad to see this actually done; I've wanted to do something similar for reading comprehension, but it's hard to get right. Nice work! Question, though: the paper says that NLI is "arguably the most canonical task in NLU". Is it? Should it be? Why or why not?

1 reply, 17 likes


MUNOZRICK: Most #AI tech seems more akin to muscle memory or capturing instinctive behavior, rather than actual learning or intelligence. More cerebellum than prefrontal cortex

0 replies, 12 likes


claps for anyone 👏👏👏: I would hope everyone knows this already by now.

1 reply, 8 likes


Beth Carey: Meaning is represented for machines by #Patom Theory + #RRG=NLU. Once machines have meaning represented, applications like #chatbots and #digitalassistants can leverage context in conversation - common sense and reasoning are by-products.

0 replies, 5 likes


arXiv CS-CL: Adversarial NLI: A New Benchmark for Natural Language Understanding http://arxiv.org/abs/1910.14599

0 replies, 1 likes


Alex Hamilton: @anthonyncutler The quote is itself taken from a Facebook AI paper, which quotes a number of other papers. You can read it here: https://arxiv.org/pdf/1910.14599.pdf https://t.co/dDk8rE81wE

1 reply, 1 like


Content

Found on Nov 01 2019 at https://arxiv.org/pdf/1910.14599.pdf

PDF content of a computer science paper: Adversarial NLI: A New Benchmark for Natural Language Understanding