Papers of the day

Natural Adversarial Examples

Comments

Jul 17 2019 Dan Hendrycks

Natural Adversarial Examples are real-world and unmodified examples which cause classifiers to be consistently confused. The new dataset has 7,500 images, which we personally labeled over several months. Paper: https://arxiv.org/abs/1907.07174 Dataset and code: https://github.com/hendrycks/natural-adv-examples https://t.co/pd75CyK54T
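ImageNet-A covers a 200-class subset of the 1,000 ImageNet classes, so evaluating a standard 1000-way classifier on it involves restricting predictions to those 200 classes. A minimal sketch of that masking step, assuming NumPy logits; the class indices and the `masked_accuracy` helper below are illustrative placeholders, not the repo's actual API:

```python
import numpy as np

# Placeholder ImageNet class indices; the real ImageNet-A subset
# has 200 entries (see the dataset repo for the actual mapping).
imagenet_a_classes = [6, 11, 13, 15, 17]

def masked_accuracy(logits, labels, class_subset):
    """Accuracy after restricting 1000-way predictions to the class
    subset, as is done when scoring ImageNet classifiers on ImageNet-A."""
    sub = np.asarray(class_subset)
    # Argmax over the subset columns, then map back to original indices.
    preds = sub[np.argmax(logits[:, sub], axis=1)]
    return float(np.mean(preds == labels))

rng = np.random.default_rng(0)
logits = rng.normal(size=(8, 1000))          # stand-in model outputs
labels = rng.choice(imagenet_a_classes, 8)   # stand-in ground truth
acc = masked_accuracy(logits, labels, imagenet_a_classes)
```

Without the mask, a 1000-way argmax could land on classes that ImageNet-A does not contain, making the (already low) accuracy numbers even harder to compare.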
11 replies, 571 likes


Jul 18 2019 Janelle Shane

7,500 pictures that image recognition algorithms tend to get wrong. My kind of dataset.
7 replies, 274 likes


Jul 18 2019 Gary Marcus

anyone who thinks vision is anywhere near solved should read this important paper. #deeplearning and AI more generally have a long way to go.
4 replies, 210 likes


Jul 18 2019 Oh Hey, Gravis Is Here!

you know, something about this technology: it will never "get better." there are literally infinite possible scenes, and because these things don't actually "learn" as we know it, they'll always have gigantic, unpredictable false positive gaps
7 replies, 108 likes


Jul 19 2019 Leon Derczynski

Huge if true - this work indicates that BERT exploits artifacts that distort / inflate behaviors around it, and when cleaned up, the results are markedly less impressive https://twitter.com/benbenhh/status/1151698285884633089
7 replies, 36 likes


Jul 17 2019 Max Little

Wonderful dataset which highlights the pervasive and diverse nature of the failures of current deep nets in image classification, such as confounding on texture, background and context. Unlike with artificial adversarial examples, robust training does not help. @GaryMarcus
0 replies, 25 likes


Jul 17 2019 Chris Cundy

Wild new paper collects a set of naturally occurring adversarial examples for ImageNet classifiers: DenseNet gets only 2% accuracy! https://arxiv.org/abs/1907.07174 #RobustML https://t.co/S3f7FpYa2M
0 replies, 4 likes


Jul 18 2019 Peter J. Kootsookos 🇦🇺🇮🇪🇺🇸

Entertaining. https://twitter.com/danhendrycks/status/1151515352121004032?s=12
0 replies, 2 likes


Jul 20 2019 MartinC

Read this and then consider autonomous vehicles
0 replies, 2 likes


Jul 17 2019 Tony Peng

From the paper Natural Adversarial Examples: "On IMAGENET-A, a DenseNet-121 obtains around 2% accuracy, an accuracy drop of approximately 90%." OMG! https://arxiv.org/pdf/1907.07174.pdf
1 replies, 1 likes


Jul 17 2019 Machine Learning

Natural Adversarial Examples. http://arxiv.org/abs/1907.07174
0 replies, 1 likes


Jul 18 2019 Drew Harwell

AI that knows what it's looking at is still a problem that's far from solved, new research shows. "Over-reliance on color, texture, and background cues" can fool computer-vision algorithms into thinking, for instance, that a frog is a squirrel: https://arxiv.org/pdf/1907.07174.pdf https://t.co/lpp7sx32fu
0 replies, 1 likes