
Natural Adversarial Examples


Jul 17 2019 Dan Hendrycks

Natural adversarial examples are real-world, unmodified examples that consistently confuse classifiers. The new dataset has 7,500 images, which we personally labeled over several months. Paper: Dataset and code:
11 replies, 564 likes
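The dataset described above was curated by keeping only images that a fixed reference classifier gets wrong. A minimal sketch of that filtering idea, using a toy stand-in classifier and hypothetical example records (not the paper's actual pipeline or data):

```python
# Sketch of classifier-filtered curation: keep only examples the reference
# classifier misclassifies. All names and data here are illustrative.

def select_natural_adversarials(examples, classify):
    """Return the examples whose true label differs from the classifier's prediction."""
    return [ex for ex in examples if classify(ex["image"]) != ex["label"]]

# Toy stand-in classifier that always predicts "squirrel".
toy_classifier = lambda image: "squirrel"

examples = [
    {"image": "img0", "label": "squirrel"},  # correctly classified -> dropped
    {"image": "img1", "label": "frog"},      # misclassified -> kept
]
kept = select_natural_adversarials(examples, toy_classifier)
```

In the paper the reference model is a fixed ImageNet classifier and the candidate pool is real downloaded imagery; the sketch only shows the selection rule itself.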

Jul 18 2019 Janelle Shane

7,500 pictures that image recognition algorithms tend to get wrong. My kind of dataset.
7 replies, 274 likes

Jul 18 2019 Gary Marcus

anyone who thinks vision is anywhere near solved should read this important paper. #deeplearning and AI more generally have a long way to go.
4 replies, 210 likes

Jul 18 2019 Oh Hey, Gravis Is Here!

you know, something about this technology: it will never "get better." there are literally infinite possible scenes, and because these things don't actually "learn" as we know it, they'll always have gigantic, unpredictable false positive gaps
7 replies, 108 likes

Jul 19 2019 Leon Derczynski

Huge if true - this work indicates that BERT exploits artifacts that distort / inflate behaviors around it, and when cleaned up, the results are markedly less impressive
7 replies, 36 likes

Jul 17 2019 Max Little

Wonderful dataset which highlights the pervasive and diverse nature of the failures of current deep nets in image classification, such as confounding on texture, background and context. Unlike with artificial adversarial examples, robust training does not help. @GaryMarcus
0 replies, 25 likes

Jul 17 2019 Chris Cundy

Wild new paper collects a set of naturally occurring adversarial examples for ImageNet classifiers: DenseNet gets only 2% accuracy! #RobustML
0 replies, 4 likes

Jul 20 2019 MartinC

Read this and then consider autonomous vehicles
0 replies, 2 likes

Jul 18 2019 Peter J. Kootsookos 🇦🇺🇮🇪🇺🇸

0 replies, 2 likes

Jan 09 2020 arXiv CS-CV

Natural Adversarial Examples
0 replies, 1 likes

Jul 17 2019 Machine Learning

Natural Adversarial Examples.
0 replies, 1 likes

Jul 18 2019 Drew Harwell

AI that knows what it's looking at is still a problem that's far from solved, new research shows. "Over-reliance on color, texture, and background cues" can fool computer-vision algorithms into thinking, for instance, that a frog is a squirrel:
0 replies, 1 likes

Jul 17 2019 Tony Peng

From the paper Natural Adversarial Examples: "On ImageNet-A, a DenseNet-121 obtains around 2% accuracy, an accuracy drop of approximately 90%." OMG!
1 replies, 1 likes
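The figure quoted above is plain top-1 accuracy, and the "approximately 90%" is an absolute drop in that metric. A minimal sketch of the computation, with toy numbers standing in for real evaluation results:

```python
def top1_accuracy(predictions, labels):
    """Fraction of predictions that exactly match their labels."""
    assert len(predictions) == len(labels)
    return sum(p == l for p, l in zip(predictions, labels)) / len(labels)

# Toy illustration of the arithmetic: a model near ~92% top-1 on clean
# ImageNet that falls to ~2% on ImageNet-A has dropped ~90 points.
clean_acc = 0.92       # assumed clean-ImageNet accuracy, for illustration only
imagenet_a_acc = 0.02  # the ~2% ImageNet-A figure quoted in the tweet
drop = clean_acc - imagenet_a_acc
```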