
Natural Adversarial Examples


Dan Hendrycks: Natural Adversarial Examples are real-world and unmodified examples which cause classifiers to be consistently confused. The new dataset has 7,500 images, which we personally labeled over several months. Paper: Dataset and code:

11 replies, 564 likes

Janelle Shane: 7,500 pictures that image recognition algorithms tend to get wrong. My kind of dataset.

7 replies, 274 likes

Gary Marcus: anyone who thinks vision is anywhere near solved should read this important paper. #deeplearning and AI more generally have a long way to go.

4 replies, 210 likes

Oh Hey, Gravis Is Here!: you know, something about this technology: it will never "get better." there are literally infinite possible scenes, and because these things don't actually "learn" as we know it, they'll always have gigantic, unpredictable false positive gaps

7 replies, 108 likes

Leon Derczynski: Huge if true - this work indicates that BERT exploits artifacts that distort / inflate behaviors around it, and when cleaned up, the results are markedly less impressive

7 replies, 36 likes

Max Little: Wonderful dataset which highlights the pervasive and diverse nature of the failures of current deep nets in image classification, such as confounding on texture, background and context. Unlike with artificial adversarial examples, robust training does not help. @GaryMarcus

0 replies, 25 likes

Chris Cundy: Wild new paper collects a set of naturally occurring adversarial examples for ImageNet classifiers: DenseNet gets only 2% accuracy! #RobustML

0 replies, 4 likes

Peter J. Kootsookos 🇦🇺🇮🇪🇺🇸: Entertaining.

0 replies, 2 likes

MartinC: Read this and then consider autonomous vehicles

0 replies, 2 likes

Machine Learning: Natural Adversarial Examples.

0 replies, 1 like

Tony Peng: From the paper Natural Adversarial Examples: "On IMAGENET-A a DenseNet-121 obtains around 2% accuracy, an accuracy drop of approximately 90%." OMG!

1 reply, 1 like
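The figure Tony Peng quotes comes from running a standard pretrained classifier over the ImageNet-A images. A minimal sketch of that kind of evaluation is below; the data directory, the batch size, and the `wnid_to_1k` mapping argument are illustrative assumptions, not the paper's own evaluation code, and building the wnid-to-ImageNet-1k-index mapping is left out.

```python
def top1_accuracy(preds, labels):
    """Fraction of examples whose predicted class matches the label."""
    correct = sum(1 for p, y in zip(preds, labels) if p == y)
    return correct / len(labels)

def evaluate_imagenet_a(data_dir="./imagenet-a", wnid_to_1k=None):
    """Run a pretrained DenseNet-121 over an unpacked ImageNet-A folder.

    `wnid_to_1k` must map each class-folder wnid (e.g. "n01440764") to its
    ImageNet-1k class index: ImageNet-A's 200 classes are a subset of the
    1,000 ImageNet classes, so ImageFolder's 0..199 labels cannot be
    compared to the model's 1,000-way outputs directly.
    """
    # torch/torchvision are imported lazily so the helper above stays
    # dependency-free.
    import torch
    from torchvision import datasets, models, transforms

    tf = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])
    dataset = datasets.ImageFolder(data_dir, transform=tf)
    loader = torch.utils.data.DataLoader(dataset, batch_size=64)

    model = models.densenet121(weights="IMAGENET1K_V1").eval()
    preds, labels = [], []
    with torch.no_grad():
        for images, ys in loader:
            preds.extend(model(images).argmax(dim=1).tolist())
            # Translate folder labels into ImageNet-1k indices via the
            # caller-supplied wnid mapping before comparing.
            idx_to_wnid = {i: w for w, i in dataset.class_to_idx.items()}
            labels.extend(wnid_to_1k[idx_to_wnid[y]] for y in ys.tolist())
    return top1_accuracy(preds, labels)
```

On the standard ImageNet validation set the same model scores well over 70% top-1, which is what makes the ~2% ImageNet-A number in the thread so striking.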

arXiv CS-CV: Natural Adversarial Examples

0 replies, 1 like

Drew Harwell: AI that knows what it's looking at is still a problem that's far from solved, new research shows. "Over-reliance on color, texture, and background cues" can fool computer-vision algorithms into thinking, for instance, that a frog is a squirrel:

0 replies, 1 like


Found on Jul 17 2019 at
