Natural Adversarial Examples

Comments

Dan Hendrycks: Natural Adversarial Examples are real-world and unmodified examples which cause classifiers to be consistently confused. The new dataset has 7,500 images, which we personally labeled over several months. Paper: https://arxiv.org/abs/1907.07174 Dataset and code: https://github.com/hendrycks/natural-adv-examples https://t.co/pd75CyK54T

11 replies, 564 likes


Janelle Shane: 7,500 pictures that image recognition algorithms tend to get wrong. My kind of dataset.

7 replies, 274 likes


Gary Marcus: anyone who thinks vision is anywhere near solved should read this important paper. #deeplearning and AI more generally have a long way to go.

4 replies, 210 likes


Oh Hey, Gravis Is Here!: you know, something about this technology: it will never "get better." there are literally infinite possible scenes, and because these things don't actually "learn" as we know it, they'll always have gigantic, unpredictable false positive gaps

7 replies, 108 likes


Max Little: Wonderful dataset which highlights the pervasive and diverse nature of the failures of current deep nets in image classification, such as confounding on texture, background and context. Unlike with artificial adversarial examples, robust training does not help. @GaryMarcus

0 replies, 25 likes


Chris Cundy: Wild new paper collects a set of naturally occurring adversarial examples for imagenet classifiers: densenet gets only 2% accuracy! https://arxiv.org/abs/1907.07174 #RobustML https://t.co/S3f7FpYa2M

0 replies, 4 likes


MartinC: Read this and then consider autonomous vehicles

0 replies, 2 likes


Peter J. Kootsookos 🇦🇺🇮🇪🇺🇸: Entertaining. https://twitter.com/danhendrycks/status/1151515352121004032?s=12

0 replies, 2 likes


arXiv CS-CV: Natural Adversarial Examples http://arxiv.org/abs/1907.07174

0 replies, 1 like


Drew Harwell: AI that knows what it's looking at is still a problem that's far from solved, new research shows. "Over-reliance on color, texture, and background cues" can fool computer-vision algorithms into thinking, for instance, that a frog is a squirrel: https://arxiv.org/pdf/1907.07174.pdf https://t.co/lpp7sx32fu

0 replies, 1 like


Machine Learning: Natural Adversarial Examples. http://arxiv.org/abs/1907.07174

0 replies, 1 like


Tony Peng: From the paper Natural Adversarial Examples: "On IMAGENET-A, a DenseNet-121 obtains around 2% accuracy, an accuracy drop of approximately 90%." OMG! https://arxiv.org/pdf/1907.07174.pdf

1 reply, 1 like
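The numbers quoted above (roughly 2% accuracy on ImageNet-A versus a ~90% drop from clean ImageNet accuracy) come from a standard top-1 evaluation. As a minimal sketch of that metric, not the paper's actual evaluation code, here is the top-1 accuracy and accuracy-drop computation on a toy batch (the class indices below are made up for illustration):

```python
def top1_accuracy(predictions, labels):
    """Fraction of examples whose predicted class index matches the label."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

# Hypothetical model outputs and ground-truth classes for a toy batch.
preds  = [207, 12, 980, 33, 5]
labels = [207, 450, 76, 33, 99]
print(top1_accuracy(preds, labels))  # 0.4 on this toy batch

# The "accuracy drop" the thread quotes is just the difference between
# clean-set and adversarial-set accuracy, e.g. ~92% clean vs ~2% on
# ImageNet-A gives a drop of ~90 percentage points.
clean_acc, adv_acc = 0.92, 0.02
print(clean_acc - adv_acc)  # 0.9
```

In practice one would run a pretrained classifier over the downloaded ImageNet-A images and feed its argmax predictions into a function like this; the repo linked in the first comment provides the authors' own evaluation scripts.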


Content

Found on Jul 17 2019 at https://arxiv.org/pdf/1907.07174.pdf

PDF content of a computer science paper: Natural Adversarial Examples