
A Metric Learning Reality Check


Eric Jang 🇺🇸🇹🇼: Every once in a while a paper comes out that makes you breathe a sigh of relief that you don't publish in that field... "Our results show that when hyperparameters are properly tuned via cross-validation, most methods perform similarly to one another"

32 replies, 1851 likes
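The protocol Jang is quoting — tune hyperparameters on cross-validation folds, touch the test set only once at the end — can be sketched on a toy problem. This is a hypothetical ridge-regression stand-in for illustration, not the paper's actual metric-learning setup; the grid of `lam` values and the 4-fold split are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data; the point is the tuning protocol, not the model.
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=100)

X_trainval, y_trainval = X[:80], y[:80]   # used for fitting and tuning
X_test, y_test = X[80:], y[80:]           # held out; evaluated exactly once

def fit_ridge(X, y, lam):
    """Closed-form ridge regression weights."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def mse(w, X, y):
    return float(np.mean((X @ w - y) ** 2))

def cv_score(lam, k=4):
    """Mean validation error of `lam` across k folds of the train/val split."""
    folds = np.array_split(np.arange(len(X_trainval)), k)
    errs = []
    for i in range(k):
        val = folds[i]
        trn = np.concatenate([folds[j] for j in range(k) if j != i])
        w = fit_ridge(X_trainval[trn], y_trainval[trn], lam)
        errs.append(mse(w, X_trainval[val], y_trainval[val]))
    return float(np.mean(errs))

# Pick the hyperparameter by CV score, refit on all train/val data,
# and only then report test error.
best_lam = min([0.01, 0.1, 1.0, 10.0], key=cv_score)
w = fit_ridge(X_trainval, y_trainval, best_lam)
print(best_lam, mse(w, X_test, y_test))
```

The paper's complaint is precisely about collapsing the last two steps: selecting hyperparameters by repeatedly peeking at the test metric inflates the reported gains.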

Zachary Lipton: I suspect most of us doing deep learning are in "that field". Qualitatively similar to language-modeling gains attributable to hyperparameter tuning (Melis et al.) and other examples we discuss in "Troubling Trends in ML Scholarship".

3 replies, 292 likes

(((ل()(ل() 'yoav)))): (a) important work; (b) is anyone really surprised? i'd imagine this general trend will hold for any task and/or metric people are hill climbing on with DL.

9 replies, 135 likes

Denny Britz: A Metric Learning Reality Check: “[…] state of the art loss functions perform marginally better than, and sometimes on par with, classic methods” 🔥

2 replies, 68 likes

Oded Rechavi 🦉: This must be a metaphor, but for what?

23 replies, 63 likes

Matthew Hutson: “Eye-Catching Advances in Some AI Fields Are Not Real”: my piece in this week’s @ScienceMagazine. Thanks @davisblalock, @zicokolter, @LightningSource, @_leslierice, @RICEric22, @andrew_ilyas, @logan_engstrom, John Guttag, Jose Javier Gonzalez Ortiz.

0 replies, 40 likes

Edward Grefenstette: Love this. Similar result (different domain) to Melis et al 2017: That paper was pearls before swine for the EMNLP reviewers that read it. Also echoes stuff that @Smerity has observed, I believe.

0 replies, 37 likes

Thomas Wolf: @dennybritz So many. Here are a few: CV: Standard splits: Metric Learning: IR: Pruning: Summarization: Lipton et al: & RL...

0 replies, 34 likes

X.A. staying 🏡 + 😷 saves lives: "Deep metric learning papers from the past four years have consistently claimed great advances in accuracy (...) We find flaws (...), and (...) show that the improvements over time have been marginal at best." 😳 - [PAPER] by folks at Cornell and @facebookai

0 replies, 14 likes

Daniël Lakens: There is an important lesson here about reward structures and knowledge generation in science.

0 replies, 13 likes

arxiv: A Metric Learning Reality Check.

0 replies, 12 likes

Andrey Kurenkov 🤖: "We find flaws in the experimental setup of these papers, and propose a new way to evaluate metric learning algorithms. ... improvements over time have been marginal at best" As a largely empirical field, AI needs to get better at empirical research. And to do that, slow down!

0 replies, 12 likes

Karl Higley: If you’re learning embeddings for recommendation candidate selection via approximate nearest neighbor search, turns out cutting edge loss functions from the past few years may not be much better than triplet loss (from 2006).

1 replies, 10 likes
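The triplet loss Higley is referring to can be written in a few lines: pull an anchor embedding toward a positive example and push it away from a negative one until the two distances differ by at least a margin. A minimal numpy sketch with toy 2-D embeddings (the margin value 0.2 is an illustrative choice, not from the paper):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Classic triplet margin loss: hinge on the gap between the
    anchor-positive distance and the anchor-negative distance."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(d_pos - d_neg + margin, 0.0)

# Toy embeddings: the positive already sits much closer to the
# anchor than the negative, so the margin is satisfied.
a = np.array([0.0, 0.0])
p = np.array([0.1, 0.0])
n = np.array([1.0, 0.0])
print(triplet_loss(a, p, n))  # 0.0 — no gradient signal for this triplet
```

The paper's finding is that the many more elaborate losses proposed since behave much like this baseline once everything else is held fixed.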

Dagmar Monett: "We find that [20 #ML #AI] papers have drastically overstated improvements ... If a paper attempts to explain the performance gains of its proposed method, & it turns out that [those gains] are non-existent, then their explanation must be invalid as well."

1 replies, 9 likes

Arthur Douillard: A reality check on #DeepLearning Metric Learning: The authors show that the progress of the last decade is mainly due to: 1. Hyperparameter tuning on the test set. 2. Better architectures (GoogLeNet -> BN-Inception) I'm astonished!

1 replies, 8 likes

Serge Belongie: Metric Learning Reality Check paper @LightningSource @CornellCIS @sernamlim @facebookai in the news

0 replies, 7 likes

Jason Baldridge: Calls @DaniYogatama, @ikekong and @nlpnoah (2015) to mind.

1 replies, 7 likes

Mark Nelson: @dennybritz A few:

0 replies, 5 likes

Hamid EBZD: "A Metric Learning Reality Check"

0 replies, 4 likes

Sotirios (Sotos) Tsaftaris: When experiments are done on equal footing and each method has its hyperparameters properly tuned, there is no progress in performance in metric learning. A valuable lesson for everyone working in ML.

0 replies, 4 likes

Dmytro Mishkin: @ogrisel

0 replies, 3 likes

Daniel Lowd: “If a paper attempts to explain the performance gains of its proposed method, and it turns out that those performance gains are non-existent, then their explanation must be invalid as well.”

0 replies, 3 likes

ML and Data Projects To Know: 📙 A Metric Learning Reality Check Authors: @LightningSource, @SergeBelongie, @sernamlim Paper:

0 replies, 3 likes

Robert (Munro) Monarch: Wow! When you properly account for tuning and data treatment, there’s been no gain for common computer vision tasks for 15 years. From @LightningSource @SergeBelongie & Ser-Nam Lim, who are presumably under police protection from the ML community

0 replies, 2 likes

Magda Paschali: A much needed reality check! Intriguing findings and lots of valuable tips for experiment standardization and fairness from @LightningSource et al.

0 replies, 2 likes

MONTREAL.AI: A Metric Learning Reality Check Musgrave et al.:… "Our results show that when hyperparameters are properly tuned via cross-validation, most methods perform similarly to one another" #ArtificialIntelligence #DeepLearning

0 replies, 1 likes

Sad but not surprising. I suppose similar results can be found for many other tasks

0 replies, 1 likes

Akshaj Verma: Press F to pay respects #DeepLearning #MachineLearning

0 replies, 1 likes

Andrew Beam: @tammy_jiang I would say most machine learning papers use these in some form *except* preregistration, which can cause... problems:

1 replies, 1 likes



عمر فرؤق: Leaderboard Driven Development.

1 replies, 0 likes


Found on May 07 2020 at
