
Are We Really Making Much Progress? A Worrying Analysis of Recent Neural Recommendation Approaches

Comments

Xavier 🎗️: Systematic analysis of many recent neural #recsys shows lack of reproducibility and poor performance even when compared to simple heuristics 😨 https://arxiv.org/abs/1907.06902

4 replies, 107 likes


Claudia Hauff: 18 neural recommender algs attempted to be reproduced. 7 were reproducible with "reasonable effort" and among those 6 often fell short of simple heuristic methods via @dawen_liang https://arxiv.org/abs/1907.06902 #recsys19

3 replies, 94 likes


halvarflake: https://arxiv.org/abs/1907.06902 "Are We Really Making Much Progress? A Worrying Analysis of Recent Neural Recommendation Approaches" - "... considered 18 algorithms ... top-level research conferences in the last years. Only 7 of them could be reproduced with reasonable effort."

3 replies, 93 likes


Arman Rahmim: "Are We Really Making Much Progress?" Out of 18 #DeepLearning algorithms from top-level conferences, only 7 could be reproduced with reasonable effort. Out of these, 6 could be outperformed with comparably simple heuristic methods! https://arxiv.org/pdf/1907.06902.pdf #AI

7 replies, 73 likes


Dawen Liang: I feel quite proud that our VAE paper is the only one that got reproduced and was not outperformed by simple baselines. https://arxiv.org/abs/1907.06902 #recsys #reproduciblescience

2 replies, 66 likes


Maurizio Ferrari Dacrema: Our analysis of 18 neural #recsys shows only 7 are reproducible and 6 can be outperformed by simple baselines "Are We Really Making Much Progress? A Worrying Analysis of Recent Neural Recommendation Approaches" https://arxiv.org/abs/1907.06902 #sigir19 #TheWebConf #kdd19 #DeepLearning

1 reply, 43 likes
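The "simple heuristic methods" referenced throughout these threads include non-personalized baselines such as TopPopular, which the paper uses as a reference point. A minimal sketch of that idea (the function name and toy data are illustrative, not the paper's code):

```python
from collections import Counter

def top_popular(interactions, seen, k=10):
    """Recommend the k globally most-interacted items the user hasn't seen yet."""
    counts = Counter(item for _, item in interactions)
    ranked = [item for item, _ in counts.most_common()]
    return [item for item in ranked if item not in seen][:k]

# Toy interaction log: (user, item) pairs.
log = [(1, "a"), (2, "a"), (3, "a"), (1, "b"), (2, "b"), (3, "c")]
print(top_popular(log, seen={"a"}, k=2))  # → ['b', 'c']
```

Despite having no learned parameters at all, baselines of roughly this complexity (plus tuned nearest-neighbor methods) were what 6 of the 7 reproducible neural models often failed to beat.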


DataScienceNigeria: 2 weeks ago was Recommendation System Conference #recsys2019. The event’s BEST PAPER raises concerns on some recent neural recommendation approaches with respect to reproducibility & instances where simple algorithms can outperform the “super” ones. Read: https://arxiv.org/pdf/1907.06902.pdf https://t.co/glvlqkALrw

2 replies, 32 likes


Jed Brown: The null hypothesis for reviewers of computational studies should be that results are not reproducible and not competitive with simpler methods. Refutation involves open code/data (yay!) and would significantly raise the signal-to-noise ratio of pubs.

2 replies, 29 likes


bayo adekanmbi: Simplicity vs Sophistication. Good job! Fact is some conceptually simpler algorithms do outperform mega DL ones. However, this research must pass the test of representativeness by exploring more traditional algorithms as baselines + more sources to clarify what’s real or phantom

0 replies, 27 likes


Kieran Campbell: Are We Really Making Much Progress? A Worrying Analysis of Recent Neural Recommendation Approaches: Results of only 7/18 algorithms could be reproduced, of which 6 were outperformed by "simple heuristic methods" https://arxiv.org/abs/1907.06902

0 replies, 20 likes


Rachael Tatman: Relevant to this blog post I wrote a while ago: https://towardsdatascience.com/beating-state-of-the-art-by-tuning-baselines-74ec6ad2cd59 Unfortunately it looks like even more evidence that common methods of ML algorithm evaluation have some pretty big flaws. 😢

1 reply, 20 likes


Claudia Pagliari: Trending in critical #AI Out of 18 algorithms presented at top conferences only 7 could be reproduced, of which 6 could be outperformed using simpler methods https://arxiv.org/abs/1907.06902 With so much capital & kudos at stake no wonder companies are faking it #DeepLearning #WizardOfOz https://t.co/xnhYBiwN9N

1 reply, 15 likes


Maurizio Ferrari Dacrema: Have a look at our #recsys19 analysis of 18 DL #recsys algorithms. Only 7 could be reproduced and 6 of them outperformed by simple heuristic methods "Are We Really Making Much Progress? A Worrying Analysis of Recent Neural Recommendation Approaches" https://arxiv.org/abs/1907.06902

1 reply, 12 likes


Bindu Reddy 🔥❤️: Only 7 of the 18 recommendation papers were reproducible, and of those, only one beat simple baselines. This encapsulates the core problem with machine learning today - https://arxiv.org/abs/1907.06902

0 replies, 9 likes


Dmitri Sotnikov ⚛: Of 18 algorithms presented at top-level conferences only 7 could be reproduced with reasonable effort. 6 can often be outperformed with comparably simple heuristics. The last one did not consistently outperform a well-tuned non-neural linear ranking method. https://arxiv.org/abs/1907.06902

0 replies, 9 likes


Hacker News: A Worrying Analysis of Recent Neural Recommendation Approaches https://arxiv.org/abs/1907.06902

1 reply, 8 likes


Daniel Roy: An interesting counterexample outside computer vision to work arguing that adaptivity to held out data appears to be benign (in computer vision). https://arxiv.org/abs/1907.06902

0 replies, 7 likes


Nikolai Slavov: Are We Really Making Much Progress? A Worrying Analysis of Recent Neural Recommendation Approaches. Dismal report on the reproducibility and performance of deep learning https://arxiv.org/abs/1907.06902

2 replies, 6 likes


Alexander Kruel: Are We Really Making Much Progress? A Worrying Analysis of Recent Neural Recommendation Approaches https://arxiv.org/abs/1907.06902 Just one out of 18 algorithms "clearly outperformed the baselines but did not consistently outperform a well-tuned non-neural linear ranking method."

1 reply, 5 likes


Greg Linden: "18 [Neural recommender] algorithms ... at top-level research conferences in the last years. Only 7 of them could be reproduced ... [only] one clearly outperformed the baselines but did not consistently outperform a well-tuned non-neural ... method" https://arxiv.org/abs/1907.06902

1 reply, 5 likes


Tomáš Kafka: A winning and worrying paper of #recsys2019: - most papers from top conferences couldn't be reproduced - most of the remaining ones could often be outperformed with comparably simple heuristic methods Basically, a proof of Sturgeon's law for ML 🤷‍♂️ https://arxiv.org/abs/1907.06902

0 replies, 4 likes


Yizhar (Izzy) Toren: "Therefore, progress is often claimed by comparing a complex neural model against another neural model, which is, however, not necessarily a strong baseline." https://arxiv.org/abs/1907.06902

0 replies, 4 likes


Michael Maclean: Read the abstract of https://arxiv.org/abs/1907.06902 and then read https://apenwarr.ca/log/20190201

0 replies, 4 likes


Rahel Jhirad: Best paper #recsys2019 Are We Really Making Much Progress? A Worrying Analysis of Recent Neural Recommendation Approaches @Maurizio_fd @crmpla67 @dietmarjannach https://arxiv.org/pdf/1907.06902.pdf https://t.co/gFvJctn5In

0 replies, 3 likes


Moshe Dolejsi: https://arxiv.org/abs/1907.06902 'We considered 18 algorithms that were presented at top-level research conferences in the last years... 7 of them could be reproduced with reasonable effort.... turned out that 6 of them can often be outperformed with comparably simple heuristic methods'

1 reply, 3 likes


Machine Learning: Are We Really Making Much Progress? A Worrying Analysis of Recent Neural Recommendation Approaches. http://arxiv.org/abs/1907.06902

0 replies, 3 likes


Dylan Bourgeois: #RecSys2019 Best Paper Award goes to « Are We Really Making Much Progress? A Worrying Analysis of Recent Neural Recommendation Approaches » by Dacrema et al. https://arxiv.org/abs/1907.06902

0 replies, 2 likes


Theophano Mitsa: #MachineLearning Are We Really Making Much Progress? A Worrying Analysis of Recent Neural Recommendation Approaches https://arxiv.org/pdf/1907.06902.pdf

0 replies, 2 likes


에그: @neal_lathia The only competitive DL paper in the last 3 years is the [[Variational Autoencoder]] from Netflix https://arxiv.org/abs/1907.06902

0 replies, 2 likes


Alex Davis: https://arxiv.org/abs/1907.06902

0 replies, 2 likes


sileye ba: Interesting read about results reproducibility, a well-known recurrent problem https://arxiv.org/abs/1907.06902

1 reply, 2 likes


We can do better!: @skdh The problem goes beyond reproducibility. https://arxiv.org/abs/1907.06902 and some of the papers it cites.

0 replies, 2 likes


Flávio Clésio: Well deserved. Probably one of the best papers in the field in this year. #recsys2019 Direct link: https://arxiv.org/abs/1907.06902 https://t.co/f3SPkZMrAu

0 replies, 2 likes


Kristy Brock: So important to keep in mind!

0 replies, 2 likes


Emanuel: Best full paper at #recsys2019 already announced! Looking forward to Tuesday and "Are We Really Making Much Progress? A Worrying Analysis of Recent Neural Recommendation Approaches" --> https://arxiv.org/abs/1907.06902 https://t.co/IBQ93P0EIX

0 replies, 2 likes


Yves Raimond: % of cases where a neural recommendation approach was competitive with a baseline approach - https://arxiv.org/abs/1907.06902 https://t.co/dXae9oXkvg

0 replies, 2 likes


Vivek Das: Indeed this is going to haunt us in future. If we validate our results in wet-lab experiments, we also need to cross-validate methods' precision via benchmarking, and test scalability & reproducibility. Such practice is scarce in #Genomics. #DataScience & #medicine need benchmarking to progress

1 reply, 1 like


Hacker News 150: A Worrying Analysis of Recent Neural Recommendation Approaches https://arxiv.org/abs/1907.06902 (http://bit.ly/2SB4Lkv)

0 replies, 1 like


Fabrizio Montesi: "Are We Really Making Much Progress? A Worrying Analysis of Recent Neural Recommendation Approaches" https://arxiv.org/abs/1907.06902

0 replies, 1 like


Surya Kallumadi: "Are We Really Making Much Progress? A Worrying Analysis of Recent Neural Recommendation Approaches" 7/18 of the papers could be reproduced; 6 of those were outperformed by simple heuristics. In line with some of the findings by @lintool in IR. https://arxiv.org/pdf/1907.06902.pdf by @dietmarjannach

0 replies, 1 like


Quercia: And i really like the paper | Are We Really Making Much Progress? A Worrying Analysis of Recent Neural Recommendation Approaches https://arxiv.org/pdf/1907.06902.pdf

0 replies, 1 like


Matthew Chalmers: “Only 7 of them could be reproduced with reasonable effort. [...] 6 of them can often be outperformed with comparably simple heuristic methods” Are We Really Making Much Progress? A Worrying Analysis of Recent Neural Recommendation Approaches https://arxiv.org/pdf/1907.06902.pdf

1 reply, 0 likes


Content

Found on Jul 17 2019 at https://arxiv.org/pdf/1907.06902.pdf

PDF content of a computer science paper: Are We Really Making Much Progress? A Worrying Analysis of Recent Neural Recommendation Approaches