
Real numbers, data science and chaos: How to fit any dataset with a single parameter

Comments

Marcos López de Prado: A common misconception is that the risk of overfitting increases with the number of parameters in the model. In reality, a single parameter suffices to fit most datasets: https://arxiv.org/abs/1904.12320 Implementation available at: https://github.com/Ranlot/single-parameter-fit/

32 replies, 1473 likes
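
For readers who want to see the trick concretely: below is a minimal sketch of the construction, assuming the decoder form f_α(x) = sin²(2^{xτ} arcsin √α) given in the paper. The `encode`/`decode` helpers and their names are my own illustration, not the API of the linked repo.

```python
# Minimal sketch of the single-parameter fit (illustrative, not the repo's API).
# Idea: pack tau bits per sample into the binary expansion of an angle, hide the
# angle inside alpha = sin^2(theta); decoding doubles the angle to expose
# successive tau-bit blocks.
from mpmath import mp, mpf, asin, sin, sqrt, pi, floor

def encode(ys, tau=16):
    """Pack samples ys (each in [0, 1]) into a single real alpha."""
    mp.prec = tau * len(ys) + 64            # alpha needs ~tau bits per sample
    blocks = 0
    for y in ys:
        w = asin(sqrt(mpf(y))) / pi         # angle fraction in [0, 1/2]
        blocks = (blocks << tau) | int(floor(w * 2**tau))  # keep tau bits of w
    z = mpf(blocks) / 2**(tau * len(ys))    # z = 0.B_0 B_1 ... B_{n-1} in binary
    return sin(pi * z)**2                   # one number holding the whole dataset

def decode(alpha, k, tau=16):
    """f_alpha(k): multiplying by 2^(k*tau) shifts block B_k to the front."""
    return float(sin(2**(tau * k) * asin(sqrt(alpha)))**2)

ys = [0.25, 0.97, 0.40, 0.61]
alpha = encode(ys)
print([round(decode(alpha, k), 3) for k in range(len(ys))])  # ~[0.25, 0.97, 0.4, 0.61]
```

Each decoded value is accurate to about π·2^(-τ), so the "fit" is exact only in the limit of infinitely many digits in α.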


Alfredo Canziani: Overfitting a model with one parameter. Impressive paper, implications, and cute drawings. That's why autoencoders require an information bottleneck to learn any good representation. Check this work out!

0 replies, 291 likes


Danilo J. Rezende: These results illustrate why we should care about the "description length" of models, not how many parameters they have.

4 replies, 225 likes


David Pfau: Whenever anyone asks why VC dimension is important...

3 replies, 116 likes


Ash Jogalekar: Here's a funny paper which fits almost any function - including von Neumann's elephants (http://wavefunction.fieldofscience.com/2015/02/derek-lowe-to-world-beware-of-von.html) - with a single adjustable parameter. https://arxiv.org/pdf/1904.12320.pdf

8 replies, 55 likes


ML Limericks:
Elegant dactylic trimeter
Using a single parameter?!
You might say “no way”
But look at it this way:
Overfitting’s just got prettier

0 replies, 33 likes


Mukund Thattai: Can a single-parameter model be used to fit most data? Yes! This was Alan Turing's momentous discovery. The model (the thing that uses the parameter to generate an output) is known as a "universal computer". The parameter itself is just a binary number, known as the "program".

1 reply, 19 likes


Jeremy Sheff: Missing the Data for the Algorithm: Neat new paper by @spiantado, with implications for regulation of machine learning: heads up @BrettFrischmann, @FrankPasquale, @Klonick. http://jeremysheff.com/2018/05/30/missing-the-data-for-the-algorithm/

0 replies, 15 likes


steven t. piantadosi: Super duper cool -- Laurent Boué unpacked my "one parameter" paper in a friendly way for data scientists and applied it to a bunch of fun cases: https://arxiv.org/pdf/1904.12320.pdf

1 reply, 13 likes


Teemu Roos: The best paper I have seen this year. Absolutely brilliant. 🎯

0 replies, 11 likes


Brandon Rohrer@ODSC: “Entertainment for curious data scientists” Intuition that a model’s flexibility is driven by its parameter count can come from working with linear and polynomial models. But not all parameters are created equal.

1 reply, 11 likes


Stuart Reid: It's not a misconception, it's just nuanced. It's simultaneously true for some models (polynomial regression) *and* seemingly false for other models (deep neural networks). You'll enjoy this work recently presented at the IndabaX: https://arxiv.org/abs/1710.03667

2 replies, 9 likes


Arkady Konovalov: an interesting read for those who fit models to the data: "Real numbers, data science and chaos: How to fit any dataset with a single parameter" https://arxiv.org/abs/1904.12320

0 replies, 8 likes


Paola Gori-Giorgi: This is interesting for #DFT #compchem : with 1 parameter (albeit high precision) you can fit any data set. What is then a nonempirical functional? https://arxiv.org/abs/1904.12320

3 replies, 8 likes


Irenes (many): WHAT

1 replies, 7 likes


André David: Very interesting perspective here.

0 replies, 6 likes


jrbanga: The perils of overfitting... "How much can the values of the weights of a well-trained neural network be attributed to brute-force memorization of the training data vs. meaningful learning and why do they provide any kind of generalization?" http://arxiv.org/abs/1904.12320

0 replies, 6 likes


Jin Choi: This blows my mind

0 replies, 5 likes


Astroboy: While the work is fascinating the claim that a "single parameter can fit any data" is totally false. The paper says "complete values of parameter α that may extend to tens of thousands of digits". The net information contained in that "one" parameter is unbounded!

2 replies, 5 likes
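
Astroboy's point is easy to quantify: storing n samples at τ bits of precision each requires α to carry about n·τ bits, i.e. roughly n·τ·log₁₀(2) decimal digits. A back-of-the-envelope check (my own illustration, assuming τ bits per sample):

```python
import math

def digits_needed(n_samples, tau):
    """Decimal digits of alpha needed to hold n_samples values
    at tau bits of precision each (n*tau bits in total)."""
    return math.ceil(n_samples * tau * math.log10(2))

print(digits_needed(100 * 100, 8))  # a 100x100 8-bit image -> 24083 digits
```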


tylerni7: I don't think this is particularly deep or meaningful, but it's definitely cute https://twitter.com/lopezdeprado/status/1123399675296481280

1 reply, 4 likes


Jeffrey West: Approximating *any* dataset of any modality (time-series, images, sound) by a scalar function with a single real-valued parameter: https://arxiv.org/abs/1904.12320 Love this line: "Targeting an audience of data scientists with a taste for the curious and unusual,..."

0 replies, 4 likes


Lobsters: Real numbers, data science and chaos: How to fit any dataset with a single parameter https://lobste.rs/s/mf2itd #pdf #ai https://arxiv.org/abs/1904.12320

0 replies, 4 likes


Sopcaja: @sarahookr @SeanHooker @underrated_ml I really like this very different and quite unusual ML paper: https://arxiv.org/abs/1904.12320 The author shows how you can (over)fit datasets with a single parameter, debunking the common misconception that the risk of overfitting increases with the number of parameters in the model.

1 reply, 3 likes


Sarah Vallelian: interesting arxiv find of the day 🐘 https://arxiv.org/abs/1904.12320

1 reply, 3 likes


the bioinformartist: Wait, what ? [1904.12320] Real numbers, data science and chaos: How to fit any dataset with a single parameter https://arxiv.org/abs/1904.12320

1 reply, 3 likes


Enzo Tagliazucchi: @pgroisma https://arxiv.org/abs/1904.12320

1 reply, 3 likes


Ben Nagy: WAT IS THIS SORCERY THE 🤬 FUNCTION IS DIFFERENTIABLE 🤯

0 replies, 3 likes


Duane Rich: Wow this paper is illuminating, hilarious and ingenious! You can represent, with arbitrary precision, any dataset with a single parameter! It shows counting your model’s parameters isn’t necessarily a good proxy for what you want - you know, that murky “degrees of freedom” idea

0 replies, 3 likes


Moideen Kalladi: what is this behavior

0 replies, 2 likes


R. Ferrer-i-Cancho🎗: It is worth remembering that overfitting has two sources: the complexity of the model per se and that of its parameters.

1 reply, 2 likes


J. Javier Gálvez: This is mind-blowing, at least for me. In this paper: "How much can the values of the weights of a well-trained neural network be attributed to brute-force memorization of the training data vs. meaningful learning and why do they provide any kind of generalization?"... why???

1 reply, 2 likes


Steinn Sigurdsson: this! @arxiv

0 replies, 1 like


Abolfazl Alipour: Nice illustration!

0 replies, 1 like


Fayolle Pierre-Alain: @KMMoerman @BrunoLevy01 "Real numbers, data science and chaos: How to fit any dataset with a single parameter": https://arxiv.org/abs/1904.12320 github code: https://github.com/Ranlot/single-parameter-fit/

0 replies, 1 like


Joan Serrà: Who needs a single parameter when you can train millions?

0 replies, 1 like


Uri Manor: Whoa

0 replies, 1 like


Adrian Jacobo: Take that, von Neumann! How to fit an elephant using a single parameter!

0 replies, 1 like

