
Real numbers, data science and chaos: How to fit any dataset with a single parameter

Comments

May 01 2019 Marcos López de Prado

A common misconception is that the risk of overfitting increases with the number of parameters in the model. In reality, a single parameter suffices to fit most datasets: https://arxiv.org/abs/1904.12320 Implementation available at: https://github.com/Ranlot/single-parameter-fit/ https://t.co/2gGlrSuRGj
31 replies, 1476 likes
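
For readers who want to see the trick concretely, here is a minimal sketch of the construction (my own paraphrase of the paper's recipe, not the code from the linked repository; it uses mpmath because the decoder needs arbitrary-precision arithmetic):

# Minimal sketch of the single-parameter fit: pack N samples, each
# quantised to tau bits, into the binary digits of one real alpha, then
# recover them with f_alpha(k) = sin^2(2^(k*tau) * arcsin(sqrt(alpha))).
from mpmath import mp, mpf, sin, asin, sqrt, pi, floor

def fit(ys, tau=12):
    """Encode samples ys (each strictly inside [0, 1)) into one alpha."""
    mp.dps = tau * len(ys) // 3 + 30         # enough decimal digits for all bits
    bits = []
    for y in ys:
        u = asin(sqrt(mpf(y))) / pi          # u lies in [0, 1/2)
        bits.append(format(int(floor(u * 2**tau)), f"0{tau}b"))
    z = mpf(int("".join(bits), 2)) / mpf(2)**(tau * len(ys))
    return sin(pi * z)**2                    # alpha = sin^2(pi * z)

def model(alpha, k, tau=12):
    """Decode sample k from alpha via the one-parameter function."""
    return sin(mpf(2)**(k * tau) * asin(sqrt(alpha)))**2

ys = [0.10, 0.50, 0.90, 0.30]
alpha = fit(ys)
print([float(model(alpha, k)) for k in range(len(ys))])  # ~ys, to ~2**-tau

Each extra sample just appends tau more binary digits to alpha, which is exactly the point several replies below pick up on.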


May 01 2019 Alfredo Canziani

Overfitting a model with one parameter. Impressive paper, implications, and cute drawings. That's why autoencoders require an information bottleneck to learn any good representation. Check this work out!
0 replies, 286 likes


May 01 2019 Danilo J. Rezende

These results illustrate why we should care about the "description length" of models, not how many parameters they have.
4 replies, 225 likes


May 01 2019 David Pfau

Whenever anyone asks why VC dimension is important...
3 replies, 116 likes


May 02 2019 Ash Jogalekar

Here's a funny paper which fits almost any function - including von Neumann's elephants (http://wavefunction.fieldofscience.com/2015/02/derek-lowe-to-world-beware-of-von.html) - with a single adjustable parameter. https://arxiv.org/pdf/1904.12320.pdf https://t.co/BnH56EGX6e
8 replies, 55 likes


May 02 2019 ML Limericks

Elegant dactylic trimeter
Using a single parameter?!
  You might say “no way”
  But look at it this way:
Overfitting’s just got prettier
0 replies, 33 likes


May 02 2019 Mukund Thattai

Can a single-parameter model be used to fit most data? Yes! This was Alan Turing's momentous discovery. The model (the thing that uses the parameter to generate an output) is known as a "universal computer". The parameter itself is just a binary number, known as the "program".
1 replies, 19 likes


May 30 2018 Jeremy Sheff

Missing the Data for the Algorithm: Neat new paper by @spiantado, with implications for regulation of machine learning: heads up @BrettFrischmann, @FrankPasquale, @Klonick. http://jeremysheff.com/2018/05/30/missing-the-data-for-the-algorithm/ https://t.co/L3VKa5V2H2
0 replies, 15 likes


May 11 2019 steven t. piantadosi

Super duper cool -- Laurent Boué unpacked my "one parameter" paper in a friendly way for data scientists and applied it to a bunch of fun cases: https://arxiv.org/pdf/1904.12320.pdf https://t.co/zu7aLX8x66
1 replies, 13 likes


May 01 2019 Teemu Roos

The best paper I have seen this year. Absolutely brilliant. 🎯
0 replies, 11 likes


May 01 2019 Brandon Rohrer@ODSC

“Entertainment for curious data scientists” Intuition that a model’s flexibility is driven by its parameter count can come from working with linear and polynomial models. But not all parameters are created equal.
1 replies, 11 likes


May 01 2019 Stuart Reid

It's not a misconception, it's just nuanced. It's simultaneously true for some models (polynomial regression) *and* seemingly false for other models (deep neural networks). You'll enjoy this work recently presented at the IndabaX: https://arxiv.org/abs/1710.03667
2 replies, 9 likes


May 02 2019 Paola Gori-Giorgi

This is interesting for #DFT #compchem : with 1 parameter (albeit high precision) you can fit any data set. What is then a nonempirical functional? https://arxiv.org/abs/1904.12320
3 replies, 8 likes


May 02 2019 Arkady Konovalov

an interesting read for those who fit models to the data: "Real numbers, data science and chaos: How to fit any dataset with a single parameter" https://arxiv.org/abs/1904.12320
0 replies, 8 likes


May 02 2019 Irenes (many)

WHAT
1 replies, 7 likes


Aug 28 2019 André David

Very interesting perspective here.
0 replies, 6 likes


Jun 24 2019 jrbanga

The perils of overfitting... "How much can the values of the weights of a well-trained neural network be attributed to brute-force memorization of the training data vs. meaningful learning and why do they provide any kind of generalization?" http://arxiv.org/abs/1904.12320
0 replies, 6 likes


May 01 2019 Jin Choi

This blows my mind
0 replies, 5 likes


May 02 2019 Astroboy

While the work is fascinating the claim that a "single parameter can fit any data" is totally false. The paper says "complete values of parameter α that may extend to tens of thousands of digits". The net information contained in that "one" parameter is unbounded!
2 replies, 5 likes
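
To put a number on that observation, a back-of-the-envelope calculation (tau and N chosen here purely for illustration):

# tau bits of accuracy per sample, N samples: alpha must carry tau*N bits.
tau, N = 8, 10_000                            # 8-bit samples, 10k data points
bits = tau * N
print(bits, "bits =", round(bits * 0.30103), "decimal digits")
# -> 80000 bits = 24082 decimal digits packed into the 'one' parameter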


May 02 2019 tylerni7

I don't think this is particularly deep or meaningful, but it's definitely cute https://twitter.com/lopezdeprado/status/1123399675296481280
1 replies, 4 likes


May 03 2019 Lobsters

Real numbers, data science and chaos: How to fit any dataset with a single parameter https://lobste.rs/s/mf2itd #pdf #ai https://arxiv.org/abs/1904.12320
0 replies, 4 likes


May 07 2019 Jeffrey West

Approximating *any* dataset of any modality (time-series, images, sound) by a scalar function with a single real-valued parameter: https://arxiv.org/abs/1904.12320 Love this line: "Targeting an audience of data scientists with a taste for the curious and unusual,..." https://t.co/7BVg8s3k73
0 replies, 4 likes


May 01 2019 the bioinformartist

Wait, what? [1904.12320] Real numbers, data science and chaos: How to fit any dataset with a single parameter https://arxiv.org/abs/1904.12320
1 replies, 3 likes


May 02 2019 Duane Rich

Wow this paper is illuminating, hilarious and ingenious! You can represent, with arbitrary precision, any dataset with a single parameter! It shows counting your model's parameters isn't necessarily a good proxy for what you want - y'know, that murky “degrees of freedom” idea
0 replies, 3 likes


May 02 2019 Ben Nagy

WAT IS THIS SORCERY THE 🤬 FUNCTION IS DIFFERENTIABLE 🤯
0 replies, 3 likes
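
It is, and you can check it numerically. A quick sketch (my own, with a small tau chosen so plain floats suffice) comparing the chain-rule derivative in alpha against a central finite difference:

import math

def f(alpha, x, tau=4):
    # f_alpha(x) = sin^2(2^(x*tau) * arcsin(sqrt(alpha)))
    return math.sin(2**(x * tau) * math.asin(math.sqrt(alpha)))**2

def dfda(alpha, x, tau=4):
    # chain rule, using d/da arcsin(sqrt(a)) = 1 / (2*sqrt(a*(1-a)))
    theta = math.asin(math.sqrt(alpha))
    c = 2**(x * tau)
    return c * math.sin(2 * c * theta) / (2 * math.sqrt(alpha * (1 - alpha)))

a, x, h = 0.3, 2, 1e-8
print(dfda(a, x), (f(a + h, x) - f(a - h, x)) / (2 * h))  # the two agree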


May 17 2019 Sarah Vallelian

interesting arxiv find of the day 🐘 https://arxiv.org/abs/1904.12320
1 replies, 3 likes


May 01 2019 R. Ferrer-i-Cancho🎗

It is worth remembering that overfitting has two sources: the complexity of the model per se and its parameters.
1 replies, 2 likes


May 01 2019 J. Javier Gálvez

This is mind-blowing, at least for me. In this paper: "How much can the values of the weights of a well-trained neural network be attributed to brute-force memorization of the training data vs. meaningful learning and why do they provide any kind of generalization?"... why???
1 replies, 2 likes


May 01 2019 Moideen Kalladi

what is this behavior
0 replies, 2 likes


May 31 2019 Fayolle Pierre-Alain

@KMMoerman @BrunoLevy01 "Real numbers, data science and chaos: How to fit any dataset with a single parameter": https://arxiv.org/abs/1904.12320 github code: https://github.com/Ranlot/single-parameter-fit/
0 replies, 1 likes


May 01 2019 Adrian Jacobo

Take that, von Neumann! How to fit an elephant using a single parameter!
0 replies, 1 likes


May 02 2019 Uri Manor

Whoa
0 replies, 1 likes


May 01 2019 Joan Serrà

Who needs a single parameter when you can train millions?
0 replies, 1 likes


May 17 2019 Steinn Sigurdsson

this! @arxiv
0 replies, 1 likes


May 02 2019 Abolfazl Alipour

Nice illustration!
0 replies, 1 likes

