
Connections between Support Vector Machines, Wasserstein distance and gradient-penalty GANs


Oct 16 2019 Alexia Jolicoeur-Martineau

My new paper is out! We show a framework in which we can both derive #SVMs and gradient penalized #GANs! We also show how to make better gradient penalties!
9 replies, 708 likes

Oct 16 2019 hardmaru 😷

Connections between Support Vector Machines, Wasserstein distance and gradient-penalty GANs New work by @jm_alexia and @bouzoukipunks 🔥🔥
1 reply, 381 likes

Oct 19 2019 Alexia Jolicoeur-Martineau

Proof that SVMs are just a trivial case of GANs. 🤠
4 replies, 294 likes

Oct 27 2019 Alexia Jolicoeur-Martineau

Do you miss the old days when SVMs were at the top of the food chain? Turns out that gradient-penalized classifiers are a generalization of Soft-SVMs. Read my paper to find out how to make your #NeuralNetworks act like SVMs. #ML #AI
5 replies, 198 likes

Nov 13 2019 Alexia Jolicoeur-Martineau

Frequent users of gradient penalties (WGAN-GP, StyleGAN, etc.): make sure to try out the new L-infinity hinge gradient penalty for better results, and see how to quickly and easily implement it in #PyTorch.
1 reply, 167 likes
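The L-infinity hinge gradient penalty mentioned above can be sketched as follows. This is an illustrative NumPy version, not the paper's code: the critic's input gradients are passed in directly (with a deep critic in #PyTorch they would come from `torch.autograd.grad`), and the one-sided unsquared hinge is one plausible formulation among several.

```python
import numpy as np

def linf_hinge_gradient_penalty(grads):
    """One-sided hinge penalty on the L-infinity norm of the critic's
    input gradients: a sample contributes only when its largest absolute
    partial derivative exceeds 1.

    grads: array of shape (batch, dims), the gradient of the critic's
    output with respect to each input sample (here supplied analytically
    to keep the sketch framework-free; autograd would supply it for a
    deep critic)."""
    inf_norms = np.abs(grads).max(axis=1)           # ||grad||_inf per sample
    return np.maximum(0.0, inf_norms - 1.0).mean()  # hinge: penalize only > 1

# Example: second sample violates the constraint (max |partial| = 2.0)
penalty = linf_hinge_gradient_penalty(np.array([[0.5, -0.3],
                                                [2.0,  0.1]]))
# → 0.5  (mean of hinge values 0.0 and 1.0)
```

In practice this term would be added, weighted by a coefficient, to the critic's loss, in the same place WGAN-GP puts its squared L2 penalty.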

Nov 04 2019 Alexia Jolicoeur-Martineau

Can you train a #SupportVectorMachine (SVM) when your classifier is a #NeuralNetwork? 🤔 Yes: use a hinge-loss classifier with an L2-norm gradient penalty. #MachineLearning #AI #ArtificialIntelligence #Math
2 replies, 152 likes
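The recipe in this tweet is easiest to see in the linear case, where the classifier's input gradient is available in closed form: for f(x) = w·x + b the gradient with respect to x is just w, so the L2-norm gradient penalty reduces to ||w||², recovering the familiar soft-SVM objective. A minimal NumPy sketch (function and variable names are illustrative, not from the paper's code):

```python
import numpy as np

def hinge_gp_loss(w, b, X, y, lam=1.0):
    """Hinge loss with an L2-norm gradient penalty for a linear
    classifier f(x) = w @ x + b. Since grad_x f = w for a linear f,
    the penalty ||grad_x f||_2^2 is simply ||w||_2^2, so this is the
    soft-SVM objective: mean hinge loss + lam * ||w||^2.

    X: (batch, dims) inputs; y: (batch,) labels in {-1, +1}."""
    margins = y * (X @ w + b)
    hinge = np.maximum(0.0, 1.0 - margins).mean()
    grad_penalty = np.sum(w ** 2)   # ||grad_x f||_2^2 for a linear f
    return hinge + lam * grad_penalty

# Two perfectly separated points at functional margin exactly 1:
# the hinge term vanishes and only the penalty ||w||^2 = 0.25 remains.
loss = hinge_gp_loss(np.array([0.5, 0.0]), 0.0,
                     np.array([[2.0, 0.0], [-2.0, 0.0]]),
                     np.array([1.0, -1.0]))
# → 0.25
```

With a neural network in place of the linear f, the gradient penalty term would be computed with autograd on the input gradients, which is exactly the WGAN-GP-style setup the thread connects to SVMs.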

Jun 17 2019 Borealis AI

Great news! Our #AdverTorch toolbox has been added to the #PyTorch Ecosystem. Learn how to use it for attack-and-defence strategies here: #AINorth
1 reply, 97 likes

Nov 02 2019 Alexia Jolicoeur-Martineau

You like Relativistic GANs? 🤩 Read my recent paper for a geometrical interpretation of Relativistic GANs. I also explain why certain variants only perform well with a gradient penalty. #AI #DeepLearning #MachineLearning
0 replies, 88 likes

Dec 01 2019 Alexia Jolicoeur-Martineau

I tried AdverTorch from @BorealisAI for my class project on the adversarial robustness of gradient-penalized classifiers, and it's very easy to use. You can plug and play and test accuracy at different strengths of adversarial examples.
1 reply, 64 likes

Oct 16 2019 Statistics Papers

Connections between Support Vector Machines, Wasserstein distance and gradient-penalty GANs.
0 replies, 37 likes

Oct 29 2019 Alexia Jolicoeur-Martineau

There are many ways to explain gradient penalties in GANs, but most are post-hoc rationalizations. A satisfying explanation is that gradient penalties result from assuming a maximum-margin discriminator/critic (a generalization of the SVM to non-linear classifiers).
1 reply, 34 likes

Dec 07 2019 Alexia Jolicoeur-Martineau

For a given margin (a distance between samples and the decision boundary), we obtain a specific gradient penalty. I explored L1 and L2 margins, but one could invent better margins that lead to better gradient penalties. It's an interesting problem to think about.
0 replies, 29 likes
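The margin/penalty pairing described above follows from norm duality, a standard fact not specific to the paper: the Lp distance from a point to a linear decision boundary {z : w·z + b = 0} is |w·x + b| / ||w||_q with 1/p + 1/q = 1, which is why an L1 margin pairs with an L-infinity gradient norm and an L2 margin with an L2 norm. A small sketch for the linear case (illustrative code):

```python
import numpy as np

def lp_margin_linear(w, b, x, p):
    """Distance from x to the hyperplane {z : w @ z + b = 0}, measured
    in the Lp norm. By norm duality this equals |w @ x + b| / ||w||_q
    where q is the dual exponent, 1/p + 1/q = 1. The dual norm of w is
    what a gradient penalty on grad_x f = w ends up constraining."""
    if p == 1:
        q = np.inf          # L1 margin  <-> L-infinity norm on w
    elif p == np.inf:
        q = 1.0             # L-inf margin <-> L1 norm on w
    else:
        q = p / (p - 1.0)   # e.g. p = 2 gives q = 2 (self-dual)
    return abs(w @ x + b) / np.linalg.norm(w, ord=q)

w, b, x = np.array([3.0, 4.0]), 0.0, np.array([1.0, 0.0])
# L2 margin: |3| / ||w||_2 = 3 / 5 = 0.6
# L1 margin: |3| / ||w||_inf = 3 / 4 = 0.75
```

Inventing a margin beyond L1/L2, as the tweet suggests, amounts to choosing a different norm here and penalizing the corresponding dual norm of the critic's input gradient.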

Nov 22 2019 Alexia Jolicoeur-Martineau

My poster to be presented at @SOCML showing new work on a generalization of #SVMs to #NeuralNetworks with links to gradient-penalized #GANs.
1 reply, 26 likes

Nov 07 2019 Alexia Jolicoeur-Martineau

Learn how to generalize Support Vector Machines (SVMs) with #DeepLearning: #AI #ML #MachineLearning #ComputerScience #Mathematics
0 replies, 21 likes

Oct 24 2019 Montreal.AI

http://Montreal.AI just had an insightful conversation with @jm_alexia about Relativistic GANs
1 reply, 15 likes

Oct 21 2019 Alexia Jolicoeur-Martineau

@reworkdl I'll be available to talk on Thursday and Friday at the conference. Feel free to ask me questions about the links between SVMs and GANs: 🧐
0 replies, 9 likes

Oct 22 2019 HotComputerScience

Most popular computer science paper of the day: "Connections between Support Vector Machines, Wasserstein distance and gradient-penalty GANs"
0 replies, 2 likes

Nov 16 2019 Pinaki Dasgupta ,MBA ✨

This work by @jm_alexia provides a framework to derive MMCs (maximum-margin classifiers) that results in very effective #GAN loss functions. It can be used to derive new gradient-norm penalties and improve the performance of #GANs. #SVMs
0 replies, 1 like