
Connections between Support Vector Machines, Wasserstein distance and gradient-penalty GANs


Alexia Jolicoeur-Martineau: My new paper is out! We show a framework in which we can both derive #SVMs and gradient penalized #GANs! We also show how to make better gradient penalties!

10 replies, 758 likes

hardmaru 😷: Connections between Support Vector Machines, Wasserstein distance and gradient-penalty GANs New work by @jm_alexia and @bouzoukipunks 🔥🔥

1 replies, 385 likes

Alexia Jolicoeur-Martineau: Proof that SVMs are just a trivial case of GAN. 🤠

4 replies, 294 likes

Alexia Jolicoeur-Martineau: You miss the old times when SVMs were at the top of the food chain? Turns out that gradient-penalized classifiers are a generalization of Soft-SVMs. Read my paper to find out how to make your #NeuralNetworks act like SVMs. #ML #AI

5 replies, 198 likes

Alexia Jolicoeur-Martineau: Frequent users of gradient penalty (WGAN-GP, StyleGAN, etc.), make sure to try out the new L-infinity hinge gradient penalty for better results. See the paper for how to quickly and easily implement it in #PyTorch.

1 replies, 167 likes
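As a rough illustration of the L-infinity hinge gradient penalty mentioned above, here is a minimal PyTorch sketch. The function name, the interpolation between real and fake samples, and the exact hinge form (unsquared, threshold 1) are assumptions on my part, not the paper's verbatim formulation:

```python
import torch

def linf_hinge_gradient_penalty(critic, real, fake):
    """Sketch of an L-infinity hinge gradient penalty: penalize the critic
    only where the L-infinity norm of its input gradient exceeds 1,
    evaluated at random interpolations between real and fake samples."""
    alpha = torch.rand(real.size(0), *([1] * (real.dim() - 1)))
    x_hat = (alpha * real + (1 - alpha) * fake).detach().requires_grad_(True)
    out = critic(x_hat)
    # Gradient of the critic's output with respect to its input.
    grads, = torch.autograd.grad(out.sum(), x_hat, create_graph=True)
    linf = grads.flatten(1).abs().max(dim=1).values  # per-sample L-infinity norm
    return torch.relu(linf - 1.0).mean()             # hinge: only norms > 1 are penalized
```

In training, this term would be added (with a weight) to the critic's loss, in the same place the standard WGAN-GP penalty usually goes.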

Alexia Jolicoeur-Martineau: Can you train a #SupportVectorMachine (SVM) when your classifier is a #NeuralNetwork? 🤔 Yes: use a hinge loss classifier with an L2-norm gradient penalty. #MachineLearning #AI #ArtificialIntelligence #Math

2 replies, 152 likes
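The recipe in that tweet (hinge loss plus an L2-norm gradient penalty) can be sketched in PyTorch as below. The function name and the penalty weight `lam` are hypothetical; note that for a linear model the gradient norm equals the weight norm, which recovers the classical soft-SVM objective:

```python
import torch
import torch.nn.functional as F

def svm_like_loss(net, x, y, lam=1.0):
    """Hinge (soft-margin) loss plus an L2-norm gradient penalty.
    Labels y are in {-1, +1}; `lam` is a hypothetical penalty weight.
    For a linear net this reduces to the usual soft-SVM objective,
    since the input gradient of f(x) = w.x + b is just w."""
    x = x.detach().requires_grad_(True)
    out = net(x).squeeze(-1)              # raw scores f(x)
    hinge = F.relu(1.0 - y * out).mean()  # soft-margin hinge loss
    grads, = torch.autograd.grad(out.sum(), x, create_graph=True)
    gp = grads.flatten(1).norm(2, dim=1).pow(2).mean()  # E[||grad f||^2]
    return hinge + lam * gp
```

Backpropagating through this loss trains the network while keeping its input gradients small, which is what makes it behave like a soft-SVM.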

Alexia Jolicoeur-Martineau: You like Relativistic GANs? 🤩 Read my recent paper for a geometrical interpretation of Relativistic GANs. I also explain why certain variants only perform well with a gradient penalty. #AI #DeepLearning #MachineLearning

0 replies, 88 likes

Alexia Jolicoeur-Martineau: I tried AdverTorch from @BorealisAI for my class project on the adversarial robustness of gradient-penalized classifiers, and it's very easy to use. You can plug and play and test accuracy at different levels of adversarial perturbation.

1 replies, 64 likes

Statistics Papers: Connections between Support Vector Machines, Wasserstein distance and gradient-penalty GANs.

0 replies, 37 likes

Alexia Jolicoeur-Martineau: There are many ways to explain gradient penalties in GANs, but most are post-hoc reasonings. A satisfying explanation is that gradient penalties result from assuming a maximum-margin discriminator/critic (a generalization of SVM to non-linear classifiers).

1 replies, 34 likes

Alexia Jolicoeur-Martineau: For a given margin (a distance between samples and the decision boundary), we obtain a specific gradient penalty. I explored L1 and L2 margins, but one could invent better margins that lead to better gradient penalties. It's an interesting problem to think about.

0 replies, 29 likes
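The margin-to-penalty correspondence described in the two tweets above can be written schematically. This is my paraphrase, not the paper's exact statement: an L_p margin induces a gradient penalty in the dual norm L_q, with 1/p + 1/q = 1, and the exact hinge/exponent form varies by variant:

```latex
% Maximum-margin objective with a margin measured in L_p:
\min_f \;\; \mathbb{E}\big[\max\!\big(0,\; 1 - y\, f(x)\big)\big]
  \;+\; \lambda\, \mathbb{E}\big[\,\|\nabla_x f(x)\|_q^2\,\big],
\qquad \tfrac{1}{p} + \tfrac{1}{q} = 1.
% p = 2  (L2 margin)  =>  q = 2:        the usual WGAN-GP-style L2 penalty
% p = 1  (L1 margin)  =>  q = \infty:   the L-infinity penalty from this paper
```

Inventing a different margin (a different notion of distance to the decision boundary) would thus yield a different, possibly better, gradient penalty, as the tweet suggests.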

Alexia Jolicoeur-Martineau: My poster to be presented at @SOCML showing new work on a generalization of #SVMs to #NeuralNetworks with links to gradient-penalized #GANs.

1 replies, 26 likes

Alexia Jolicoeur-Martineau: Learn how to generalize Support Vector Machines (SVMs) with #DeepLearning: #AI #ML #MachineLearning #ComputerScience #Mathematics

0 replies, 21 likes

Montreal.AI: http://Montreal.AI just had an insightful conversation with @jm_alexia about Relativistic GANs

1 replies, 15 likes

Alexia Jolicoeur-Martineau: I thought that it was "trivial" so I didn't even bother proving it in my last paper, but it turns out that it's very hard to prove and it requires differential geometry to solve. 😹

1 replies, 9 likes

Alexia Jolicoeur-Martineau: @reworkdl I'll be available to talk on Thursday and Friday at the conference. Feel free to ask me questions about the links between SVMs and GANs: 🧐

0 replies, 9 likes

MilaMontreal: #TBTMilaPapers

0 replies, 8 likes

HotComputerScience: Most popular computer science paper of the day: "Connections between Support Vector Machines, Wasserstein distance and gradient-penalty GANs"

0 replies, 2 likes

Alexia Jolicoeur-Martineau: For more info on these equivalences, I recommend reading:

0 replies, 2 likes

Pinaki Dasgupta, MBA ✨: This work by @jm_alexia provides a framework to derive MMCs (maximum-margin classifiers) that result in very effective #GAN loss functions. It can be used to derive new gradient norm penalties and improve the performance of #GANs. #SVMs

0 replies, 1 likes


Found on Oct 16 2019
