Postdoctoral fellow at Mila and McGill University
Welcome to my personal website!
I am a postdoctoral fellow at Mila and McGill University, working with Adam Oberman and Ioannis Mitliagkas. My current research focuses on out-of-distribution samples and optimal transport.
Prior to my postdoc, I was a PhD candidate under the supervision of Prof. Nicolas Courty and Prof. Rémi Flamary, in the Panama and Obelix teams at IRISA-INRIA. My research focused on optimal transport, machine learning and optimization, with applications to large-scale settings and learning with noisy labels. The recording of my defense can be found on YouTube, the slides here and the manuscript here.
I graduated from both Ecole Polytechnique and ENSTA ParisTech in applied mathematics and machine learning. I was also an exchange student at UC Berkeley during the fall of 2018.
You can find my resume here.
My work focuses on optimization for machine learning and the interaction between optimal transport and machine learning.
POT: Python Optimal Transport
Rémi Flamary, Nicolas Courty, Alexandre Gramfort, Mokhtar Z. Alaya,
Aurélie Boisbunon, Stanislas Chambon, Laetitia Chapel, Adrien Corenflos,
Kilian Fatras, Nemo Fournier, Léo Gautheron, Nathalie T.H. Gayraud, Hicham Janati,
Alain Rakotomamonjy, Ievgen Redko, Antoine Rolet, Antony Schutz, Vivien Seguy,
Danica J. Sutherland, Romain Tavenard, Alexander Tong and Titouan Vayer
Journal of Machine Learning Research (JMLR) - Open Source Software, 2021
@article{JMLR:v22:20-451,
  author  = {R\'emi Flamary and Nicolas Courty and Alexandre Gramfort and Mokhtar Z. Alaya and Aur\'elie Boisbunon and Stanislas Chambon and Laetitia Chapel and Adrien Corenflos and Kilian Fatras and Nemo Fournier and L\'eo Gautheron and Nathalie T.H. Gayraud and Hicham Janati and Alain Rakotomamonjy and Ievgen Redko and Antoine Rolet and Antony Schutz and Vivien Seguy and Danica J. Sutherland and Romain Tavenard and Alexander Tong and Titouan Vayer},
  title   = {POT: Python Optimal Transport},
  journal = {Journal of Machine Learning Research},
  year    = {2021},
  volume  = {22},
  number  = {78},
  pages   = {1-8},
  url     = {http://jmlr.org/papers/v22/20-451.html}
}
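For reference, here is a minimal usage sketch of the toolbox, using only its standard top-level API (ot.dist, ot.emd, ot.emd2, ot.sinkhorn); the point clouds, weights and regularization value below are purely illustrative.

import numpy as np
import ot  # POT: Python Optimal Transport

# Two small point clouds in R^2 (illustrative data)
rng = np.random.RandomState(0)
xs = rng.randn(50, 2)
xt = rng.randn(60, 2) + 2.0

# Uniform weights and squared Euclidean ground cost
a, b = ot.unif(50), ot.unif(60)
M = ot.dist(xs, xt)  # cost matrix, squared Euclidean by default

# Exact OT: transport plan and transport cost
G = ot.emd(a, b, M)
w2 = ot.emd2(a, b, M)

# Entropic regularization (Sinkhorn)
Gs = ot.sinkhorn(a, b, M, reg=1e-1)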
Unbalanced minibatch Optimal Transport; applications to Domain Adaptation
Kilian Fatras, Thibault Séjourné, Nicolas Courty and Rémi Flamary
International Conference on Machine Learning (ICML), 2021
Keywords: Unbalanced Optimal Transport, Minibatch, Concentration Bounds, (Partial) Domain Adaptation
@InProceedings{pmlr-v139-fatras21a,
  title     = {Unbalanced minibatch Optimal Transport; applications to Domain Adaptation},
  author    = {Fatras, Kilian and Sejourne, Thibault and Flamary, R{\'e}mi and Courty, Nicolas},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {3186--3197},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/fatras21a/fatras21a.pdf},
  url       = {http://proceedings.mlr.press/v139/fatras21a.html},
  abstract  = {Optimal transport distances have found many applications in machine learning for their capacity to compare non-parametric probability distributions. Yet their algorithmic complexity generally prevents their direct use on large scale datasets. Among the possible strategies to alleviate this issue, practitioners can rely on computing estimates of these distances over subsets of data, i.e. minibatches. While computationally appealing, we highlight in this paper some limits of this strategy, arguing it can lead to undesirable smoothing effects. As an alternative, we suggest that the same minibatch strategy coupled with unbalanced optimal transport can yield more robust behaviors. We discuss the associated theoretical properties, such as unbiased estimators, existence of gradients and concentration bounds. Our experimental study shows that in challenging problems associated to domain adaptation, the use of unbalanced optimal transport leads to significantly better results, competing with or surpassing recent baselines.}
}
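As an illustration of the minibatch strategy coupled with unbalanced optimal transport described in the abstract, here is a sketch of a minibatch unbalanced OT estimator built on POT's entropic unbalanced solver; the helper name, batch size and regularization values are my own illustrative choices, and this is not the paper's domain adaptation pipeline.

import numpy as np
import ot

def minibatch_unbalanced_ot(xs, xt, m=32, k=10, reg=0.1, reg_m=1.0, seed=0):
    """Average entropic unbalanced OT cost over k random minibatch pairs."""
    rng = np.random.RandomState(seed)
    a, b = ot.unif(m), ot.unif(m)
    losses = []
    for _ in range(k):
        i = rng.choice(len(xs), m, replace=False)
        j = rng.choice(len(xt), m, replace=False)
        M = ot.dist(xs[i], xt[j])  # squared Euclidean ground cost on the minibatch
        # reg_m controls how much mass may be created or destroyed
        losses.append(ot.unbalanced.sinkhorn_unbalanced2(a, b, M, reg, reg_m))
    return np.mean(losses)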
Minibatch Optimal Transport distances; analysis and applications
Kilian Fatras, Younes Zine, Szymon Majewski, Rémi Flamary, Rémi Gribonval and Nicolas Courty
Preprint, 2021
Keywords: Optimal Transport, Minibatch, Concentration Bounds, GANs, Sub-Gaussian data
@misc{fatras2021minibatch,
  title         = {Minibatch optimal transport distances; analysis and applications},
  author        = {Kilian Fatras and Younes Zine and Szymon Majewski and Rémi Flamary and Rémi Gribonval and Nicolas Courty},
  year          = {2021},
  eprint        = {2101.01792},
  archivePrefix = {arXiv},
  primaryClass  = {stat.ML}
}
Generating natural adversarial Remote Sensing Images
Jean-Christophe Burnel, Kilian Fatras, Rémi Flamary and Nicolas Courty
IEEE Transactions on Geoscience and Remote Sensing (TGRS), 2021
Keywords: Optimal Transport, GANs, Adversarial Examples, Remote Sensing
@ARTICLE{burnel2021,
  author  = {Burnel, Jean-Christophe and Fatras, Kilian and Flamary, R{\'e}mi and Courty, Nicolas},
  journal = {IEEE Transactions on Geoscience and Remote Sensing},
  title   = {Generating natural adversarial Remote Sensing Images},
  year    = {(to appear) 2021}
}
Learning with minibatch Wasserstein: asymptotic and gradient properties
Kilian Fatras, Younes Zine, Rémi Flamary, Rémi Gribonval and Nicolas Courty
Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics (AISTATS), 2020
Keywords: Optimal Transport, Minibatch, Concentration Bounds, Large Scale Color Transfer
@InProceedings{pmlr-v108-fatras20a,
  title     = {Learning with minibatch Wasserstein : asymptotic and gradient properties},
  author    = {Fatras, Kilian and Zine, Younes and Flamary, R\'emi and Gribonval, Remi and Courty, Nicolas},
  booktitle = {Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics},
  pages     = {2131--2141},
  year      = {2020},
  editor    = {Chiappa, Silvia and Calandra, Roberto},
  volume    = {108},
  series    = {Proceedings of Machine Learning Research},
  address   = {Online},
  month     = {26--28 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v108/fatras20a/fatras20a.pdf},
  url       = {http://proceedings.mlr.press/v108/fatras20a.html},
  abstract  = {Optimal transport distances are powerful tools to compare probability distributions and have found many applications in machine learning. Yet their algorithmic complexity prevents their direct use on large scale datasets. To overcome this challenge, practitioners compute these distances on minibatches i.e., they average the outcome of several smaller optimal transport problems. We propose in this paper an analysis of this practice, which effects are not well understood so far. We notably argue that it is equivalent to an implicit regularization of the original problem, with appealing properties such as unbiased estimators, gradients and a concentration bound around the expectation, but also with defects such as loss of distance property. Along with this theoretical analysis, we also conduct empirical experiments on gradient flows, GANs or color transfer that highlight the practical interest of this strategy.}
}
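The minibatch practice analyzed in this paper amounts to averaging the outcomes of small optimal transport problems drawn at random. A minimal sketch of such an estimator with POT's exact solver ot.emd2 follows; the function name, batch size and number of batches are illustrative.

import numpy as np
import ot

def minibatch_wasserstein(xs, xt, m=64, k=20, seed=0):
    """Average the exact OT cost over k randomly drawn m-point minibatch pairs."""
    rng = np.random.RandomState(seed)
    a = b = ot.unif(m)
    total = 0.0
    for _ in range(k):
        i = rng.choice(len(xs), m, replace=False)
        j = rng.choice(len(xt), m, replace=False)
        total += ot.emd2(a, b, ot.dist(xs[i], xt[j]))  # exact OT on the minibatch pair
    return total / k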
Wasserstein Adversarial Regularization (WAR) on label noise
Kilian Fatras*, Bharath Damodaran*, Sylvain Lobry, Rémi Flamary, Devis Tuia and Nicolas Courty
* equal contribution
IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2021
Keywords: Optimal Transport, Adversarial Training, label noise, Remote Sensing
@ARTICLE{Fatras2021WAR,
  author  = {Fatras, Kilian and Damodaran, Bharath Bhushan and Lobry, Sylvain and Flamary, Remi and Tuia, Devis and Courty, Nicolas},
  journal = {IEEE Transactions on Pattern Analysis and Machine Intelligence},
  title   = {Wasserstein Adversarial Regularization for learning with label noise},
  year    = {2021},
  doi     = {10.1109/TPAMI.2021.3094662}
}
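Roughly speaking, the regularizer penalizes an optimal transport discrepancy, under a ground cost encoding class similarity, between the network's predictions on an input and on an adversarially perturbed copy. Below is a small illustrative sketch of such an OT penalty between two prediction vectors, using POT's Sinkhorn loss; the cost matrix C and the vectors p and q are made up, and this is not the paper's training procedure.

import numpy as np
import ot

# Illustrative 4-class ground cost: semantically close classes are cheap to confuse
C = np.array([[0., 1., 4., 4.],
              [1., 0., 4., 4.],
              [4., 4., 0., 1.],
              [4., 4., 1., 0.]])

def ot_prediction_penalty(p, q, reg=0.1):
    """Entropic OT cost between two class-probability vectors under cost C."""
    return ot.sinkhorn2(p, q, C, reg)

p = np.array([0.70, 0.20, 0.05, 0.05])  # prediction on a clean input (made up)
q = np.array([0.20, 0.70, 0.05, 0.05])  # prediction on a perturbed input (made up)
print(ot_prediction_penalty(p, q))      # small penalty: mass moves between similar classes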
Proximal Splitting Meets Variance Reduction
Fabian Pedregosa, Kilian Fatras and Mattia Casotto
Proceedings of the 22nd International Conference on Artificial Intelligence and Statistics (AISTATS), 2019
Keywords: Proximal Splitting, Variance Reduction, Sparse Update
@InProceedings{pmlr-v89-pedregosa19a,
  title     = {Proximal Splitting Meets Variance Reduction},
  author    = {Pedregosa, Fabian and Fatras, Kilian and Casotto, Mattia},
  booktitle = {Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics},
  pages     = {1--10},
  year      = {2019},
  editor    = {Chaudhuri, Kamalika and Sugiyama, Masashi},
  volume    = {89},
  series    = {Proceedings of Machine Learning Research},
  month     = {16--18 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v89/pedregosa19a/pedregosa19a.pdf},
  url       = {http://proceedings.mlr.press/v89/pedregosa19a.html},
  abstract  = {Despite the raise to fame of stochastic variance reduced methods like SAGA and ProxSVRG, their use in non-smooth optimization is still limited to a few simple cases. Existing methods require to compute the proximal operator of the non-smooth term at each iteration, which, for complex penalties like the total variation, overlapping group lasso or trend filtering, is an iterative process that becomes unfeasible for moderately large problems. In this work we propose and analyze VRTOS, a variance-reduced method to solve problems with an arbitrary number of non-smooth terms. Like other variance reduced methods, it only requires to evaluate one gradient per iteration and converges with a constant step size, and so is ideally suited for large scale applications. Unlike existing variance reduced methods, it admits multiple non-smooth terms whose proximal operator only needs to be evaluated once per iteration. We provide a convergence rate analysis for the proposed methods that achieves the same asymptotic rate as their full gradient variants and illustrate its computational advantage on 4 different large scale datasets.}
}
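To illustrate the general idea of combining variance-reduced stochastic gradients with a proximal step, here is a minimal proximal SAGA sketch for a lasso problem. Note that this is plain proximal SAGA with a single non-smooth term, not the VRTOS algorithm proposed in the paper, and the step-size rule and problem are illustrative.

import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def prox_saga_lasso(A, b, lam=0.1, step=None, n_epochs=50, seed=0):
    """Minimal proximal SAGA for (1/2n)||Ax - b||^2 + lam * ||x||_1."""
    n, d = A.shape
    if step is None:
        step = 1.0 / (3 * np.max(np.sum(A ** 2, axis=1)))  # rough Lipschitz-based step
    rng = np.random.RandomState(seed)
    x = np.zeros(d)
    grads = np.zeros((n, d))            # table of stored per-sample gradients
    grad_avg = grads.mean(axis=0)
    for _ in range(n_epochs * n):
        i = rng.randint(n)
        g_i = (A[i] @ x - b[i]) * A[i]                 # fresh gradient of sample i
        v = g_i - grads[i] + grad_avg                  # variance-reduced direction
        x = soft_threshold(x - step * v, step * lam)   # proximal (soft-thresholding) step
        grad_avg += (g_i - grads[i]) / n               # maintain the running average
        grads[i] = g_i
    return x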