Teaching

Lecturer

Stochastic Modeling of Scientific Data I, II, STAT516 Autumn 2020, 2021; STAT517 Winter 2021, 2022.
University of Washington.
Stochastic Modeling of Scientific Data I (STAT516): Markov chains; ergodic theorem; Markov chain Monte Carlo; hidden Markov models; frequentist and Bayesian inference for Markov chains.
Stochastic Modeling of Scientific Data II (STAT517): Gaussian processes; spatial linear regression; Gaussian Markov random fields; Poisson processes; Wiener processes.

Probability I, II, STAT394 Winter 2021, Autumn 2021; STAT395 Spring 2020, 2021, Winter 2022.
University of Washington.
Probability I (STAT394): axiomatic definitions of probability; conditional probability, independence, Bayes’ theorem; classical discrete and continuous random variables; expectation, variance, quantiles; transformations of a single random variable; Markov’s and Chebyshev’s inequalities; weak law of large numbers.
Probability II (STAT395): joint distributions; exchangeability; moment generating function; covariance, correlation; central limit theorem; conditional distributions.

Statistical Learning: Modeling, Prediction, and Computing, STAT538 Winter 2020.
University of Washington. Co-taught with Zaid Harchaoui.
Reviews optimization, in particular convex optimization, in relation to statistics. Covers the basics of unconstrained and constrained convex optimization; the basics of clustering and classification; entropy, KL divergence, and exponential family models; duality; and modern learning algorithms such as boosting, support vector machines, and variational approximations in inference.

Teaching Assistant

Convex Optimization, 2014-2017.
Master’s program in Mathematics, Vision, and Learning, École Normale Supérieure Paris-Saclay, Paris.
Taught by Alexandre d’Aspremont; see the course website.

Oral Examinations in Mathematics, 2013-2014.
Classes Préparatoires in Mathematics and Physics, Lycée Janson de Sailly, Paris.

Tutorials

Automatic Differentiation, May 2019, 2020.
Statistical Machine Learning for Data Scientists, University of Washington.
Lecture on automatic differentiation with code examples covering how to compute gradients of a chain of computations, how to use automatic-differentiation software, and how to use automatic differentiation beyond gradient computations; a minimal code sketch is given below.
Slides and notebook available.
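
For illustration, a minimal sketch of the kind of example the lecture works through, written here with JAX purely as an assumed choice of automatic-differentiation library; the chain of computations and the Hessian-vector product are illustrative, not taken from the actual slides or notebook.

```python
import jax
import jax.numpy as jnp

# A chain of computations whose gradient we want: f(x) = f3(f2(f1(x))).
def chain(x):
    h1 = jnp.tanh(x)        # f1
    h2 = jnp.sin(h1) ** 2   # f2
    return jnp.sum(h2)      # f3, reduces to a scalar

x = jnp.linspace(-1.0, 1.0, 5)

# Gradient of the whole chain by reverse-mode automatic differentiation.
grad_chain = jax.grad(chain)
print(grad_chain(x))

# Beyond gradients: a Hessian-vector product, obtained by composing
# forward mode (jvp) with reverse mode (grad), without forming the Hessian.
def hvp(f, x, v):
    return jax.jvp(jax.grad(f), (x,), (v,))[1]

v = jnp.ones_like(x)
print(hvp(chain, x, v))
```

Composing forward- and reverse-mode transformations in this way is one standard use of automatic differentiation beyond plain gradient computations.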

Optimization for Deep Learning, July 2018.
Summer School on Fundamentals of Data Analysis, University of Wisconsin.
Interactive Jupyter notebook covering the basics of optimization for deep learning: automatic differentiation, convergence guarantees of SGD, and an illustration of the effect of batch normalization; a minimal SGD sketch is given below.
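
As an illustration of the kind of experiment such a notebook contains, a minimal sketch of SGD with a constant step size on a synthetic, noiseless least-squares problem, using JAX for the gradients; the data, step size, and batch size are assumptions made for the sketch, not values from the actual notebook.

```python
import jax
import jax.numpy as jnp

key, data_key = jax.random.split(jax.random.PRNGKey(0))
n, d = 200, 10

# Synthetic, noiseless least-squares data (illustrative only).
X = jax.random.normal(data_key, (n, d))
w_true = jnp.arange(1.0, d + 1.0)
y = X @ w_true

def loss(w, xb, yb):
    return 0.5 * jnp.mean((xb @ w - yb) ** 2)

grad_loss = jax.jit(jax.grad(loss))  # gradient with respect to w

# Plain SGD with a constant step size on mini-batches.
w = jnp.zeros(d)
step, batch = 0.1, 16
for t in range(500):
    key, sub = jax.random.split(key)
    idx = jax.random.choice(sub, n, (batch,), replace=False)
    w = w - step * grad_loss(w, X[idx], y[idx])

print("final full-batch loss:", loss(w, X, y))
```

Because the data are noiseless, this interpolation setting lets constant-step SGD drive the full-batch loss close to zero, which is the behavior such a notebook typically visualizes.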