Seminars and Talks

Convex-Concave Minmax Optimization: Applications and Methods

August 05, 2022

Seminar, Shenzhen Research Institute of Big Data (SRIBD) Forum, Shenzhen, China

In this seminar, I first introduced the smooth convex-concave saddle point problem and the intuition behind it. To solve such problems, I presented gradient descent-ascent (GDA), an intuitive algorithm that can nevertheless diverge even on simple instances, and then its convergent counterpart, the proximal point algorithm (PPA). Since PPA requires the intractable future-step gradient $\nabla f(x_{k+1}, y_{k+1})$, I introduced the optimistic gradient descent-ascent (OGDA) algorithm and the extragradient (EG) algorithm, and highlighted how the gradients used in OGDA and EG approximate the gradient of the PPA step. I then exploited this interpretation to show that the primal-dual gap of the averaged iterates generated by both algorithms converges at a rate of $O(1/k)$. Finally, I analyzed the last-iterate convergence of both algorithms and showed that their last iterates converge at a rate of $O(1/\sqrt{k})$, which is slower than the averaged iterates for smooth convex-concave saddle point problems.
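
The following is a minimal numerical sketch (not part of the talk materials) of the contrast discussed above, assuming the bilinear toy objective $f(x, y) = xy$ with saddle point $(0, 0)$; the step size, iteration count, and function names are illustrative choices only.

```python
import numpy as np

# Hedged sketch: GDA vs. EG vs. OGDA on f(x, y) = x * y.
# GDA spirals away from the saddle point (0, 0), while EG and OGDA,
# which approximate the implicit PPA step, move toward it.

def grad(x, y):
    # For f(x, y) = x * y: df/dx = y, df/dy = x.
    return y, x

def gda(x, y, eta=0.1, steps=200):
    # Plain gradient descent-ascent: diverges on this bilinear problem.
    for _ in range(steps):
        gx, gy = grad(x, y)
        x, y = x - eta * gx, y + eta * gy
    return x, y

def extragradient(x, y, eta=0.1, steps=200):
    # EG: look-ahead half step, then update with the gradient taken at
    # the look-ahead point (a cheap approximation of the PPA step).
    for _ in range(steps):
        gx, gy = grad(x, y)
        x_half, y_half = x - eta * gx, y + eta * gy
        gx2, gy2 = grad(x_half, y_half)
        x, y = x - eta * gx2, y + eta * gy2
    return x, y

def ogda(x, y, eta=0.1, steps=200):
    # OGDA: reuse the previous gradient in a "2 * current - previous"
    # correction, another approximation of the implicit PPA update.
    gx_prev, gy_prev = grad(x, y)
    for _ in range(steps):
        gx, gy = grad(x, y)
        x, y = x - eta * (2 * gx - gx_prev), y + eta * (2 * gy - gy_prev)
        gx_prev, gy_prev = gx, gy
    return x, y

if __name__ == "__main__":
    x0, y0 = 1.0, 1.0
    print("GDA :", gda(x0, y0))            # drifts away from (0, 0)
    print("EG  :", extragradient(x0, y0))  # approaches (0, 0)
    print("OGDA:", ogda(x0, y0))           # approaches (0, 0)
```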

SAGA: Introduction to Variance Reduction

February 18, 2022

Seminar, Shenzhen Research Institute of Big Data (SRIBD) Forum, Shenzhen, China

This seminar was about variance reduction, a framework for improving the expected convergence rate of stochastic optimization on convex problems. By replacing the reference-point (snapshot) gradient used in SVRG with the most recently computed gradient of each component, the stochastic average gradient algorithm (SAGA) avoids periodic full-gradient recomputations while retaining linear convergence, and it supports composite objectives in which a proximal operator is applied to the regularizer, in contrast to the stochastic variance reduced gradient (SVRG) method.
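
Below is a hedged sketch (not from the talk slides) of the SAGA update with a proximal step on an $\ell_1$ regularizer, using least-squares component functions on toy data; the data, step size, and regularization weight are illustrative assumptions.

```python
import numpy as np

# Hedged SAGA sketch for the composite objective
#   min_x (1/n) * sum_i 0.5 * (a_i^T x - b_i)^2 + lam * ||x||_1.
# A table of per-component gradients replaces the periodic full-gradient
# snapshot that SVRG recomputes; the prox step handles the regularizer.

def soft_threshold(x, tau):
    # Proximal operator of tau * ||x||_1.
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def saga(A, b, lam=0.01, eta=None, epochs=50, seed=0):
    rng = np.random.default_rng(seed)
    n, d = A.shape
    if eta is None:
        # Illustrative step size in the spirit of 1/(3L), L = max_i ||a_i||^2.
        eta = 1.0 / (3.0 * np.max(np.sum(A ** 2, axis=1)))
    x = np.zeros(d)
    # Table of the most recently computed gradient of each component f_i.
    table = A * (A @ x - b)[:, None]      # grad f_i(x) = a_i * (a_i^T x - b_i)
    table_mean = table.mean(axis=0)
    for _ in range(epochs * n):
        i = rng.integers(n)
        g_new = A[i] * (A[i] @ x - b[i])
        # Unbiased gradient estimate with variance reduced by the table.
        v = g_new - table[i] + table_mean
        x = soft_threshold(x - eta * v, eta * lam)
        # Update the stored gradient for component i and the running mean.
        table_mean += (g_new - table[i]) / n
        table[i] = g_new
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = rng.standard_normal((200, 10))
    x_true = rng.standard_normal(10)
    b = A @ x_true
    print("error:", np.linalg.norm(saga(A, b) - x_true))
```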