Convex-Concave Minmax Optimization: Applications and Methods
Seminar, Shenzhen Research Institute of Big Data (SRIBD) Forum, Shenzhen, China
In this seminar, I first introduced the smooth convex-concave saddle point problem and the intuition behind it. To solve such problems, I began with the gradient descent-ascent (GDA) algorithm, which is theoretically appealing but can diverge in practice, and then presented its convergent implicit counterpart, the proximal point algorithm (PPA). Since PPA requires the intractable future-step gradient $\nabla f(x_{k+1},y_{k+1})$, I introduced the optimistic gradient descent-ascent (OGDA) algorithm and the extragradient (EG) algorithm, and highlighted how the gradients used in OGDA and EG approximate the PPA gradient. I then exploited this interpretation to show that the primal-dual gap of the averaged iterates generated by both algorithms converges at a rate of $O(1/k)$. Finally, I analyzed the last-iterate convergence of both algorithms and showed that their last iterates converge at a rate of $O(1/\sqrt{k})$, which is slower than the averaged iterates for smooth convex-concave saddle point problems.
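As a rough sketch of how these updates relate (with notation introduced here only for illustration: stacked iterate $z_k = (x_k, y_k)$, gradient operator $F(z) = (\nabla_x f(x,y), -\nabla_y f(x,y))$, and a single step size $\eta$), the methods can be written as

$$
\begin{aligned}
\text{GDA:}\quad & z_{k+1} = z_k - \eta\, F(z_k) \\
\text{PPA:}\quad & z_{k+1} = z_k - \eta\, F(z_{k+1}) \qquad \text{(implicit: uses the future gradient)} \\
\text{EG:}\quad & z_{k+1/2} = z_k - \eta\, F(z_k), \qquad z_{k+1} = z_k - \eta\, F(z_{k+1/2}) \\
\text{OGDA:}\quad & z_{k+1} = z_k - \eta\,\bigl(2 F(z_k) - F(z_{k-1})\bigr)
\end{aligned}
$$

In this view, EG's midpoint gradient $F(z_{k+1/2})$ and OGDA's extrapolated gradient $2F(z_k) - F(z_{k-1})$ can be read as cheap explicit approximations of the implicit PPA gradient $F(z_{k+1})$.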