Sitemap

A list of all the posts and pages found on the site. For you robots out there, an XML version is available for digesting as well.

Pages

Posts

Future Blog Post

less than 1 minute read

Published:

This post will show up by default. To disable scheduling of future posts, edit _config.yml and set future: false.
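For reference, a minimal sketch of that setting in Jekyll's _config.yml (any surrounding keys are omitted; only the future key matters here):

```yaml
# _config.yml
# When future is false, Jekyll skips posts whose date is in the future
# at build time, so scheduled posts stay hidden until their date passes.
future: false
```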

Blog Post number 4

less than 1 minute read

Published:

This is a sample blog post. Lorem ipsum; I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing, testing, testing this blog post. Blog posts are cool.

Blog Post number 3

less than 1 minute read

Published:

This is a sample blog post. Lorem ipsum; I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing, testing, testing this blog post. Blog posts are cool.

Blog Post number 2

less than 1 minute read

Published:

This is a sample blog post. Lorem ipsum; I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing, testing, testing this blog post. Blog posts are cool.

Blog Post number 1

less than 1 minute read

Published:

This is a sample blog post. Lorem ipsum; I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing, testing, testing this blog post. Blog posts are cool.

Portfolio

Publications

Talks

SAGA: Introduction to Variance Reduction

Published:

This seminar is about a recursive framework for improving the expected convergence rate of convex stochastic optimization. By replacing the gradient at a fixed reference point with the gradient at the last iterate, the stochastic average gradient algorithm (SAGA) saves computation while retaining linear convergence, and, compared with stochastic variance reduced gradient (SVRG), it also supports composite objectives, where a proximal operator is applied to the regularizer.
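As a sketch, following the notation of Defazio, Bach, and Lacoste-Julien (2014), with step size $\gamma$ and a table of past iterates $\phi_i^k$: for the composite problem $\min_x \frac{1}{n}\sum_{i=1}^n f_i(x) + h(x)$, SAGA samples an index $j$ uniformly and updates

$$x_{k+1} = \operatorname{prox}_{\gamma h}\left(x_k - \gamma\left[\nabla f_j(x_k) - \nabla f_j(\phi_j^k) + \frac{1}{n}\sum_{i=1}^n \nabla f_i(\phi_i^k)\right]\right),$$

then stores $\phi_j^{k+1} = x_k$ and keeps $\phi_i^{k+1} = \phi_i^k$ for $i \neq j$, so no full gradient over the data needs to be recomputed at a reference point as in SVRG.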

Convex-Concave Minmax Optimization: Applications and Methods

Published:

In this seminar, I first introduced the smooth convex-concave saddle point problem and the intuition behind it. To solve this problem, I presented gradient descent-ascent (GDA), an algorithm that is theoretically natural but diverges in practice, and then its convergent variant, the proximal point algorithm (PPA). Since PPA’s gradient at the future step, $\nabla f(x_{k+1},y_{k+1})$, is intractable, I presented the optimistic gradient descent-ascent (OGDA) algorithm and the extragradient (EG) algorithm, and highlighted how the gradients used in OGDA and EG approximate the gradient of PPA. I then exploited this interpretation to show that the primal-dual gap of the averaged iterates generated by both algorithms converges at a rate of $O(1/k)$. Finally, I analyzed the last-iterate convergence properties of both algorithms and showed that their last iterates converge at a rate of $O(1/\sqrt{k})$, slower than the averaged iterates for smooth convex-concave saddle point problems.
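For concreteness, here is a sketch of the updates discussed, for $\min_x \max_y f(x,y)$ with a step size $\eta$ (the exact form varies slightly across references; ascent steps on $y$ are symmetric and omitted). GDA takes the explicit step and PPA the implicit one:

$$x_{k+1} = x_k - \eta\,\nabla_x f(x_k, y_k), \qquad x_{k+1} = x_k - \eta\,\nabla_x f(x_{k+1}, y_{k+1}).$$

EG first extrapolates to a midpoint and then updates with the midpoint gradient,

$$x_{k+1/2} = x_k - \eta\,\nabla_x f(x_k, y_k), \qquad x_{k+1} = x_k - \eta\,\nabla_x f(x_{k+1/2}, y_{k+1/2}),$$

while OGDA corrects the current gradient with the previous one,

$$x_{k+1} = x_k - 2\eta\,\nabla_x f(x_k, y_k) + \eta\,\nabla_x f(x_{k-1}, y_{k-1}).$$

In each case the computable gradient plays the role of an approximation to the intractable PPA gradient $\nabla f(x_{k+1}, y_{k+1})$.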

Teaching

Global Management Challenge Workshop, Fall 2019 & 2020

Instructor, School of Business, Macau University of Science and Technology, 2019

The Global Management Challenge (GMC) is a global strategic operations management competition run on a complex computer simulation system: each team runs a different virtual company in the same market environment and competes by developing and producing products that better meet customer needs, with the aim of maximizing investment performance. Since 2019, I have been the instructor for this workshop, in charge of graduate and undergraduate instruction for two semesters (Fall 2019 & Fall 2020).