Some Useful Inequalities in Convex Optimization
This note collects a set of inequalities for smooth convex functions. As an introduction, we prove the convergence of the gradient descent method using some of these inequalities. In particular, we show that gradient descent with a fixed step size converges to a global minimum at a sublinear rate when the objective is smooth and convex, and at a linear rate when strong convexity is added. It is also surprising to me that while the loose bound, $1-\mu/L$, is commonly used in standard convex analysis, the tight bound, $(1-\mu/L)^2$, is a lesser-known fact and can hardly be found in notes or lectures on this topic.
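As a quick numerical illustration (not part of the original argument), the following minimal Python sketch runs gradient descent with fixed step size $1/L$ on a strongly convex quadratic $f(x) = \tfrac{1}{2}x^\top A x$; the matrix $A$, the eigenvalue range $[\mu, L]$, and the iteration count are assumptions chosen only for this demo. The distance to the minimizer (here $x^* = 0$) should contract by at most a factor of $1-\mu/L$ per iteration.

```python
import numpy as np

# Illustrative setup (assumed values): a symmetric positive definite A
# with eigenvalues in [mu, L], so f(x) = 0.5 * x^T A x is mu-strongly
# convex and L-smooth, and the unique minimizer is x* = 0.
rng = np.random.default_rng(0)
n = 20
eigs = np.linspace(1.0, 10.0, n)            # mu = 1, L = 10 (assumed)
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q @ np.diag(eigs) @ Q.T                  # spectrum exactly `eigs`

mu, L = eigs[0], eigs[-1]
step = 1.0 / L                               # fixed step size 1/L

x = rng.standard_normal(n)
x0_norm = np.linalg.norm(x)
for k in range(50):
    grad = A @ x                             # gradient of 0.5 * x^T A x
    x = x - step * grad

# Linear-rate check: ||x_k|| should be bounded by (1 - mu/L)^k * ||x_0||.
print("observed ratio:", np.linalg.norm(x) / x0_norm)
print("bound (1 - mu/L)^50:", (1.0 - mu / L) ** 50)
```

On this quadratic the iteration is $x_{k+1} = (I - A/L)\,x_k$, so the printed observed ratio should sit at or below the printed bound, which is the linear rate the note refers to.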