
Optimization for Machine Learning

Design of accelerated first-order optimization algorithms

First-order optimization algorithms are widely used in machine learning problems such as classification and object recognition, and many methods have been developed to accelerate them on large-scale problems. Yet the success of these accelerated gradient algorithms remains somewhat mysterious. We seek to build a systematic understanding of these numerical methods in order to develop a family of fundamental algorithms for use in a variety of applications.

Zhang, Jingzhao, et al. “Direct Runge-Kutta Discretization Achieves Acceleration.” arXiv preprint arXiv:1805.00521 (2018).
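As a concrete point of reference, the sketch below contrasts plain gradient descent with Nesterov's accelerated gradient method on a convex quadratic. It is a minimal illustration of the kind of accelerated first-order scheme studied here, not the Runge-Kutta discretization from the paper above; the objective, step size, and iteration count are illustrative assumptions.

```python
import numpy as np

# Minimal sketch (not the method from the paper above): Nesterov's accelerated
# gradient vs. plain gradient descent on a convex quadratic f(x) = 0.5 x^T A x.
# The matrix A, step size, and iteration count are illustrative assumptions.

rng = np.random.default_rng(0)
Q = rng.standard_normal((50, 50))
A = Q.T @ Q + np.eye(50)          # positive definite Hessian
L = np.linalg.eigvalsh(A).max()   # smoothness constant (largest eigenvalue)

def f(x):
    return 0.5 * x @ A @ x

def grad(x):
    return A @ x

x0 = rng.standard_normal(50)

# Plain gradient descent: x_{k+1} = x_k - (1/L) * grad f(x_k)
x = x0.copy()
for k in range(200):
    x = x - grad(x) / L
print("gradient descent   f(x) =", f(x))

# Nesterov acceleration: gradient step taken at an extrapolated point y_k
x, x_prev = x0.copy(), x0.copy()
for k in range(200):
    y = x + k / (k + 3) * (x - x_prev)   # momentum / extrapolation term
    x_prev = x
    x = y - grad(y) / L
print("accelerated method f(x) =", f(x))
```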

Understanding the optimization landscape of deep neural networks

We are studying the theory of deep neural networks in search of explanations for the practical success of deep learning, especially from an optimization point of view. Current projects include investigating the global and local optimality of neural networks' nonconvex loss surfaces, the behavior of stochastic gradient descent (SGD) in minimizing training losses, and the expressivity of neural networks.

  • Chulhee Yun, Suvrit Sra, and Ali Jadbabaie. Global optimality conditions for deep neural networks. The Sixth International Conference on Learning Representations (ICLR 2018). [arXiv] [paper]
  • Chulhee Yun, Suvrit Sra, and Ali Jadbabaie. Efficiently testing local optimality and escaping saddles for ReLU networks. Sep 2018. [arXiv]
  • Chulhee Yun, Suvrit Sra, and Ali Jadbabaie. Small nonlinearities in activation functions create bad local minima in neural networks. Feb 2018. [arXiv]
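
The sketch below is a minimal, self-contained illustration of SGD minimizing the nonconvex training loss of a one-hidden-layer ReLU network. The data, architecture, and hyperparameters are illustrative assumptions and are not taken from the papers above.

```python
import numpy as np

# Minimal illustration (not from the papers above): minibatch SGD on the
# nonconvex training loss of a tiny one-hidden-layer ReLU network.
# The data, architecture, and step size are illustrative assumptions.

rng = np.random.default_rng(0)
X = rng.standard_normal((256, 2))
y = np.sign(X[:, 0] * X[:, 1])            # a simple nonlinearly separable target

W1 = rng.standard_normal((2, 16)) * 0.5   # hidden-layer weights
w2 = rng.standard_normal(16) * 0.5        # output weights
lr, batch = 0.05, 32

for step in range(2000):
    idx = rng.choice(len(X), batch, replace=False)
    Xb, yb = X[idx], y[idx]
    H = np.maximum(Xb @ W1, 0.0)          # ReLU activations
    err = H @ w2 - yb                     # squared-loss residual
    # Backpropagate gradients of the minibatch loss 0.5 * mean(err^2)
    g_w2 = H.T @ err / batch
    g_H = np.outer(err, w2) * (H > 0)     # gradient through the ReLU
    g_W1 = Xb.T @ g_H / batch
    W1 -= lr * g_W1
    w2 -= lr * g_w2

H = np.maximum(X @ W1, 0.0)
print("final training loss:", 0.5 * np.mean((H @ w2 - y) ** 2))
```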


Escaping saddle points in nonconvex optimization

The problem of escaping saddle points in smooth nonconvex optimization has received a lot of attention recently. However, most work on this topic focuses on unconstrained problems and does not apply to nonconvex minimization under convex constraints, even when the constraint is as simple as an ellipsoid. In this research, we focus on escaping saddle points in smooth nonconvex optimization problems subject to convex constraints.

A. Mokhtari, A. Ozdaglar, and A. Jadbabaie. Escaping Saddle Points in Constrained Optimization, Advances in Neural Information Processing Systems (NIPS), 2018. [pdf]
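
For intuition, the sketch below runs a generic perturbed projected-gradient scheme on a nonconvex quadratic over the unit ball, where the origin is a saddle point. This is not the algorithm from the paper above; the objective, perturbation radius, and step size are illustrative assumptions.

```python
import numpy as np

# Generic sketch (not the algorithm from the paper): perturbed projected
# gradient descent for a nonconvex quadratic over the unit ball.
# The objective, perturbation radius, and step size are illustrative assumptions.

rng = np.random.default_rng(0)
H = np.diag([1.0, -0.5])          # indefinite Hessian: x = 0 is a saddle point

def grad(x):
    return H @ x

def project_ball(x, radius=1.0):  # Euclidean projection onto the convex constraint
    n = np.linalg.norm(x)
    return x if n <= radius else x * (radius / n)

x = np.zeros(2)                   # start exactly at the saddle point
for t in range(500):
    if np.linalg.norm(grad(x)) < 1e-6:
        # (Near-)stationary point: add a small random perturbation to escape
        x = project_ball(x + 1e-3 * rng.standard_normal(2))
    x = project_ball(x - 0.1 * grad(x))

print("iterate:", x)              # moves to the boundary along the negative-curvature direction
print("objective:", 0.5 * x @ H @ x)
```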
