Activity Review | The CUHK-Shenzhen Workshop on Optimization Theory and Applications Was Held from July 31 to August 2
The 2019 workshop on optimization theory and applications was successfully held from July 31 to August 2 on the campus of The Chinese University of Hong Kong, Shenzhen (CUHK-Shenzhen). The workshop was jointly organized by the Institute of Data and Decision Analytics (iDDA) at CUHK-Shenzhen, the Shenzhen Research Institute of Big Data (SRIBD), and the Mathematical Programming Branch of the Operations Research Society of China. Participants from academia, research institutes, and industry, both in China and abroad, engaged in academic discussions and exchanged research results.
The lectures and presentations were hosted by Professor Yin Zhang, Co-Dean of the Institute of Data and Decision Analytics at CUHK-Shenzhen, Presidential Chair Professor Shuzhong Zhang, and Vice President (Academic) Professor Tom Luo. Eighteen invited speakers shared their latest academic achievements.
The workshop officially began on the morning of July 31. Prof. Yin Zhang, Co-Dean of the Institute of Data and Decision Analytics at CUHK-Shenzhen, and Prof. Tom Luo, Vice President (Academic) of CUHK-Shenzhen, delivered the opening speeches. Prof. Yin Zhang extended a warm welcome to the guests and participants, noting that the workshop was much larger in scale than last year's, and expressed his hope that it would provide a beneficial platform for scholars in China and abroad to exchange academic ideas. Prof. Luo introduced the young university and pointed out that it will continue to expand toward its goal of becoming an international university.
Professor Yin Zhang, Co-Dean of the Institute of Data and Decision Analytics at CUHK-Shenzhen delivered a speech
Professor Tom Luo, Vice President (academic) of CUHK-Shenzhen delivered a speech
Presidential Chair Professor Shuzhong Zhang hosted the workshop
The first lecture on the first day of the workshop was given by Professor Wotao Yin from the University of California, Los Angeles. In his lecture "Building (Stochastic, Distributed, Learned) First-Order Methods from Monotone Operators", Professor Yin reviewed the foundations of monotone operators and used elementary Euclidean geometry to illustrate many useful results on the convergence of operator iterations and their parameter selection. After presenting a couple of basic yet powerful techniques in operator splitting, he suggested approaches to integrating classic operator-splitting methods with deep learning for better performance. Professor Ruoyu Sun from the University of Illinois at Urbana-Champaign then introduced recent advances in the mathematical theory of deep learning in his keynote lecture "Introduction to Mathematics of Deep Learning", discussed some representative results in approximation theory and optimization theory, and pointed out several outstanding open questions.
Professor Wotao Yin (University of California, Los Angeles)
Professor Ruoyu Sun (University of Illinois at Urbana-Champaign)
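To give a flavor of the operator-splitting iterations discussed in Professor Yin's lecture, the sketch below runs forward-backward splitting (the proximal-gradient method) on a small lasso problem. The problem data, step size, and regularization weight are illustrative choices, not material from the talk.

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of t*||.||_1, i.e. the resolvent of t times the
    # subdifferential of the l1 norm (a maximal monotone operator).
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def forward_backward(A, b, lam, step, iters=500):
    # Forward-backward splitting for min 0.5*||Ax - b||^2 + lam*||x||_1:
    # a forward (gradient) step on the smooth term, then a backward
    # (proximal) step on the nonsmooth term.
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)                          # forward step
        x = soft_threshold(x - step * grad, step * lam)   # backward step
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
x_true = np.array([1.0, 0.0, -2.0, 0.0, 0.5])
b = A @ x_true
step = 1.0 / np.linalg.norm(A, 2) ** 2  # step below 2/L guarantees convergence
x_hat = forward_backward(A, b, lam=0.1, step=step)
print(np.round(x_hat, 2))  # approximately recovers the sparse x_true
```

The step-size condition illustrates the kind of parameter-selection result that the lecture derived geometrically from monotone-operator theory.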
In the afternoon, the first special session centered on the history and latest progress of the Alternating Direction Method of Multipliers (ADMM). First, Professor Roland Glowinski from the University of Houston delivered a keynote speech entitled "On Alternating Direction Methods of Multipliers: A Historical Perspective". He provided historical facts concerning the origins of ADMM and related algorithms in the framework of Hilbert spaces, showed the relationship between ADMM and some classical operator-splitting methods, and presented numerical experiments applying ADMM to the solution of the Weber problem and of a non-convex problem from nonlinear elastodynamics. Then, in his keynote speech "ADMM: Recent Developments", Professor Caihua Chen from Nanjing University introduced the two-block ADMM and its convergence rate, ADMM and its variants for multi-block convex minimization problems, and the existing ADMM-type methods for non-convex optimization.
Professor Roland Glowinski (University of Houston)
Professor Caihua Chen (Nanjing University)
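As a concrete illustration of the two-block ADMM discussed in this session, here is a minimal sketch applying it to the lasso problem in scaled dual form. The splitting, penalty parameter rho, and test data are illustrative assumptions, not taken from the talks.

```python
import numpy as np

def admm_lasso(A, b, lam, rho=1.0, iters=300):
    # Two-block ADMM for  min 0.5*||Ax - b||^2 + lam*||z||_1  s.t.  x - z = 0,
    # with scaled dual variable u.
    m, n = A.shape
    x = np.zeros(n)
    z = np.zeros(n)
    u = np.zeros(n)
    # The x-update solves the linear system (A^T A + rho I) x = A^T b + rho(z - u).
    AtA_rhoI = A.T @ A + rho * np.eye(n)
    Atb = A.T @ b
    for _ in range(iters):
        x = np.linalg.solve(AtA_rhoI, Atb + rho * (z - u))  # x-minimization
        v = x + u
        z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)  # soft threshold
        u = u + x - z                                            # dual update
    return z

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 8))
x_true = np.zeros(8)
x_true[[0, 3]] = [2.0, -1.0]
b = A @ x_true
x_hat = admm_lasso(A, b, lam=0.1)
print(np.round(x_hat, 2))  # sparse estimate close to x_true
```

Each subproblem is solved exactly here; the multi-block and non-convex variants surveyed by Professor Chen relax precisely this convenient two-block structure.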
After a short tea break came the second session, on machine learning and optimization. Professor Hongyuan Zha of CUHK-Shenzhen delivered a keynote speech on meta-learning and generative models with applications. He discussed the ideas and notions of meta-learning and how they relate to alternative ways of solving problems in scientific computing and optimization. He also gave a brief presentation on learning generative models, both likelihood-based and likelihood-free, and especially the role played by Stein's method and related ideas in learning those models. In his report "Large Scale Linear Programming Decoding via the Alternating Direction Method of Multipliers", Stark Draper, a professor at the University of Toronto, applied ADMM to solve the linear programming (LP) relaxation of maximum likelihood decoding for error-correction codes in an efficient and parallelizable manner. He also elaborated on a fixed-point implementation of the decoder on a Field-Programmable Gate Array (FPGA).
Professor Hongyuan Zha (CUHK-Shenzhen)
Professor Stark Draper (University of Toronto)
The next day, Professor Yaohua Hu from Shenzhen University gave the first lecture of the third session. In his report "Group Sparse Optimization in Systems Biology", he showed that, by exploiting the special structure of the regulatory networks involved, two systems biology problems—inferring gene regulatory networks from gene expression data and identifying key factors for cell fate conversion—can be formulated as group sparse optimization problems, and he expounded their theoretical guarantees and numerical performance. Professor Jianwei Huang of CUHK-Shenzhen presented an academic report entitled “Economics of Mobile Crowd Sensing”. He described the design of an effective reward system to induce high-quality contributions by users in mobile crowd sensing, considering the diversity of data contributions and the social relationships among users.
Professor Yaohua Hu (Shenzhen University)
Professor Jianwei Huang (CUHK-Shenzhen)
Radu Ioan Bot, a professor at the University of Vienna, proposed a primal-dual dynamical approach to the minimization of a structured convex function and introduced a corresponding dynamical system in his report “A Primal-Dual Dynamical Approach to Structured Convex Minimization Problems”. He also provided rates both for the violation of the feasibility condition by the ergodic trajectories and for the convergence of the objective function along these ergodic trajectories to its minimal value.
Bo Jiang, a professor at Shanghai University of Finance and Economics, delivered a speech called “An Optimal High-Order Tensor Method for Convex Optimization”. He proposed a high-order tensor algorithm suitable for the general composite case, whose iteration complexity matches the lower bound for d-th order tensor methods and hence is optimal.
Professor Radu Ioan Bot (University of Vienna)
Professor Bo Jiang (Shanghai University of Finance and Economics)
Around the topic “Some Recent Advances on Completely Positive Optimization”, Jinyan Fan, a professor from Shanghai Jiao Tong University, overviewed semidefinite relaxation methods for the completely positive (CP) matrix decomposition problem, the CP completion problem, and the CP approximation problem.
In his report “Recent Advances in Non-Convex Distributed Optimization and Learning”, Professor Mingyi Hong from the University of Minnesota presented a lower bound analysis that identifies difficult problem instances for any first-order method, together with a rate-optimal distributed method whose rate matches the lower bound (up to a polylog factor). In addition, he provided a novel gradient correction mechanism to overcome the gap in a popular algorithm called signSGD.
Professor Jinyan Fan (Shanghai Jiao Tong University)
Professor Mingyi Hong (University of Minnesota)
In his talk “Newton-Type Stochastic Optimization Algorithms for Machine Learning”, Professor Zaiwen Wen from Peking University introduced stochastic semismooth quasi-Newton methods for large-scale deep learning problems whose objective functions involve smooth nonconvex and nonsmooth convex terms, as well as a stochastic trust region method for reinforcement learning.
Professor Xin Liu of the Chinese Academy of Sciences delivered a report named “A Class of Smooth Exact Penalty Function Methods for Optimization Problems with Orthogonality Constraints”. He proposed an exact penalty function model with compact convex constraints (PenC) to solve optimization problems with orthogonality constraints, and introduced a first-order algorithm called PenCF and a second-order approach called PenCS.
Professor Zaiwen Wen (Peking University)
Professor Xin Liu (Chinese Academy of Sciences)
On the last day of the workshop, Professor Yangyang Xu from Rensselaer Polytechnic Institute presented his recent work on both convex and nonconvex problems with nonlinear functional constraints in the report “First-Order Methods for Convex and Nonconvex Functional Constrained Problems”.
In a report called “Optimal Nonergodic Sublinear Convergence Rate of Proximal Point Algorithm for Maximal Monotone Inclusion Problems”, Professor Junfeng Yang from Nanjing University established the worst-case nonergodic sublinear convergence rate of the proximal point algorithm, which is optimal in terms of both the order and the constants involved.
Professor Yangyang Xu (Rensselaer Polytechnic Institute)
Professor Junfeng Yang (Nanjing University)
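The proximal point algorithm analyzed in Professor Yang's talk iterates the resolvent x_{k+1} = (I + λT)^{-1} x_k of a maximal monotone operator T. Below is a minimal sketch using an illustrative skew-symmetric (hence monotone, but not strongly monotone) linear operator, chosen because plain forward steps x - λTx diverge on it while the backward resolvent step contracts.

```python
import numpy as np

# Monotone operator T x = M x with M skew-symmetric (x^T M x = 0 for all x),
# whose unique zero is the origin.
M = np.array([[0.0, 1.0], [-1.0, 0.0]])
lam = 1.0
# Resolvent (I + lam*M)^{-1}; its eigenvalues have modulus 1/sqrt(2) < 1,
# so the proximal point iteration contracts toward the zero of T.
resolvent = np.linalg.inv(np.eye(2) + lam * M)

x = np.array([1.0, 1.0])
for _ in range(50):
    x = resolvent @ x  # proximal point step
print(x)  # approaches the zero of T at the origin
```

For this operator, the explicit iteration x - lam*M x multiplies the norm by sqrt(2) per step, so the example also shows why inclusion problems are handled with backward rather than forward steps.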
Professor Chuangyin Dang of City University of Hong Kong made a report on “Refinements of Nash Equilibrium and Their Computation”. He briefly introduced these refinements of Nash equilibrium and presented his recent developments of smooth path-following approaches to computing them.
Later, Professor Yishuai Niu from Shanghai Jiao Tong University introduced a DC (Difference-of-Convex) programming approach to solving the higher-order moment portfolio selection problem. He also proposed an improved DC algorithm, called Boosted-DCA (BDCA), to accelerate the convergence of DCA.
Professor Chuangyin Dang (City University of Hong Kong)
Professor Yishuai Niu (on the left) (Shanghai Jiao Tong University)
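To illustrate the DCA iteration underlying Professor Niu's approach, the sketch below applies plain DCA to a toy one-dimensional DC function f(x) = x^4 - x^2; the decomposition, toy function, and starting point are illustrative assumptions, while the talk concerned the much richer higher-order moment portfolio model.

```python
def dca(x0, iters=100):
    # DCA for f(x) = g(x) - h(x) with g(x) = x**4 and h(x) = x**2, both convex.
    # Each step linearizes h at the current point x_k and solves the convex
    # subproblem  x_{k+1} = argmin_x g(x) - h'(x_k) * x,  i.e.  4 x^3 = 2 x_k.
    x = x0
    for _ in range(iters):
        grad_h = 2.0 * x
        # Closed-form solution of the convex subproblem (sign-aware cube root).
        x = (abs(grad_h) / 4.0) ** (1.0 / 3.0) * (1.0 if grad_h >= 0 else -1.0)
    return x

x_star = dca(1.0)
print(x_star)  # converges to a stationary point of x^4 - x^2
```

Starting from x0 = 1.0, the iterates decrease monotonically in f and settle at the stationary point 1/sqrt(2); BDCA-style acceleration adds a line search along the difference of successive DCA iterates.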
With in-depth academic discussions and exchanges, the 2019 CUHK-Shenzhen Workshop on Optimization Theory and Applications came to a successful conclusion.