Please use this identifier to cite or link to this item: http://localhost/handle/Hannan/231262
Title: Learning a Coupled Linearized Method in Online Setting
Authors: Wei Xue; Wensheng Zhang
Year: 2017
Publisher: IEEE
Abstract: Based on the alternating direction method of multipliers (ADMM), in this paper we propose, analyze, and test a coupled linearized method for minimizing an unconstrained problem consisting of a loss term and a regularization term in an online setting. To solve this problem, we first transform it into an equivalent constrained minimization problem with a separable structure. We then split the corresponding augmented Lagrangian function and minimize the resulting subproblems in a distributed fashion, updating one variable while fixing the other. The method is easy to execute: it requires no matrix inversion, performs three linearized operations per iteration, and yields a closed-form solution at each iteration. In particular, our update rule contains the well-known soft-thresholding operator as a special case. Moreover, an upper bound on the regret of the proposed method is analyzed. Under some mild conditions, the method achieves an O(1/√T) convergence rate for convex learning problems and O((log T)/T) for strongly convex learning. Numerical experiments and comparisons with several state-of-the-art methods are reported, demonstrating the efficiency and effectiveness of our approach.
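For reference, the soft-thresholding operator named in the abstract is the standard proximal map of the l1 norm, which gives a closed-form subproblem solution when the regularizer is the l1 penalty. The minimal NumPy sketch below illustrates this standard operator; it is not the authors' code, and the threshold value lam is an arbitrary example parameter.

    import numpy as np

    def soft_threshold(v, lam):
        # Proximal operator of lam * ||.||_1:
        # S_lam(v)_i = sign(v_i) * max(|v_i| - lam, 0).
        # Entries with magnitude below lam are zeroed out;
        # the rest shrink toward zero by lam.
        return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

    v = np.array([1.5, -0.3, 0.8])
    print(soft_threshold(v, 0.5))  # approximately [1.0, 0.0, 0.3]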
URI: http://localhost/handle/Hannan/231262
Volume: 28
Issue: 2
More Information: 438-450
Appears in Collections: 2017

Files in This Item:
File          Size     Format
7390084.pdf   1.76 MB  Adobe PDF