Please use this identifier to cite or link to this item: http://localhost/handle/Hannan/214654
Title: Decentralized Sparse Multitask RLS Over Networks
Authors: Xuanyu Cao;K. J. Ray Liu
Year: 2017
Publisher: IEEE
Abstract: Distributed adaptive signal processing has attracted much attention in the recent decade owing to its effectiveness in many decentralized real-time applications in networked systems. Because many natural signals are highly sparse, with most entries equal to zero, several decentralized sparse adaptive algorithms have been proposed recently. Most of them focus on single-task estimation problems, in which all nodes receive data associated with the same unknown vector and collaborate to estimate it. However, many applications are inherently multitask oriented, and each node has its own unknown vector that differs from those of the other nodes. The resulting multitask estimation problem still benefits from collaboration among the nodes, as neighboring nodes usually share analogous properties and thus similar unknown vectors. In this paper, we study the distributed sparse multitask recursive least squares (RLS) problem over networks. We first propose a decentralized online alternating direction method of multipliers (ADMM) algorithm for the formulated RLS problem. The algorithm is simplified for easy implementation, with closed-form computations in each iteration and low storage requirements. Convergence analysis of the algorithm is presented. Moreover, to further reduce the complexity, we propose a decentralized online subgradient method with low computational overhead. We theoretically establish its mean square stability by providing upper bounds for the mean square deviation and the excess mean square error. A related distributed online proximal gradient method is presented, and an extension to clustered multitask networks is also provided. The effectiveness of the proposed algorithms is corroborated by numerical simulations, and an accuracy-complexity tradeoff between the proposed algorithms is highlighted.
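Illustrative note: the abstract refers to a decentralized online subgradient method with low computational overhead. The sketch below shows what one such per-node update could look like, assuming an instantaneous squared-error cost plus an l1 sparsity penalty and a quadratic coupling to neighbor estimates; the function name subgradient_step and the parameters mu, gamma, and rho are assumptions for illustration only, not the paper's exact algorithm.

import numpy as np

def subgradient_step(w, x, d, neighbor_ws, mu=0.01, gamma=1e-3, rho=0.1):
    # Hypothetical decentralized online subgradient update at one node.
    # w: current local estimate, x: regressor vector, d: scalar observation,
    # neighbor_ws: list of the neighbors' current estimates.
    err = d - x @ w                                       # instantaneous estimation error
    grad_ls = -err * x                                    # gradient of the squared-error term
    subgrad_l1 = gamma * np.sign(w)                       # subgradient of the l1 sparsity penalty
    grad_coop = rho * sum(w - wl for wl in neighbor_ws)   # pull toward neighboring estimates
    return w - mu * (grad_ls + subgrad_l1 + grad_coop)

In such a scheme, each node would apply one step per incoming data sample and exchange only its current estimate with its neighbors, which is what keeps the per-iteration computation and communication low relative to the ADMM-based algorithm.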
URI: http://localhost/handle/Hannan/214654
Volume: 65
Issue: 23
More Information: pp. 6217-6232
Appears in Collections:2017

Files in This Item:
File: 8027138.pdf    Size: 1.82 MB    Format: Adobe PDF