TY - JOUR
T1 - Distributed Sparse Optimization With Weakly Convex Regularizer
T2 - Consensus Promoting and Approximate Moreau Enhanced Penalties Towards Global Optimality
AU - Komuro, Kei
AU - Yukawa, Masahiro
AU - Cavalcante, Renato Luis Garrido
N1 - Publisher Copyright:
© 2015 IEEE.
PY - 2022
Y1 - 2022
N2 - We propose a promising framework for distributed sparse optimization based on weakly convex regularizers. More specifically, we pose two distributed optimization problems to recover sparse signals in networks. The first problem formulation relies on statistical properties of the signals, and it uses an approximate Moreau enhanced penalty. In contrast, the second formulation does not rely on any statistical assumptions, and it uses an additional consensus promoting penalty (CPP) that convexifies the cost function over the whole network. To solve both problems, we propose a distributed proximal debiasing-gradient (DPD) method, which uses the exact first-order proximal gradient algorithm. The DPD method features a pair of proximity operators that play complementary roles: one sparsifies the estimate, and the other reduces the bias caused by the sparsification. Owing to the overall convexity of the cost functions, the proposed method guarantees convergence to a global minimizer, as also demonstrated by numerical examples. In addition, the use of CPP improves the convergence speed significantly.
AB - We propose a promising framework for distributed sparse optimization based on weakly convex regularizers. More specifically, we pose two distributed optimization problems to recover sparse signals in networks. The first problem formulation relies on statistical properties of the signals, and it uses an approximate Moreau enhanced penalty. In contrast, the second formulation does not rely on any statistical assumptions, and it uses an additional consensus promoting penalty (CPP) that convexifies the cost function over the whole network. To solve both problems, we propose a distributed proximal debiasing-gradient (DPD) method, which uses the exact first-order proximal gradient algorithm. The DPD method features a pair of proximity operators that play complementary roles: one sparsifies the estimate, and the other reduces the bias caused by the sparsification. Owing to the overall convexity of the cost functions, the proposed method guarantees convergence to a global minimizer, as also demonstrated by numerical examples. In addition, the use of CPP improves the convergence speed significantly.
KW - Distributed optimization
KW - sparse optimization
KW - weakly convex regularizer
UR - http://www.scopus.com/inward/record.url?scp=85132723966&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85132723966&partnerID=8YFLogxK
U2 - 10.1109/TSIPN.2022.3181729
DO - 10.1109/TSIPN.2022.3181729
M3 - Article
AN - SCOPUS:85132723966
SN - 2373-776X
VL - 8
SP - 514
EP - 527
JO - IEEE Transactions on Signal and Information Processing over Networks
JF - IEEE Transactions on Signal and Information Processing over Networks
ER -