TY - GEN
T1 - Distributed Sparse Optimization with Minimax Concave Regularization
AU - Komuro, Kei
AU - Yukawa, Masahiro
AU - Cavalcante, Renato L.G.
N1 - Funding Information:
This work was supported by JST SICORP Grant Number JPMJSC20C6, Japan. This work was partially supported by the Federal Ministry of Education and Research (BMBF) of the Federal Republic of Germany as part of the KICK project (16KIS1103). The authors alone are responsible for the content of the paper.
Publisher Copyright:
© 2021 IEEE.
PY - 2021/7/11
Y1 - 2021/7/11
AB - We study the use of weakly convex minimax concave (MC) regularizers in distributed sparse optimization. The global cost function is the squared error penalized by the MC regularizer. While the global cost is convex as long as the overall system is overdetermined and the regularization parameter is sufficiently small, the local cost of each node is usually nonconvex because the systems obtained from local measurements are underdetermined in practical applications. The Moreau decomposition is applied to the MC regularizer so that the total cost takes the form of a smooth function plus the rescaled ℓ1 norm. We propose two solvers: the first applies the proximal gradient exact first-order algorithm (PG-EXTRA) directly to our cost, while the second is based on a convex relaxation of the local costs to ensure convergence. Numerical examples show that the proposed approaches attain significant gains over the ℓ1-based PG-EXTRA.
UR - http://www.scopus.com/inward/record.url?scp=85113434865&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85113434865&partnerID=8YFLogxK
U2 - 10.1109/SSP49050.2021.9513764
DO - 10.1109/SSP49050.2021.9513764
M3 - Conference contribution
AN - SCOPUS:85113434865
T3 - IEEE Workshop on Statistical Signal Processing Proceedings
SP - 31
EP - 35
BT - 2021 IEEE Statistical Signal Processing Workshop, SSP 2021
PB - IEEE Computer Society
T2 - 21st IEEE Statistical Signal Processing Workshop, SSP 2021
Y2 - 11 July 2021 through 14 July 2021
ER -