Asymptotics and optimal designs of SLOPE for sparse linear regression

Citation:

H. Hong and Y. M. Lu, “Asymptotics and optimal designs of SLOPE for sparse linear regression,” in Proc. International Symposium on Information Theory (ISIT), 2019.

Abstract:

In sparse linear regression, the SLOPE estimator generalizes LASSO by assigning magnitude-dependent regularizations to different coordinates of the estimate. In this paper, we present an asymptotically exact characterization of the performance of SLOPE in the high-dimensional regime where the number of unknown parameters grows in proportion to the number of observations. Our asymptotic characterization enables us to derive optimal regularization sequences to either minimize the MSE or to maximize the power in variable selection under any given level of Type-I error. In both cases, we show that the optimal design can be recast as certain infinite-dimensional convex optimization problems, which have efficient and accurate finite-dimensional approximations. Numerical simulations verify our asymptotic predictions. They also demonstrate the superiority of our optimal design over LASSO and a regularization sequence previously proposed in the literature.
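As a brief illustrative sketch (not taken from the paper), the "magnitude-dependent regularization" in SLOPE is the sorted-ℓ1 penalty: the i-th largest coordinate magnitude is weighted by the i-th entry of a nonincreasing sequence λ. The function name below is hypothetical:

```python
import numpy as np

def slope_penalty(beta, lam):
    """Sorted-l1 (SLOPE) penalty: sum_i lam_i * |beta|_(i),
    where |beta|_(1) >= |beta|_(2) >= ... are the sorted magnitudes
    of beta and lam is a nonincreasing regularization sequence."""
    mags = np.sort(np.abs(beta))[::-1]  # magnitudes in decreasing order
    return float(np.dot(lam, mags))

# When lam is constant, this reduces to the LASSO penalty lam * ||beta||_1.
```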

Longer version with full technical details

Last updated on 07/07/2019