TY - JOUR
T1 - A model-averaging approach for high-dimensional regression
AU - Ando, Tomohiro
AU - Li, Ker-Chau
N1 - Funding Information:
Tomohiro Ando is Associate Professor, Graduate School of Business Administration, Keio University, Kanagawa, Japan (E-mail: andoh@kbs.keio.ac.jp). Ker-Chau Li is Professor, Institute of Statistical Science, Academia Sinica, Taipei, Taiwan, and Department of Statistics, UCLA, Los Angeles, CA 90095 (E-mail: kcli@stat.sinica.edu.tw). The authors thank the co-editor, the associate editor, and two anonymous reviewers for constructive and helpful comments that improved the quality of the article considerably. We are also grateful to them for providing several references. The research of TA is partially supported by the Inamori Foundation, Japan. The work of KCL is supported in part by NSF grant DMS-0707160 and by internal funding of Academia Sinica.
PY - 2014
Y1 - 2014
AB - This article considers high-dimensional regression problems in which the number of predictors p exceeds the sample size n. We develop a model-averaging procedure for such problems. Unlike most variable-selection studies, which focus on identifying the true predictors, our focus here is on prediction accuracy for the true conditional mean of y given the p predictors. Our method consists of two steps. The first step constructs a class of regression models, each with a smaller number of regressors, to avoid the degeneracy of the information matrix. The second step finds suitable model weights for averaging. To minimize the prediction error, we estimate the model weights using a delete-one cross-validation procedure. Departing from the model-averaging literature, in which the weights are required to sum to one, an important improvement we introduce is the removal of this constraint. We derive theoretical results to justify our procedure, proving a theorem that shows delete-one cross-validation asymptotically achieves the lowest possible prediction loss. This optimality result requires a condition that reveals an important feature of high-dimensional regression: the prediction error of every individual model in the averaging class must exceed the classic root-n rate of traditional parametric regression. This condition reflects the difficulty of high-dimensional regression and depicts a situation that is especially meaningful when p > n. We also conduct a simulation study to illustrate the merits of the proposed approach over several existing methods, including the lasso, the group lasso, forward regression, the PC-simple algorithm, Akaike information criterion (AIC) model averaging, Bayesian information criterion (BIC) model averaging, and the smoothly clipped absolute deviation (SCAD) penalty. Our approach uses quadratic programming to overcome the computing-time issue commonly encountered in the cross-validation literature. Supplementary materials for this article are available online.
KW - Asymptotic optimality
KW - High-dimensional regression models
KW - Model weights
UR - http://www.scopus.com/inward/record.url?scp=84901754835&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84901754835&partnerID=8YFLogxK
U2 - 10.1080/01621459.2013.838168
DO - 10.1080/01621459.2013.838168
M3 - Article
AN - SCOPUS:84901754835
SN - 0162-1459
VL - 109
SP - 254
EP - 265
JO - Journal of the American Statistical Association
JF - Journal of the American Statistical Association
IS - 505
ER -