Log-regularly varying scale mixture of normals for robust regression

Yasuyuki Hamura, Kaoru Irie, Shonosuke Sugasawa

Research output: Contribution to journal › Article › peer-review

5 Citations (Scopus)


Linear regression under the assumption of normally distributed errors can yield undesirable posterior inference on the regression coefficients in the presence of outliers. This study considers a finite mixture of two components, one thin-tailed and one heavy-tailed, as the error distribution. For the heavy-tailed component, a novel class of distributions is introduced; their densities are log-regularly varying and have tails heavier than those of the Cauchy distribution, yet they admit a representation as a scale mixture of normals, which enables efficient posterior inference via a Gibbs sampler. Posterior robustness under the proposed models is proved using a minimal set of assumptions, which justifies the use of shrinkage priors with unbounded densities for the coefficient vector in the presence of outliers. An extensive simulation study comparing the proposed model with existing methods shows its improved performance in point and interval estimation, as well as its computational efficiency. The posterior robustness of the proposed method is further confirmed in an empirical study with shrinkage priors for the regression coefficients.
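The computational point in the abstract — that a heavy-tailed error distribution expressible as a scale mixture of normals admits a simple Gibbs sampler — can be illustrated with a minimal sketch. The sketch below is not the paper's model: it substitutes a Student-t error (a classical scale mixture of normals, with tails lighter than the log-regularly varying class proposed here) and fixes the error variance for brevity. The alternating conditional updates — a weighted normal draw for the coefficients, independent inverse-gamma draws for the local scale variables — follow the same pattern that makes such models tractable.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data with a few gross outliers.
n, p = 100, 2
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([1.0, 2.0])
y = X @ beta_true + rng.normal(scale=0.5, size=n)
y[:5] += 20.0  # contaminate the first five responses

nu = 3.0        # t degrees of freedom (stand-in heavy-tailed component)
sigma2 = 0.25   # error variance, held fixed for brevity
tau2 = 100.0    # N(0, tau2 * I) prior on the coefficient vector

lam = np.ones(n)  # local scale-mixing variables: eps_i | lam_i ~ N(0, sigma2 * lam_i)
draws = []
for it in range(2000):
    # beta | lam, y: conjugate normal update (weighted ridge regression)
    W = 1.0 / (sigma2 * lam)
    V = np.linalg.inv(X.T @ (X * W[:, None]) + np.eye(p) / tau2)
    m = V @ (X.T @ (W * y))
    beta = rng.multivariate_normal(m, V)

    # lam_i | beta, y: inverse-gamma, because the t error is a
    # scale mixture of normals with IG(nu/2, nu/2) mixing density
    resid2 = (y - X @ beta) ** 2
    shape = (nu + 1.0) / 2.0
    rate = (nu + resid2 / sigma2) / 2.0
    lam = 1.0 / rng.gamma(shape, 1.0 / rate)

    if it >= 500:  # discard burn-in
        draws.append(beta)

beta_hat = np.mean(draws, axis=0)
```

Outlying observations receive large draws of `lam_i` and are automatically downweighted in the coefficient update, so `beta_hat` stays close to the true coefficients despite the contamination; heavier-tailed mixing densities, as in the proposed class, strengthen this effect.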

Original language: English
Article number: 107517
Journal: Computational Statistics and Data Analysis
Publication status: Published - Sept 2022
Externally published: Yes


Keywords

  • Gibbs sampler
  • Heavily-tailed distribution
  • Linear regression
  • Log-regularly varying density
  • Robust statistics
  • Scale mixture of normals

ASJC Scopus subject areas

  • Statistics and Probability
  • Computational Mathematics
  • Computational Theory and Mathematics
  • Applied Mathematics


