The reciprocal Bayesian LASSO. Academic Article

Overview

abstract

  • Reciprocal LASSO (rLASSO) regularization employs a decreasing penalty function, as opposed to conventional penalization approaches that use increasing penalties on the coefficients, leading to stronger parsimony and superior model selection relative to traditional shrinkage methods. Here we consider a fully Bayesian formulation of the rLASSO problem, which is based on the observation that the rLASSO estimate for linear regression parameters can be interpreted as a Bayesian posterior mode estimate when the regression parameters are assigned independent inverse Laplace priors. Bayesian inference from this posterior is possible using an expanded hierarchy motivated by a scale mixture of double Pareto or truncated normal distributions. On simulated and real datasets, we show that the Bayesian formulation outperforms its classical cousin in estimation, prediction, and variable selection across a wide range of scenarios while offering the advantage of posterior inference. Finally, we discuss other variants of this new approach and provide a unified framework for variable selection using flexible reciprocal penalties. All methods described in this article are publicly available as an R package at: https://github.com/himelmallick/BayesRecipe.
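  • To illustrate the key idea of a decreasing penalty, the sketch below contrasts the rLASSO objective, least-squares loss plus a penalty of the form λ Σ 1/|β_j| over the nonzero coefficients, with the ordinary LASSO's λ Σ |β_j|. This is a minimal illustrative implementation in Python, not code from the BayesRecipe package; the function name and interface are invented for this example.

    ```python
    import numpy as np

    def rlasso_objective(y, X, beta, lam):
        """Least-squares loss plus the reciprocal LASSO penalty.

        Unlike the ordinary LASSO penalty lam * sum(|beta_j|), the rLASSO
        penalty lam * sum(1 / |beta_j|) DECREASES as a coefficient moves
        away from zero, so small nonzero coefficients are penalized
        heavily while large ones are nearly unpenalized. By convention
        the penalty is applied only to nonzero coefficients (a zero
        coefficient contributes nothing rather than an infinite term).
        Illustrative sketch only; not the BayesRecipe implementation.
        """
        nonzero = beta[beta != 0]
        penalty = lam * np.sum(1.0 / np.abs(nonzero))
        rss = np.sum((y - X @ beta) ** 2)
        return rss + penalty
    ```

    Because the penalty shrinks as |β_j| grows, rLASSO either pushes a coefficient to exactly zero or leaves a large estimate essentially untouched, which is the source of the stronger parsimony noted above.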

publication date

  • June 14, 2021

Research

keywords

  • Bayes Theorem

Identity

Scopus Document Identifier

  • 85107811670

Digital Object Identifier (DOI)

  • 10.1002/sim.9098

PubMed ID

  • 34126655

Additional Document Info

volume

  • 40

issue

  • 22