The primary goal of GGMncv is to provide non-convex penalties for estimating Gaussian graphical models (GGMs). These penalties are known to overcome various limitations of the lasso (least absolute shrinkage and selection operator), including inconsistent model selection (Zhao and Yu 2006), biased estimates (Zhang 2010), and a high false positive rate (see, for example, Williams and Rast 2020; Williams et al. 2019).

Several of the penalties are (continuous) approximations to the \(\ell_0\) penalty, that is, best subset selection. However, obtaining the solution does not require enumerating all possible models, which makes estimation computationally efficient.
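As a rough usage sketch, a model can be fit with the default penalty as follows. This assumes the package's main fitting function is ggmncv(), taking a correlation matrix, the sample size, and a penalty string; see ?ggmncv for the exact interface, and note that the element names on the fitted object (e.g., P and adj) are assumptions that should be checked with str(fit).

    library(GGMncv)

    # simulated data standing in for a real data set
    set.seed(1)
    Y <- MASS::mvrnorm(n = 250, mu = rep(0, 10), Sigma = diag(10))

    # fit a GGM with the default (atan) penalty
    # (argument names assumed; see ?ggmncv)
    fit <- ggmncv(R = cor(Y), n = nrow(Y), penalty = "atan")

    # estimated partial correlations and adjacency matrix
    # (element names assumed; inspect with str(fit))
    fit$P
    fit$adj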

L0 Approximations

  • Atan: penalty = "atan" (Wang and Zhu 2016). This is currently the default.

  • Seamless \(\ell_0\): penalty = "selo" (Dicker et al. 2013).

  • Exponential: penalty = "exp" (Wang et al. 2018).

  • Log: penalty = "log" (Mazumder et al. 2011).

  • SICA: penalty = "sica" (Lv and Fan 2009).

Additional penalties (a usage sketch comparing the options follows this list):

  • SCAD: penalty = "scad" (Fan and Li 2001).

  • MCP: penalty = "mcp" (Zhang 2010).

  • Adaptive lasso: penalty = "adapt" (Zou 2006).

  • Lasso: penalty = "lasso" (Tibshirani 1996).
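Any of the penalty strings above is passed to the penalty argument. The sketch below continues the example from the first section; it again assumes the ggmncv() interface and an adjacency element named adj on the fitted object, and simply compares how many edges each penalty selects.

    # compare sparsity across several penalties
    # (Y is the simulated data from the earlier sketch)
    penalties <- c("atan", "selo", "exp", "log", "sica", "scad", "mcp", "lasso")

    fits <- lapply(penalties, function(pen) {
      ggmncv(R = cor(Y), n = nrow(Y), penalty = pen)
    })

    # number of selected edges per penalty, assuming each fit
    # contains a binary adjacency matrix named 'adj'
    edges <- sapply(fits, function(f) sum(f$adj[upper.tri(f$adj)] != 0))
    data.frame(penalty = penalties, edges = edges)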

Citing GGMncv

Note that GGMncv merely provides a software implementation of other researchers' work: there are no methodological innovations, although this is the most comprehensive R package for estimating GGMs with non-convex penalties. Hence, in addition to citing the package with citation("GGMncv"), it is important to give credit to the primary sources. The references are provided above and in the documentation for ggmncv.
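For example, the package citation can be printed from within R using the base function citation():

    citation("GGMncv")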

Further, a survey (or review) of these penalties can be found in Williams (2020).

References

Dicker L, Huang B, Lin X (2013). “Variable selection and estimation with the seamless-L0 penalty.” Statistica Sinica, 929--962.

Fan J, Li R (2001). “Variable selection via nonconcave penalized likelihood and its oracle properties.” Journal of the American Statistical Association, 96(456), 1348--1360.

Lv J, Fan Y (2009). “A unified approach to model selection and sparse recovery using regularized least squares.” The Annals of Statistics, 37(6A), 3498--3528.

Mazumder R, Friedman JH, Hastie T (2011). “SparseNet: Coordinate descent with nonconvex penalties.” Journal of the American Statistical Association, 106(495), 1125--1138.

Tibshirani R (1996). “Regression shrinkage and selection via the lasso.” Journal of the Royal Statistical Society: Series B (Methodological), 58(1), 267--288.

Wang Y, Fan Q, Zhu L (2018). “Variable selection and estimation using a continuous approximation to the L0 penalty.” Annals of the Institute of Statistical Mathematics, 70(1), 191--214.

Wang Y, Zhu L (2016). “Variable selection and parameter estimation with the Atan regularization method.” Journal of Probability and Statistics.

Williams DR (2020). “Beyond Lasso: A Survey of Nonconvex Regularization in Gaussian Graphical Models.” PsyArXiv.

Williams DR, Rast P (2020). “Back to the basics: Rethinking partial correlation network methodology.” British Journal of Mathematical and Statistical Psychology, 73(2), 187--212.

Williams DR, Rhemtulla M, Wysocki AC, Rast P (2019). “On nonregularized estimation of psychological networks.” Multivariate Behavioral Research, 54(5), 719--750.

Zhang C (2010). “Nearly unbiased variable selection under minimax concave penalty.” The Annals of Statistics, 38(2), 894--942.

Zhao P, Yu B (2006). “On model selection consistency of Lasso.” Journal of Machine Learning Research, 7(Nov), 2541--2563.

Zou H (2006). “The adaptive lasso and its oracle properties.” Journal of the American Statistical Association, 101(476), 1418--1429.