Compute p-values for each relation based on the de-sparsified glasso estimator (Jankova and Van De Geer 2015).
```r
inference(object, method = "fdr", alpha = 0.05, ...)

significance_test(object, method = "fdr", alpha = 0.05, ...)
```
| Argument | Description |
|---|---|
| `object` | An object of class `ggmncv`. |
| `method` | Character string. A correction method for multiple comparisons (defaults to `"fdr"`). |
| `alpha` | Numeric. Significance level (defaults to `0.05`). |
| `...` | Currently ignored. |
A list containing:

- `Theta`: De-sparsified precision matrix.
- `adj`: Adjacency matrix based on the p-values.
- `pval_uncorrected`: Uncorrected p-values.
- `pval_corrected`: Corrected p-values.
- `method`: The approach used for multiple comparisons.
- `alpha`: Significance level.
This assumes (reasonably) Gaussian data, and should not be expected to work for, say, polychoric correlations. Further, all work to date has only looked at the graphical lasso estimator, not de-sparsified estimates from nonconvex regularization. Accordingly, it is probably best to set `penalty = "lasso"` in `ggmncv`.
Further, whether the de-sparsified estimator provides nominal error rates remains to be seen, at least across a range of conditions. For example, the simulation results in Williams (2021) demonstrated that the confidence intervals can have (severely) compromised coverage properties (whereas non-regularized methods had coverage at the nominal level).
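Given these caveats, a more conservative correction than the default may be preferred. The sketch below assumes that `method` accepts the correction names used by `stats::p.adjust` (e.g., `"bonferroni"`, `"holm"`); only `"fdr"` is confirmed above:

```r
library(GGMncv)

Y <- GGMncv::ptsd[, 1:5]
fit <- ggmncv(cor(Y), n = nrow(Y),
              penalty = "lasso",
              progress = FALSE)

# "bonferroni" is assumed to be a valid correction method here;
# a stricter alpha further guards against false positives
res_bonf <- inference(fit, method = "bonferroni", alpha = 0.01)
res_bonf$adj
```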
Jankova J, Van De Geer S (2015). "Confidence intervals for high-dimensional inverse covariance estimation." *Electronic Journal of Statistics*, 9(1), 1205--1229.

Williams DR (2021). "The Confidence Interval that Wasn't: Bootstrapped 'Confidence Intervals' in L1-Regularized Partial Correlation Networks." *PsyArXiv*. doi: 10.31234/osf.io/kjh2f
```r
# data
Y <- GGMncv::ptsd[, 1:5]

# fit model
fit <- ggmncv(cor(Y), n = nrow(Y),
              progress = FALSE,
              penalty = "lasso")

# statistical inference
inference(fit)
#> Statistical Inference
#> fdr: 0.05
#> ---
#>
#>   1 2 3 4 5
#> 1 0 1 0 1 1
#> 2 1 0 1 0 0
#> 3 0 1 0 1 1
#> 4 1 0 1 0 1
#> 5 1 0 1 1 0

# alias
all.equal(inference(fit), significance_test(fit))
#> [1] TRUE
```