Compute nodewise predictability or Bayesian variance explained (R2; Gelman et al. 2019). In the context of GGMs, this method was described in Williams (2019).
Usage
predictability(
object,
select = FALSE,
cred = 0.95,
BF_cut = 3,
iter = NULL,
progress = TRUE,
...
)
Arguments
- object: an object of class estimate or explore.
- select: logical. Should the graph be selected? The default is currently FALSE.
- cred: numeric. Credible interval between 0 and 1 (default is 0.95) used for selecting the graph.
- BF_cut: numeric. Evidentiary threshold (default is 3).
- iter: integer. Number of iterations (posterior samples) used for computing R2.
- progress: logical. Should a progress bar be included (defaults to TRUE)?
- ...: currently ignored.
Value
An object of classes bayes_R2 and metric, including:
scores: a list containing the posterior samples of R2, with one element for each node.
Note
Binary and Ordinal Data:
R2 is computed from the latent data.
Mixed Data:
The mixed data approach is somewhat ad hoc (see for example p. 277 in Hoff 2007). This is because uncertainty in the ranks is not incorporated, which means that variance explained is computed from the 'empirical' CDF.
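As a rough illustration of the 'empirical' CDF idea (this is not the package's internal code, and the helper name below is hypothetical), observed values can be mapped to latent normal scores through the ranks, without propagating rank uncertainty:
# Hedged sketch: latent normal scores from the empirical CDF of the ranks
to_latent <- function(y) {
  n <- length(y)
  qnorm(rank(y, ties.method = "average") / (n + 1))
}
# Example: latent scores for an ordinal variable
to_latent(c(1, 1, 2, 3, 3, 4))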
Model Selection:
Currently the default is to include all nodes in the model when computing R2. This can be changed with select = TRUE, which sets the edges not detected to zero. This is accomplished by subsetting the correlation matrix according to each neighborhood of relations.
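For intuition, here is a minimal sketch of nodewise variance explained computed from a correlation matrix, applied per posterior sample. This is not the package's internal implementation, and the helper functions are hypothetical:
# R2 for each node given all other nodes: for standardized variables the
# residual variance of node i is 1 / [solve(R)]_ii and the total variance is 1.
node_r2 <- function(R) {
  1 - 1 / diag(solve(R))
}
# With select = TRUE, only the detected neighborhood of node i enters:
node_r2_selected <- function(R, adj, i) {
  nb <- which(adj[i, ] == 1)        # detected neighbors of node i
  if (length(nb) == 0) return(0)    # no detected relations: R2 of zero
  R_sub <- R[c(i, nb), c(i, nb)]    # subset to the neighborhood
  1 - 1 / solve(R_sub)[1, 1]
}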
Examples
# \donttest{
# data
Y <- ptsd[,1:5]
fit <- estimate(Y, iter = 250, progress = FALSE)
r2 <- predictability(fit, select = TRUE,
iter = 250, progress = FALSE)
# summary
r2
#> BGGM: Bayesian Gaussian Graphical Models
#> ---
#> Metric: Bayes R2
#> Type: continuous
#> ---
#> Estimates:
#>
#> Node Post.mean Post.sd Cred.lb Cred.ub
#> B1 0.444 0.047 0.354 0.539
#> B2 0.496 0.047 0.420 0.595
#> B3 0.559 0.049 0.469 0.653
#> B4 0.502 0.050 0.421 0.605
#> B5 0.457 0.046 0.373 0.560
# }
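The posterior samples of R2 can also be summarized by hand. This is a hedged sketch that assumes, per the Value section above, that the samples are stored in a scores list with one element per node:
# posterior mean and sd of R2 for each node
post_mean <- sapply(r2$scores, mean)
post_sd   <- sapply(r2$scores, sd)
round(cbind(post_mean, post_sd), 3)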