Confirmatory hypothesis testing for comparing GGMs. Hypotheses are expressed as equality
and/or inequality constraints on the partial correlations of interest. Here the focus is not
on determining the graph (see explore) but on testing specific hypotheses related to
the conditional (in)dependence structure. These methods were introduced in
Williams and Mulder (2019)
and in Williams et al. (2020).
Usage
ggm_compare_confirm(
...,
hypothesis,
formula = NULL,
type = "continuous",
mixed_type = NULL,
prior_sd = 0.5,
iter = 25000,
impute = TRUE,
progress = TRUE,
seed = NULL
)
Arguments
- ...
At least two matrices (or data frames) of dimensions n (observations) by p (nodes).
- hypothesis
Character string. The hypothesis (or hypotheses) to be tested. See notes for further details.
- formula
An object of class formula. This allows for including control variables in the model (e.g., ~ gender).
- type
Character string. Which type of data for Y? The options include continuous, binary, ordinal, or mixed. Note that mixed can be used for data with only ordinal variables. See the note for further details.
- mixed_type
Numeric vector. An indicator of length p for which variables should be treated as ranks (1 for rank and 0 to assume normality). The default is currently (dev version) to treat all integer variables as ranks when type = "mixed" and NULL otherwise. See note for further details.
- prior_sd
Numeric. The scale of the prior distribution (centered at zero), in reference to a beta distribution (defaults to 0.5).
- iter
Number of iterations (posterior samples; defaults to 25,000).
- impute
Logical. Should the missing values (NA) be imputed during model fitting (defaults to TRUE)?
- progress
Logical. Should a progress bar be included (defaults to TRUE)?
- seed
An integer for the random seed.
Value
The returned object of class confirm contains a lot of information that
is used for printing and plotting the results. For users of BGGM, the following
are the useful objects:
- out_hyp_prob
Posterior hypothesis probabilities.
- info
An object of class BF from the R package BFpack (Mulder et al. 2019).
Details
The hypotheses can be written with either the respective column names or numbers.
For example, g1_1--2
denotes the relation between the variables in columns 1 and 2 for group 1.
The g1_ prefix
is required and is the only difference from confirm
(one group).
Note that these must correspond to the upper-triangular elements of the correlation
matrix. This is accomplished by ensuring that the first number is smaller than the second
(e.g., 1--2 rather than 2--1).
This also applies when using column names (i.e., in reference to the column number).
One Hypothesis:
To test whether a relation is larger in one group, while both are expected to be positive, this can be written as
hyp <- c("g1_1--2 > g2_1--2 > 0")
This is then compared to the complement.
More Than One Hypothesis:
The above hypothesis can also be compared to, say, a null model by using ";" to separate the hypotheses, for example,
hyp <- c("g1_1--2 > g2_1--2 > 0; g1_1--2 = g2_1--2 = 0").
Any number of hypotheses can be compared this way.
Using "&"
It is also possible to include &. This allows for testing one constraint and
another constraint as one hypothesis.
hyp <- c("g1_A1--A2 > g2_A1--A2 & g1_A1--A3 = g2_A1--A3")
Of course, it is then possible to include additional hypotheses by separating them with ";".
Testing Sums
It might also be interesting to test the sum of partial correlations. For example, that the sum of specific relations in one group is larger than the sum in another group.
hyp <- c("g1_A1--A2 + g1_A1--A3 > g2_A1--A2 + g2_A1--A3; g1_A1--A2 + g1_A1--A3 = g2_A1--A2 + g2_A1--A3")
Potential Delays:
There is a chance of a potentially long delay from the time the progress bar finishes
to when the function is done running. This occurs when the hypotheses require further
sampling to be tested, for example, when grouping relations:
c("(g1_A1--A2, g2_A2--A3) > (g2_A1--A2, g2_A2--A3)").
This is not an error.
Controlling for Variables:
When controlling for variables, it is assumed that Y includes only
the nodes in the GGM and the control variables. Internally, only the predictors
that are included in formula are removed from Y. This is not the behavior of, say,
lm, but was adopted to ensure users do not have to write out each variable that
should be included in the GGM. An example is provided below.
Mixed Type:
The term "mixed" is somewhat of a misnomer, because the method can be used for data including only continuous or only discrete variables (Hoff 2007) . This is based on the ranked likelihood which requires sampling the ranks for each variable (i.e., the data is not merely transformed to ranks). This is computationally expensive when there are many levels. For example, with continuous data, there are as many ranks as data points!
The option mixed_type
allows the user to determine which variables should be treated as ranks,
and the "empirical" distribution is used otherwise. This is accomplished by specifying an indicator
vector of length p. A one indicates to use the ranks, whereas a zero indicates to "ignore"
that variable. By default, all integer variables are handled as ranks.
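As an illustrative sketch (the node count and the rank/normal split here are hypothetical, not taken from the examples below), such an indicator vector could be built as:

```r
# hypothetical: five nodes; treat columns 1-3 as ranks (1),
# use the "empirical" distribution for columns 4-5 (0)
mixed_type <- c(1, 1, 1, 0, 0)

# this would then be passed alongside type = "mixed", e.g.,
# ggm_compare_confirm(Y1, Y2, hypothesis = hyp,
#                     type = "mixed", mixed_type = mixed_type)
```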
Dealing with Errors:
An error is most likely to arise when type = "ordinal". There are two common errors (although still rare):
The first is due to sampling the thresholds, especially when the data is heavily skewed. This can result in an ill-defined matrix. If this occurs, we recommend first decreasing
prior_sd
(i.e., a more informative prior). If that does not work, then change the data type to type = "mixed",
which then estimates a copula GGM (this method can be used for data containing only ordinal variables). This should work without a problem.
The second is due to how the ordinal data are categorized. For example, if the error states that the index is out of bounds, this indicates that the first category is a zero. This is not allowed, as the first category must be one. This is addressed by adding one (e.g.,
Y + 1
) to the data matrix.
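As a minimal sketch with simulated data (not from the package), the fix for zero-based categories is simply:

```r
# simulated ordinal data; force a zero so the problem is present
Y <- matrix(sample(0:3, 50, replace = TRUE), nrow = 10, ncol = 5)
Y[1, 1] <- 0
# shift all categories up by one so the first category is one
Y <- Y + 1
min(Y) >= 1  # TRUE: categories now start at one or above
```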
Imputing Missing Values:
Missing values are imputed with the approach described in Hoff (2009).
The basic idea is to impute the missing values with the respective posterior predictive distribution,
given the observed data, as the model is being estimated. Note that the default is TRUE,
but this is ignored when there are no missing values. If set to FALSE, and there are missing
values, list-wise deletion is performed with na.omit.
Note
"Default" Prior:
In Bayesian statistics, a default Bayes factor needs to have several properties. I refer
interested users to section 2.2 in Dablander et al. (2020). In
Williams and Mulder (2019),
some of these properties were investigated (e.g.,
model selection consistency). That said, we would not consider this a "default" or "automatic"
Bayes factor, and thus we encourage users to perform sensitivity analyses by varying the scale of
the prior distribution (prior_sd).
Furthermore, it is important to note there is no "correct" prior and, also, there is no need to entertain the possibility of a "true" model. Rather, the Bayes factor can be interpreted as which hypothesis best (relative to each other) predicts the observed data (Section 3.2 in Kass and Raftery 1995) .
Interpretation of Conditional (In)dependence Models for Latent Data:
See BGGM-package
for details about interpreting GGMs based on latent data
(i.e., all data types besides "continuous").
References
Dablander F, van den Bergh D, Ly A, Wagenmakers E (2020).
“Default Bayes Factors for Testing the (In)equality of Several Population Variances.”
arXiv preprint arXiv:2003.06278.
Hoff PD (2007).
“Extending the rank likelihood for semiparametric copula estimation.”
The Annals of Applied Statistics, 1(1), 265–283.
doi:10.1214/07-AOAS107
.
Hoff PD (2009).
A first course in Bayesian statistical methods, volume 580.
Springer.
Kass RE, Raftery AE (1995).
“Bayes Factors.”
Journal of the American Statistical Association, 90(430), 773–795.
Mulder J, Gu X, Olsson-Collentine A, Tomarken A, Böing-Messing F, Hoijtink H, Meijerink M, Williams DR, Menke J, Fox J, others (2019).
“BFpack: Flexible Bayes Factor Testing of Scientific Theories in R.”
arXiv preprint arXiv:1911.07728.
Williams DR, Mulder J (2019).
“Bayesian Hypothesis Testing for Gaussian Graphical Models: Conditional Independence and Order Constraints.”
PsyArXiv.
doi:10.31234/osf.io/ypxd8
.
Williams DR, Rast P, Pericchi LR, Mulder J (2020).
“Comparing Gaussian graphical models with the posterior predictive distribution and Bayesian model selection.”
Psychological Methods.
doi:10.1037/met0000254
.
Examples
# \donttest{
# note: iter = 250 for demonstrative purposes
# data
Y <- bfi
###############################
#### example 1: continuous ####
###############################
# males
Ymale <- subset(Y, gender == 1,
select = -c(education,
gender))[,1:5]
# females
Yfemale <- subset(Y, gender == 2,
select = -c(education,
gender))[,1:5]
# exhaustive
hypothesis <- c("g1_A1--A2 > g2_A1--A2;
g1_A1--A2 < g2_A1--A2;
g1_A1--A2 = g2_A1--A2")
# test hyp
test <- ggm_compare_confirm(Ymale, Yfemale,
hypothesis = hypothesis,
iter = 250,
progress = FALSE)
# print (evidence not strong)
test
#> BGGM: Bayesian Gaussian Graphical Models
#> Type: continuous
#> ---
#> Posterior Samples: 250
#> Group 1: 896
#> Group 2: 1813
#> Variables (p): 5
#> Relations: 10
#> Delta: 3
#> ---
#> Call:
#> ggm_compare_confirm(Ymale, Yfemale, hypothesis = hypothesis,
#> iter = 250, progress = FALSE)
#> ---
#> Hypotheses:
#>
#> H1: g1_A1--A2>g2_A1--A2
#> H2: g1_A1--A2<g2_A1--A2
#> H3: g1_A1--A2=g2_A1--A2
#> ---
#> Posterior prob:
#>
#> p(H1|data) = 0.131
#> p(H2|data) = 0.017
#> p(H3|data) = 0.852
#> ---
#> Bayes factor matrix:
#> H1 H2 H3
#> H1 1.000 7.488 0.154
#> H2 0.134 1.000 0.021
#> H3 6.507 48.728 1.000
#> ---
#> note: equal hypothesis prior probabilities
#########################################
#### example 2: sensitivity to prior ####
#########################################
# continued from example 1
# decrease prior SD
test <- ggm_compare_confirm(Ymale,
Yfemale,
prior_sd = 0.1,
hypothesis = hypothesis,
iter = 250,
progress = FALSE)
# print
test
#> BGGM: Bayesian Gaussian Graphical Models
#> Type: continuous
#> ---
#> Posterior Samples: 250
#> Group 1: 896
#> Group 2: 1813
#> Variables (p): 5
#> Relations: 10
#> Delta: 99
#> ---
#> Call:
#> ggm_compare_confirm(Ymale, Yfemale, hypothesis = hypothesis,
#> prior_sd = 0.1, iter = 250, progress = FALSE)
#> ---
#> Hypotheses:
#>
#> H1: g1_A1--A2>g2_A1--A2
#> H2: g1_A1--A2<g2_A1--A2
#> H3: g1_A1--A2=g2_A1--A2
#> ---
#> Posterior prob:
#>
#> p(H1|data) = 0.396
#> p(H2|data) = 0.056
#> p(H3|data) = 0.548
#> ---
#> Bayes factor matrix:
#> H1 H2 H3
#> H1 1.000 7.090 0.723
#> H2 0.141 1.000 0.102
#> H3 1.383 9.808 1.000
#> ---
#> note: equal hypothesis prior probabilities
# increase prior SD
test <- ggm_compare_confirm(Ymale,
Yfemale,
prior_sd = 0.28,
hypothesis = hypothesis,
iter = 250,
progress = FALSE)
# print
test
#> BGGM: Bayesian Gaussian Graphical Models
#> Type: continuous
#> ---
#> Posterior Samples: 250
#> Group 1: 896
#> Group 2: 1813
#> Variables (p): 5
#> Relations: 10
#> Delta: 11.7551
#> ---
#> Call:
#> ggm_compare_confirm(Ymale, Yfemale, hypothesis = hypothesis,
#> prior_sd = 0.28, iter = 250, progress = FALSE)
#> ---
#> Hypotheses:
#>
#> H1: g1_A1--A2>g2_A1--A2
#> H2: g1_A1--A2<g2_A1--A2
#> H3: g1_A1--A2=g2_A1--A2
#> ---
#> Posterior prob:
#>
#> p(H1|data) = 0.207
#> p(H2|data) = 0.037
#> p(H3|data) = 0.756
#> ---
#> Bayes factor matrix:
#> H1 H2 H3
#> H1 1.000 5.577 0.273
#> H2 0.179 1.000 0.049
#> H3 3.661 20.417 1.000
#> ---
#> note: equal hypothesis prior probabilities
################################
#### example 3: mixed data #####
################################
hypothesis <- c("g1_A1--A2 > g2_A1--A2;
g1_A1--A2 < g2_A1--A2;
g1_A1--A2 = g2_A1--A2")
# test (iter = 250 for demonstrative purposes)
test <- ggm_compare_confirm(Ymale,
Yfemale,
type = "mixed",
hypothesis = hypothesis,
iter = 250,
progress = FALSE)
#> Warning: imputation during model fitting is
#> currently only implemented for 'continuous' data.
# print
test
#> BGGM: Bayesian Gaussian Graphical Models
#> Type: mixed
#> ---
#> Posterior Samples: 250
#> Group 1: 896
#> Group 2: 1813
#> Variables (p): 5
#> Relations: 10
#> Delta: 3
#> ---
#> Call:
#> ggm_compare_confirm(Ymale, Yfemale, hypothesis = hypothesis,
#> type = "mixed", iter = 250, progress = FALSE)
#> ---
#> Hypotheses:
#>
#> H1: g1_A1--A2>g2_A1--A2
#> H2: g1_A1--A2<g2_A1--A2
#> H3: g1_A1--A2=g2_A1--A2
#> ---
#> Posterior prob:
#>
#> p(H1|data) = 0.027
#> p(H2|data) = 0.091
#> p(H3|data) = 0.882
#> ---
#> Bayes factor matrix:
#> H1 H2 H3
#> H1 1.000 0.294 0.030
#> H2 3.400 1.000 0.103
#> H3 32.928 9.685 1.000
#> ---
#> note: equal hypothesis prior probabilities
##############################
##### example 4: control #####
##############################
# control for education
# data
Y <- bfi
# males
Ymale <- subset(Y, gender == 1,
select = -c(gender))[,c(1:5, 26)]
# females
Yfemale <- subset(Y, gender == 2,
select = -c(gender))[,c(1:5, 26)]
# test
test <- ggm_compare_confirm(Ymale,
Yfemale,
formula = ~ education,
hypothesis = hypothesis,
iter = 250,
progress = FALSE)
# print
test
#> BGGM: Bayesian Gaussian Graphical Models
#> Type: continuous
#> ---
#> Posterior Samples: 250
#> Group 1: 817
#> Group 2: 1676
#> Variables (p): 5
#> Relations: 10
#> Delta: 3
#> ---
#> Call:
#> ggm_compare_confirm(Ymale, Yfemale, hypothesis = hypothesis,
#> formula = ~education, iter = 250, progress = FALSE)
#> ---
#> Hypotheses:
#>
#> H1: g1_A1--A2>g2_A1--A2
#> H2: g1_A1--A2<g2_A1--A2
#> H3: g1_A1--A2=g2_A1--A2
#> ---
#> Posterior prob:
#>
#> p(H1|data) = 0.024
#> p(H2|data) = 0.085
#> p(H3|data) = 0.891
#> ---
#> Bayes factor matrix:
#> H1 H2 H3
#> H1 1.000 0.283 0.027
#> H2 3.530 1.000 0.095
#> H3 37.196 10.536 1.000
#> ---
#> note: equal hypothesis prior probabilities
#####################################
##### example 5: many relations #####
#####################################
# data
Y <- bfi
hypothesis <- c("g1_A1--A2 > g2_A1--A2 & g1_A1--A3 = g2_A1--A3;
g1_A1--A2 = g2_A1--A2 & g1_A1--A3 = g2_A1--A3;
g1_A1--A2 = g2_A1--A2 = g1_A1--A3 = g2_A1--A3")
Ymale <- subset(Y, gender == 1,
select = -c(education,
gender))[,1:5]
# females
Yfemale <- subset(Y, gender == 2,
select = -c(education,
gender))[,1:5]
test <- ggm_compare_confirm(Ymale,
Yfemale,
hypothesis = hypothesis,
iter = 250,
progress = FALSE)
# print
test
#> BGGM: Bayesian Gaussian Graphical Models
#> Type: continuous
#> ---
#> Posterior Samples: 250
#> Group 1: 896
#> Group 2: 1813
#> Variables (p): 5
#> Relations: 10
#> Delta: 3
#> ---
#> Call:
#> ggm_compare_confirm(Ymale, Yfemale, hypothesis = hypothesis,
#> iter = 250, progress = FALSE)
#> ---
#> Hypotheses:
#>
#> H1: g1_A1--A2>g2_A1--A2&g1_A1--A3=g2_A1--A3
#> H2: g1_A1--A2=g2_A1--A2&g1_A1--A3=g2_A1--A3
#> H3: g1_A1--A2=g2_A1--A2=g1_A1--A3=g2_A1--A3
#> H4: complement
#> ---
#> Posterior prob:
#>
#> p(H1|data) = 0.101
#> p(H2|data) = 0.889
#> p(H3|data) = 0.007
#> p(H4|data) = 0.003
#> ---
#> Bayes factor matrix:
#> H1 H2 H3 H4
#> H1 1.000 0.114 14.522 37.799
#> H2 8.798 1.000 127.772 332.566
#> H3 0.069 0.008 1.000 2.603
#> H4 0.026 0.003 0.384 1.000
#> ---
#> note: equal hypothesis prior probabilities
# }