Estimate VAR(1) models by efficiently sampling from the posterior distribution. This provides two graphical structures: (1) a network of undirected relations (the GGM, controlling for the lagged predictors) and (2) a network of directed relations (the lagged coefficients). Note that in the graphical modeling literature, this model is also known as a time series chain graphical model (Abegaz and Wit 2013).

Usage

var_estimate(
  Y,
  rho_sd = sqrt(1/3),
  beta_sd = 1,
  iter = 5000,
  progress = TRUE,
  seed = NULL,
  ...
)

Arguments

Y

Matrix (or data frame) of dimensions n (observations) by p (variables).

rho_sd

Numeric. Scale of the prior distribution for the partial correlations, approximately the standard deviation of a beta distribution (defaults to sqrt(1/3), which results in delta = 2 and a uniform distribution across the partial correlations).

beta_sd

Numeric. Standard deviation of the prior distribution for the regression coefficients (defaults to 1). The prior is centered at zero by default and follows a normal distribution (Equation 9, Sinay and Hsu 2014). A short sketch after this argument list shows setting both prior scales explicitly.

iter

Number of iterations (posterior samples; defaults to 5000).

progress

Logical. Should a progress bar be included (defaults to TRUE)?

seed

An integer for the random seed (defaults to NULL).

...

Currently ignored.
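
As a minimal sketch (reusing the ifit data from the Examples section), the prior scales and number of posterior samples below are set to their documented defaults, and the seed is an arbitrary integer chosen for reproducibility:

# sketch: explicit prior scales, iterations, and seed
Y <- subset(ifit, id == 1)[,-1]

fit <- var_estimate(Y,
                    rho_sd = sqrt(1/3),   # prior scale for the partial correlations
                    beta_sd = 1,          # prior SD for the regression coefficients
                    iter = 5000,          # number of posterior samples
                    seed = 1,             # arbitrary integer, for reproducibility
                    progress = FALSE)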

Value

An object of class var_estimate containing the information used for printing and plotting the results. For users of BGGM, the most useful components are the following (a short sketch of accessing them follows this list):

  • beta_mu A matrix of the regression coefficients (posterior mean).

  • pcor_mu Partial correlation matrix (posterior mean).

  • fit A list including the posterior samples.
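
A minimal sketch of extracting these components from a fitted object, assuming standard list-style access with $ (fit is the object created in the Examples section):

# posterior means of the lagged (directed) coefficients
beta_hat <- fit$beta_mu

# posterior means of the partial correlations (undirected network)
pcor_hat <- fit$pcor_mu

# posterior samples (a list), e.g., for computing custom summaries
draws <- fit$fit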

Details

Each time series in Y is standardized (mean = 0; standard deviation = 1).
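
For intuition only (this is not the package's internal code), the standardization amounts to applying base R's scale() to each column:

# each series centered to mean 0 and scaled to standard deviation 1
Y_std <- scale(subset(ifit, id == 1)[,-1])

round(colMeans(Y_std), 10)   # approximately 0 for every column
apply(Y_std, 2, sd)          # 1 for every column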

Note

Regularization:

A Bayesian ridge regression can be fitted by decreasing beta_sd (e.g., beta_sd = 0.25). This could be advantageous for forecasting (out-of-sample prediction) in particular.
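
A minimal sketch of such a ridge-style fit, reusing the data from the Examples section:

# tighter prior on the coefficients (Bayesian ridge-style shrinkage)
Y <- subset(ifit, id == 1)[,-1]

fit_ridge <- var_estimate(Y, beta_sd = 0.25, progress = FALSE)

# shrunken lagged coefficients (posterior means)
fit_ridge$beta_mu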

References

Abegaz F, Wit E (2013). “Sparse time series chain graphical models for reconstructing genetic networks.” Biostatistics, 14(3), 586–599. doi:10.1093/biostatistics/kxt005.

Sinay MS, Hsu JS (2014). “Bayesian inference of a multivariate regression model.” Journal of Probability and Statistics, 2014.

Examples

# \donttest{
# data
Y <- subset(ifit, id == 1)[,-1]

# fit the VAR(1) model
fit <- var_estimate(Y, progress = FALSE)

fit
#> BGGM: Bayesian Gaussian Graphical Models 
#> --- 
#> Vector Autoregressive Model (VAR) 
#> --- 
#> Posterior Samples: 5000 
#> Observations (n): 94 
#> Nodes (p): 7 
#> --- 
#> Call: 
#> var_estimate(Y = Y, progress = FALSE)
#> --- 
#> Partial Correlations: 
#> 
#>               interested disinterested excited  upset strong stressed  steps
#> interested         0.000        -0.171   0.377 -0.207  0.331    0.270  0.067
#> disinterested     -0.171         0.000  -0.181 -0.036  0.099    0.155 -0.090
#> excited            0.377        -0.181   0.000 -0.135  0.492   -0.165 -0.005
#> upset             -0.207        -0.036  -0.135  0.000  0.122    0.349 -0.048
#> strong             0.331         0.099   0.492  0.122  0.000   -0.009  0.182
#> stressed           0.270         0.155  -0.165  0.349 -0.009    0.000 -0.011
#> steps              0.067        -0.090  -0.005 -0.048  0.182   -0.011  0.000
#> --- 
#> Coefficients: 
#> 
#>                  interested disinterested excited  upset strong stressed  steps
#> interested.l1         0.225        -0.017   0.181 -0.100  0.176    0.013  0.112
#> disinterested.l1     -0.049        -0.001   0.055 -0.019  0.048    0.091 -0.023
#> excited.l1           -0.081        -0.181   0.003  0.050 -0.082    0.088  0.100
#> upset.l1             -0.154         0.256  -0.095  0.430  0.058    0.315 -0.089
#> strong.l1             0.026         0.175   0.027  0.053  0.184   -0.067 -0.183
#> stressed.l1          -0.021        -0.012  -0.032 -0.044 -0.076    0.151  0.130
#> steps.l1             -0.153         0.181  -0.207  0.150 -0.091    0.203  0.042
#> --- 
#> Date: Mon Dec  1 16:09:53 2025 

# }