There is a direct correspondence between the inverse covariance matrix and multiple regression (Stephens 1998; Kwan 2014). This allows the off-diagonal elements to be converted into regression coefficients, opening the door to out-of-sample prediction with multiple regression.
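The correspondence can be verified directly in base R. This is a minimal sketch, not GGMncv code: for a precision matrix `Theta`, the coefficients for regressing node *i* on the remaining nodes are `-Theta[i, j] / Theta[i, i]`, which match ordinary least squares on centered data.

```r
# Illustrate the precision-matrix / regression correspondence on toy data.
set.seed(1)
Y <- scale(matrix(rnorm(500 * 3), 500, 3))

# precision matrix (inverse of the sample covariance matrix)
Theta <- solve(cov(Y))

# regression coefficients for node 1, read off the precision matrix
beta_prec <- -Theta[1, -1] / Theta[1, 1]

# the same coefficients from least squares (data are centered, so no intercept)
beta_lm <- coef(lm(Y[, 1] ~ Y[, -1] - 1))

all.equal(unname(beta_prec), unname(beta_lm))
```

The two coefficient vectors agree up to numerical error, since the block-inverse formula for `Theta` reproduces the normal equations.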
```r
# S3 method for ggmncv
predict(object, train_data = NULL, newdata = NULL, ...)
```
| Argument | Description |
|---|---|
| `object` | An object of class `ggmncv`. |
| `train_data` | Data used for model fitting (defaults to `NULL`). |
| `newdata` | An optional data frame in which to look for variables with which to predict. If omitted, the fitted values are used. |
| `...` | Currently ignored. |
A matrix of predicted values, with rows corresponding to the observations in the training/test data and columns to the nodes.
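The shape of this matrix follows from predicting each node from all the others. The helper below is a hypothetical sketch of that node-wise scheme (it is not the package's implementation, and `predict_from_theta` is an invented name): each column of the result is the prediction for one node, built from the precision-matrix coefficients described above.

```r
# Hypothetical sketch: node-wise predictions from a precision matrix Theta.
# For node i, yhat_i = newdata[, -i] %*% beta_i with beta_ij = -Theta[i, j] / Theta[i, i].
predict_from_theta <- function(Theta, newdata) {
  p <- ncol(Theta)
  sapply(seq_len(p), function(i) {
    beta <- -Theta[i, -i] / Theta[i, i]
    newdata[, -i, drop = FALSE] %*% beta
  })
}

# toy data: 200 observations of 4 nodes
set.seed(2)
Y <- scale(matrix(rnorm(200 * 4), 200, 4))
Theta <- solve(cov(Y))

P <- predict_from_theta(Theta, Y)
dim(P)  # 200 rows (observations) by 4 columns (nodes)
```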
Kwan CC (2014). "A regression-based interpretation of the inverse of the sample covariance matrix." Spreadsheets in Education, 7(1), 4613.

Stephens G (1998). "On the Inverse of the Covariance Matrix in Portfolio Analysis." The Journal of Finance, 53(5), 1821--1827.
```r
# data
Y <- scale(Sachs)

# test data
Ytest <- Y[1:100, ]

# training data
Ytrain <- Y[101:nrow(Y), ]

fit <- ggmncv(cor(Ytrain), n = nrow(Ytrain), progress = FALSE)

pred <- predict(fit, newdata = Ytest)

round(apply((pred - Ytest)^2, 2, mean), 2)
#>  Raf  Erk Plcg  PKC  PKA PIP2 PIP3  Mek  P38  Jnk  Akt
#> 0.18 0.27 0.59 0.42 0.39 0.47 0.69 0.16 0.15 0.69 0.26
```