Let the data speak! Let us purge a priori expectations.
Christopher Sims proposed the vector autoregression (VAR), a multivariate linear time series model in which each endogenous variable is a function of the lagged values of all endogenous variables in the system. This offers a simple and flexible alternative to the traditional structural system of equations: a VAR can model macroeconomic data informatively without imposing strong restrictions on the relationships among variables. Essentially, it is macroeconomic modeling with much less of our a priori expectations getting in the way.
VARs are especially useful in macroeconomics. They have been used widely to simulate the effects of macroeconomic shocks on the real economy and are workhorses in policy simulation and forecasting. This is the main thrust of the vector autoregression. First, it is a sophisticated forecasting tool. We will show how VARs can outperform standard univariate forecasting models, especially over the long run. The majority of empirical forecasting studies suggest that the VAR has eclipsed traditional univariate models and theory-based structural equation models.
Apart from forecasting, VARs are also useful tools for structural analysis. They can trace the response of the system to shocks and pinpoint sources of fluctuations that traditional univariate models cannot. Moreover, VARs can help distinguish between competing theoretical models.
As mentioned previously, the VAR is a multivariate linear time series model in which each endogenous variable is a function of the lagged values of all endogenous variables in the system. Said simply, the VAR generalizes the univariate autoregressive model to multiple series. We commonly write VAR(p), where p denotes the number of autoregressive lags in the system. Consider a VAR system with only two variables.
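The equations of the original are not reproduced in this text, so, for concreteness, a two-variable system with one lag, a VAR(1), can be written as:

```latex
\begin{aligned}
y_{1,t} &= c_1 + a_{11}\, y_{1,t-1} + a_{12}\, y_{2,t-1} + \varepsilon_{1,t} \\
y_{2,t} &= c_2 + a_{21}\, y_{1,t-1} + a_{22}\, y_{2,t-1} + \varepsilon_{2,t}
\end{aligned}
```

Each variable depends on its own lag, the lag of the other variable, and a white-noise error term.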
We can also write this in matrix form as
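For the two-variable, one-lag case, the matrix form is:

```latex
\begin{bmatrix} y_{1,t} \\ y_{2,t} \end{bmatrix}
=
\begin{bmatrix} c_1 \\ c_2 \end{bmatrix}
+
\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}
\begin{bmatrix} y_{1,t-1} \\ y_{2,t-1} \end{bmatrix}
+
\begin{bmatrix} \varepsilon_{1,t} \\ \varepsilon_{2,t} \end{bmatrix},
\qquad \text{or compactly,} \quad y_t = c + A_1 y_{t-1} + \varepsilon_t .
```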
A VAR is estimated equation by equation: we simply run OLS on each equation. The estimates are consistent because only lagged values of the endogenous variables appear on the right-hand side. They are also efficient: since every equation has the identical set of regressors, equation-by-equation OLS is equivalent to generalized least squares. This relies on the assumption that the residuals are serially uncorrelated.
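A minimal simulation sketch (not part of the original chapter, base R only) can illustrate the point: we simulate a bivariate VAR(1) with a known coefficient matrix and recover it by running one `lm()` regression per equation, each on the same lagged regressors.

```r
# Simulate a stationary bivariate VAR(1) with known coefficients,
# then estimate it equation by equation with OLS.
set.seed(1)
n <- 500
A <- matrix(c(0.5, 0.1,
              0.2, 0.4), nrow = 2, byrow = TRUE)  # true coefficient matrix
y <- matrix(0, n, 2)
for (t in 2:n) y[t, ] <- A %*% y[t - 1, ] + rnorm(2, sd = 0.5)

# Each equation is an OLS regression on the identical lagged regressors
ylag <- y[1:(n - 1), ]
eq1  <- lm(y[2:n, 1] ~ ylag)   # equation for y1
eq2  <- lm(y[2:n, 2] ~ ylag)   # equation for y2
Ahat <- rbind(coef(eq1)[-1], coef(eq2)[-1])  # drop intercepts
round(Ahat, 2)                 # should be close to the true A
```

Because both equations use exactly the same regressors, nothing is gained by estimating them jointly; this is why packages like vars fit each equation with `lm()` internally.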
We will now apply the numerous concepts learned in VAR in an actual example. In particular, we will be using a framework developed by Sims (1992) using Philippine data. In this model, we will be using four variables. These are the following:
Overnight Reverse Repurchase Rate (RRP) which is set by the Bangko Sentral ng Pilipinas. This is, by all accounts, the main policy rate that the Philippine central bank controls.
M1 Money Supply, which can be obtained from the BSP's website.
CPI Inflation Rate which is reported monthly by the Philippine Statistics Authority and measures the relative increase in prices based on a Laspeyres price index.
Industrial Production which measures the value of all goods in the industrial sector.
We will first estimate a standard VAR that reflects a key economic assumption. It is believed in theory, and by Sims, that shocks to the nominal interest rate represent monetary policy shocks. A shock to the policy variable affects all other variables contemporaneously, while the policy variable itself responds to none of them within the period, so it is ordered first. Equivalently, the central bank observes the non-policy variables only with a lag.
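This recursive timing assumption amounts to a Cholesky identification (a standard reading of the setup, stated here for reference rather than taken from the original): the reduced-form errors are linear combinations of orthogonal structural shocks,

```latex
\varepsilon_t = B\, u_t, \qquad \Sigma_\varepsilon = B B', \qquad \operatorname{E}[u_t u_t'] = I,
```

where $B$ is the lower-triangular Cholesky factor of the residual covariance matrix $\Sigma_\varepsilon$. With the policy rate ordered first, only its own structural shock moves it within the period, while every other variable can respond to it contemporaneously.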
We start again by installing the required packages and loading them using the library() command. For this part, we need the "vars" package, which has a host of commands necessary for running the VAR and SVAR, along with the diagnostic tests and applications to follow. We will then plot each variable and make an initial assessment of properties such as non-stationarity.
As said, we need to install the "vars" package. Use the install.packages("vars") command to do this. After this, we load the package together with our standard suite of packages and libraries to continue with the estimation.
After installation and loading, it is time to load our dataset. We will use the file Sample_VAR.csv, which contains monthly data on all the variables from January 2003 until February 2020; the data are publicly available from the BSP and PSA. We use the read_csv() command to read the dataset and the file.choose() command to open a dialogue box for selecting the file. In this case, we place the dataset in an object named "mp", though you may name it whatever you like. We then use the head() command to inspect the first few rows and check that the data loaded correctly. The files and code can be found here: https://drive.google.com/drive/u/0/folders/11mAXh0trxjuf1y_yh-AJXgxulHQwcBEn
mp <- read_csv(file.choose())
Next, we need to declare each variable in the dataset as a time series using the ts() command, using the $ symbol to call a variable from the dataset. All variables start in January 2003, so we set start = c(2003, 1), and we set the frequency to 12 since we are dealing with monthly data.
lnIP <- ts(mp$lnIP, start = c(2003, 1), frequency = 12)
M1 <- ts(mp$M1, start = c(2003, 1), frequency = 12)
CPI <- ts(mp$CPI, start = c(2003, 1), frequency = 12)
RRP <- ts(mp$RRP, start = c(2003, 1), frequency = 12)
We can also visualize our series using the autoplot() or ts_plot() command. As before, ts_plot() is a more interactive version built on the plotly package.
Again, it is important to assess whether the variables under study are stationary. As we have said, stationary variables are the ideal case for a VAR, even though we can estimate one without them. Let us use the tests we are already familiar with; for simplicity, we will use the Phillips-Perron test, which does not require us to specify the number of lags.
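As a sketch (using base R's PP.test; the original may instead use the tseries package's pp.test, which behaves similarly): the null hypothesis is a unit root, so a small p-value would indicate stationarity. The example below runs the test on a simulated random walk; in the chapter, apply the same call to RRP, M1, CPI, and lnIP.

```r
# Phillips-Perron unit root test via base R's stats::PP.test.
# Null hypothesis: the series has a unit root (non-stationary).
set.seed(123)
rw  <- ts(cumsum(rnorm(200)))  # a random walk is non-stationary by construction
out <- PP.test(rw)
out  # large p-value: we cannot reject the unit root
```

For the chapter's data, replace `rw` with each declared series, e.g. `PP.test(RRP)`.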
In the tests above, we find that all the variables are non-stationary. Remember that rejecting the null hypothesis of a unit root would suggest the data are stationary; here we fail to reject it. Nevertheless, we can still estimate a VAR on the level data. However, you may opt to difference the data and see whether that yields better results and forecasts.
The time has come to formally estimate our VAR. We will first need to bind our VAR variables together to create the system. After this, we will select the optimal lag order behind the VAR we will be using. We will then run an unrestricted VAR estimation and see the results. Lastly, we will run some diagnostics such as tests for autocorrelation, stability, and normality.
The first step is to build the VAR system. This is done through the cbind() command, which groups our time series into one matrix. We order the series as we see fit, store the result in an object called "v1", and then rename the columns using the colnames() command.
v1 <- cbind(RRP, M1, CPI, lnIP)
colnames(v1) <- c("RRP", "M1", "CPI", "lnIP")
After we have bound the variables into the VAR system, we determine the lag order to use. To do this, we apply the VARselect() command to the v1 object we just created, with a maximum lag order of 15. The command automatically reports the preferred lag order under the multivariate versions of the AIC, SBIC, HQIC, and the FPE.
lagselect <- VARselect(v1, lag.max = 15, type = "const")
lagselect$selection
Running the commands suggests a lag order of 2. In his study, Sims used 14 lags, which may be more apt for US data, where the effects show greater persistence.
We will now estimate the model using the VAR() command. The p argument refers to the number of lags used; since we determined that 2 lags is best, we set p = 2. We specify a typical unrestricted VAR with a constant and no seasonal dummies or exogenous variables. The summary() command lists the results.
Model1 <- VAR(v1, p = 2, type = "const", season = NULL, exogen = NULL)
summary(Model1)
We do not typically interpret the individual coefficients of a VAR; we interpret the results of its applications instead. You will see, however, that there is a coefficient for each lag of each variable, within each equation of the system.
One assumption is that the residuals should, as much as possible, be serially uncorrelated. This follows from our assumption that the residuals are white noise and thus uncorrelated across periods. To test this, we run the serial.test() command and store the results in an object called Serial1.
Serial1 <- serial.test(Model1, lags.pt = 5, type = "PT.asymptotic")
Serial1
In this test, we see that the residuals show no signs of autocorrelation. However, if we change the maximum lag order, autocorrelation could appear. As such, it is best to experiment with multiple lag orders.
Another aspect to consider is the presence of heteroscedasticity. In time series, there are what we call ARCH effects, which are essentially clusters of volatility in a series. This is common in series such as stock prices, where massive increases or decreases can be seen when an earnings call is released. In that window, there can be excessive volatility, changing the variance of the residuals and violating our assumption of constant variance. Conditional volatility models account for these effects; we discuss them in a later section.
Arch1 <- arch.test(Model1, lags.multi = 15, multivariate.only = TRUE)
Arch1
Again, the results of the ARCH test show no evidence of heteroscedasticity, as we fail to reject the null hypothesis. We therefore conclude that there are no ARCH effects in this model. However, as with the autocorrelation test, it is possible to register ARCH effects at other lag orders.
A soft prerequisite, but a desirable one, is normality of the residuals. To test for this, we use the normality.test() command in R, which reports the Jarque-Bera test, the kurtosis test, and the skewness test.
Norm1 <- normality.test(Model1, multivariate.only = TRUE)
Norm1
Based on all three results, it appears that the residuals of this particular model are not normally distributed.
The stability test checks for the presence of structural breaks. If we do not test for structural breaks and one happens to be present, the whole estimation may be thrown off. Fortunately, there is a simple test for this based on a plot of the cumulative sum of recursive residuals. If at any point the sum crosses the red critical bounds, a structural break is indicated at that point.
Stability1 <- stability(Model1, type = "OLS-CUSUM")
plot(Stability1)
Based on the results of the test, there seem to be no structural breaks. As such, our model passes this particular test.
We will now move on to the applications of a regular VAR. We will cover the three main ones: Granger causality tests, the forecast error variance decomposition, and impulse response functions.
We will test for overall Granger causality, testing each variable in the system against all the others. As we said, there could be unidirectional, bidirectional, or no causality between variables.
GrangerRRP <- causality(Model1, cause = "RRP")
GrangerRRP
GrangerM1 <- causality(Model1, cause = "M1")
GrangerM1
GrangerCPI <- causality(Model1, cause = "CPI")
GrangerCPI
GrangerlnIP <- causality(Model1, cause = "lnIP")
GrangerlnIP
To perform the Granger causality test, we use the causality() command. It is best suited to bivariate cases, but we will use it in our four-variable system for now; you may try reducing the VAR to two variables for more straightforward interpretations. Based on the results, we conclude that CPI Granger-causes the other variables, but the converse does not hold. Granular variable-to-variable tests, rather than variable-to-group tests, might prove more informative.
We will now turn to our impulse response functions. While we could generate more impulse response functions than the ones below, we will zero in on the impact of a shock to RRP on the other variables in the system.
RRPirf <- irf(Model1, impulse = "RRP", response = "RRP", n.ahead = 20, boot = TRUE)
plot(RRPirf, ylab = "RRP", main = "RRP's shock to RRP")
M1irf <- irf(Model1, impulse = "RRP", response = "M1", n.ahead = 20, boot = TRUE)
plot(M1irf, ylab = "M1", main = "RRP's shock to M1")
CPIirf <- irf(Model1, impulse = "RRP", response = "CPI", n.ahead = 20, boot = TRUE)
plot(CPIirf, ylab = "CPI", main = "RRP's shock to CPI")
lnIPirf <- irf(Model1, impulse = "RRP", response = "lnIP", n.ahead = 20, boot = TRUE)
plot(lnIPirf, ylab = "lnIP", main = "RRP's shock to lnIP")
The irf() command generates the IRFs; we specify the model, the impulse series, and the response series, as well as the number of periods ahead, to trace how the shock propagates over time. We then use the plot() command to graph each IRF.

The results are quite puzzling. Remember that an increase in the RRP cannot be a reaction of the BSP to what is happening in the other variables, since it was ordered first. As you can see in (a), a shock to the RRP of course raises the RRP; this then leads to a slight fall in the money supply (b) and a fall in output (d). We also find that prices increase in the short run and decrease only moderately thereafter. According to Christopher Sims, this is a puzzling result: prices go up after a rate hike. If a monetary contraction reduces aggregate demand, and thereby output (lnIP), it should not be associated with inflation. He goes on to say that the VAR could be mis-specified. For instance, there could be a leading indicator of inflation, to which the BSP reacts, that was wrongly omitted from the VAR: the BSP could know that inflationary pressures are about to arrive and counteract them by raising the interest rate.
If you recall basic macroeconomics, a contractionary monetary policy, such as raising the policy rate, leads to a fall in output. That fall in output should generally weigh inflation down and cause prices to decrease. What we see in this model, however, is that prices increase for the most part, which is puzzling. Evidently, the movements of the policy interest rate that the previous estimation treated as exogenous were not truly exogenous; in other words, the exogenous shocks were not properly identified. To address this, Sims added commodity prices to the system.
We now turn our attention to the forecast error variance decomposition. Here we can trace how the shocks in our system explain the forecast error variances of all the variables. To do this, we use the fevd() command. As with the IRF, we can specify the number of periods ahead. For this case, let us zero in on RRP.
FEVD1 <- fevd(Model1, n.ahead = 10)
plot(FEVD1)
We can clearly see in window (a) that the forecast error variance of the RRP at short horizons is due almost entirely to its own shocks. This is because the RRP was placed first in the ordering, so no other shocks affect it contemporaneously. At longer horizons, say 10 months, CPI accounts for about 16 percent and money supply for about 2 percent.
As we have said, we can also forecast using a VAR. To do this, we use the predict() command and generate what is called a fan chart, which is commonly used to display the confidence bands of forecasts graphically. We will set the forecast horizon to 12 months ahead, a full-year forecast.
forecast <- predict(Model1, n.ahead = 12, ci = 0.95)
fanchart(forecast, names = "RRP", main = "Fanchart for RRP", xlab = "Horizon", ylab = "RRP")
fanchart(forecast, names = "M1", main = "Fanchart for M1", xlab = "Horizon", ylab = "M1")
fanchart(forecast, names = "CPI", main = "Fanchart for CPI", xlab = "Horizon", ylab = "CPI")
fanchart(forecast, names = "lnIP", main = "Fanchart for lnIP", xlab = "Horizon", ylab = "lnIP")
forecast
The figure shows that RRP is expected to decrease slightly and then increase; M1 and IP are expected to decrease; and CPI is expected to dip slightly in the first few months before rebounding moderately. Note that this differs from the IRFs simply because here we are using the VAR as a forecasting tool rather than a policy-simulation tool.
All in all, I hope this example has shown you the many use cases of the VAR methodology and why many economists continue to use it for its flexibility. Beyond sheer forecasting, it has many more applications, such as policy simulation, Granger causality analysis, and forecast error variance decomposition. Before the VAR, economics pinned everything on theory and believed that economic variables had to live by its code. In practice, however, economic variables behave in ways that differ from our preconceived notions. As such, setting prior expectations may have deceived us.
For a more hands-on approach, consider watching the videos I made on this topic which are uploaded on YouTube.
Enders, W. (2008). Applied Econometric Time Series. John Wiley & Sons.
Lütkepohl, H. (2005). New Introduction to Multiple Time Series Analysis. Springer Science & Business Media.
Sims, C. A. (1992). Interpreting the macroeconomic time series facts: The effects of monetary policy. European Economic Review, 36(5), 975-1000.