# Value-at-Risk: Strengths, Caveats and Considerations for Risk Managers and Regulators

Master Thesis by Bogdan Izmaylov. Supervisor: Thomas Berngruber.

The parametric approach uses the estimated volatility σ to estimate quantiles. Thus, VaR is calculated as follows:

$$\mathrm{VaR}_\alpha = \sigma \, z_\alpha$$

where $z_\alpha$ is the quantile of the distribution for the given confidence level. As can be seen from the formula, for the parametric VaR the precision of the volatility estimate plays a key role in the estimated figure. This is why a vast number of VaR models have been developed, some with rather exotic names such as CAViaR (Manganelli & Engle, 2004) and GlueVaR (Belles-Sampera, Guillén, & Santolino, 2014), with the focus on volatility estimation. Overall, the Generalized Autoregressive Conditional Heteroskedastic (GARCH) variance estimator has become one of the most widely used, offering a favourable precision–complexity tradeoff (Daníelsson, 2011; Dowd, 1998). The GARCH(p,q) variance estimator depends on both $p$ lagged squared returns $r^2$ and $q$ lagged volatility estimates $\sigma^2$:

$$\sigma_t^2 = \omega + \sum_{i=1}^{p} \alpha_i \, r_{t-i}^2 + \sum_{j=1}^{q} \beta_j \, \sigma_{t-j}^2$$
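As a sketch, the recursion can be written out directly for the simplest GARCH(1,1) case; the parameter values below are illustrative, not estimated from data:

```python
# Minimal GARCH(1,1) variance recursion:
#   sigma2_t = omega + alpha * r_{t-1}^2 + beta * sigma2_{t-1}
# Parameter values are illustrative placeholders, not fitted estimates.

def garch_variance(returns, omega=0.00001, alpha=0.1, beta=0.85, sigma2_0=0.0001):
    """Return the sequence of one-step-ahead conditional variances."""
    sigma2 = [sigma2_0]
    for r in returns:
        sigma2.append(omega + alpha * r ** 2 + beta * sigma2[-1])
    return sigma2

variances = garch_variance([0.01, -0.02, 0.015])
```

The recursion makes the tradeoff visible: only three parameters, yet the variance forecast reacts to recent squared returns while decaying towards its long-run level.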

In this study, the focus is set on the properties of VaR, assuming that the volatility is estimated with high precision.
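Under the normality assumption, the parametric VaR formula above reduces to a one-liner; the volatility input below is a hypothetical daily figure:

```python
from statistics import NormalDist

def parametric_var(sigma, alpha=0.95):
    """Parametric VaR under normality: VaR = sigma * z_alpha."""
    z = NormalDist().inv_cdf(alpha)  # quantile for the chosen confidence level
    return sigma * z

var_95 = parametric_var(sigma=0.02, alpha=0.95)  # hypothetical 2% daily volatility
```

With $z_{0.95} \approx 1.645$, a 2% daily volatility gives a 95% daily VaR of roughly 3.3% of the position value.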

## 2.2 Expected Shortfall (ES) and Tail Conditional Expectation (TCE)

A complementary and closely related measure to VaR is the average value of the loss when it exceeds the quantile $\alpha$:

$$\mathrm{ES}_\alpha = \frac{1}{\alpha} \int_0^{\alpha} \mathrm{VaR}_\gamma \, d\gamma$$

where $\mathrm{VaR}_\gamma$ is the Value at Risk for confidence level $\gamma$, which changes from 0 to $\alpha$.

** Figure 2. Expected Shortfall and VaR.**

Source: own drawing.

Expected Shortfall, in contrast to VaR, gives information about the losses that occur when the confidence level is exceeded, and thus can potentially evaluate the extreme losses in the distribution tail.

Tail Conditional Expectation (TCE), also known as Tail VaR (TVaR), measures the expected loss given that the loss exceeds the VaR threshold:

$$\mathrm{TCE}_\alpha = \mathbb{E}\left[L \mid L \ge \mathrm{VaR}_\alpha\right]$$

It is equivalent to the Expected Shortfall for continuous distributions only, even though many authors use ES, CVaR and TCE interchangeably.
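A minimal sketch of the nonparametric (historical) estimates of VaR and ES, using a toy loss sample and one simple quantile convention (conventions differ across texts):

```python
def historical_var_es(losses, alpha=0.95):
    """Nonparametric VaR and ES from a sample of losses (positive = loss).

    VaR is taken as the empirical alpha-quantile; ES is the average of the
    losses at or beyond that quantile. The indexing convention is one of
    several used in practice.
    """
    s = sorted(losses)
    k = int(alpha * len(s))  # index of the alpha-quantile
    var = s[k]
    tail = s[k:]             # losses at or beyond VaR
    es = sum(tail) / len(tail)
    return var, es

losses = list(range(1, 101))  # toy sample: losses of 1..100
var, es = historical_var_es(losses, alpha=0.95)
```

On this toy sample the 95% VaR is 96 and the ES is 98, illustrating that ES averages over the tail rather than reporting a single cut-off.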

## 2.3 Coherency of VaR

In most cases, VaR is a coherent measure of risk, as defined by Artzner et al. (1997). As such, it possesses the following properties:

- Monotonicity: a position that always loses more is assigned a higher risk.
- Subadditivity: the risk of a portfolio does not exceed the sum of the risks of its positions, $\rho(X + Y) \le \rho(X) + \rho(Y)$.
- Positive homogeneity: scaling a position scales its risk proportionally.
- Translation invariance: adding a riskless amount of capital reduces the risk by that amount.

The subadditivity of VaR is one of the most discussed and criticized properties, since in some cases (especially if the returns are not niid), portfolio VaR will be higher than the sum of the individual positions' VaRs, which discourages diversification (Artzner et al., 1999).
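The subadditivity failure can be illustrated with a classic textbook-style example: two independent bonds, each defaulting with 4% probability, so that the 95% VaR of each position alone is zero while the portfolio VaR is not. The numbers are purely illustrative:

```python
from itertools import product

def var_discrete(dist, alpha=0.95):
    """VaR as the smallest loss l with P(L <= l) >= alpha; dist maps loss -> prob."""
    cum = 0.0
    for loss in sorted(dist):
        cum += dist[loss]
        if cum >= alpha:
            return loss

# One bond: loses 100 with probability 4%, otherwise nothing.
bond = {0: 0.96, 100: 0.04}
var_single = var_discrete(bond)  # default risk hides below the 5% tail

# Portfolio of two independent such bonds: convolve the loss distributions.
portfolio = {}
for (l1, p1), (l2, p2) in product(bond.items(), bond.items()):
    portfolio[l1 + l2] = portfolio.get(l1 + l2, 0.0) + p1 * p2
var_portfolio = var_discrete(portfolio)
```

Here each bond alone has a 95% VaR of 0, yet the diversified portfolio has a 95% VaR of 100, so VaR penalizes diversification exactly as the criticism states.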

## 2.4 Precision of VaR

P. Jorion (2007) discusses several aspects of the precision of VaR estimation. First, measurement errors are considered. The estimated values converge to the true ones as the number of observations approaches infinity. For the small samples available in practice, the precision of estimates can be measured by sampling distributions, which can be used to obtain confidence bands.

The second precision issue is the estimation of means and variances. The estimated mean is normally distributed around the true value, with the standard error approaching zero at the rate $1/\sqrt{T}$, where $T$ is the number of observations.

The variance estimate is chi-squared distributed around the true value, but the distribution converges to the normal for samples of 20+ observations. Overall, the variance is estimated more precisely than the mean (hence usually most of the estimation error stems from the mean), and the confidence interval narrows with the increase in the number of observations.

The third issue is the estimation of sample quantiles with the nonparametric approach. In contrast to the previous two measures, the estimates converge to their true values much more slowly as the sample size increases. For 95% VaR, the confidence band for the quantile centered at 1.645 is quite wide for 100 and even 250 observations, narrowing to [1.52, 1.76] only for 5 years of daily observations. For higher confidence levels (with fewer observations in the tail), the confidence interval narrows only to 20% of the true value, meaning that sample quantile estimates are very unreliable for extreme left-tail probabilities.

The standard errors can be obtained by bootstrapping. For the estimation of TCE and ES, even more observations are needed to obtain reliable estimates, since these measures rely on observations in the extreme tail of the distribution.
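A bootstrap of the sample quantile's standard error can be sketched as follows; the sample here is simulated standard normal data, and the number of resamples is an arbitrary choice:

```python
import random

def bootstrap_se_quantile(sample, alpha=0.95, n_boot=1000, seed=42):
    """Bootstrap standard error of the empirical alpha-quantile."""
    rng = random.Random(seed)
    n = len(sample)
    estimates = []
    for _ in range(n_boot):
        resample = [sample[rng.randrange(n)] for _ in range(n)]
        resample.sort()
        estimates.append(resample[int(alpha * n)])
    mean = sum(estimates) / n_boot
    var = sum((e - mean) ** 2 for e in estimates) / (n_boot - 1)
    return var ** 0.5

random.seed(0)
sample = [random.gauss(0, 1) for _ in range(250)]  # one year of simulated daily returns
se = bootstrap_se_quantile(sample)
```

Resampling with replacement and re-estimating the quantile each time gives a direct, distribution-free estimate of its sampling variability.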

When comparing the two approaches, the parametric method produces more precise and efficient forecasts, provided that the distribution is chosen properly – which itself is an interesting topic and would require a separate thesis to be discussed properly.

## 2.5 VaR in Basel Accords

The BCBS has used VaR for minimum capital requirement calculations since the introduction of the Basel II accord. Banks have to calculate VaR for a 10-day period at the 99% confidence level, after which the obtained forecast is multiplied by a factor of 3 in order to increase the confidence even further. The multiplication factor is justified by the fact that 99% 10-day VaR still allows for bank failures 1% of the time (once every 4 years on average).

Since it is hard to obtain enough observations to calculate and backtest the 10-day VaR, the BCBS recommends that banks calculate the 1-day 99% VaR and transform it into a 10-day figure using the square root of time rule. Furthermore, the multiplication factor can be increased to 4 in order to correct for model misspecifications and/or errors.

## 2.6 Backtesting

Regardless of the model used to estimate VaR, it can be compared to other models by backtesting it on realized returns, which provides evidence on the precision of the forecasts. Usually, part of the sample data is used for estimation (the estimation window), and the forecasts are tested on the rest of the observations.

If the observed loss on a given day is higher than the one predicted by the model, a violation is recorded. Depending on the confidence level, we would expect violations in 5% and 1% of observations for 95% and 99% VaR respectively.

** Figure 3. Backtesting estimation windows.**

T – entire sample length, n – estimation window length. Source: based on Danielsson (2011).

At the end of the backtest, the number of observed violations is compared to the number of predicted violations. This gives the violation ratio for the model:

$$\mathrm{VR} = \frac{\text{observed violations}}{\text{expected violations}}$$

Ideally, the ratio should be as close to one as possible. Values above one mean that the model underestimates risk (more violations than predicted turn up during the backtesting), and values below one are a sign of an over-conservative model (overestimated risk, fewer violations than predicted). Estimation and backtesting become especially demanding for high confidence levels and for forecast horizons longer than 1 day, because violations are rare. For example, for 99% daily VaR we expect around 2-3 trading days in a year (1% of 252) with losses higher than forecasted. This means that to confirm that the model performs well we need on average 20 observations per violation for 95% VaR and 100 observations per violation for 99% VaR.
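The violation count and ratio can be computed directly; the loss series and the constant VaR forecast below are toy inputs:

```python
def violation_ratio(realized_losses, var_forecasts, alpha=0.99):
    """Compare observed VaR violations with the expected count."""
    violations = sum(
        1 for loss, var in zip(realized_losses, var_forecasts) if loss > var
    )
    expected = (1 - alpha) * len(realized_losses)
    return violations / expected

# Toy backtest: 252 days, constant 99% VaR forecast of 3.0, four violations.
losses = [1.0] * 248 + [3.5, 4.0, 3.2, 5.0]
vr = violation_ratio(losses, [3.0] * 252, alpha=0.99)
```

With 4 observed violations against roughly 2.5 expected, the ratio exceeds one, flagging the (toy) model as underestimating risk.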

## 2.7 Time horizon and the square root of time rule

For the case of niid returns, the VaR formula can be written in a general form for a selected time horizon:

$$\mathrm{VaR}_\alpha(h) = \sigma \sqrt{h} \, z_\alpha$$

where $h$ is the time horizon. The square-root-of-time rule states that volatility increases with the square root of time, hence it can be applied to the calculation of VaR for longer time horizons. We can calculate the 1-year VaR by multiplying the 1-day VaR forecast, obtained using daily returns, by the time adjustment factor $\sqrt{252} \approx 16$. In fact, this is the method the BCBS recommends for calculating the 10-day VaR.
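The scaling itself is a one-line transformation; the 1-day VaR input below is a hypothetical figure:

```python
def scale_var(var_1day, horizon_days):
    """Scale a 1-day VaR to a longer horizon via the square-root-of-time rule."""
    return var_1day * horizon_days ** 0.5

var_10day = scale_var(0.033, 10)   # BCBS 10-day horizon
var_1year = scale_var(0.033, 252)  # roughly 16x the daily figure
```

Note that the rule is only exact under the niid assumption stated above; with volatility clustering or autocorrelation the scaled figure can be badly off.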

## 2.8 Copulas

Copulas provide the means of assessing the joint multivariate distribution of assets in a portfolio, while allowing for different types of dependence throughout the distribution. Sklar's theorem states that for any joint density $f_{12}$ there exists a copula density $c_{12}$ that links the marginal densities $f_1$ and $f_2$:

$$f_{12}(x_1, x_2) = f_1(x_1) \times f_2(x_2) \times c_{12}\left[F_1(x_1), F_2(x_2); \theta\right]$$

where $F_1$ and $F_2$ are the marginal distribution functions. The copula contains all the information on the nature of dependence (for the bivariate normal copula, only one parameter, the correlation coefficient $\theta$), but it does not describe the marginals, which allows separate modelling of the marginal densities and the dependence. Often, the normal copula is used because it simplifies the calculations, even though it may only poorly approximate reality.
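As a sketch of how a normal copula separates dependence from the marginals, the pair below is drawn by correlating two standard normals and mapping each through the normal CDF, which yields uniform marginals with Gaussian dependence; the correlation value is illustrative:

```python
import random
from statistics import NormalDist

def gaussian_copula_pair(theta, rng):
    """Draw one pair (u1, u2) from a bivariate Gaussian copula, correlation theta."""
    z1 = rng.gauss(0, 1)
    z2 = theta * z1 + (1 - theta ** 2) ** 0.5 * rng.gauss(0, 1)
    phi = NormalDist().cdf
    return phi(z1), phi(z2)  # uniform marginals, Gaussian dependence structure

rng = random.Random(1)
pairs = [gaussian_copula_pair(0.8, rng) for _ in range(1000)]
```

Any marginal distributions can then be imposed by feeding the uniforms through the desired inverse CDFs, which is exactly the separation of marginals and dependence that Sklar's theorem guarantees.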

## 2.9 Extreme Value Theory (EVT)

Most statistical methods focus on the entire distribution. EVT, in contrast, focuses only on the tails of the distribution, that is, on extremely rare events. EVT has long been used in the natural sciences, and it was adapted for use in finance in the 1990s.

**Figure 4. Types of distribution tails (left tails depicted).** Source: based on Danielsson (2011).

EVT is especially useful for the calculation of high confidence level VaR, which reflects the extremely rare losses in the distribution tail. Key inputs for the EVT model are the tail index and its inverse, the shape parameter. Usually, for risk calculation one of three types of tails is considered: finite endpoint, normal (thin-tailed) and fat-tailed (Student-t).
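A common estimator of the tail index is the Hill estimator, built from the k largest observations; the sketch below applies it to simulated Pareto losses with a known tail index (the choice of k is illustrative, and in practice it drives the bias-variance tradeoff):

```python
import math
import random

def hill_estimator(losses, k):
    """Hill estimator: average log-excess over the (k+1)-th largest observation."""
    s = sorted(losses, reverse=True)
    threshold = s[k]
    inv_alpha = sum(math.log(s[i] / threshold) for i in range(k)) / k
    return 1.0 / inv_alpha  # tail index; its inverse is the shape parameter

rng = random.Random(7)
sample = [rng.paretovariate(3.0) for _ in range(5000)]  # toy data, true tail index 3
tail_index = hill_estimator(sample, k=250)
```

On exactly Pareto data the estimate lands close to the true index of 3; on real returns the estimate is sensitive to where the "tail" is deemed to begin.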

This section reviews the literature on VaR in the risk management field. The focus is set on the development of the methodology and on the main sources of information about VaR as a risk management tool for risk professionals. Publications pointing out the dangers of using VaR in practice and in regulation are also taken into account.

This literature provides a basis for the discussion in section 4.

The technical document on VaR released by RiskMetrics (1994) made the methodology available to the public, and the measure quickly gained popularity among risk management professionals. Widespread adoption, though, did not follow until the fundamental work of P. Jorion came out in 1997. To date, three editions of the book have been published, each one incorporating the latest developments in the risk management field. Since its first publication, it has remained one of the most comprehensive sources on VaR methodology and has established itself as a risk analyst's handbook. His main points are that VaR is an excellent tool which improves the ability of managers to assess risks across different categories, while providing a simple figure that can capture the risks of taking complex financial positions and decisions. The author analyses the potential difficulties in calculating VaR for both risk management and regulation purposes. Another quantile risk measure is discussed as complementary to VaR: Expected Tail Loss (ETL), also known as Tail Conditional Expectation (TCE).

The requirement for more observations, and thus lower precision, is pointed out as the biggest drawback of using ETL instead of VaR.

Dowd (1998) presents another comprehensive guide to VaR and its application in risk management in his book "Beyond Value-at-Risk: The New Science of Risk Management". The book addresses the use of VaR in many areas of risk management in a company, while analysing the usefulness of the measure. One of the book's central points concerns overreliance on VaR: the author argues that it is not a universal method and that a high degree of managerial discretion is required for effective implementation. The book also criticizes the BCBS multiplication factor of 3 as an arbitrary number, but this issue is addressed by Jorion in a more recent edition of his book, where he explains that the multiplier accounts for the error in the estimation of quantiles. The overall focus is on practical implications for financial and non-financial institutions, with analysis of possible caveats of using VaR and how to avoid them.

There, however, have been numerous studies and discussions on whether VaR is able to deliver what it promises. Pritsker (1997) examines the precision of VaR estimates for derivatives using Monte Carlo simulation methods, and finds low accuracy for deep out-of-the-money options. The study stresses the importance of the trade-off between accuracy and computation time, as well as the importance of specifying the distribution of the factor shocks correctly.

Artzner et al. (1999) coined the term "coherent risk measures" by defining a set of characteristics which such measures should satisfy. They conclude that VaR in general is not a coherent risk measure, because it is not subadditive for all return distributions.

Dowd and Cotter (2007) analyze the precision of quantile-based risk measures.

They suggest Monte Carlo simulation as the best method for estimating the precision of such measures. They also conclude that the samples typically available are too small for the estimates to be normally distributed (asymptotic normality). The final finding is that the characteristics of the underlying distributions, especially excess kurtosis, have significant impact on the precision of estimates.

In an interview for Derivatives Strategy (1996), Nassim Taleb criticised VaR, and he later argued in "The Limits of VaR" (Derivatives Strategy, 1998) that VaR creates a false sense of security by giving seemingly precise estimates for risks which cannot be captured by conventional statistical methods. According to him, risk managers who use VaR become overconfident in the calculations and trust the numbers more than their experience. He states that VaR cannot capture the correlations between returns and cannot measure the complexity of events that occur in the modern world.