# Value-at-Risk: Strengths, Caveats and Considerations for Risk Managers and Regulators

Master Thesis by Bogdan Izmaylov. Supervisor: Thomas Berngruber.

VaR can be used to calculate risk from different exposures, and the results can then be aggregated to obtain a measure of the overall risk. In contrast, summing two volatilities of exposures in different currencies does not make sense, but aggregating VaRs across different positions can produce a total risk exposure for a company. This quality of VaR as a risk metric has made it attractive in the past decades with the rise of enterprise risk management. In the same way as different risk exposures can be aggregated into individual VaR numbers, different positions in a portfolio can be aggregated to produce a portfolio VaR. The caveat in risk aggregation is that the overall risk is not simply equal to the sum of the individual component measures: diversification generally makes the aggregate smaller than the sum, while the lack of sub-additivity of VaR means the aggregate can in some cases even exceed it. Either parametric (analytical) or non-parametric (simulation) methods should be used to account for the benefits of diversification when calculating the aggregate VaR.

The simplest approach, summing the individual VaRs, produces very crude estimates and disregards any benefit of diversification; the resulting number is therefore sometimes called the undiversified VaR. It implicitly assumes that the individual positions are perfectly correlated.

Another approach is to estimate the sensitivities of different exposures to changes in the underlying risk factors. A simple and, in most cases, sufficiently precise method is to estimate the linear correlations between risks and use these correlations in the aggregate VaR computation:

$$\text{VaR}_P = \alpha\,P\,\sqrt{w_1^2\sigma_1^2 + w_2^2\sigma_2^2 + 2\,w_1 w_2\,\rho_{12}\,\sigma_1\sigma_2}$$

where P is the portfolio value, w1 and w2 are the weights of the individual positions, σ1 and σ2 their volatilities, ρ12 is the correlation coefficient between them, and α is the quantile multiplier for the chosen confidence level. This approach allows the aggregate VaR to be calculated while accounting for the benefits of diversification across different types of risk. According to Pérignon et al. (2008), large banks typically overstate the VaR figures they report and are conservative in the VaR reductions they claim from diversification.
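A minimal numerical sketch of this correlation-based aggregation, comparing it to the undiversified sum. All position values, weights, volatilities and the correlation are hypothetical, chosen only for illustration:

```python
import math

# Hypothetical two-position portfolio.
P = 1_000_000          # portfolio value
w = (0.6, 0.4)         # position weights
sigma = (0.02, 0.03)   # daily volatilities of the two positions
rho = 0.3              # linear correlation between the positions
alpha = 2.33           # ~99% quantile of the standard normal

# Undiversified VaR: sum of individual VaRs (implicitly assumes rho = 1).
var_individual = [alpha * wi * si * P for wi, si in zip(w, sigma)]
undiversified_var = sum(var_individual)

# Diversified VaR: uses the correlation in the portfolio volatility.
port_sigma = math.sqrt(w[0]**2 * sigma[0]**2 + w[1]**2 * sigma[1]**2
                       + 2 * w[0] * w[1] * rho * sigma[0] * sigma[1])
diversified_var = alpha * port_sigma * P

# With rho < 1, diversification reduces the aggregate figure.
assert diversified_var <= undiversified_var
```

With perfect correlation (ρ = 1) the two numbers coincide; any lower correlation makes the diversified figure strictly smaller.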

The assumption of linear dependence is not very realistic for financial markets, since negative returns tend to have a stronger impact than positive returns, which Danielsson (2011) also points out as a stylized fact of non-linear dependence. Linear correlation approximates the dependency between random variables (returns) sufficiently well only up to a point. Given the typical VaR confidence levels (95% and 99%), it is not surprising that at such levels the correlations differ from the predictions of the model above. In recent years, research has concentrated on using copulas to model joint multivariate distributions and improve VaR forecasts for aggregate risk measurement (Gendron & Genest, 2009; Manner & Reznikova, 2012). Copulas are able to capture the non-linearity of the relationships between random variables in the extreme regions of the distributions. Using a conditional copula-GARCH model, Huang et al. (2009) find the method to be robust and to allow flexible distributions, an improvement over traditional methods such as HS and MC when applied to highly volatile market returns. But even such robust models are useless if the chosen copula does not approximate the real dependencies accurately, and that choice is one the risk manager has to make.
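The tail behaviour that linear correlation misses can be made concrete with a small simulation. The sketch below samples from a Clayton copula (one of the standard copula families with lower-tail dependence; the parameter value is hypothetical) and shows that joint crashes are far more frequent than independence would suggest:

```python
import random

random.seed(0)
theta = 2.0      # Clayton parameter; chosen only to make tail dependence visible
N = 100_000

# Sample (u1, u2) from a Clayton copula via conditional inversion.
pairs = []
for _ in range(N):
    u1, v = random.random(), random.random()
    u2 = (u1 ** -theta * (v ** (-theta / (1 + theta)) - 1) + 1) ** (-1 / theta)
    pairs.append((u1, u2))

# Under independence, P(u2 < 5% | u1 < 5%) would be 5%.
q = 0.05
tail = [u2 for u1, u2 in pairs if u1 < q]
cond_prob = sum(u2 < q for u2 in tail) / len(tail)
assert cond_prob > 0.5   # far above 5%: strong lower-tail dependence
```

A Gaussian dependence structure with the same overall correlation would show a much smaller conditional probability at this quantile, which is exactly why the copula choice matters at 95% and 99% VaR levels.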

Another method of calculating aggregate VaR is full revaluation: the returns of the different exposures and positions are recalculated with pricing models, and VaR is derived from the resulting loss distribution. This approach demands considerable computational power, especially when combined with MC simulations, where around 10,000 simulation runs per position are needed to obtain precise estimates. McKinsey's Working Paper on Risk #32 (2012) points out this computational requirement as a big challenge for institutions, with calculations taking 2-15 hours to complete. Another striking fact is that 75% of financial institutions relied on HS methods, which are inferior to MC, since the latter better captures occurrences in the tails of a distribution. The models used by the majority of institutions are fast and their measures easy to understand (simulations are often perceived as a "black box"), but this simplicity comes at a cost, as seen in the extreme losses companies suffered during the GFC.
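The HS method that most institutions relied on is indeed simple: the VaR estimate is just a loss quantile of the empirical P&L history. A minimal sketch, using a randomly generated P&L series in place of real historical data:

```python
import random

random.seed(42)
# Hypothetical daily P&L history; in practice this would be actual
# historical returns applied to the current portfolio.
pnl = [random.gauss(0, 10_000) for _ in range(500)]

def historical_var(pnl, confidence=0.99):
    """Historical-simulation VaR: the loss quantile of the empirical P&L."""
    losses = sorted(-x for x in pnl)          # losses as positive numbers
    idx = int(confidence * len(losses))       # index of the empirical quantile
    return losses[min(idx, len(losses) - 1)]

var_99 = historical_var(pnl, 0.99)
assert var_99 > 0
```

The simplicity is apparent: no distributional assumption, no pricing model. The cost is equally apparent: the estimate can never exceed the worst loss already observed in the sample window, which is precisely why HS underestimates tail events relative to MC.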

The advantages that VaR provides as an aggregate measure of risk are especially pronounced for financial institutions (for the calculation of capital requirements and for enterprise risk management) and for clearing houses (for the calculation of margin requirements). Financial institutions use VaR to estimate the losses from all business units and to calculate the capital requirements (both for regulatory and internal risk management purposes) needed to absorb such losses when they occur. Clearing houses use VaR to calculate margin requirements based on the positions the traders have taken, not only on their historical performance or credit scores.

The real strength of VaR as an aggregate measure of risk may lie not in the end product of the calculations, the portfolio VaR, but in the measures used to calculate it. Component VaR and Marginal VaR can give invaluable insights into how individual positions impact the overall risk. Component analysis can aid managers in the selection of projects, in addition to NPV analysis, by giving them an idea of how each project contributes to the risk of the overall project portfolio.
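For a parametric (normal) portfolio, these decomposition measures have closed forms: Marginal VaR is proportional to the covariance of a position with the portfolio, and Component VaR scales it by position size so that the components add up exactly to portfolio VaR (Euler allocation). A sketch with hypothetical numbers:

```python
import math

# Hypothetical two-position portfolio (parametric, normal returns).
P = 1_000_000
w = [0.6, 0.4]
cov = [[0.0004, 0.00012],   # covariance matrix of position returns
       [0.00012, 0.0009]]
alpha = 2.33                # ~99% normal quantile

# Portfolio variance, volatility and VaR.
port_var = sum(w[i] * w[j] * cov[i][j] for i in range(2) for j in range(2))
port_sigma = math.sqrt(port_var)
portfolio_var = alpha * port_sigma * P

# Marginal VaR: sensitivity of portfolio VaR to a small increase in position i.
cov_ip = [sum(cov[i][j] * w[j] for j in range(2)) for i in range(2)]
marginal_var = [alpha * c / port_sigma for c in cov_ip]

# Component VaR: each position's contribution to the total.
component_var = [w[i] * P * marginal_var[i] for i in range(2)]

# Euler property: the components sum exactly to portfolio VaR.
assert math.isclose(sum(component_var), portfolio_var)
```

This is what makes the decomposition useful for selection decisions: a position with a high stand-alone VaR but a low covariance with the rest of the portfolio can have a small, or even negative, Component VaR.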

It is clear that VaR is a very important tool in risk management. It is the industry standard for measuring aggregate risk exposure arising from multiple positions, and at the moment there is no better alternative for aggregating risk across an institution. Arguments can be made in favour of ES, but both its precision and its computational requirements prevent widespread adoption. Clearly, banks cannot rely on daily full-revaluation methods if these take 15 hours to compute; even at such speeds, however, they could run the full calculation once per week to check the validity of the simplified calculations. The quality of data and models is as important as the experience and competence of the RMPs using them to make decisions. Knowledge of the differences between models and of their limitations is crucial for successful risk management.

4.5 VaR use for stress testing

Malz (2011) discusses VaR in the context of stress testing as a tool for calculating changes in the values of risk factors in a portfolio. VaR shocks are calculated as the product of the marginal volatility of a position in the portfolio and a multiplier for the quantile (based on the normal distribution). The value of a VaR shock corresponds to the maximum loss at the selected confidence level, which for stress-testing purposes is then compared to the predicted losses from the stress test scenarios in order to capture the magnitude of tail events that could not have been predicted by models assuming normality of returns. Since, by definition, VaR measures the loss during normal market conditions, stress testing can give RMPs more information about risks when markets are abnormal. The biggest challenge in stress-testing methods remains the choice of scenarios and the assignment of probabilities to them.

The GFC has shown the failure of modern stress-testing techniques to predict systemic failures in the financial sector. The critique mainly concerns the limited number of scenarios and inappropriate methods (N. N. Taleb, 2012). Taleb (2012) proposes a heuristic: a scalar measure that captures "the fragility of volatility in the stresses". Fragility here means the disproportionately higher losses that occur as the stress increases. The heuristic is calculated by performing additional stress tests around the main scenarios, using multiples of the mean deviation of the scenario variables.
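The idea behind the heuristic can be sketched in a few lines: stress the portfolio at a deviation Δ and at a multiple of it, and compare the losses to a linear extrapolation. The loss function and all numbers below are hypothetical, chosen only to illustrate the convexity check, not Taleb's exact formulation:

```python
def portfolio_loss(shock):
    # Hypothetical loss function: losses accelerate as the shock grows.
    return 100 * shock + 400 * shock ** 2

delta = 0.05                 # one "mean deviation" of the scenario variable
l1 = portfolio_loss(delta)
l2 = portfolio_loss(2 * delta)

# A fragile portfolio loses disproportionately more as the stress doubles:
# a positive gap over the linear extrapolation signals convex losses.
fragility = l2 - 2 * l1
assert fragility > 0
```

For a portfolio with purely linear losses the gap is zero; the larger the gap, the more the single-scenario stress test understates what a slightly bigger shock would do.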

However, Sorge (2004) argues that the VaR methodology, implemented with macroeconomic stress-testing, can account for both extreme shocks and endogenous prices in the financial sector. If, in addition, shocks are calculated for tests around the main scenario as proposed by Taleb, this can give the RMP the necessary information about the changes in VaR arising from the shocks.

One such scenario could be the shift of the loss distribution caused by the endogenous reaction of market participants to a severe shock.

It can also be argued that, with information about VaR changes available, the heuristic becomes redundant. Changes in VaR give a good representation of the risk exposure and of which factors have the biggest impact on it. Therefore, in stress-testing, the process of VaR calculation is even more important than in other areas.

Of course, this process may provide numerical estimates of losses for low-probability, high-impact scenarios, but the quality of those estimates will largely depend on how closely the hypothetical stress scenarios match the realized movements in risk factors. On the other hand, the process itself gives the manager more information about the relationships between risk factors in extreme cases, which can help develop risk-mitigation strategies. Such information can allow RMPs to act on early signs of adverse movements in risk factors and thus reduce their total impact on the portfolio.

The use of stress testing can improve on the VaR methodology by relaxing the assumption of exogenous market prices, for which VaR has been criticized (Daníelsson, 2011). There are, however, many important issues that need to be considered. First of all, the stress-test horizon has to be carefully selected: one day may be too short for a scenario to fully unfold, while longer horizons may involve too many uncertainties. Second, the factors that could affect prices should be identified, and principal component analysis may be used to reduce the computational burden. Third, a limit on the number of combinations of such factors should be set, since it is not always realistic to assume maximum adverse changes in all risk factors at the same time. Exactly how realistic that is, is not an easy question, since this is precisely Taleb's (N. Taleb, 2010) "Black Swan" scenario. Thus, even though VaR calculations can provide parametric methods for estimating losses during stress-testing, the inputs to the tests remain the key factor for their precision and quality. These inputs need to be chosen based on the knowledge, experience, and risk and business understanding of the RMP.
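The dimensionality-reduction step mentioned above can be sketched as follows: given the history of many correlated risk factors, a few principal components of their covariance matrix typically explain most of the variance, so stress combinations need only be built in that reduced space. The data below are simulated (a few common drivers plus noise), purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical daily changes of 10 correlated risk factors
# (e.g. points on a yield curve): 3 common drivers plus idiosyncratic noise.
T, n, k = 1000, 10, 3
common = rng.standard_normal((T, k)) @ rng.standard_normal((k, n))
factors = common + 0.1 * rng.standard_normal((T, n))

# Principal component analysis of the factor covariance matrix.
cov = np.cov(factors, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
eigvals = eigvals[::-1]                  # descending
explained = np.cumsum(eigvals) / eigvals.sum()

# Keep only as many components as needed to explain 95% of the variance.
n_components = int(np.searchsorted(explained, 0.95) + 1)
assert n_components <= k + 1  # a handful of components capture almost all risk
```

Stressing 3 component scores instead of 10 raw factors shrinks the number of adverse-move combinations that have to be evaluated by orders of magnitude.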

4.6 VaR as a widely adopted measure of risk for financial institutions

It is natural for a good risk measure to be quickly adopted by many market participants, and few models have enjoyed such widespread adoption as VaR. Even though there are many different models for calculating it, only a handful are used most of the time. Every model has flaws, and as Daníelsson (2002) argues, most models perform much worse than expected. When markets are heterogeneous and investors use many different models in their decisions, model errors are likely to affect only a small portion of market participants and to have no significant impact on the market as a whole. When a large portion of investors uses the same (or very similar) models, however, markets can become dangerously homogeneous. In such a situation an error in the model can have a very big impact on the system: investors find the same positions attractive, which increases demand for these positions and drives their prices up, while at the same time they find the same positions unattractive, which drives those prices down. The key factor is the magnitude of the demand changes: large, synchronized shifts in demand translate into big changes in prices, and since most models assume that prices are exogenous (not affected by market transactions), this can create a dangerous feedback loop of sell-offs and price drops.

Similarly, during the GFC, financial institutions that had taken the same (presumably low-risk) positions had to decrease their exposure to meet regulatory capital requirements when the risk estimates of these positions increased. The sell-offs triggered a further decrease in the quality of these positions, which further increased VaR and triggered more sell-offs. The key problem is that even if different models are used to assess risk internally, the need to comply with VaR-based regulations forces market participants to take actions that can have a negative impact on the whole system.

Since no model is superior to the others in all situations, and market participants have diverse expectations about the realities of financial markets (normality, economic cycles, stationarity, trends, etc.), it is unlikely that the same model will be used by a sufficiently large number of investors unless imposed by some kind of regulatory requirement. Even if VaR is the preferred model for assessing risk, so many implementation decisions have to be made by the RMPs that there will be no significant effects on market prices and feedback loops will not occur. This argument, once again, shows the benefits of a free-market system, and it confirms the validity of Goodhart's law for risk measures, and for VaR in particular.

4.7 VaR as an instrument for regulation in Basel Accords

One of the most debated topics on the use of VaR is its implementation in the BCBS Accords (Basel II and III) as a measure of market risk for the calculation of capital requirements. The main purpose of the Basel Accords is to prevent systemic crises by requiring financial institutions to hold a certain amount of capital, depending on the positions they have taken and the riskiness of these positions. Since VaR is a tool for measuring the market risk of a financial institution, it determines the minimum amount of equity capital that the institution should hold for a given level of market-risk exposure. The required capital is supposed to be sufficient to sustain losses arising from such risk.