Modelling Value at Risk: Evidence from the Saudi Stock Market

This paper estimates the Value at Risk (VaR) of the Tadawul All Shares Index (TASI) of the Saudi Stock Market over the period January 2004 – December 2017. It applies the following methods: empirical quantile, historical simulation (HS), percentile, parametric (delta normal), GARCH, IGARCH, Monte Carlo simulation, and bootstrapping simulation, using the 5% and 1% critical values under the normal distribution. Backtesting based on the likelihood ratio (LR) accepted the empirical quantile at both five and one percent, while accepting delta normal, historical simulation, percentile, IGARCH, and Monte Carlo at one percent. The worst loss obtained is approximately 4%.


INTRODUCTION
There is unanimous agreement about the concept of Value-at-Risk (VaR). All definitions comprise four elements: a single measure of a certain amount of portfolio loss, over a specified period, with probability (1-α), due to market movements (Linsmeier and Pearson 1996; Manfredo and Leuthold 1998; Tsay 2002; Fernandez 2003; Lamantia, Ortobelli, and Rachev 2006; Čorkalo 2011; Bucevska 2013; Adabi, Mehrara and Mohammadi 2015; Bingqiu 2016; Glyon 2017). No doubt VaR has taken on great importance since the recommendation of the Basel Committee in 1996 (Aloui and Ben Hamida 2015). VaR is the dollar or percentage loss in a portfolio (asset) value that will be equaled or exceeded only X percent of the time.
High expectations were laid on attracting foreign portfolio investment after the opening of the Saudi stock market to foreign portfolio investment in June 2015. Foreign participation would open up opportunities associated with access to a huge untapped market. The Saudi stock exchange has met conditions related to the "size and accessibility" criterion. Two major risks probably face the KSA: an increase in short-term volatility in the financial system, and the dependence of its monetary policy on that of the US (Aljazira Capital 2015). Altogether, these call for quantifying market risk as accurately as possible. The Tadawul All Share Index (TASI) is a major stock market index which tracks the performance of all companies listed on the Saudi Stock Exchange. The index was introduced in 1985 with a base value of 1000 and was recognized on June 30, 2008 (Yahoo Finance). Factors including a strong economy support Tadawul (Aljazira Capital 2015).
The motivation behind this paper is the scarcity of empirical research concerning VaR for the Saudi Stock Exchange. To my knowledge, only one paper has dealt with estimating Value at Risk for the Saudi and Gulf Cooperation Council stock markets (Aloui and Ben Hamida 2015). The difference between Aloui and Ben Hamida's work and mine is that they used only non-linear GARCH-class models.

Historical Simulation
The historical simulation approach requires few assumptions about the statistical distribution of the underlying market factors: it takes the current portfolio and subjects it to the actual historical changes (Linsmeier and Pearson 1996). It is not constrained by an assumption of normality. All we need to do is sort the returns from worst to best; we then look at the tail to answer the question "what is the α% VaR?" — in other words, with (1-α)% confidence we do not expect the loss to be worse. We answer this directly from the historical sample, using the percentile function to identify the α% tail of the overall count; the VaR sits right at the edge of that tail. Tsay (2002) lets ΔV(ℓ) be the change in value of the assets in the financial position from time t to t + ℓ and designates the cumulative distribution function (CDF) of ΔV(ℓ) by F_ℓ(x). He then states the VaR of a long position over the time horizon ℓ with probability p as p = Pr[ΔV(ℓ) ≤ VaR] = F_ℓ(VaR). Under normality, the VaR at the (1-α) confidence level is the sample mean minus the percentile of the standard normal distribution multiplied by the portfolio standard deviation (Chen and Chen 2013).
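As an illustration, the sort-and-read-the-tail procedure described above can be sketched in Python; the simulated return series below is a stand-in for the TASI sample, not actual data:

```python
import numpy as np

def historical_var(returns, alpha=0.05):
    """VaR as the alpha-quantile of the empirical return distribution.

    Returns a positive number: the loss that is exceeded with
    probability alpha under the historical distribution.
    """
    returns = np.asarray(returns)
    # The alpha-quantile of sorted returns marks the edge of the loss tail.
    return -np.quantile(returns, alpha)

# Toy example with simulated "daily log returns" of TASI sample size.
rng = np.random.default_rng(0)
r = rng.normal(0.0, 0.01, size=3270)
var5 = historical_var(r, 0.05)   # 5% VaR
var1 = historical_var(r, 0.01)   # 1% VaR (deeper in the tail, so larger)
```

Because the 1% cutoff lies further into the loss tail, `var1` always exceeds `var5` for the same sample.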

Risk Metrics
In the context of risk measurement, a risk metric is the concept quantified by a risk measure. When choosing a risk metric, an agent is picking an aspect of perceived risk to investigate, such as volatility or probability of default. The RiskMetrics rules are: VaR = (amount of position) × 1.65 σ_{t+1}; VaR(ℓ) = √ℓ × VaR; μ_t = 0; σ_t² = β σ_{t-1}² + (1 − β) r_{t-1}², i.e. r_t follows an IGARCH(1,1) process without drift. Data: to calculate VaR at the 99.9% confidence level, we need no fewer than 1000 observations. Extracting VaR from historical data requires choosing the desired confidence level and picking out the nth observation in the historical data that corresponds to that confidence level. Thus, VaR is the nth percentile of the values in the chosen data set (Tsay 2002).
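The RiskMetrics recursion above can be sketched as follows; the decay factor 0.94 is the conventional RiskMetrics daily value, assumed here for illustration:

```python
import numpy as np

def riskmetrics_var(returns, position=1.0, lam=0.94, horizon=1):
    """RiskMetrics VaR: sigma_t^2 = lam*sigma_{t-1}^2 + (1-lam)*r_{t-1}^2.

    Uses the 1.65 multiplier (95% one-sided normal quantile) from the text
    and square-root-of-time scaling for multi-day horizons.
    """
    r = np.asarray(returns)
    sigma2 = r[0] ** 2                      # initialise with first squared return
    for x in r[1:]:
        sigma2 = lam * sigma2 + (1 - lam) * x ** 2
    var_1day = 1.65 * np.sqrt(sigma2) * position
    return np.sqrt(horizon) * var_1day      # VaR(l) = sqrt(l) * VaR

rng = np.random.default_rng(1)
r = rng.normal(0.0, 0.01, size=1000)        # at least 1000 observations
v1 = riskmetrics_var(r)                     # 1-day VaR
v10 = riskmetrics_var(r, horizon=10)        # 10-day VaR
```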

Conditional Value at Risk (CVaR)
CVaR is generally a better approximation of potential losses across scenarios, because it looks beyond the VaR cutoff. We define a probability level (1-α)% and consider the scenarios whose losses exceed this level; CVaR is defined as the average of the losses in these scenarios. It is thus an average of the expected losses in the worst cases rather than a single point on the range of potential losses (Čorkalo 2011). To find exactly how much we lose on average in our worst-case scenarios, we have to look at CVaR values: CVaR at the α% level tells us the average loss in the worst α% of our returns. Operationally, CVaR(CL) = (1 / number of tail observations) × (sum of the sorted returns from the worst through the VaR observation), where the number of tail observations equals (1 − CL) × count (Uryasev 2011; Čorkalo 2011).
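The tail-averaging computation above amounts to the following sketch, again on simulated returns rather than TASI data:

```python
import numpy as np

def var_cvar(returns, alpha=0.05):
    """Historical VaR and CVaR (expected shortfall) at tail probability alpha.

    CVaR is the average of the losses that are worse than the VaR cutoff,
    so it is always at least as large as VaR.
    """
    r = np.sort(np.asarray(returns))        # ascending: worst returns first
    k = int(np.floor(alpha * len(r)))       # number of tail scenarios
    var = -r[k]                             # alpha-quantile loss
    cvar = -r[:k].mean()                    # mean loss beyond the VaR cutoff
    return var, cvar

rng = np.random.default_rng(2)
r = rng.normal(0.0, 0.01, size=5000)
var5, cvar5 = var_cvar(r, 0.05)
```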

Quantile and Order Statistics
Assuming that the distribution of returns in the prediction period is the same as that in the sample period, one can use the empirical quantile of the return r to calculate VaR (Tsay 2002). Arrange the returns in increasing order as order statistics r_{(1)} ≤ … ≤ r_{(n)}, and assume that the returns are independent and identically distributed random variables with a continuous distribution having probability density function (pdf) f(x) and CDF F(x). Treating r_{(i)} as the p_i = i/n quantile, the empirical p-quantile for p_{i1} < p < p_{i2} is obtained by linear interpolation: x̂_p = [(p_{i2} − p) / (p_{i2} − p_{i1})] r_{(i1)} + [(p − p_{i1}) / (p_{i2} − p_{i1})] r_{(i2)}.
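The interpolation between bracketing order statistics can be sketched directly; the result closely tracks library quantile functions on a large sample:

```python
import numpy as np

def empirical_quantile(returns, p):
    """Empirical quantile by linear interpolation between order statistics.

    The i-th order statistic r_(i) is treated as the p_i = i/n quantile;
    quantiles between grid points are interpolated linearly (Tsay-style).
    """
    r = np.sort(np.asarray(returns))
    n = len(r)
    i1 = int(np.floor(p * n))               # lower bracketing rank (1-based)
    i1 = min(max(i1, 1), n - 1)
    i2 = i1 + 1
    p1, p2 = i1 / n, i2 / n
    w = (p2 - p) / (p2 - p1)                # weight on the lower order statistic
    return w * r[i1 - 1] + (1 - w) * r[i2 - 1]

rng = np.random.default_rng(3)
r = rng.normal(0.0, 0.01, size=3270)
q05 = empirical_quantile(r, 0.05)           # 5% quantile; -q05 is the VaR
```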

Classical value at risk versus GARCH-based models
Volatility clustering occurs when a period of large returns is followed by a period of small returns (Nelson, 1991). The second property, fat tails, indicates that large positive or large negative observations occur more frequently in financial data than under the standard normal distribution. Nonlinear dependence describes the relationship between multivariate financial data: different assets co-move in the same direction under some market conditions (Danielsson, 2011).
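A short simulation makes both stylized facts concrete: a GARCH(1,1) process produces returns whose squares are autocorrelated (clustering) and whose kurtosis exceeds the normal value of 3 (fat tails). The parameter values below are illustrative assumptions, not estimates for TASI:

```python
import numpy as np

def simulate_garch11(n, omega=1e-6, alpha=0.08, beta=0.9, seed=4):
    """Simulate GARCH(1,1): sigma_t^2 = omega + alpha*r_{t-1}^2 + beta*sigma_{t-1}^2."""
    rng = np.random.default_rng(seed)
    r = np.empty(n)
    sigma2 = omega / (1 - alpha - beta)     # start at the long-run variance
    for t in range(n):
        r[t] = np.sqrt(sigma2) * rng.standard_normal()
        sigma2 = omega + alpha * r[t] ** 2 + beta * sigma2
    return r

r = simulate_garch11(5000)
# Volatility clustering: squared returns are positively autocorrelated.
ac = np.corrcoef(r[:-1] ** 2, r[1:] ** 2)[0, 1]
# Fat tails: sample kurtosis exceeds the Gaussian value of 3.
kurt = np.mean(r ** 4) / np.var(r) ** 2
```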

Backtesting
Backtesting refers to testing a predictive model using existing historical data, i.e. a kind of cross-validation applied to time-series data in a trading strategy, investment strategy, or risk model (Hurlin and Tokpavi 2014). It seeks to estimate the performance of a strategy or model as if it had been employed during a past period, and thus requires simulating past conditions in sufficient detail. One limitation of backtesting is the need for detailed historical data; a second is its inability to model strategies that would themselves have affected historic prices; finally, backtesting, like other modeling, is limited by potential over-fitting, i.e. it is often possible to find a strategy that backtests well but would not work well in the future. Despite these limitations, backtesting has proved to be a useful tool. For the likelihood ratio test, we reject the null hypothesis if the outcome exceeds the critical value of the χ² distribution with one degree of freedom (https://www0.gsb.columbia.edu/faculty/pglasserman/Other/masteringrisk.pdf).
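The likelihood ratio test used here is the standard Kupiec unconditional-coverage statistic; a minimal sketch, with illustrative exception counts rather than the paper's actual backtest results:

```python
import numpy as np

def kupiec_lr(n_obs, n_exceptions, p):
    """Kupiec unconditional-coverage LR statistic for a VaR backtest.

    Under H0 the exception rate equals p; LR is asymptotically chi-square
    with one degree of freedom, so reject at 5% when LR > 3.84
    and at 1% when LR > 6.63.
    """
    x, n = n_exceptions, n_obs
    phat = x / n

    def loglik(q):
        # Binomial log-likelihood; guard against log(0) when x == 0.
        return (n - x) * np.log(1 - q) + (x * np.log(q) if x > 0 else 0.0)

    return -2.0 * (loglik(p) - loglik(phat))

# 3270 observations at 5% VaR: ~163.5 exceptions expected under H0.
lr_ok = kupiec_lr(3270, 170, 0.05)    # close to expected -> small LR, accept
lr_bad = kupiec_lr(3270, 260, 0.05)   # far too many exceptions -> reject
```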

Monte Carlo Simulation (MCS)
Monte Carlo simulation involves conducting repeated trials of the values of the uncertain input(s), based on some known probability distribution(s) and some known process, to produce a probability distribution for the output. That is, each uncertain input or parameter in the problem of interest is assumed to be a random variable with a known probability distribution (Čorkalo 2011; Cheung and Powell 2012). Chen and Chen (2013) give the geometric Brownian motion S_{t+Δt} = S_t e^{(μ − σ²/2)Δt + σ ε_t √Δt}, where S_t is the share price at time t, e is the base of the natural logarithm, Δt is the time increment (expressed as a portion of a year in terms of trading days), and ε_t is the randomness introduced at time Δt to randomize the change in share price. After some rearrangement, Monte Carlo simulation multiplies the last price by the exponential of the sum of the drift term and the product of the standard deviation and the inverse of the normal distribution.
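The simulation step above can be sketched as follows; the drift, volatility, and starting level are illustrative placeholders, not TASI estimates:

```python
import numpy as np

def mc_var(s0, mu, sigma, horizon_days, n_paths=100_000, alpha=0.05, seed=5):
    """Monte Carlo VaR from terminal prices under geometric Brownian motion.

    Each step applies S_{t+dt} = S_t * exp((mu - sigma^2/2)*dt + sigma*sqrt(dt)*eps)
    with dt = 1/252 (one trading day).
    """
    rng = np.random.default_rng(seed)
    dt = 1.0 / 252.0
    eps = rng.standard_normal((n_paths, horizon_days))
    steps = (mu - 0.5 * sigma ** 2) * dt + sigma * np.sqrt(dt) * eps
    terminal = s0 * np.exp(steps.sum(axis=1))   # price at the end of the horizon
    pnl = (terminal - s0) / s0                  # simple return over the horizon
    return -np.quantile(pnl, alpha)             # loss exceeded with prob alpha

# Illustrative: 10-day 5% VaR for an index-like position.
var10 = mc_var(s0=7000.0, mu=0.05, sigma=0.20, horizon_days=10)
```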

Bootstrapping Simulation
The key difference between Monte Carlo simulation and the bootstrap is that Monte Carlo uses an algorithm to generate the portfolio path forward in time, whereas the bootstrap uses historical returns instead of an algorithm. The benefit of bootstrapping is that it implicitly captures the volatilities and correlations present in the historical data (Dutta and Bhattacharya 2006). The standard steps are as follows. First, we index the historical daily returns by means of the natural logarithm. The second step is the essence of bootstrapping: we randomly select a cross-sectional vector, that is, we go back, randomly pick a day from the historical window, and use that day's returns. The built-in Excel function RAND() generates a uniform variable between 0 and 1, and INT(RAND()*n)+1 converts it into a random day index. Third, we use this random selection to simulate forward, which gives us the price of the portfolio at some point in the future. Fourth, we repeat this as many times as we want, which gives us n hypothetical portfolios in the future. Finally, we sort that list from best to worst and look down the list to find the value at risk. Bootstrapping needs no distributional assumption, so it is not bound to the problems of the normal distribution, and it automatically picks up correlations between stocks if they exist. Bootstrapping is thus an improvement over basic historical simulation that uses random sampling with replacement. Ten-day VaR is very common for market risk and stock prices (Dutta and Bhattacharya 2006).
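The steps above can be sketched compactly; the simulated daily returns stand in for the historical TASI series, and the ten-day horizon follows the text:

```python
import numpy as np

def bootstrap_var(returns, horizon=10, n_sims=10_000, alpha=0.05, seed=6):
    """Ten-day VaR by resampling historical daily log returns with replacement.

    Each simulated path draws `horizon` historical days at random (the
    spreadsheet INT(RAND()*n)+1 trick), sums the log returns, and
    exponentiates to get the horizon return.
    """
    r = np.asarray(returns)
    rng = np.random.default_rng(seed)
    idx = rng.integers(0, len(r), size=(n_sims, horizon))   # random day indices
    path_return = np.exp(r[idx].sum(axis=1)) - 1.0          # 10-day simple return
    return -np.quantile(path_return, alpha)                 # sort and read the tail

rng = np.random.default_rng(7)
daily = rng.normal(0.0, 0.01, size=3270)    # stand-in for historical log returns
var10 = bootstrap_var(daily)
```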

EMPIRICAL RESULTS Data
We downloaded daily close prices of TASI from MetaStock covering the period January 2004 to December 2017, containing 3270 observations, and converted the data into log returns (r). Log returns have two advantages. The main one is that they are time additive (time consistent). The second desirable property is that if log returns are normally distributed, a common assumption over short periods, then adding them produces an end log return that is also normally distributed. One disadvantage is that log returns are not a linear function of the component (asset) weights.
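The conversion and the time-additivity property can be verified in a few lines; the prices below are made-up numbers for illustration:

```python
import numpy as np

def log_returns(prices):
    """Convert a price series to daily log returns r_t = ln(P_t / P_{t-1})."""
    p = np.asarray(prices, dtype=float)
    return np.diff(np.log(p))

prices = np.array([100.0, 102.0, 101.0, 104.0])
r = log_returns(prices)
# Time additivity: daily log returns sum to the log return over the whole span.
total = r.sum()
```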

Descriptive Statistics
The mean return is close to zero, as the median also indicates. There is a wide range of returns, skewed to the left as revealed by the negative sign of skewness, which is an alarming matter, while the positive excess kurtosis suggests a leptokurtic distribution with fat tails, i.e. a higher probability of observations in the tails: too many points cluster close to the mean and too many points lie far from the mean. Given this leptokurtic distribution, we have to be very careful about the negative region.

Unit Root Test
The Augmented Dickey-Fuller test rejects the null hypothesis that r has a unit root, since the absolute values of the three critical values (-3.43216, -2.86223, -2.56718) are all less than the absolute value of the calculated ADF t-statistic (-51.6312), indicating stationary return data. Results of GARCH(1,1) reveal that the constant term is significant at five percent, whereas news about volatility from the previous period and the last period's forecast variance are significant at one percent. The sum (α + β = 0.134302) is less than one, indicating convergence to the long-run variance σ̄² = 0.000012. The value at risk percentages calculated by the delta normal valuation method and the empirical quantile passed the likelihood ratio test at 5%, since the calculated LR is less than χ²(0.95, 1) = 3.84. Hence, there is a five percent chance of losing 1.7%, 1.57%, or 4.3% or more of the index value on any given day, respectively. The application of the empirical quantile, historical simulation, percentile, and IGARCH gave value at risk percentages that passed the LR test at 1%, χ²(0.99, 1) = 6.63. The VaR percentages of the delta normal and IGARCH are two extremes, different from the other three methods, which lie close to each other. Thus, the empirical quantile is the only method that passed the LR test at both critical values.

Monte Carlo Simulation
Five sample series were forecast using Monte Carlo simulation and plotted in the following chart.
The first, third and fifth series were below 114.1, the TASI end point, indicating negative returns, whereas the second and fourth series showed gains. Series 2 indicates the worst loss at the seventh period (-3%), followed by series 3 (-1.9%) and series 4 (-1.4%) at the sixth period.

Bootstrapping Simulation
Series 3 showed the worst loss, followed by series 1 and 5. The first series had the worst loss over the first four periods of the simulation. There is an apparent difference between the simulation results of Monte Carlo and bootstrapping: negative returns were dominant in bootstrapping.

DISCUSSION
Basic historical simulation is the most popular approach to value at risk employed by companies and banks. It assumes a mean daily return of zero, often done for a short period. Its advantages stem from its simplicity, flexibility, and freedom from the complexity of the normality assumption. Its major drawback is its need for a long time series, which requires continuous updating and risks picking up extreme values. The application of this method to TASI log returns produced two estimates that differ in magnitude and in acceptance by the LR backtest: its VaR 5% is almost half that of its VaR 1%, and it failed to pass the LR test at 5% while passing at 1%. Conditional value at risk is supposed to be an improvement to VaR. It revealed an almost twofold VaR percentage compared to basic historical simulation; however, it did not pass the LR test. The percentile approach follows suit with historical simulation. The empirical quantile shares the same advantages as the aforesaid methods; however, its drawbacks are its assumption of an unchanging distribution of returns and its inefficiency when p is close to zero. Nevertheless, it proved to be the only method that passed the backtest at both levels.
The delta normal approach assumes the prevalence of the standard normal distribution. It is an alternative to the variance/covariance approach; nonetheless, it captures only linear risk exposure. Its estimate at the five percent level is relatively different from those of the other approaches despite its success in passing the LR test; its VaR 1% did not pass. The application of the GARCH family to value at risk produced unfavorable results in terms of backtest passage, specifically GARCH(1,1) and IGARCH(1,1). Another drawback of the estimated IGARCH(1,1) is its failure to remove the ARCH effect, which was accomplished by IGARCH(2,1). Bootstrapping simulation is an improvement over basic historical simulation. It showed worse losses compared with Monte Carlo.