INVESTMENT PORTFOLIO OPTIMIZATION WITH GARCH MODELS

BY

RICHMOND OPARE SIAW (10395081)

THIS THESIS IS SUBMITTED TO THE UNIVERSITY OF GHANA, LEGON IN PARTIAL FULFILLMENT OF THE REQUIREMENT FOR THE AWARD OF THE MPHIL STATISTICS DEGREE

JUNE, 2014

DECLARATION

Candidate's Declaration

This is to certify that this thesis is the result of my own research work and that no part of it has been presented for another degree in this University or elsewhere.

SIGNATURE: ……………………………… DATE: ……………………………
RICHMOND OPARE SIAW (10395081)

Supervisors' Declaration

We hereby certify that this thesis was prepared from the candidate's own work and supervised in accordance with the guidelines on supervision of theses laid down by the University of Ghana.

SIGNATURE: ……………………….…… DATE: ………………………..
DR. KWABENA DOKU-AMPONSAH (Principal Supervisor)

SIGNATURE: ……………………………. DATE: …………………………..
DR. F. O. METTLE (Co-Supervisor)

ABSTRACT

Since the introduction of the Markowitz mean-variance optimization model, several extensions have been made to improve optimality. This study examines the application of two models, the ARMA-GARCH model and the ARMA-DCC GARCH model, to the mean-VaR optimization of funds managed by HFC Investment Limited. Weekly prices of three funds, the Equity Trust Fund, the Future Plan Fund and the Unit Trust Fund, from 2009 to 2012 were examined. The returns of the funds were modelled with the Autoregressive Moving Average (ARMA) model, while volatility was modelled with the univariate Generalised Autoregressive Conditional Heteroskedasticity (GARCH) model as well as the multivariate Dynamic Conditional Correlation GARCH (DCC GARCH) model. This was based on the assumption of non-constant mean and volatility of fund returns. In this study the risk of a portfolio is measured by the value-at-risk. A single constrained mean-VaR optimization problem was obtained, based on the assumption that investors' preference rests solely on risk and return. The optimization was performed using the Lagrange multiplier approach and the solution was obtained via the Kuhn-Tucker conditions. The conclusions drawn from the results point to the fact that a more efficient portfolio is obtained when the value-at-risk (VaR) is modelled with a multivariate GARCH.

DEDICATION

This research work is dedicated to my parents, Mr. Moses Odoi-Opare and Mrs. Margaret Odoi-Opare; my brother, Isaac Odoi-Opare; my wife, Jennifer Acheampomaa Amofa; and my lovely daughters, Marjorie Nyarkoa Siaw and Shannon Oparebea Siaw.

ACKNOWLEDGEMENT

I give my sincere thanks to the Almighty God for giving me this privilege to contribute to knowledge, and especially for His care, protection, direction and encouragement throughout my study. I would like to acknowledge in a special way my supervisors, Dr. Kwabena Doku-Amponsah and Dr. F. O. Mettle, for the support, sacrifices, direction and other assistance they offered to make this research a success. My appreciation also goes to the lecturers in the Statistics Department for their constructive criticisms and direction, not forgetting all the staff of the Statistics Department of the University.
To my parents, brother, wife, daughters and all my friends, I say may God richly bless you all for the confidence you showed in me as well as the guidance, moral and spiritual support you gave me. My sincere thanks also go to Mr. Emmanuel Akotibe for his immense help; but for his financial support, this academic exercise would still remain a dream. I am also grateful to Mr. & Mrs. Boakye-Yiadom and Miss Portia Mintah for their spiritual support throughout my period of study. Finally, I give special thanks to all my colleagues, especially the 2014 MPhil students at the Department of Statistics, in particular Mr. Mathew Kofi-Ackah Erzoah, for always being there to share ideas, to criticise, and to make this research lively and less stressful. Thank you all and may God bless you more abundantly.

TABLE OF CONTENTS

DECLARATION
Candidate's Declaration
ABSTRACT
DEDICATION
ACKNOWLEDGEMENT
TABLE OF CONTENTS
LIST OF FIGURES
LIST OF TABLES

CHAPTER ONE
INTRODUCTION
1.1 Introduction
1.2 Problem statement
1.3 Objective of the study
1.4 Scope of the study
1.5 Significance of the study
1.6 Limitations to the study
1.7 Organisation of the Study

CHAPTER TWO
LITERATURE REVIEW
2.0 Introduction
2.1 Theoretical Review
2.1.1 GARCH Models
2.1.2 Value-at-Risk (VaR) Model
2.1.3 Risk Forecasting on Multiple Timescales
2.2 Empirical Review

CHAPTER THREE
METHODOLOGY
3.0 Introduction
3.1 Descriptive statistics
3.1.1 Minimum statistic
3.1.2 Maximum statistic
3.1.3 Mean statistic
3.1.4 Skewness and Kurtosis statistic
3.2 Time Series Trend Analysis
3.2.1 Ljung-Box Test
3.2.2 Quantile-Quantile (Q-Q) Plot
3.3 Goodness-of-Fit Tests
3.4 Anderson-Darling Test
3.5 Kolmogorov-Smirnov Goodness-of-Fit Test
3.6 Chi-square Goodness-of-Fit Test
3.7 Maximum Likelihood (ML) Estimation Method
3.8 Akaike Information Criterion (AIC)
3.9 Autoregressive Moving Average (ARMA) model
3.10 Autoregressive Integrated Moving Average (ARIMA) model
3.11 ARCH and GARCH Models
3.12 Testing for ARCH Effects
3.13 Multivariate GARCH models
3.13.1 BEKK-GARCH models
3.13.2 The Constant Conditional Correlation (CCC-GARCH) Model
3.13.3 The Dynamic Conditional Correlation (DCC-GARCH) Model
3.14 Portfolio Optimization Model
3.15 Value-at-Risk (VaR)
3.15.1 Efficiency of a portfolio

CHAPTER FOUR
ANALYSIS AND DISCUSSIONS
4.0 Introduction
4.1 The HFC funds
4.2 Time Series
4.3 Descriptive Statistics
4.4 Quantile-Quantile (Q-Q) Plots
4.5 Test for Autocorrelation
4.6 Tests for Normality
4.7 ARMA Models for the funds
4.8 Estimates of the Mean and Variance
4.9 The Portfolio Optimization Process (ARMA-GARCH)
4.10 The Portfolio Optimization Process (ARMA-DCC GARCH)
4.11 Discussion of findings

CHAPTER FIVE
SUMMARY, CONCLUSION AND RECOMMENDATION
5.0 Introduction
5.1 Summary
5.2 Conclusions
5.3 Recommendations
5.4 Further studies
REFERENCES

LIST OF FIGURES

Figure 4.2.1 Time series of prices for Equity Trust fund
Figure 4.2.2 Time series of prices for Future Plan fund
Figure 4.2.3 Time series of prices for Unit Trust fund
Figure 4.2.4 Time series of returns for all three funds
Figure 4.3 Histograms with normal fits
Figure 4.4 Quantile-Quantile (Q-Q) Plots
Figure 4.9a Efficient frontier (univariate GARCH)
Figure 4.9b Ratio of mean-VaR (univariate GARCH)
Figure 4.10a Efficient frontier (multivariate GARCH)
Figure 4.10b Ratio of mean-VaR (multivariate GARCH)

LIST OF TABLES

Table 4.3.1 Basic statistics of returns
Table 4.5 Autocorrelation between funds
Table 4.6 Test for normality
Table 4.7a Akaike Information Criterion for ARMA
Table 4.7b Akaike Information Criterion for GARCH
Table 4.8 Model estimates for Mean and Variance
Table 4.9a Portfolio weights per tolerance level (univariate GARCH)
Table 4.9b Portfolio optimization variables (univariate GARCH)
Table 4.10a Portfolio weights per tolerance level (multivariate GARCH)
Table 4.10b Portfolio optimization variables (multivariate GARCH)

CHAPTER ONE
INTRODUCTION

1.1 Introduction

A portfolio investment is a passive investment in securities, one that does not include active management or control of the securities by the investor. A portfolio investment can also be seen as an investment made by an investor who is not particularly interested in involvement in the management of a company; the main purpose of the investment is solely financial gain. Hence a portfolio investment must include an assortment or range of securities, or other types of investment vehicles, to spread the risk of possible loss due to below-expectation performance of one or a few of them.

Any investor would like to have the highest return possible from an investment. However, this has to be counterbalanced by the amount of risk the investor is able or willing to take. The expected return and the risk, measured by the variance (or the standard deviation, its square root), are the two main characteristics of an investment portfolio. Studies show that, unfortunately, equities with high returns usually also carry high risk. The behaviour of a portfolio can be quite different from the behaviour of its individual components.
The risk of a properly constructed portfolio of equities in leading markets could be half the sum of the risks of the individual assets in the portfolio. This is a result of the complex correlation patterns between individual assets or equities. A good optimizer can take advantage of the correlations, the expected returns, the risk (variance) and user constraints to obtain an optimized portfolio (Fernando, 2000).

The mathematical problem of portfolio optimization was initiated by Professor Harry Markowitz, who won the Nobel Prize in Economics in 1990 (Markowitz et al., 1991). The model leads to a quadratic programming problem, and various authors have proposed models which avoid its computational difficulties. Konno (1990) formulated a piecewise linear risk function to replace the covariance, thus reducing the problem to a linear programming one. The author proved that his model is equivalent to the Markowitz model when the vector of returns is multivariate normally distributed. Markowitz et al. (1994) later described a method which avoids actual computation of the covariance matrix, and Morita et al. (1989) applied a stochastic linear knapsack model to the portfolio selection problem.

In recent years, criticism of the basic model has been increasing because of its disregard for individual investors' preferences. Konno (1990) observed that most investors do not actually buy efficient portfolios, but rather those behind the efficient frontier. Ballestero and Romero (1996) first proposed a compromise programming model for an "average" investor, which was later modified to approximate the optimum portfolio of an individual investor (Ballestero, 1998). A different approach was described by Arthur and Ghandforoush (1987), who proposed the use of objective and subjective measures for assets; their idea leads to a simple linear programming model. Hallerbach and Spronk (1997) argued that most models do not incorporate the multidimensional nature of the problem, and outlined a framework for such a view of portfolio management.

1.2 Problem statement

The over-reliance of investors and even portfolio managers on cap-weighted averages in monitoring benchmark indices gives investors only a general idea of overall market movement. Investment decisions that depend on these benchmarks lead investors and portfolio managers to underperform, since those benchmarks lack the requirements for robust portfolio construction. Representing the risk of an equity through the variance of its returns, or the risk of an option through its volatility, takes account of both good and bad risks: a significant variance corresponds to the possibility of seeing returns vastly different from the expected return, i.e. very small values (small profits and even losses) as well as very large values (significant profits). The application of equal weights in the portfolio optimization process is based on the assumption of normality. This assumption is proven not to always hold, hence the need for an optimization model that can produce specific weights for each component of the portfolio.

1.3 Objective of the study

The general objective of this research is to identify an alternative approach to portfolio optimisation with which to model the various funds of HFC Investment Limited. Specifically, this study seeks to:
i. evaluate the performance of the various funds managed by HFC Investment Limited;
ii.
employ a recent time series model to forecast future returns of the funds under study; and
iii. identify which portfolio mix is indeed optimal.

1.4 Scope of the study

This study considers data from three funds managed by HFC Investment Services Limited, namely the HFC Equity Trust, the HFC Future Plan Trust and the HFC Unit Trust, from January 2009 to June 2012.

1.5 Significance of the study

The introduction of the Markowitz mean-variance model has paved the way for many researchers to exploit several avenues for constructing a robust portfolio that could enhance or improve optimality. This study in a similar manner provides an alternative approach (based on the value-at-risk) to reaching optimality, by constructing models for the estimates of the mean return and the variance. The approach used in this study will pave the way for researchers interested in this area to exploit other avenues of modelling means and variances in constructing a robust portfolio. The study will also enlighten fund managers as well as investors and other stakeholders who rely solely on the Markowitz model to achieve an optimal portfolio.

1.6 Limitations to the study

The scanty nature of the data made available to this study did not allow the actual distribution of the data to be obtained. Financial institutions are therefore advised to make financial information readily available to researchers so that proper and accurate conclusions can be drawn.

1.7 Organisation of the Study

The remaining part of this thesis is organized into four chapters. Chapter two provides a review of literature related to the topic and its basic concepts. Chapter three presents the methodology engaged to produce the results needed for the analysis. Chapter four gives the analysis and discussion of the major findings of the study, and finally chapter five provides the summary, conclusions and recommendations of the study.

CHAPTER TWO
LITERATURE REVIEW

2.0 Introduction

This chapter reviews relevant literature. Both theoretical and empirical literature pertinent to the study and its objectives are reviewed.

2.1 Theoretical Review

The subject of portfolio optimization has attracted considerable attention among scholars and researchers. Portfolio optimization has come a long way from Markowitz's (1952) seminal work, which introduced the return/variance risk-management framework. Developments in portfolio optimization are stimulated by two basic requirements:
i. adequate modelling of utility functions, risks, and constraints;
ii. efficiency, that is, the ability to handle large numbers of instruments and scenarios.

There have been many efforts to comprehend the concept of portfolio optimization using different theoretical models, and the increased attention paid to the subject has brought significant differences in perceptions of what precisely portfolio optimization is.

Fundamentally, the selection of assets or equities is not just a problem of finding attractive investments. Designing the correct portfolio of assets cannot be done by human intuition alone and requires modern, powerful and reliable mathematical programs called optimizers (Chang, 2000).
The behaviour of a portfolio can be quite different from the behaviour of its individual components. The risk of a properly constructed portfolio of equities in leading markets could be half the sum of the risks of the individual assets in the portfolio. This is due to complex correlation patterns between individual assets or equities. A good optimizer can exploit the correlations, the expected returns, the risk (variance) and user constraints to obtain an optimized portfolio. It is widely believed that most of the estimation risk in optimal portfolios is due to errors in estimates of expected returns, and not in the estimates of risk (Chopra and Ziemba, 1993).

2.1.1 GARCH Models

ARCH is an acronym for Autoregressive Conditional Heteroscedasticity. In ARCH models the conditional variance has a structure very similar to the structure of the conditional expectation in an AR model. The ARCH(1) model is the simplest such model and is analogous to an AR(1) model; ARCH(p) models are analogous to AR(p) models. Finally, GARCH (Generalized ARCH) models specify conditional variances much as an ARMA model specifies the conditional expectation.

Recent developments in financial econometrics suggest the use of nonlinear time series structures to model the attitude of investors toward risk and expected return. Bera and Higgins (1993) remarked that a major contribution of the ARCH literature is the finding that apparent changes in the volatility of economic time series may be predictable and may result from a specific type of nonlinear dependence, rather than from exogenous structural changes in variables.

A variance forecast can be formed as a weighted average of past squared residuals; the ARCH model proposed by Engle (1982) lets these weights be parameters to be estimated, thus allowing the data to determine the best weights to use in forecasting the variance. A useful generalization of this model is the GARCH parameterization. This model is also a weighted average of past squared residuals, but it has declining weights that never go completely to zero. It gives parsimonious models that are easy to estimate and, even in its simplest form, has proven surprisingly successful in predicting conditional variances. The most widely used GARCH specification asserts that the best predictor of the variance in the next period is a weighted average of the long-run average variance, the variance predicted for this period, and the new information in this period captured by the most recent squared residual. Such an updating rule is a simple description of adaptive or learning behaviour and can be thought of as Bayesian updating.

Consider a trader who knows that the long-run average daily standard deviation of the Standard and Poor's 500 is 1 percent, that the forecast he made yesterday was 2 percent, and that the unexpected return observed today is 3 percent. Obviously, this is a high-volatility period, and today is especially volatile, which suggests that the forecast for tomorrow could be even higher. However, the fact that the long-term average is only 1 percent might lead the forecaster to lower the forecast. The best strategy depends upon the dependence between days.
The GARCH model for variance looks like this: ( ) [2.1] The econometrician must estimate the constants ; updating simply requires knowing the previous forecast and residual. The weights are ( ) and the long-run average variance is√ ( ) . It should be noted that this only works if , and it only really makes sense if the weights are positive requiring . The GARCH model that has been described is typically called the GARCH (1, 1) model. The (1,1) in parentheses is a standard notation in which the first number refers to how many autoregressive lags, or ARCH terms, appear in the equation, while the second number refers to how many moving average lags are specified, which here is often called the number of GARCH terms. Sometimes models with more than one lag are needed to find good variance forecasts. Although this model is directly set up to forecast for just one period, it turns out that based on the one-period forecast, a two-period forecast can be made. Ultimately, by repeating this step, long-horizon forecasts can be constructed. For the GARCH (1,1), the two-step forecast is a little closer to the long-run average variance than is the one-step forecast, and, ultimately, the distant-horizon forecast is the same for all time periods as long as α + β ˂ 1. University of Ghana http://ugspace.ug.edu.gh 10 The GARCH updating formula takes the weighted average of the unconditional variance, the squared residual for the first observation and the starting variance and estimates the variance of the second observation. This is input into the forecast of the third variance, and so forth. Eventually, an entire time series of variance forecasts is constructed. Ideally, this series is large when the residuals are large and small when they are small. The likelihood function provides a systematic way to adjust the parameters w, α, β to give the best fit. Of course, it is entirely possible that the true variance process is different from the one specified by the econometrician. In order to detect this, a variety of diagnostic tests are available. The simplest is to construct the series of {ԑt}, which are supposed to have constant mean and variance if the model is correctly specified. Various tests such as tests for autocorrelation in the squares are able to detect model failures. Often a “Ljung box test” with 15 lagged autocorrelations is used. The GARCH (1, 1) is the simplest and most robust of the family of volatility models. However, the model can be extended and modified in many ways. Three modifications will be mentioned briefly, although the number of volatility models that can be found in the literature is now quite extraordinary. The GARCH (1, 1) model can be generalized to a GARCH (p, q) model—that is, a model with additional lag terms. Such higher-order models are often useful when a long span of data is used, like several decades of daily data or a year of hourly data. With additional lags, such models allow both fast and slow decay of information. A particular specification of the University of Ghana http://ugspace.ug.edu.gh 11 GARCH (2, 2) by Engle and Lee (1999), sometimes called the “component model,” is a useful starting point to this approach. ARCH/GARCH models thus far have ignored information on the direction of returns; only the magnitude matters. However, there is very convincing evidence that the direction does affect volatility. 
ARCH/GARCH models as described thus far ignore information on the direction of returns; only the magnitude matters. However, there is very convincing evidence that the direction does affect volatility. Particularly for broad-based equity indices and bond market indices, it appears that market declines forecast higher volatility than comparable market increases do. There is now a variety of asymmetric GARCH models, including the EGARCH model of Nelson (1991), the TARCH (threshold ARCH) model attributed to Rabemananjara and Zakoian (1993) and Glosten, Jagannathan and Runkle (1993), and a collection and comparison by Engle and Ng (1993).

The goal of volatility analysis must ultimately be to explain the causes of volatility. While time series structure is valuable for forecasting, it does not satisfy our need to explain volatility. The estimation strategy introduced for ARCH/GARCH models can be directly applied if there are predetermined or exogenous variables; thus, we can think of the estimation problem for the variance just as we do for the mean, and we can carry out specification searches and hypothesis tests to find the best formulation. Thus far, however, attempts to find the ultimate cause of volatility are not very satisfactory (Engle, 2001).

Obviously, volatility is a response to news, which must be a surprise. However, the timing of the news may not be a surprise, and this gives rise to predictable components of volatility, such as economic announcements. It is also possible to see how the amplitude of news events is influenced by other news events. For example, the amplitude of return movements on the United States stock market may respond to the volatility observed earlier in the day in Asian markets, as well as to the volatility observed in the United States on the previous day. Engle, Ito and Lin (1990) call these "heat wave" and "meteor shower" effects.

A similar issue arises when examining several assets in the same market. Does the volatility of one influence the volatility of another? In particular, the volatility of an individual stock is clearly influenced by the volatility of the market as a whole. This is a natural implication of the capital asset pricing model. It also appears that there is time variation in idiosyncratic volatility (Engle, Ng and Rothschild, 1992).

2.1.2 Value-at-Risk (VaR) Model

Current regulations for financial businesses formulate some of the risk management requirements in terms of percentiles of loss distributions. An upper percentile of the loss distribution is called Value-at-Risk (VaR). For instance, the 95% VaR is an upper estimate of losses which is exceeded with 5% probability.

The popularity of VaR is mostly related to its simple and easily understood representation of high losses. VaR can be quite efficiently estimated and managed when the underlying risk factors are normally (log-normally) distributed. However, for non-normal distributions, VaR may have undesirable properties (Artzner et al., 1997, 1999), such as lack of sub-additivity: the VaR of a portfolio with two instruments may be greater than the sum of the individual VaRs of these two instruments. Also, VaR is difficult to control or optimize for discrete distributions, when it is calculated using scenarios. In this case, VaR is non-convex (for a definition of convexity see Rockafellar (1970)) and non-smooth as a function of positions, and has multiple local extrema (Krokhmal, Palmquist, & Uryasev, 2002).
Most approaches to calculating VaR rely on a linear approximation of the portfolio risks and assume a joint normal (or log-normal) distribution of the underlying market parameters (Duffie and Pan, 1997; Pritsker, 1997; Simons, 1996). Historical or Monte Carlo simulation-based tools are also used when the portfolio contains nonlinear instruments such as options (Jorion, 1996; Mauser & Rosen, 1991; Pritsker, 1997). Discussions of optimization problems involving VaR can be found in Litterman (1997 I, 1997 II), Kast et al. (1998), and Lucas & Klaassen (1998).

For investors, risk is about the odds of losing money, and VaR is based on that common-sense fact. By assuming that investors care about the odds of big losses, VaR can be used to answer the questions, "What is my worst-case scenario?" or "How much could I lose in a really bad month?"

Value-at-Risk is the most widely used and accepted tool for measuring market risk and is now a standard in the financial industry. Intuitively, VaR measures the maximum loss that the value of an asset (or a portfolio of assets) can suffer, with an associated probability, within a specified time horizon. In statistical terms, the VaR can be thought of as a quantile of the returns distribution (Chamu, 2005). Generally, the methods for calculating VaR fall into three categories: the variance-covariance method, the Monte Carlo simulation method and the historical simulation method.

Most of the literature on VaR focuses on the computation of VaR for financial assets such as stocks or bonds, and usually deals with the modelling of VaR for negative returns. Recent examples are the books by Dowd (1998) and Jorion (2000) and the papers by van den Goorbergh and Vlaar (1999), Danielsson and de Vries (2000), Vlaar (2000) and Giot and Laurent (2003). Applications of the ARCH/GARCH approach are widespread in situations where the volatility of returns is a central issue. Many banks and other financial institutions use the concept of value-at-risk as a way to measure the risks faced by their portfolios. The one (1) percent value-at-risk is defined as the number of dollars that one can be 99 percent certain exceeds any losses for the next day. Statisticians call this a one percent quantile, because one percent of the outcomes are worse and 99 percent are better.

2.1.3 Risk Forecasting on Multiple Timescales

The vague word "risk" refers to the degree of future variability of a quantity of interest, such as price return. A risk model is a quantitative approach to making a numerical risk forecast based on observed data, and such models are central to the practice of investing. The classical risk forecast, as developed by Markowitz (1952) and Sharpe (1964), is a forecast of the standard deviation of portfolio return at a fixed time horizon, but there are several other measures of risk in common use, such as value-at-risk (VaR), expected shortfall (ES) and others (Artzner et al., 1999; Rockafellar & Uryasev, 2002). Each of these is a kind of measure of the width of a probability density function describing the future return. Ultimately, the real underlying risk forecast is a forecast of the full probability distribution of returns, from which any numerical risk measure is determined.
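Since VaR is simply a quantile of the return distribution, it is straightforward to compute once a distribution has been chosen. The sketch below, which assumes the numpy and scipy packages and a toy return series, contrasts the historical-simulation estimate (an empirical quantile) with the variance-covariance estimate under a normal assumption; it illustrates the definitions above and is not the estimation code used in this study.

    import numpy as np
    from scipy import stats

    def var_historical(returns, level=0.99):
        # Historical simulation: the empirical (1 - level) quantile of returns,
        # reported as a positive loss figure.
        return -np.quantile(returns, 1.0 - level)

    def var_normal(returns, level=0.99):
        # Variance-covariance method: assumes returns are normally distributed.
        mu, sigma = returns.mean(), returns.std(ddof=1)
        return -(mu + sigma * stats.norm.ppf(1.0 - level))

    # Heavy-tailed toy returns (Student t with 4 degrees of freedom)
    r = np.random.default_rng(1).standard_t(df=4, size=1000) * 0.01
    print(var_historical(r), var_normal(r))  # the normal estimate tends to understate tail risk here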
The historical emphasis on a single number to measure risk has tended to hide the fact that most risk models in fact implicitly generate the forecast of a full distribution, and therefore represent an implicit choice of a family of distributions from which the forecast is to be made. This choice is difficult to avoid, even if it is not explicit.

Risk management is highly concerned with extreme events. Mathematically, extreme events correspond to what lies in the tails of the probability density function. Quantile-based risk measures like VaR and ES invariably come with a specific confidence level (commonly denoted by α). The confidence level measures the probability of the potential heavy losses not happening and is typically a number less than but very close to one, such as 95%, 99% or even 99.5%.

One entrenched empirical observation about financial data is that, owing to the prevalence of extreme outcomes, return series are heavy-tailed. With thin tails that decay exponentially, the normal distribution is apparently a poor candidate for risk modelling. Neither is the empirical distribution defined by the data an ideal choice, since information on its tail behaviour is patchy and limited. That is why more flexible non-Gaussian parametric families, such as the Generalised Hyperbolic (GH) distributions, are necessary and are coming into common use.

Another feature of return data that cannot be overlooked is that they are not independently and identically distributed (i.i.d.), whereas the standard technique of parameter estimation, the maximum likelihood method, requires an input of i.i.d. samples. It is therefore premature to calibrate GH distributions to raw data with the Expectation-Maximization (EM) algorithm. This difficulty can be resolved by filtering the serially correlated raw time series with generalized autoregressive conditionally heteroskedastic (GARCH) models.

Proposed by Bollerslev (1986) as a generalization of the ARCH process (see Engle, 1982), the GARCH process has become a well-established time series model and has evolved into a huge class consisting of numerous variants and extensions. Surveys of these models can be found in Bollerslev et al. (1994), McNeil et al. (2005) and Jondeau et al. (2007). In practice, low-order GARCH models are most widely used and prove effective in removing serial dependence. With approximately i.i.d. residuals ready, EM algorithms are implemented to calibrate the GH parameters, and quantiles are then calculated by numerically integrating the calibrated density function. After a very straightforward defiltering step that consists of scaling the quantiles by a volatility term (a product of the GARCH filtering), we arrive at the desired risk forecasts. This filtering-calibration-forecasting approach is adopted in Hu (2005) and Hu and Kercheval (2007). Following a backtesting procedure suggested by Kupiec (1995) and McNeil and Frey (2000), they showed that risk forecasts based on hyperbolic, Normal Inverse Gaussian (NIG), Variance-Gamma (VG) and skewed t distributions outperform normal-based ones without exception, especially at high confidence levels. McNeil et al. (2005) reached similar conclusions with different data and Student t distributions. In addition, it has been reported that unconditional methods (that is, those without GARCH) are, not surprisingly, outperformed by conditional methods based on GARCH.
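A minimal sketch of this filtering-calibration-forecasting idea is given below, with two simplifications that should be kept in mind: the GARCH(1,1) parameters are taken as given rather than estimated, and a Student t distribution (fitted by maximum likelihood in scipy) stands in for the GH family, for which an EM algorithm would be used. The returns are assumed to be mean-filtered already, so the ARMA step is omitted.

    import numpy as np
    from scipy import stats

    def filtered_var(returns, omega, alpha, beta, level=0.99):
        # 1. Filter: run the GARCH(1,1) variance recursion over the returns.
        h = np.empty(len(returns) + 1)
        h[0] = omega / (1.0 - alpha - beta)
        for t, r in enumerate(returns):
            h[t + 1] = omega + alpha * r ** 2 + beta * h[t]
        # 2. Calibrate: fit a heavy-tailed law to the approximately i.i.d.
        #    standardized residuals (Student t here, standing in for GH).
        z = returns / np.sqrt(h[:-1])
        df, loc, scale = stats.t.fit(z)
        # 3. Defilter: scale the residual quantile by the forecast volatility.
        q = stats.t.ppf(1.0 - level, df, loc=loc, scale=scale)
        return -q * np.sqrt(h[-1])  # one-step 99% VaR as a positive loss

    # Toy return series and hypothetical parameters, for illustration only
    r = np.random.default_rng(2).standard_normal(750) * 0.02
    print(filtered_var(r, omega=4e-6, alpha=0.08, beta=0.90))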
The GARCH-GH model described so far can be characterized as a fixed-frequency method, in that filtering, calibration and forecasting are done on the same timescale. In other words, to get risk forecasts over a weekly horizon, the input time series must also be weekly data. The accuracy and stability of maximum likelihood methods, including the EM algorithm, rely on an input sample that is sufficiently large. As the forecast horizon grows, the price history necessary to collect enough observations gets longer and longer. If the existing data on a particular asset are depleted before reaching the minimum sample size required by the maximum likelihood method, the entire model is crippled. What is more, even if the price history is infinitely long, data that are decades old may shed little light on the actual behaviour of the time series at present.

2.2 Empirical Review

Dijk and Franses (1996) studied the performance of the GARCH model and two of its nonlinear modifications in forecasting weekly stock market volatility. The models were the Quadratic GARCH (QGARCH) model and the Glosten, Jagannathan and Runkle (1993) (GJR) model, which had been proposed to describe the negative skewness in stock market indices. Their forecasting results for the weekly stock market indices showed that the QGARCH model significantly improved on the linear GARCH model. Based on the results, they concluded that the forecasts of GARCH-type models appear sensitive to extreme within-sample observations. The GJR model, on the other hand, was not recommended for forecasting.

Garcia et al. (2003) pointed out that Generalized Autoregressive Conditional Heteroscedasticity (GARCH) models do not treat the price series as invariant (that is, as having zero mean and constant variance), as is assumed in an autoregressive integrated moving average (ARIMA) process. In their study, a GARCH process measured the implied volatility of a series due to price spikes. Their paper focused on day-ahead forecasts of daily electricity markets with high-volatility periods using a GARCH methodology. The GARCH model in their study provided the twenty-four (24) hourly forecasts of the clearing prices for the next day based on historical data. They observed good performance of the prediction method: the daily mean errors were around 4%, where the lowest mean error was 2.60% and the highest was 7.60%. In general, they concluded that the results obtained by the model were quite reasonable, as the errors obtained were not larger than 10%. To verify the predictive accuracy of the GARCH model, different statistical measures were utilized, such as the Mean Week Error (MWE) and the Forecast Mean Square Error (FMSE).

Assis, Amran and Remali (2006) compared the forecasting performance of different time series methods for forecasting Bagan Datoh cocoa bean prices. The monthly average data of graded Bagan Datoh cocoa bean prices for the period January 1992 to December 2006 were used. Four different types of univariate time series models were compared, namely the exponential smoothing, Autoregressive Integrated Moving Average (ARIMA), Generalized Autoregressive Conditional Heteroscedasticity (GARCH) and the
Root mean Squared Error (RMSE), Mean Absolute Percentage Error (MAPE), Mean Absolute Error (MAE) and Thiel’s inequality coefficient (U- statistics) were used as the selection criteria to determine the best forecasting model. The study revealed that the time series data were influenced by a positive linear trend factor while a regression test results showed the non-existence of seasonal factors. Moreover, the autocorrelation function (ACF) and the Augmented Dickey-Fuller (ADF) test showed that the time series data was not stationary but became stationary after the first differentiating process was carried out. Based on the results of the ex-post forecasting (starting from January until December 2006) GARCH model performed better compared to exponential smoothing, ARIMA and the mixed ARIMA/GARCH model for forecasting Bagan Datoh cocoa bean prices. Kontonikas (2004) analyzed the relationship between inflation and inflation uncertainty in the United Kingdom from 1973 to 2003 with monthly and quarterly data. Different types of GARCH Mean (M)-Level (L) models that allow for simultaneous feedback between the conditional mean and variance of inflation were used to test the relationship. The author found that there was a positive relationship between past inflation and uncertainty about future inflation, in line with Friedman and Ball (2006), who concluded in their study of testing for rate of dependence and asymmetric in inflation uncertainty, that there was a link between inflation rate and inflation uncertainty. University of Ghana http://ugspace.ug.edu.gh 21 Jehovanes (2007) studied the time lag between a change in money supply and the inflation rate response. A modified generalized autoregressive conditional heteroscedasticity (GARCH) model was employed to monthly inflation data for the period 1994 to 2006: In the study the author used the maximum likelihood estimation technique to estimate parameters of the model and to determine significance of the lagged value. The GARCH model produced good results which indicated that a change in money supply would affect inflation rate considerably in seven months ahead. Chong, Loo and Muhammad (2002) used seven years of daily observed Sterling exchange rate. The GARCH model including the family of GARCH Mean models were used to explain the commonly observed characteristics of the unconditional distribution of daily rate of returns series. The results indicated that the hypotheses of constant variance model could be rejected, at least within sample, since almost all the parameter estimates of the ARCH and GARCH models were significant at five percent level. The Q-statistics and the Lagrange Multiplier test revealed that the use of the long memory GARCH model was preferable to the short memory and high order ARCH model. The results from various goodness-of-fit statistics were not consistent for Sterling exchange rates. It appeared that the BIC and AIC test proposed GARCH models to be the best for within sample modelling while the Mean Square Error (MSE) test suggested the GARCH mean. University of Ghana http://ugspace.ug.edu.gh 22 Patev et al., (2009) studied volatility forecasting on the thin emerging stock markets, and their study primarily focused on Bulgaria stock market. Three different models which are Risk Metrics, Exponential Weighted Moving Average (EWMA) with t–distribution and EWMA with GED distribution were employed for investigation. 
The study results suggested that both the EWMA with t-distribution and the EWMA with GED distribution perform well in modelling and forecasting the volatility of stock returns on the Bulgarian market. The authors also concluded that the EWMA model can be used effectively for volatility forecasting on emerging markets.

Most studies in the VaR literature focus on the computation of the VaR for financial assets such as stocks or bonds, and they usually deal with the modelling of VaR for negative returns. Recent examples are the books by Dowd (1998) and Jorion (2000) and the papers by van den Goorbergh & Vlaar (1999), Danielsson & de Vries (2000), Vlaar (2000) and Giot & Laurent (2003).

Chan, Karceski & Lakonishok (1999) evaluated the performance of models for the covariance structure of stock returns, focusing on their use for optimal portfolio selection. They compared the models' forecasts of future covariances and the optimized portfolios' out-of-sample performance, and concluded that a few factors capture the general covariance structure. Portfolio optimization also helps with risk control, and a three-factor model is adequate for selecting the minimum-variance portfolio. The authors argued that under the tracking error volatility criterion, which is widely used in practice, larger differences emerge across the models; therefore, more factors are necessary when the objective is to minimize tracking error volatility.

Caldeira, Moura and Santos (2012) applied a parsimonious multivariate GARCH specification based on the Fama-French-Carhart factor model to generate high-dimensional conditional covariance matrices and to obtain short-selling-constrained and unconstrained minimum-variance portfolios. The authors used 61 stocks traded on the São Paulo stock exchange (BM&FBovespa) and concluded that their proposed specification delivers less risky portfolios on an out-of-sample basis in comparison to several benchmark models, including existing factor approaches.

Su and Huang (2010) investigated different formulations of multivariate GARCH models in their research by applying two of the popular ones, the BEKK-GARCH model and the DCC-GARCH model, to evaluating the volatility of a portfolio of zero-coupon bonds. They alluded to the fact that multivariate GARCH models are considered one of the most useful tools for analysing and forecasting the volatility of time series when volatility fluctuates over time, a feature that demonstrates their suitability for modelling the co-movement of multivariate time series with a varying conditional covariance matrix. Based on this argument, they focused on understanding the model specifications of several widely used multivariate GARCH models so as to select appropriate ones, and then constructed the BEKK form and the DCC form separately, employing financial data obtained from the website of the European Central Bank. They went on to diagnose the goodness of fit of the established models using the comparatively few tests specific to multivariate models that are available. The paper then compared the fitting performance of these two forms and predicted the future dynamics of the data on the basis of the two models. By comparing the goodness of fit through the mean absolute error, they concluded that the fitting performance of the BEKK-GARCH form was better than that of the DCC-GARCH form in their case.
According to them, the difference in performance may be due to the fact that the BEKK-GARCH model has comparatively more parameters, so that it has a better capability of explaining the information hidden in the historical data. Conversely, the DCC-GARCH model has an advantage over the BEKK-GARCH model in forecasting, as the DCC-GARCH model is more parsimonious. In this sense, they stressed the crucial importance of balancing parsimony and flexibility when specifying multivariate GARCH models. The authors attributed the inadequacy of their work to the fact that few tests are applicable to multivariate cases, and to the difficulty of implementing the extended forms of the tests for detecting univariate GARCH effects (Su and Huang, 2010).

Palomba (2006) provided an empirical model for large-scale tactical asset allocation (TAA) with multivariate GARCH estimates, given a tracking error constraint. According to him, the Black and Litterman approach makes it possible to tactically manage the selected portfolio by combining information taken from the time-varying volatility model with some personal "view" about asset returns. His paper aimed at suggesting a portfolio optimisation for large-scale TAA dealing with two aspects: on the one hand, the proposed model takes the changing volatility of asset returns over time into account; on the other, it provides the possibility of using private information in the mean-variance paradigm. Empirical work was carried out to tactically manage some portfolios of interest in the space spanned by absolute risk and total expected return, using data taken from the DJ Euro Stoxx 50 index. According to Palomba (2006), the FDCC model of Billio, Caporin and Gobbo (2006) is useful for solving the practical problems of forecasting the expected asset returns and their covariance matrix: the possibility of grouping variables among sectors allows the persistence in volatility to be modelled in a parsimonious way, and does not imply any computational drawback. Moreover, the BL approach can represent a good method of incorporating the manager's views about asset returns into the asset allocation process. The author carried out his analysis on different portfolios located along the mean-variance efficient set (Markowitz, 1959) and the fixed tracking error constrained frontier introduced by Jorion (2003). His work was also based on the following assumptions: the absence of a risk-free asset, the possibility of short positions, and the estimation of a GARCH(1,1) model in the first step of the FDCC.

CHAPTER THREE
METHODOLOGY

3.0 Introduction

This chapter focuses on the various methods used to analyse the data involved in the research. The first section discusses the descriptive statistics used to investigate the behaviour of the data. Section two discusses the various tests for normality and heteroscedasticity. Section three discusses the GARCH models used, and the last section presents the goodness-of-fit tests for the models discussed.

3.1 Descriptive statistics

This section first provides descriptive statistics of the variables to show the mean, standard deviation, skewness and kurtosis of the data. Quantile-Quantile (Q-Q) plots are drawn to establish the nature of the distribution of the data, whether normal or non-normal.
The linear correlation between pairs of variables is calculated, and a scatter plot is drawn in support of that. To test for normality in the data, we employ the Jarque-Bera test. The Ljung-Box test is used to test for serial correlation, and the LM test is used to test for the presence of heteroscedasticity. The autocorrelation function of each fund is computed to look for evidence of correlation and stationarity.

3.1.1 Minimum statistic

This indicates the minimum value in a set of data over a period of time. In this case it gives the minimum return obtained by the various funds over the period of the study.

3.1.2 Maximum statistic

This indicates the maximum value in a set of data over a period of time. In this case it gives the maximum return obtained by the various funds over the period of the study.

3.1.3 Mean statistic

This measures the average value in a set of data over a period of time. In this case it gives the arithmetic average of the returns measured for the various funds over the period of the study.

3.1.4 Skewness and Kurtosis statistic

These statistics are used to summarise the extent of asymmetry and the tail thickness of the distribution of the data set. A distribution with a kurtosis value of 3.0 is said to be normal. A distribution whose kurtosis value is in excess of three (3) is said to be heavy-tailed, signifying that it puts more mass on the tails of its support than a normal distribution does. This implies that a random sample selected from such a distribution contains more extreme values (Tsay, 2002).

3.2 Time Series Trend Analysis

The two main aims of studying time series data are to:
i. identify the nature of the trend represented by the sequence of data;
ii. forecast or predict the future values of the time series variables.

Time series analysis consists of methods for analyzing time series data in order to extract meaningful statistics and other characteristics of the data, and of using empirical experiments to explain the trend. Once established, this trend can be extracted and used in the prediction of future values.

3.2.1 Ljung-Box Test

The Ljung-Box test can be used to check for autocorrelation of the residuals in the data. It provides the Ljung-Box test statistic with a corresponding degree of freedom (df) and p-value. The hypotheses tested are:

H0: there is no residual autocorrelation
H1: there is residual autocorrelation

The null hypothesis is rejected if the p-value ≤ 0.05.
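The two tests named above are available in standard statistical software. The following sketch, assuming the scipy and statsmodels packages and a toy return series, shows the Jarque-Bera test of normality and the Ljung-Box test with 15 lags, following the decision rule just described.

    import numpy as np
    from scipy import stats
    from statsmodels.stats.diagnostic import acorr_ljungbox

    returns = np.random.default_rng(3).standard_normal(200)  # toy return series

    # Jarque-Bera: H0 of normality, built from sample skewness and kurtosis
    jb_stat, jb_pvalue = stats.jarque_bera(returns)
    print(jb_stat, jb_pvalue)

    # Ljung-Box: H0 of no residual autocorrelation, here up to lag 15;
    # reject H0 when the reported lb_pvalue is <= 0.05
    print(acorr_ljungbox(returns, lags=[15], return_df=True))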
3.2.2 Quantile-Quantile (Q-Q) Plot

The quantile-quantile (Q-Q) plot is a diagnostic graph of the input (observed) data values plotted against the theoretical (fitted) distribution quantiles. Both axes of this graph are in units of the input data set. The plotted points should be approximately linear if the specified theoretical distribution is the correct model. The Q-Q plot is more sensitive to deviations from the theoretical (normal) distribution in the tails and tends to magnify deviations from the proposed distribution there. The Q-Q plot is constructed using the theoretical cumulative distribution function, $F(x)$, of the specified theoretical model or distribution. The values in the sample of the data are ordered from the smallest to the largest and are denoted as $X_{(1)}, X_{(2)}, X_{(3)}, \ldots, X_{(n)}$. For $i = 1, 2, 3, \ldots, n$, the $X_{(i)}$'s are plotted against the inverse cumulative distribution function

$$F^{-1}\!\left(\frac{i - \tfrac{1}{2}}{n}\right). \quad [3.1]$$

The Q-Q plot was used in this study to graphically assess the goodness of fit of the returns of the financial data to the Gaussian distribution.

3.3 Goodness-of-Fit Tests

Goodness-of-Fit (GoF) tests measure the compatibility of a random sample with a theoretical probability distribution function. In other words, a test for goodness of fit usually involves examining a random sample from some unknown distribution in order to test the null hypothesis that the unknown distribution function is in fact a known, specified distribution. The general procedure consists of defining a test statistic, which is some function of the data measuring the distance between the hypothesised distribution and the data, and then calculating the probability of obtaining data which have a still larger value of this test statistic than the value observed, assuming the hypothesis is true. This probability is the p-value of the test.

3.4 Anderson-Darling Test

The Anderson-Darling procedure is a general test to compare the fit of an observed cumulative distribution function to an expected cumulative distribution function. The Anderson-Darling test is used to test if a sample of data came from a population with a specific distribution. It is a modification of the Kolmogorov-Smirnov (K-S) test and gives more weight to the tails than the K-S test does. The K-S test is distribution-free in the sense that its critical values do not depend on the specific distribution being tested, but the Anderson-Darling test makes use of the specific distribution in calculating the critical values. This has the advantage of allowing a more sensitive test and the disadvantage that critical values must be calculated for each distribution. The Anderson-Darling test considers the hypotheses:

H0: The data follow the specified distribution.
H1: The data do not follow the specified distribution.

The Anderson-Darling test statistic ($A^2$) is defined as

$$A^2 = -n - \frac{1}{n}\sum_{i=1}^{n}(2i - 1)\left[\ln F(X_i) + \ln\left(1 - F(X_{n+1-i})\right)\right], \quad [3.2]$$

where $F$ is the Cumulative Distribution Function (CDF) of the specified distribution and the $X_i$ are the ordered data. The hypothesis regarding the distributional form is rejected at the chosen significance level ($\alpha$) if the test statistic $A^2$ is greater than the critical value obtained from a table.
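The statistic in equation [3.2] can be computed directly, as in the following sketch; the 'nortest' package and the simulated returns are assumptions for illustration (nortest's ad.test(), like the hand-rolled version here, evaluates the normal with estimated parameters).

library(nortest)   # assumed helper package providing ad.test()

set.seed(1)
r <- diff(log(10 * cumprod(1 + rnorm(178, 0.001, 0.013))))  # stand-in returns

ad_stat <- function(x) {
  x <- sort(x)
  n <- length(x)
  p <- pnorm(x, mean(x), sd(x))   # F(X_(i)) under the fitted normal
  i <- seq_len(n)
  -n - mean((2 * i - 1) * (log(p) + log(1 - rev(p))))  # equation [3.2]
}
ad_stat(r)   # hand-rolled A^2
ad.test(r)   # packaged version, with its own critical values and p-value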
3.5 Kolmogorov-Smirnov Goodness-of-Fit Test

The Kolmogorov-Smirnov (K-S) goodness-of-fit test is a nonparametric test used to assess whether a sample comes from a population with a specified distribution. The test involves calculating a test statistic from the data, together with the probability of obtaining a value at least as large if the correct distribution has been chosen. The K-S test is based on the Empirical Cumulative Distribution Function (ECDF): it measures the supremum distance between the cumulative distribution function of the theoretical distribution and the empirical distribution function over all the sample points. The K-S test is distribution-free, since its critical values do not depend on the specific distribution being tested. The K-S test is relatively insensitive to differences in the tails but more sensitive to points near the median of the distribution.

Definition

Let $X_1, X_2, X_3, \ldots, X_n$ be a random sample. The empirical distribution function $S(x)$ is a function of $x$ which equals the fraction of the $X_i$'s that are less than or equal to $x$, for each $x$, $-\infty < x < \infty$:

$$S(x) = \frac{1}{n}\sum_{i=1}^{n} I_{\{X_i \le x\}}, \quad [3.3]$$

where $I_{\{X_i \le x\}}$ is an indicator function,

$$I_{\{X_i \le x\}} = \begin{cases} 1, & X_i \le x \\ 0, & X_i > x. \end{cases} \quad [3.4]$$

The empirical distribution function $S(x)$ is useful as an estimator of $F(x)$, the unknown distribution function of the $X_i$'s. We compare the empirical distribution function $S(x)$ with the hypothesised distribution function $F^*(x)$ to investigate whether there is a good fit. Conover (1999) states that Kolmogorov, in 1933, suggested as test statistic the greatest (denoted by "sup" for supremum) vertical distance between $S(x)$ and $F^*(x)$, defined as

$$T = \sup_{x}\left| S(x) - F^*(x) \right| \quad [3.5]$$

for testing the hypotheses

$$H_0: F(x) = F^*(x) \ \text{for all } x, \qquad H_1: F(x) \ne F^*(x) \ \text{for at least one value of } x.$$

The Kolmogorov-Smirnov goodness-of-fit test is used in this study to test the goodness of fit of the financial data to the Normal, Cauchy and stable distributions.

3.6 Chi-square Goodness-of-Fit Test

The Chi-square goodness-of-fit test is a parametric test which makes a statement or claim concerning the nature of the distribution of the whole population. The data in the sample are examined in order to see whether this distribution is consistent with the hypothesised distribution of the population. One way in which the Chi-square goodness-of-fit test can be used is to examine how closely a sample matches a population of known distribution. The Chi-square test is used to determine whether a sample comes from a population with a specific distribution. This test is applied to binned data, so the value of the test statistic depends on how the data are binned. Although there is no optimal choice for the number of bins ($k$), there are several formulas which can be used to calculate this number based on the sample size ($n$). The value of $k$ can be determined by the empirical formula

$$k = 1 + \log_2 n. \quad [3.6]$$

The data can be grouped into intervals of equal probability or equal width. The first approach is generally more acceptable, since it handles peaked data much better. Each bin should contain at least 5 data points, so certain adjacent bins sometimes need to be joined together for this condition to be satisfied.

Definition

The Chi-square statistic is defined as

$$\chi^2 = \sum_{i=1}^{k} \frac{(O_i - E_i)^2}{E_i}, \quad [3.7]$$

where $O_i$ is the observed frequency for bin $i$, and $E_i$ is the expected frequency for bin $i$, calculated by

$$E_i = N\left[F(x_2) - F(x_1)\right], \quad [3.8]$$

where $F$ is the cumulative distribution function (CDF) of the probability distribution being tested, $N$ the total sample size and $x_1, x_2$ the limits for bin $i$. The test statistic is used for testing the hypotheses:

H0: The data follow the specified distribution.
H1: The data do not follow the specified distribution.

The hypothesis regarding the distributional form is rejected at the chosen significance level ($\alpha$) if the test statistic is greater than the critical value $\chi^2_{k-c,\,\alpha}$, where $k - c$ is the degrees of freedom, $c = u + 1$, and $u$ is the number of parameters estimated from the sample.
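Both tests can be sketched in a few lines of R. The following is illustrative only: ks.test() is a base function (its p-value is approximate when the parameters are estimated from the same data, as here), and the equal-probability bins follow the k = 1 + log2(n) rule quoted above.

set.seed(1)
r <- diff(log(10 * cumprod(1 + rnorm(178, 0.001, 0.013))))  # stand-in returns
m <- mean(r); s <- sd(r); n <- length(r)

# Kolmogorov-Smirnov: sup distance between ECDF and the fitted normal CDF
ks.test(r, "pnorm", mean = m, sd = s)

# Chi-square with k equal-probability bins (equations [3.6]-[3.8])
k      <- ceiling(1 + log2(n))
breaks <- qnorm(seq(0, 1, length.out = k + 1), m, s)  # bin limits x_1, x_2, ...
O      <- table(cut(r, breaks))                       # observed counts O_i
E      <- n / k                                       # expected count per bin E_i
chisq  <- sum((O - E)^2 / E)
df     <- k - 2 - 1                                   # k - c, with c = u + 1, u = 2
c(statistic = chisq, p.value = pchisq(chisq, df, lower.tail = FALSE))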
3.7 Maximum Likelihood (ML) Estimation Method

If a random sample is available from a population for which the probability distribution is known except for the value of one parameter $\theta$, each specific value of $\theta$ determines one particular distribution from among the large number of theoretically possible distributions. The estimation of $\theta$ is therefore equivalent to selecting that particular distribution which applies to the population that produced the sample (Odoom, 2007). The Maximum Likelihood Principle suggests that the criterion for making the selection of an estimate should be the probability (or likelihood) with which a particular distribution can produce the given sample. Thus, the distribution with the highest probability of producing the sample must be taken as the appropriate distribution that produced the sample. The value of $\theta$ for that distribution is the Maximum Likelihood Estimate of the unknown $\theta$ (Odoom, 2007).

The Maximum Likelihood Estimation (MLE) technique is asymptotically consistent, that is, as the sample size gets larger, the estimates converge to the right values. It is asymptotically efficient, producing the most precise estimates, and asymptotically unbiased, that is, for large samples one expects to get the right value on average.

Let $X = \{X_1, X_2, \ldots, X_n\}$ be an i.i.d. random sample of size $n$. Then the Maximum Likelihood (ML) estimate of the parameter vector $\theta$ is obtained by maximizing the log-likelihood function

$$L(\theta) = \sum_{i=1}^{n} \ln f(x_i; \theta), \quad [3.9]$$

where $f(x_i; \theta)$ is the probability density function (pdf) of the distribution. By maximizing $L$, the Maximum Likelihood Estimators (MLE) of $\theta$ are the simultaneous solutions of the $m$ equations

$$\frac{\partial L(\theta)}{\partial \theta_j} = 0, \qquad j = 1, \ldots, m, \quad [3.10]$$

where $\theta = (\theta_1, \ldots, \theta_m)$. In general, we do not know the explicit form of the density function and have to approximate it numerically. However, under certain regularity conditions the ML estimator has an appealing common feature: it is asymptotically normal, with the variance specified by the Fisher information matrix (DuMouchel, 1973). Equation (3.10) can be approximated either by using the Hessian matrix arising in the maximization or, as in Nolan (2001), by numerical integration. DuMouchel (1971) developed an approximate ML method based on grouping the data set into bins and using a combination of means to compute the density (the fast Fourier transform (FFT) for the central values of x and series expansions for the tails) to compute an approximate log-likelihood function. This function was then numerically maximized.

3.8 Akaike Information Criterion (AIC)

This is a measure used to select the best-fitting model for our financial data. It is a measure of the relative quality of a statistical model, offering a relative estimate of the information that is lost when a given model is used to represent the process that generates the data. The AIC value of a statistical model is given by

$$\text{AIC} = 2k - 2\ln(L), \quad [4.1]$$

where $k$ is the number of parameters used in the model and $L$ is the maximized value of the likelihood function for the model. Given a set of models for a given data set, the model with the minimum AIC value is chosen as the best-fitting model.
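A minimal R illustration of these two sections: the log-likelihood in [3.9] is maximized with a general-purpose numerical optimizer and the fit is scored with [4.1]. Fitting a normal distribution to simulated returns is an assumption chosen for brevity, not the thesis's own model.

set.seed(1)
r <- diff(log(10 * cumprod(1 + rnorm(178, 0.001, 0.013))))  # stand-in returns

# Negative log-likelihood of a normal; log-sd parameterisation keeps sigma > 0
negll_norm <- function(par, x) -sum(dnorm(x, par[1], exp(par[2]), log = TRUE))
fit <- optim(c(mean(r), log(sd(r))), negll_norm, x = r)

logL <- -fit$value
aic  <- 2 * length(fit$par) - 2 * logL   # equation [4.1]
c(mu = fit$par[1], sigma = exp(fit$par[2]), AIC = aic)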
3.9 Autoregressive Moving Average (ARMA) model

The Autoregressive Moving Average (ARMA) model, sometimes called the Box-Jenkins model, is used for understanding and predicting future values of a time series. The model is usually used for time series data that are autocorrelated. Given a time series $X_t$, the ARMA model is a tool for understanding and, perhaps, predicting future values in the series. The model consists of two parts, an autoregressive (AR) part and a moving average (MA) part. In an ARMA($p$, $q$) model, $p$ represents the order of the autoregressive part and $q$ represents the order of the moving average part (Mills, 1990). The ARMA model is given by

$$X_t = c + \varepsilon_t + \sum_{i=1}^{p} \varphi_i X_{t-i} + \sum_{j=1}^{q} \theta_j \varepsilon_{t-j}. \quad [3.11]$$

3.10 Autoregressive Integrated Moving Average (ARIMA) model

An ARIMA model is mostly used in describing non-stationary time series, that is, series with a pronounced trend and without a constant long-run mean or variance. The ARIMA model is a generalization of the Autoregressive Moving Average (ARMA) model, and it is applied in situations where the time series data show evidence of non-stationarity, where an initial differencing step (corresponding to the "integrated" part of the model) can be applied to remove the non-stationarity. This model is generally referred to as an ARIMA($p$, $d$, $q$) model, where $p$, $d$ and $q$ are non-negative integers that refer to the order of the autoregressive, integrated and moving average parts of the model respectively. The model forms a very important part of the Box-Jenkins approach to time series modelling.

The ARIMA model can be expressed as follows. Given a time series $X_t$, where $t$ is an integer index and the $X_t$ are real numbers, an ARMA($p$, $q$) model is given by

$$\left(1 - \sum_{i=1}^{p} \varphi_i L^i\right) X_t = \left(1 + \sum_{j=1}^{q} \theta_j L^j\right)\varepsilon_t, \quad [3.12]$$

where $L$ is the lag operator, the $\varphi_i$ are the parameters of the autoregressive part, the $\theta_j$ are the parameters of the moving average part, and the $\varepsilon_t$ are the error terms. Generally, the error terms are assumed to be independently and identically distributed variables drawn from a normal distribution with zero mean. Assume that the polynomial $\left(1 - \sum_{i=1}^{p} \varphi_i L^i\right)$ has a unit root of multiplicity $d$; then it can be rewritten as

$$\left(1 - \sum_{i=1}^{p} \varphi_i L^i\right) = \left(1 - \sum_{i=1}^{p-d} \phi_i L^i\right)(1 - L)^d. \quad [3.13]$$

An ARIMA($p$, $d$, $q$) process expresses this polynomial factorization property and is given by

$$\left(1 - \sum_{i=1}^{p} \phi_i L^i\right)(1 - L)^d X_t = \left(1 + \sum_{j=1}^{q} \theta_j L^j\right)\varepsilon_t, \quad [3.14]$$

and can thus be thought of as a particular case of an ARMA($p+d$, $q$) process having an autoregressive polynomial with some roots on the unit circle. For this reason, every ARIMA model with $d > 0$ is not wide-sense stationary (Mills, 1990). Some well-known special cases arise naturally. For example, an ARIMA (0, 1, 0) model is given by

$$X_t = X_{t-1} + \varepsilon_t, \quad [3.15]$$

which is a random walk.

3.11 ARCH and GARCH Models

AutoRegressive Conditional Heteroskedasticity (ARCH) models are used to describe and model time series. They are often employed when one suspects that, at any point in the series, the terms will have a characteristic size or variance. ARCH models assume the variance of the current error term or innovation to be a function of the error terms of past time periods: the variance is associated with the squares of the previous innovations. Assume one wishes to model a time series using an ARCH process with $\varepsilon_t$ as the error terms; then the $\varepsilon_t$ are broken into a stochastic part $z_t$ and a time-dependent standard deviation $\sigma_t$. According to Engle (1982), an ARCH model can be described as a discrete-time stochastic process $\{\varepsilon_t\}$ of the form

$$\varepsilon_t = \sigma_t z_t, \quad [3.16]$$

where the random variable $z_t$ is a strong white noise process. The series $\sigma_t^2$ is modelled as

$$\sigma_t^2 = \alpha_0 + \sum_{i=1}^{q} \alpha_i\, \varepsilon_{t-i}^2, \quad [3.17]$$

where $\alpha_0 > 0$ and $\alpha_i \ge 0$ for $i > 0$.
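The following sketch simulates the ARCH(1) case of [3.16]-[3.17] and displays the hallmark of the process: essentially uncorrelated levels but strongly correlated squares. The parameter values are illustrative assumptions.

set.seed(1)
n  <- 500
a0 <- 0.1; a1 <- 0.6                 # alpha_0 > 0, 0 <= alpha_1 < 1

eps    <- numeric(n)
sigma2 <- numeric(n)
sigma2[1] <- a0 / (1 - a1)           # unconditional variance as a start value
eps[1]    <- sqrt(sigma2[1]) * rnorm(1)
for (t in 2:n) {
  sigma2[t] <- a0 + a1 * eps[t - 1]^2    # conditional variance, equation [3.17]
  eps[t]    <- sqrt(sigma2[t]) * rnorm(1)
}
acf(eps)     # little correlation in the levels
acf(eps^2)   # strong correlation in the squares: the ARCH effect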
Engle (1982) proposed a Lagrange multiplier methodology for testing the lag length of the ARCH errors. This is done as follows:

1. Obtain the estimates of the best-fitting autoregressive model AR($q$):

$$X_t = a_0 + \sum_{i=1}^{q} a_i X_{t-i} + \varepsilon_t. \quad [3.18]$$

2. Find the estimates of the squared errors $\hat{\varepsilon}_t^2$ and regress them on a constant and $q$ lagged values:

$$\hat{\varepsilon}_t^2 = \hat{\alpha}_0 + \sum_{i=1}^{q} \hat{\alpha}_i\, \hat{\varepsilon}_{t-i}^2, \quad [3.19]$$

where $q$ is the length of the ARCH lags.

3. The null hypothesis is that, in the absence of ARCH components, $\alpha_i = 0$ for all $i = 1, \ldots, q$. The alternative hypothesis is that, in the presence of ARCH components, at least one of the estimated coefficients must be significant.

The GARCH ($p$, $q$) model is given by

$$\sigma_t^2 = \omega + \sum_{i=1}^{q} \alpha_i\, \varepsilon_{t-i}^2 + \sum_{j=1}^{p} \beta_j\, \sigma_{t-j}^2, \quad [3.20]$$

where $p$ is the order of the GARCH terms $\sigma^2$ and $q$ is the order of the ARCH terms $\varepsilon^2$.

3.12 Testing for ARCH Effects

In ARCH modelling, we start by testing for ARCH effects by looking at autocorrelation in the time series of squared returns. The Lagrange multiplier test is often used to test for the existence of ARCH against the null of constant conditional variances. In addition, the sample autocorrelations of the squared residuals from the model for the conditional mean, as summarized by the Ljung-Box test statistics, can be used.
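In R, the specification and estimation of an ARMA-GARCH model can be sketched as below. The 'rugarch' package is an assumed (though common) choice, since the thesis does not name its estimation routines, and the data are again simulated stand-ins.

library(rugarch)   # assumed estimation package

set.seed(1)
r <- diff(log(10 * cumprod(1 + rnorm(178, 0.001, 0.013))))  # stand-in returns

spec <- ugarchspec(
  variance.model = list(model = "sGARCH", garchOrder = c(1, 1)),
  mean.model     = list(armaOrder = c(1, 1), include.mean = TRUE)
)
fit <- ugarchfit(spec, data = r)
infocriteria(fit)   # AIC and related criteria, as compared in Table 4.7b
head(sigma(fit))    # fitted conditional standard deviations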
3.13 Multivariate GARCH models

The basic idea in extending univariate GARCH models to multivariate GARCH (MGARCH) models is that it is important to capture the dependence in the co-movements of asset returns in a portfolio. Recognizing this feature through a multivariate model generates a more reliable model than separate univariate models. First, we consider some specifications an MGARCH model should satisfy so as to make it flexible enough to describe the dynamics of the conditional variances and covariances. On the other hand, as the number of parameters in an MGARCH model increases rapidly with the dimension of the model, the specification should be parsimonious, to simplify model estimation and allow easy interpretation of the model parameters. Parsimony, however, reduces the number of parameters and may fail to capture the relevant dynamics in the covariance matrix (Wenjing Su and Yiyu Huang, 2010). It is therefore important to strike a balance between parsimony and flexibility when a multivariate GARCH specification is to be fitted. MGARCH models must also satisfy the positive definiteness of the covariance matrix.

3.13.1 BEKK-GARCH models

To ensure positive definiteness, a new parameterization of the conditional variance matrix $H_t$ was defined by Baba, Engle, Kraft and Kroner (1990) and became known as the BEKK model, which is viewed as another restricted version of the VEC model. It achieves positive definiteness of the conditional covariance by formulating the model in a way that this property is implied by the model structure. The form of the BEKK model is as follows:

$$H_t = C'C + \sum_{k=1}^{K}\sum_{i=1}^{q} A'_{ik}\,\varepsilon_{t-i}\varepsilon'_{t-i}\,A_{ik} + \sum_{k=1}^{K}\sum_{j=1}^{p} B'_{jk}\,H_{t-j}\,B_{jk}, \quad [3.21]$$

where $A_{ik}$ and $B_{jk}$ are parameter matrices and $C$ is a lower triangular matrix. The purpose of decomposing the constant term into a product of two triangular matrices is to guarantee the positive semi-definiteness of $H_t$. Whenever $K > 1$, an identification problem is generated, for the reason that there is more than one parameterization that can obtain the same representation of the model. The first-order BEKK model is

$$H_t = C'C + A'\varepsilon_{t-1}\varepsilon'_{t-1}A + B'H_{t-1}B. \quad [3.22]$$

The BEKK model also has a diagonal form, obtained by assuming that the $A$, $B$ and $C$ matrices are diagonal; it is a restricted version of the DVEC model. The most restricted version of the diagonal BEKK model is the scalar BEKK, with $A = a\mathbf{I}$ and $B = b\mathbf{I}$, where $a$ and $b$ are scalars.

Estimation of a BEKK model still involves heavy computation, due to several matrix transpositions. The number of parameters of the complete BEKK model is

$$(p + q)KN^2 + \frac{N(N+1)}{2}. \quad [3.23]$$

Even in the diagonal form, the number of parameters only reduces to

$$(p + q)KN + \frac{N(N+1)}{2}, \quad [3.24]$$

which is still large. The BEKK form is not linear in parameters, which makes convergence of the model difficult. However, its strong point lies in the fact that the model structure automatically guarantees the positive definiteness of $H_t$. Under these overall considerations, it is typically assumed that $K = 1$ in applications of the BEKK form.

3.13.2 The Constant Conditional Correlation (CCC-GARCH) Model

The Constant Conditional Correlation model was introduced by Bollerslev in 1990, primarily to model the conditional covariance matrix indirectly by estimating the conditional correlation matrix. A covariance matrix $\Sigma$ can be decomposed into

$$\Sigma = D\,R\,D, \quad [3.25]$$

where $R$ is the correlation matrix and $D$ is a diagonal matrix with the vector of standard deviations $(\sigma_1, \ldots, \sigma_N)$ on the diagonal, $\sigma_i$ belonging to the $i$-th series. Since the correlation matrix of foreign exchange rates was observed to be constant over time, Bollerslev (1990) suggested modelling the time-varying covariance matrix as

$$\Sigma_t = D_t\,R\,D_t, \quad [3.26]$$

where $R$ is the constant correlation matrix and $D_t$ is a diagonal matrix. The conditional correlation is assumed to be constant while the conditional variances vary. Obviously, this assumption is impractical for real financial time series.

3.13.3 The Dynamic Conditional Correlation (DCC-GARCH) Model

The DCC-GARCH model assumes that the vector of asset returns $r_t$ is normally distributed with zero mean and covariance matrix $H_t$:

$$r_t \sim N(0, H_t), \qquad H_t = D_t\,R_t\,D_t. \quad [3.27]$$

The conditional covariance matrix is obtained by employing conditional standard deviations and a dynamic correlation matrix. Let $D_t$ be the diagonal matrix of conditional standard deviations $\sqrt{h_{i,t}}$ modelled by univariate GARCH processes such that

$$h_{i,t} = \omega_i + \sum_{q} \alpha_{i,q}\,\varepsilon_{i,t-q}^{2} + \sum_{p} \beta_{i,p}\,h_{i,t-p}, \quad [3.28]$$

where $\varepsilon_{i,t} = \sqrt{h_{i,t}}\,\xi_{i,t}$ and $\xi_{i,t} \sim N(0, 1)$. In order to find the time-varying correlation matrix, Engle (2002) proposes a model for the time-varying covariance such that

$$Q_t = \left(1 - \sum_{m} a_m - \sum_{n} b_n\right)\bar{Q} + \sum_{m} a_m\left(z_{t-m}\,z'_{t-m}\right) + \sum_{n} b_n\,Q_{t-n}, \quad [3.29]$$

subject to $a_m \ge 0$, $b_n \ge 0$ and $\sum_m a_m + \sum_n b_n < 1$. From the above equation it can be seen that a GARCH-type process is employed to model the time-varying covariance matrices. $\bar{Q}$ represents the unconditional covariance, and it is initially obtained by estimating the sample covariance. $Q_t$ is forecast from the lagged residuals $z_{t-m}$, which are standardized by the conditional standard deviations, and from the lagged covariances $Q_{t-n}$. In estimating the conditional covariance matrix, a univariate GARCH model is employed to model the conditional standard deviation of each asset in the portfolio. It is important to note that the constraints of the GARCH process are still imposed in constructing a positive definite covariance matrix. The log-likelihood function can be written as

$$L = -\frac{1}{2}\sum_{t=1}^{T}\left(N\ln(2\pi) + 2\ln|D_t| + \varepsilon'_t D_t^{-2}\varepsilon_t\right) - \frac{1}{2}\sum_{t=1}^{T}\left(\ln|R_t| + z'_t R_t^{-1} z_t - z'_t z_t\right), \quad [3.30]$$

where $z_t = D_t^{-1}\varepsilon_t$ denotes the standardized residuals, and is used to find the estimators of the model. After finding the optimum estimators that maximize the log-likelihood function above, it is easy to produce the covariance series. It is necessary to note, however, that each covariance matrix is not yet constructed from conditional standard deviations: this covariance matrix series is generated by relying on the initial unconditional covariance matrix.
A new covariance matrix for the next time point is then generated from the previous one and the standardized residuals, as in a simple univariate GARCH process. Hence, the univariate GARCH process is employed to extract time-varying positive definite correlation matrices from that covariance matrix series, such that

$$R_t = \left(\mathrm{diag}\sqrt{K_t}\right)^{-1} K_t \left(\mathrm{diag}\sqrt{K_t}\right)^{-1}. \quad [3.31]$$

In this equation, $\mathrm{diag}\,K_t$ refers to the diagonal matrix of variances obtained from the diagonal elements of $K_t$:

$$\mathrm{diag}\,K_t = \begin{bmatrix} k_{11,t} & & \\ & \ddots & \\ & & k_{NN,t} \end{bmatrix}. \quad [3.32]$$

The matrix notation for the time-varying correlation indicates the way of calculating correlations, namely dividing covariances by the standard deviations extracted from the covariance matrix. The matrix notation can be interpreted algebraically as

$$\rho_{ij,t} = \frac{k_{ij,t}}{\sqrt{k_{ii,t}\,k_{jj,t}}}. \quad [3.33]$$

Finally, the conditional covariance matrix can be found as the Hadamard product of the matrix of conditional standard deviations and the time-varying correlation matrix:

$$H_t = \left(s_t\,s'_t\right) \circ R_t, \quad [3.34]$$

where $s_t$ is the vector of conditional standard deviations. This methodology gives the conditional covariance matrix for each data point of the calibration period. To find the conditional covariance matrix that is used to optimize the weights of the assets in the portfolio, the one-step-ahead conditional standard deviations of the assets and their dynamic correlation matrix are forecasted.
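A sketch of this DCC-GARCH step with the 'rmgarch' package, an assumed package choice; the three simulated series stand in for the Equity Trust, Future Plan and Unit Trust returns.

library(rugarch)
library(rmgarch)   # assumed package for DCC-GARCH estimation

set.seed(1)
R <- matrix(rnorm(177 * 3, 0.001, 0.013), ncol = 3,
            dimnames = list(NULL, c("ET", "FP", "UT")))  # stand-in returns

uspec <- ugarchspec(variance.model = list(garchOrder = c(1, 1)),
                    mean.model     = list(armaOrder = c(1, 1)))
mspec <- dccspec(uspec = multispec(list(uspec, uspec, uspec)),
                 dccOrder = c(1, 1), distribution = "mvnorm")
fit <- dccfit(mspec, data = R)

H  <- rcov(fit)     # conditional covariance matrices H_t
Rt <- rcor(fit)     # conditional correlation matrices R_t
H[, , dim(H)[3]]    # last fitted H_t, the input to the optimization step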
3.14 Portfolio Optimization Model

Let $r_{p,t}$ denote the return of the portfolio at time $t$, and $w_i$ ($i = 1, \ldots, N$) the weight of fund $i$. Then $r_{p,t}$, the portfolio return, can be determined using the equation

$$r_{p,t} = \sum_{i=1}^{N} w_i\, r_{i,t}, \quad [3.35]$$

where $\sum_{i=1}^{N} w_i = 1$ and $r_{i,t}$, the return on fund $i$ at time $t$, is calculated as

$$r_{i,t} = \ln\!\left(\frac{P_{i,t}}{P_{i,t-1}}\right), \quad [3.36]$$

where $P_{i,t}$ is the price of fund $i$ ($i = 1, \ldots, N$, with $N$ the number of funds analysed) at time $t$ ($t = 1, \ldots, T$, where $T$ is the number of observed data points). Let $\boldsymbol{\mu}' = (\mu_1, \ldots, \mu_N)$ be the transpose of the mean vector and $\mathbf{w}' = (w_1, \ldots, w_N)$ the transpose of the weight vector of a portfolio, the weight vector having the property $\mathbf{w}'\mathbf{e} = 1$, where $\mathbf{e}' = (1, \ldots, 1)$. The mean of the portfolio return can be estimated using the equation

$$\mu_p = \mathbf{w}'\boldsymbol{\mu} = \sum_{i=1}^{N} w_i\,\mu_i, \quad [3.37]$$

where $\mu_i$ is the mean return of fund $i$. The variance of the portfolio return can be expressed as

$$\sigma_p^2 = \mathbf{w}'\Sigma\,\mathbf{w} = \sum_{i=1}^{N}\sum_{j=1}^{N} w_i\, w_j\, \sigma_{ij}, \quad [3.38]$$

where $\sigma_{ij}$ denotes the covariance between fund $i$ and fund $j$.

3.15 Value-at-Risk (VaR)

The Value-at-Risk of an investment portfolio based on the standard normal distribution approach is calculated using the equation

$$\mathrm{VaR}_p = -W_0\left\{\mu_p + z_\alpha\,(\mathbf{w}'\Sigma\,\mathbf{w})^{1/2}\right\}, \quad [3.39]$$

where $W_0$ is the amount of funds allocated to the portfolio and $z_\alpha$ is the percentile of the standard normal distribution at the significance level $\alpha$. When it is assumed that $W_0 = 1$, the equation reduces to

$$\mathrm{VaR}_p = -\left\{\mu_p + z_\alpha\,(\mathbf{w}'\Sigma\,\mathbf{w})^{1/2}\right\}. \quad [3.40]$$

3.15.1 Efficiency of a portfolio

A portfolio weight $\mathbf{w}^*$ is called (Mean-VaR) efficient if there is no other portfolio weight $\mathbf{w}$ with $\mu_p \ge \mu_p^*$ and $\mathrm{VaR}_p \le \mathrm{VaR}_p^*$. To obtain the efficient portfolio, we used the objective function

$$\text{Maximize } \left\{2\tau\,\mu_p - \mathrm{VaR}_p\right\}, \quad [3.41]$$

where $\tau \ge 0$ denotes the investor risk tolerance factor. By substitution, we must solve the optimization problem

$$\text{Maximize } \left\{(1 + 2\tau)\,\mathbf{w}'\boldsymbol{\mu} + z_\alpha\,(\mathbf{w}'\Sigma\,\mathbf{w})^{1/2}\right\} \quad \text{subject to } \mathbf{w}'\mathbf{e} = 1. \quad [3.42]$$

The objective function is quadratic concave, due to the positive semi-definite nature of the covariance matrix. Hence, the optimization problem is quadratic concave, with the Lagrangean function

$$L(\mathbf{w}, \lambda) = (1 + 2\tau)\,\mathbf{w}'\boldsymbol{\mu} + z_\alpha\,(\mathbf{w}'\Sigma\,\mathbf{w})^{1/2} + \lambda\,(\mathbf{w}'\mathbf{e} - 1). \quad [3.43]$$

By the Kuhn-Tucker theorem, the optimality conditions are $\partial L/\partial \mathbf{w} = \mathbf{0}$ and $\partial L/\partial \lambda = 0$. The first condition implies that

$$(1 + 2\tau)\,\boldsymbol{\mu} + z_\alpha\,\Sigma\,\mathbf{w}\,(\mathbf{w}'\Sigma\,\mathbf{w})^{-1/2} + \lambda\,\mathbf{e} = \mathbf{0}, \quad [3.44]$$

and the second implies that $\mathbf{w}'\mathbf{e} = 1$. For $\tau \ge 0$, we obtain the optimum portfolio by algebraic calculation. Solving [3.44] for $\mathbf{w}$ gives

$$\mathbf{w} = -\frac{\sigma_p}{z_\alpha}\,\Sigma^{-1}\left[(1 + 2\tau)\,\boldsymbol{\mu} + \lambda\,\mathbf{e}\right]. \quad [3.45]$$

Setting

$$A = \boldsymbol{\mu}'\Sigma^{-1}\boldsymbol{\mu}, \qquad B = \mathbf{e}'\Sigma^{-1}\boldsymbol{\mu}, \qquad C = \mathbf{e}'\Sigma^{-1}\mathbf{e}, \quad [3.46]$$

the constraint $\mathbf{w}'\mathbf{e} = 1$ gives

$$\lambda = -\frac{z_\alpha}{\sigma_p\, C} - \frac{(1 + 2\tau)\,B}{C}, \quad [3.47]$$

and, substituting back into $\sigma_p^2 = \mathbf{w}'\Sigma\,\mathbf{w}$, we have

$$\sigma_p = \left\{C\left[1 - \frac{(1 + 2\tau)^2\,(A - B^2/C)}{z_\alpha^2}\right]\right\}^{-1/2}. \quad [3.48]$$

By these conditions, the vector of optimal weights is given by

$$\mathbf{w}^* = \frac{1}{C}\,\Sigma^{-1}\mathbf{e} + \frac{(1 + 2\tau)\,\sigma_p}{-z_\alpha}\left[\Sigma^{-1}\boldsymbol{\mu} - \frac{B}{C}\,\Sigma^{-1}\mathbf{e}\right]. \quad [3.49]$$

The substitution of $\mathbf{w}^*$ into $\mu_p$ and $\mathrm{VaR}_p$ respectively gives the optimum expected return and the optimum Value-at-Risk of the portfolio.
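The solution above can be cross-checked numerically. The sketch below maximizes the objective in [3.42] directly, after substituting w3 = 1 - w1 - w2 to enforce the constraint; the mean vector and (diagonal) covariance matrix are illustrative assumptions, not the estimates of Chapter 4.

mu    <- c(0.0027, 0.0014, 0.0025)        # illustrative fund mean returns
Sigma <- diag(c(0.0019, 0.0005, 0.0030))  # illustrative covariance matrix
z     <- qnorm(0.05)                      # percentile, -1.645

# Negative of the objective in [3.42], with w'e = 1 enforced by substitution
objective <- function(w12, tau) {
  w <- c(w12, 1 - sum(w12))
  -((1 + 2 * tau) * sum(w * mu) + z * sqrt(t(w) %*% Sigma %*% w))
}

mean_var_weights <- function(tau) {
  opt <- optim(c(1, 1) / 3, objective, tau = tau)
  w   <- c(opt$par, 1 - sum(opt$par))
  c(w = w,
    mean = sum(w * mu),
    VaR  = -(sum(w * mu) + z * sqrt(t(w) %*% Sigma %*% w)))
}
mean_var_weights(tau = 7.6)   # one row of Tables 4.9a/4.9b, illustratively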
CHAPTER FOUR

ANALYSIS AND DISCUSSIONS

4.0 Introduction

In this chapter, a discussion is made of the return distributions of financial data from funds managed by HFC Investment Limited from 2009 to 2012. Time series plots are employed to demonstrate the progression and the volatility of the variables considered over the period. Diagnostic plots (P-P plot, Q-Q plot and density plot) are used to graphically assess goodness of fit to the Normal distribution, and goodness-of-fit tests (Kolmogorov-Smirnov and Chi-square tests) are used to statistically test the fit of the models employed to the financial data. To test for normality in the data, we employ the Jarque-Bera test. Serial correlation is tested for using the Ljung-Box test, and the LM test is employed to test for heteroskedasticity. The optimization parameters of the financial data considered were estimated using the ARMA-GARCH and the DCC-GARCH models respectively.

4.1 The HFC funds

The data analysed in this study comprise 178 data points from three funds managed by HFC Investments Limited: the HFC Equity Trust (ET), the HFC Future Plan (FP) and the HFC Unit Trust (UT). The data points represent the weekly prices of the funds spanning the given time interval. The weekly return on fund $i$ at time $t$ is calculated as

$$r_{i,t} = \ln\!\left(\frac{P_{i,t}}{P_{i,t-1}}\right), \quad [4.0]$$

where $P_{i,t}$ is the price of fund $i$ ($i = 1, \ldots, N$, with $N$ the number of funds analysed) at time $t$ ($t = 1, \ldots, T$, where $T$ is the number of observed data points).

4.2 Time Series

Generally, the time series of the weekly prices of the three funds show an upward trend over the period under discussion. Specifically, the Equity Trust fund depicts slight fluctuations, with a fall in price almost at the beginning. This is shown in Figure 4.2.1.

Figure 4.2.1: Equity Trust fund evolution over time (Source: HFC Bank, 2009-2012)

Figure 4.2.2 shows the time series of the HFC Future Plan over the interval. The series shows an increasing trend with a slight decline at the latter part.

Figure 4.2.2: Future Plan fund evolution over time (Source: HFC Bank, 2009-2012)

The HFC Unit Trust fund shows a very sharp decline in prices almost at the beginning of each year over the period. This decline in prices may be the result of increases in exchange rates and inflation at those times. This is displayed in Figure 4.2.3.

Figure 4.2.3: Unit Trust fund evolution over time (Source: HFC Bank, 2009-2012)

The time series plot of the returns of the various funds, displayed in Figure 4.2.4, shows the volatility of the funds. The funds are arranged in order (from top to bottom): Equity Trust, Future Plan and Unit Trust. Comparatively, the Equity Trust returns tend to be the most volatile of the three, with the Unit Trust being the most stable.

Figure 4.2.4: Time series plot of the funds (Source: HFC Bank, 2009-2012)

NB: The time scales in Figures 4.2.1-4.2.4 are chosen arbitrarily by the R software used for the analysis, since the researcher is only interested in the trend of each fund.

4.3 Descriptive Statistics

Table 4.3.1 shows descriptive statistics (the minimum, maximum, mean, variance, standard deviation, skewness and kurtosis) of the returns of the various funds. It can be seen from Table 4.3.1 that the Unit Trust had the minimum return of -0.164478, representing an approximate loss of 16.45% in value, while the Equity Trust had the maximum return of 0.055256, representing an approximate gain of 5.53% in value over the entire period. Also, the Future Plan fund had the highest average return coupled with the lowest variability in returns, whereas the Unit Trust fund had the lowest average return with the highest variability. Furthermore, the Unit Trust and the Future Plan returns are negatively skewed, in conformity with most financial data. Finally, all three funds had kurtosis in excess of 3, indicating that the funds have heavier tails than the normal distribution, which has a kurtosis of 3.0. The Equity Trust, however, can be seen to be approximately normal.

Table 4.3.1: Summary statistics of the funds (Source: HFC Bank, 2009-2012)

                     Equity Trust   Future Plan   Unit Trust
Minimum                 -0.050249     -0.036753    -0.164478
Maximum                  0.055256      0.025880     0.012145
Mean                     0.000715      0.002987     0.000518
Variance                 0.000175      0.000052     0.000254
Standard deviation       0.013212      0.007228     0.015937
Skewness                 0.276766     -0.418791    -7.834369
Kurtosis                 3.438914      5.052938    69.837792

Figure 4.3.1 displays histogram plots with a Normal fit to the returns of the data under discussion. The funds are arranged in order from top to bottom as Equity Trust, Future Plan and Unit Trust respectively. The red colour in the histograms on the left is the normal distribution curve. On the right side, the red curve is the empirical distribution of each fund and the grey curve is the normal distribution curve. The plot illustrates that the returns of each fund are more highly peaked and heavier-tailed than the normal fit, which conforms to the finding in the literature that asset returns are heavy-tailed. It is observed that the logarithm returns are in the neighbourhood of zero. The only exception is the Equity Trust fund, which seems approximately normal.

Figure 4.3.1: Histogram and density plots of the funds

4.4 Quantile-Quantile (Q-Q) Plots

The Q-Q plots of the funds display the deviations of the data from normality, buttressing the fact that the data are heavy-tailed. This is illustrated in Figure 4.4.1.

Figure 4.4.1: Q-Q plots of the funds

4.5 Test for Autocorrelation

The Ljung-Box test is used to test for autocorrelation among the variables in the three funds.
The test indicated the presence of autocorrelation among the residuals in both the Equity Trust Fund and the Future Plan Fund, since both had p-values less than 0.05. The p-value of the Unit Trust, however, was greater than the given significance level, indicating the absence of autocorrelation. The results are shown in Table 4.5.

Table 4.5: Ljung-Box test for autocorrelation in the funds

                 Equity Trust   Future Plan   Unit Trust
Test statistic        56.4726       73.2823        0.399
p-value             2.336e-10      8.66e-14       0.9989

4.6 Tests for Normality

Various tests were performed on the data in order to test whether the data follow the normal distribution. With the exception of the Pearson chi-square test for the Equity Trust fund (p-value 0.1017), all the tests showed that the funds considered deviate from normality, with p-values less than 0.05. This is shown in Table 4.6.

Table 4.6: Tests for normality

                       Equity Trust   Future Plan   Unit Trust
Jarque-Bera                 91.2574      196.0948      37818.5
p-value                     2.2e-16       2.2e-16      2.2e-16
Kolmogorov-Smirnov             0.48        0.4897       0.4952
p-value                     2.2e-16       2.2e-16      2.2e-16
Anderson-Darling             2.8778        3.9637      44.6036
p-value                   2.901e-07     6.593e-10    < 2.2e-16
Pearson chi-square           22.237       61.8208     722.9075
p-value                      0.1017     1.223e-07    < 2.2e-16
Shapiro-Francia              0.9295        0.9074       0.2346
p-value                   1.038e-06     5.639e-08    < 2.2e-16

4.7 ARMA and GARCH Models for the funds

A number of ARMA and GARCH models are considered for each of the funds, and the Akaike Information Criterion (AIC) is calculated to identify the best-fitting model. In Table 4.7a below, ARMA (1, 1) was the best-fitting model for the Equity Trust and Future Plan funds, producing the smallest AIC among all the models considered for the two funds. The Unit Trust fund, on the other hand, had ARMA (1, 2) as the best model.

Table 4.7a: ARMA models for the funds (AIC values)

             Equity Trust   Future Plan   Unit Trust
ARMA(1,1)        -1047.48      -1252.82      -945.93
ARMA(1,2)         -958.45       -1251.5      -958.45
ARMA(2,1)        -1044.81      -1251.55       -958.4

In Table 4.7b, various GARCH models are considered for the three funds, and the best GARCH model for each fund is selected according to the AIC value. It can be seen that the Equity Trust Fund and the Future Plan Fund follow the GARCH (1, 1) model, whilst the Unit Trust Fund follows the GARCH (1, 2) model.

Table 4.7b: GARCH models for the funds (AIC values)

             Equity Trust   Future Plan   Unit Trust
GARCH(1,1)      -6.078459     -7.616262    -8.894389
GARCH(1,2)      -6.066539     -7.568044    -8.895592
GARCH(2,1)      -6.067144     -7.600056    -8.895592
GARCH(2,2)      -6.055584     -7.590923    -8.884031

The respective fitted models for the estimates of the expected means and variances are given below:

Equity Trust Fund [4.2] [4.3]
Future Plan Fund [4.4] [4.5]
Unit Trust Fund [4.6] [4.7]
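The order-selection step behind Table 4.7a can be sketched in R as follows; arima() and AIC() are base/stats functions, while the data are simulated stand-ins for the fund returns, so the resulting AIC values are illustrative only.

set.seed(1)
r <- diff(log(10 * cumprod(1 + rnorm(178, 0.001, 0.013))))  # stand-in returns

orders <- list(c(1, 0, 1), c(1, 0, 2), c(2, 0, 1))          # candidate ARMA(p, q)
aics <- sapply(orders, function(o) AIC(arima(r, order = o)))
names(aics) <- c("ARMA(1,1)", "ARMA(1,2)", "ARMA(2,1)")
aics   # the smallest AIC identifies the preferred model, as in Table 4.7a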
4.8 Estimates of the Mean and Variance

The models above were used to predict the mean and variance of the respective funds, and the results are displayed in Table 4.8. The values obtained from the models are employed in the portfolio optimization process.

Table 4.8: Estimates of the mean and variance of the funds

Fund           Mean          Variance
Equity Trust   0.00271231    0.001909301
Future Plan    0.001439837   0.000488596
Unit Trust     0.002469056   0.003011159

4.9 The Portfolio Optimization Process (ARMA-GARCH)

From the results in the table above, we produce our mean vector $\boldsymbol{\mu}$ and the unit vector $\mathbf{e}' = (1, 1, 1)$. The variances in the table above, coupled with the covariances between fund $i$ and fund $j$ for $i \ne j$, are used to form the covariance matrix $\Sigma$, whose inverse $\Sigma^{-1}$ is then calculated. Table 4.9a provides the list of weights for the various funds at specified risk tolerance levels, given a percentile of -1.645 at a significance level of 0.05.

Table 4.9a: Portfolio weights per tolerance level (from univariate GARCH model)

τ      W1              W2              W3
0.00   0.1755787760    0.7084985795    0.1159226448
0.20   0.1755787762    0.7084985781    0.1159226448
0.40   0.1755787765    0.7084985780    0.1159226450
...    ...             ...             ...
7.00   0.1755788852    0.7084984194    0.1159226954
7.20   0.1755789275    0.7084983545    0.1159227160
7.40   0.1755790235    0.7084982176    0.1159227589
7.60   0.1755793646    0.7084977081    0.1159229199
7.80   0.1755770257    0.7085010748    0.1159218357
8.00   0.1755784073    0.7084991166    0.1159224760
8.20   0.1755785649    0.7084988702    0.1159225480
8.40   0.1755786266    0.7084987991    0.1159225759
8.60   0.1755786596    0.7084987524    0.1159225906
8.80   0.1755786795    0.7084987168    0.1159226005
9.00   0.1755786935    0.7084986983    0.1159226064
9.20   0.1755787036    0.7084986849    0.1159226115
9.40   0.1755787108    0.7084986737    0.1159226148
9.60   -0.1755787164   -0.7084986621   -0.1159226174

Here W1, W2 and W3 refer to the weights allocated to the Equity Trust Fund, the Future Plan Fund and the Unit Trust Fund respectively. Tolerance levels from 9.60 and beyond were not used, since they produced negative weights; for instance, W1 at tolerance level 9.60 is calculated to be -0.1755787164. Thus, our objective function is maximized over the tolerance interval $0 \le \tau \le 9.40$. From Table 4.9b, it can be seen that each risk tolerance level $\tau$ is associated with a different portfolio mean and a different portfolio value-at-risk. The results produced a minimum portfolio mean return of 0.0178256289 with a minimum VaR of 0.012949619. The same risk tolerance interval also produced a maximum portfolio mean return of 0.0178256706 and a maximum VaR of 0.012949659.
Table 4.9b: Portfolio optimization variables (from univariate GARCH model)

τ      Σwᵢ   μ̂_p            VaR_p               2τ·μ̂_p − VaR_p      μ̂_p ⁄ VaR_p
0.00   1     0.0178256604   0.012949629506351   -0.012949629506351   1.376538255329090
0.20   1     0.0178256604   0.012949629485135   -0.005819365326493   1.376538256717440
0.40   1     0.0178256604   0.012949629484432    0.001310898840590   1.376538257539180
...    ...   ...            ...                  ...                  ...
7.00   1     0.0178256623   0.012949627590902    0.236609644807578   1.376538606152740
7.20   1     0.0178256630   0.012949626807897    0.243739920901361   1.376538745077570
7.40   1     0.0178256647   0.012949625177186    0.250870212795751   1.376539049122910
7.60   1     0.0178256706   0.012949619058910    0.258000574338763   1.376540154361020
7.80   1     0.0178256289   0.012949659056422    0.265130151566230   1.376532680003410
8.00   1     0.0178256540   0.012949635929093    0.272260827686992   1.376537075915620
8.20   1     0.0178256565   0.012949632905975    0.279391133311974   1.376537590380650
8.40   1     0.0178256578   0.012949632137776    0.286521419220887   1.376537775692310
8.60   1     0.0178256584   0.012949631582023    0.293651692932804   1.376537879793480
8.80   1     0.0178256587   0.012949631133424    0.300781961561382   1.376537948623980
9.00   1     0.0178256589   0.012949630916795    0.307912229907415   1.376537991639580
9.20   1     0.0178256591   0.012949630764611    0.315042497419975   1.376538023708020
9.40   1     0.0178256593   0.012949630628658    0.322172763387081   1.376538047093570
9.60   1     -0.0178256593  ...                  ...                  ...

A plot of the portfolio mean returns against the value-at-risk gives the efficient frontier, on which our maximum portfolio is expected to lie. The plot is displayed in Figure 4.9a. After successfully producing a set of efficient portfolios, the main business now is to find the composition of weights that produced the optimum portfolio. Since investors always require a portfolio that yields maximum returns accompanied by the minimum possible risk, it is assumed for the purpose of this study that investors' preference is based solely on the return and risk of a portfolio. Based on this assumption, we find our optimum portfolio by computing the ratio of the mean returns to the value-at-risks generated. These ratios are plotted in Figure 4.9b. Based on our assumption, the portfolio with the maximum ratio is the preferred maximum portfolio.

Figure 4.9a: Efficient frontier, mean returns against value-at-risk (from univariate GARCH model)

Figure 4.9b: Ratio of mean to value-at-risk (from univariate GARCH model)
4.10 The Portfolio Optimization Process (ARMA-DCC GARCH)

The covariance matrix $\Sigma$ generated from the DCC-GARCH model, its inverse $\Sigma^{-1}$, and the mean vector $\boldsymbol{\mu}$ generated from the ARMA model were produced as before. With the same percentile as used above, Table 4.10a gives the generated weights according to their risk tolerance levels. From Table 4.10a, it can be seen that a risk tolerance interval of $0 \le \tau \le 18.20$ was realised, since tolerance levels from 18.40 and beyond produced negative weights.

Table 4.10a: Portfolio weights per tolerance level (from DCC-GARCH model)

τ       W1             W2             W3
0.00    0.041537063    0.388960864    0.569502
0.20    0.041951306    0.385734118    0.572315
0.40    0.042365759    0.382505743    0.575128
...     ...            ...            ...
5.20    0.052537396    0.303273855    0.644189
5.40    0.052977796    0.299843364    0.647179
5.60    0.053420155    0.296397614    0.650182
5.80    0.053864554    0.292935967    0.653199
6.00    0.054311078    0.289457774    0.656231
6.20    0.054759811    0.285962371    0.659278
6.40    0.055210841    0.28244908     0.66234
6.60    0.055664256    0.278917206    0.665419
6.80    0.056120148    0.275366038    0.668514
7.00    0.05657861     0.271794851    0.671627
7.20    0.057039738    0.268202898    0.674757
...     ...            ...            ...
17.60   0.088610849    0.022279983    0.889109
17.80   0.089496999    0.015377325    0.895126
18.00   0.090402924    0.008320623    0.901276
18.20   0.091329601    0.001102281    0.907568
18.40   0.092278072    -0.00628582    0.914008

The estimated portfolio optimization variables in the objective function are displayed in Table 4.10b, together with their respective tolerance levels. From Table 4.10b, it can be seen that each risk tolerance level $\tau$ is associated with a different portfolio mean and a different portfolio value-at-risk. The results produced a minimum portfolio mean return of 0.020788 with a minimum VaR of 0.002816. The same interval produced a maximum portfolio mean return of 0.024980 and a maximum VaR of 0.003675.

Table 4.10b: Portfolio optimization variables (from DCC-GARCH model)

τ       Σwᵢ   μ̂_p        VaR_p      2τ·μ̂_p − VaR_p   μ̂_p ⁄ VaR_p
0.00    1     0.020788   0.002816   -0.00282          7.381299309
0.20    1     0.020823   0.002816    0.005513         7.393080759
0.40    1     0.020857   0.002817    0.013869         7.404506983
...     ...   ...        ...         ...              ...
5.20    1     0.021697   0.002866    0.222783         7.570541860
5.40    1     0.021733   0.002870    0.231851         7.572843968
5.60    1     0.021770   0.002874    0.240949         7.574764724
5.80    1     0.021807   0.002878    0.250079         7.576302304
6.00    1     0.021844   0.002883    0.259240         7.577454811
6.20    1     0.021881   0.002887    0.268432         7.578220278
6.40    1     0.021918   0.002892    0.277656         7.578596663
6.60    1     0.021955   0.002897    0.286913         7.578581849
6.80    1     0.021993   0.002902    0.296202         7.578173641
7.00    1     0.022031   0.002907    0.305524         7.577369765
7.20    1     0.022069   0.002913    0.314879         7.576167866
...     ...   ...        ...         ...              ...
17.60   1     0.024677   0.003594    0.865029         6.866178531
17.80   1     0.024750   0.003620    0.877480         6.836957881
18.00   1     0.024825   0.003647    0.890047         6.806958194
18.20   1     0.024901   0.003675    0.902735         6.776161921
18.40   1     0.024980   ...         ...              ...

A plot of the portfolio mean returns against the portfolio value-at-risk gives the efficient frontier on which the maximum portfolio is expected to lie. The plot is displayed in Figure 4.10a.

Figure 4.10a: Efficient frontier, mean returns against value-at-risk (from DCC-GARCH model)

Based on the same assumption stated above, the optimum portfolio is obtained by finding the ratio of the mean returns to the value-at-risks generated.
These ratios are plotted in Figure 4.10b. The portfolio with the maximum ratio is the preferred maximum portfolio, as in the first situation.

Figure 4.10b: Ratio of mean to value-at-risk (from DCC-GARCH model)

4.11 Discussion of findings

Based on the computations in Table 4.9b, it was found that the risk tolerance interval $0 \le \tau \le 9.40$ produced a minimum and maximum portfolio return of 0.0178256289 and 0.0178256706 respectively. The table also produced a minimum and maximum value-at-risk of 0.0129496191 and 0.0129496591 respectively. These values were obtained when the VaR was modelled with the univariate GARCH model. When the VaR modelled by the multivariate DCC-GARCH was applied to the same optimization process, a risk tolerance interval of $0 \le \tau \le 18.20$ was obtained. This interval produced a minimum and maximum value-at-risk of 0.002816 and 0.003675. In addition, a minimum and maximum portfolio return of 0.020788 and 0.024980 respectively were realised.

Generally, in the univariate case, it is seen that there is a positive relationship between the tolerance level and the ratio of the mean return to VaR: an increase in the risk tolerance level results in a corresponding increase in the mean-to-VaR ratio. A sharp decline is experienced in the efficient frontier after the tolerance level of 7.60, after which slight successive increments are experienced, but not to the magnitude attained at risk level 7.60. It is therefore concluded that the optimum portfolio lies at the tolerance level $\tau = 7.60$, which produced a mean portfolio return of 0.0178256706 with an associated risk, measured by the value-at-risk, of 0.012949619058910.

In the multivariate case, the efficient frontier showed a positive relationship between the mean and the value-at-risk, with a gradual upward trend. The graph of the mean-VaR ratio showed a gradual increase to a peak value of 7.578596663, and then a gradual fall over the rest of the interval of risk tolerance levels. This peak ratio corresponds to a portfolio return of 0.021917843 with an associated risk, measured by the VaR, of 0.002892071. This occurred at a risk tolerance level of $\tau = 6.40$.
Based on the calculation in the univariate case, the optimum portfolio has weights of W1 = 0.1755793646, W2 = 0.7084977081 and W3 = 0.1159229199 respectively. This implies that, to obtain an optimum efficient portfolio, an investor must place fractions of 0.1755793646, 0.7084977081 and 0.1159229199 of whatever amount is to be invested in the Equity Trust Fund, the Future Plan Fund and the Unit Trust Fund respectively. From the multivariate perspective, the weights generated for the efficient mean-VaR portfolio are 0.055210841 on the Equity Trust Fund, 0.28244908 on the Future Plan Fund and 0.66234 on the Unit Trust Fund respectively.

CHAPTER FIVE

SUMMARY, CONCLUSIONS AND RECOMMENDATIONS

5.0 Introduction

This chapter summarises all that is entailed in this research and the conclusions that were drawn from the results obtained. The chapter then goes further to give some recommendations, as well as suggestions for further studies.

5.1 Summary

This study focused on the use of the ARMA-GARCH model and the ARMA-DCC GARCH model as improvements on the Markowitz mean-variance portfolio optimization model. The data used comprised three of the funds managed by HFC Investment Limited, namely the Equity Trust Fund, the Future Plan Fund and the Unit Trust Fund. The study indeed showed, as with other financial time series data, the presence of ARCH effects in the residuals. A suitable univariate GARCH model and a multivariate DCC-GARCH model were therefore used to model the estimates of the variances, while appropriate ARMA models were applied in modelling the estimates of the means. The covariance matrices generated in both instances, together with the estimates, were used in the optimization process. The mean-VaR optimization model was used instead of the Markowitz mean-variance optimization model, because the related literature has shown the value-at-risk to be a better risk measure than the usual variance.

5.2 Conclusions

The research revealed that the financial data deviated from the normal distribution in that they were heavy-tailed. The data were modelled with the appropriate ARMA-GARCH and DCC-GARCH models to produce estimates of the mean and variance of the funds studied. Based on the investor's risk tolerance level, appropriate weights were generated to produce an efficient portfolio. The risk tolerance levels ranged over $0 \le \tau \le 9.40$ for the univariate GARCH, while those of the multivariate GARCH ranged over $0 \le \tau \le 18.20$. Levels from 9.60 and 18.40 respectively and beyond in the two situations were not feasible, since they produced negative weights. On the efficient frontier generated by the efficient portfolios lay the optimal portfolio, and this was attained at a risk tolerance level of $\tau = 7.60$ for the univariate model and $\tau = 6.40$ for the multivariate model. From the two models, it is observed that the multivariate DCC-GARCH model produced the more efficient portfolio, which requires a comparatively lower risk aversion constant or risk tolerance level. The optimum portfolio, therefore, is the one that places weights of 0.055210841, 0.28244908 and 0.66234 as fractions of whatever amount is to be invested in the Equity Trust Fund, the Future Plan Fund and the Unit Trust Fund respectively.
This portfolio will produce an expected weekly return of approximately 2.192% at a weekly risk level (VaR) of approximately 0.289%.

5.3 Recommendations

The following recommendations are made to fund managers, investors, financial analysts, stakeholders as well as policy makers. It is recommended that:

1. The weekly logarithm returns of funds be modelled with the ARMA-DCC GARCH model.
2. The Mean-VaR criterion be employed in the optimization process instead of the usual Mean-Variance.
3. Strong assumptions not be imposed on the distribution of the returns of funds and stocks; rather, the data should be explored to determine the distribution that they actually follow.

5.4 Further studies

The following research areas can be looked at:

1. Extensions of the ARMA models, as well as other multivariate GARCH models, could be employed in the estimation of the mean and in modelling the residuals.
2. The distribution of the error terms in the GARCH model could be modelled with stable distributions.
3. Copula models can also be considered.
4. Conditional value-at-risk could be considered as a measure of risk in the optimization process.
5. Other investor preferences could be included as constraints in the optimization process.
6. Researchers can also explore other options rather than relying solely on historical financial data in portfolio optimization.

REFERENCES

Alexander, M., Frey, R., & Embrechts, P. (2005). Quantitative risk management: Concepts, techniques and tools.

Arthur, L., & Ghandforoush, P. (1987). Subjectivity and portfolio optimization. Advances in Mathematical Programming and Financial Planning, 1, 171-186.

Artzner, P. (1999). Application of coherent risk measures to capital requirements in insurance. North American Actuarial Journal, 3(2), 11-25.

Artzner, P., Delbaen, F., Eber, J. M., & Heath, D. (1999). Risk management and capital allocation with coherent measures of risk. Unpublished working paper, Eidgenössische Technische Hochschule, Zürich.

Assis, K., Amran, A., & Remali, Y. (2006). Forecasting cocoa bean prices using univariate time series models. Journal of Arts Science and Commerce, ISSN 2229-4686.

Baba, Y., Engle, R. F., Kraft, D. F., & Kroner, K. F. (1990). Multivariate simultaneous generalized ARCH. Working paper, Department of Economics, University of California at San Diego.

Ball, L. M. (2006). Has globalization changed inflation? (No. w12687). National Bureau of Economic Research.

Ballestero, E. (1998). Approximating the optimum portfolio for an investor with particular preferences. Journal of the Operational Research Society, 998-1000.

Ballestero, E., & Romero, C. (1996). Portfolio selection: A compromise programming solution. Journal of the Operational Research Society, 1377-1386.

Bera, A. K., & Higgins, M. L. (1993). ARCH models: Properties, estimation and testing. Journal of Economic Surveys, 7(4), 305-366.

Billio, M., Caporin, M., & Gobbo, M. (2006). Flexible dynamic conditional correlation multivariate GARCH models for asset allocation. Applied Financial Economics Letters, 2(2), 123-130.

Bollerslev, T., Engle, R. F., & Nelson, D. B. (1994). ARCH models. In R. F. Engle & D. L. McFadden (Eds.), Handbook of Econometrics, Vol. 4. Amsterdam: Elsevier.

Bollerslev, T. (1986). Generalized autoregressive conditional heteroskedasticity. Journal of Econometrics, 31, 307-327.
Caldeira, J., Moura, G., & Santos, A. (2012). Portfolio optimization using a parsimonious multivariate GARCH model: Application to the Brazilian stock market. Economics Bulletin, 32(3), 1848-1857.

Chamu Morales, F. (2005). Estimation of max-stable processes using Monte Carlo methods with applications to financial risk assessment. PhD dissertation, Department of Statistics, University of North Carolina, Chapel Hill.

Chan, L., Karceski, J., & Lakonishok, J. (1999). On portfolio optimization: Forecasting covariances and choosing the risk model. Review of Financial Studies, 12(5), 937-974.

Chang, T. J., Meade, N., Beasley, J. E., & Sharaiha, Y. M. (2000). Heuristics for cardinality constrained portfolio optimisation. Computers & Operations Research, 27(13), 1271-1302.

Chong, C. W., Loo, C. S., & Muhammad, A. M. (2002). Modelling the volatility of currency exchange rates using GARCH model. Malaysia Press.

Chopra, V. K., Hensel, C. R., & Turner, A. L. (1993). Massaging mean-variance inputs: Returns from alternative global investment strategies in the 1980s. Management Science, 39(7), 845-855.

Danielsson, J., & De Vries, C. G. (2000). Value-at-risk and extreme returns. Annales d'Economie et de Statistique, 239-270.

Dowd, K. (1998). Beyond value at risk: The new science of risk management.

Duffie, D., & Pan, J. (1997). An overview of value at risk. The Journal of Derivatives, 4(3), 7-49.

DuMouchel, W. H. (1971). Stable distributions in statistical inference. University Microfilms.

DuMouchel, W. H. (1973). On the asymptotic normality of the maximum-likelihood estimate when sampling from a stable distribution. The Annals of Statistics, 948-957.

Embrechts, P., Frey, R., & McNeil, A. (2005). Quantitative risk management. Princeton Series in Finance, Princeton, 10.

Engle, R. (2001). GARCH 101: The use of ARCH/GARCH models in applied econometrics. Journal of Economic Perspectives, 157-168.

Engle, R. F., Ito, T., & Lin, W. L. (1990). Meteor showers or heat waves? Heteroskedastic intra-daily volatility in the foreign exchange market. Econometrica, 58(3).

Engle, R., Ng, V., & Rothschild, M. (1992). A multi-dynamic factor model for stock returns. Journal of Econometrics, 52(1-2), 245-266.

Engle, R. F. (1982). Autoregressive conditional heteroskedasticity with estimates of the variance of U.K. inflation. Econometrica, 50, 987-1008.

Fernando, K. V. (2000). Practical portfolio optimization. The Numerical Algorithms Group, Ltd. White Paper.

Franses, P. H., & Van Dijk, D. (1996). Forecasting stock market volatility using (nonlinear) GARCH models. Journal of Forecasting, 229-235.

Garcia, R. C., Contreras, J., Van Akkeren, M., & Garcia, J. B. C. (2003). A GARCH forecasting model to predict day-ahead electricity prices. IEEE Transactions on Power Systems, 20(2), 867-874.

Giot, P., & Laurent, S. (2003). Market risk in commodity markets: A VaR approach. Energy Economics, 25(5), 435-457.

Giot, P., & Laurent, S. (2003). Value-at-risk for long and short trading positions. Journal of Applied Econometrics, 18(6), 641-663.

Palomba, G. (2006). Multivariate GARCH models and Black-Litterman approach for tracking error constrained portfolios: An empirical analysis.
Multivariate GARCH models and Black-Litterman approach for tracking error constrained portfolios: an empirical analysis Glosten, L.R., Jagannathan, R., and. Runkle D. E (1993). On the Relation Between the Expected Value and the Volatility of the Nominal Excess Returns on Stocks. Journal of Finance. 48: (5)1779–801. Hallerbach, W., & Spronk, J. (1997). A multi-dimensional framework for portfolio management. In Essays in decision making (pp. 275-293). Springer Berlin Heidelberg. Hu, W. (2005). Calibration Of Multivariate Generalized Hyperbolic Distributions Using The EM Algorithm, With Applications In Risk Management, Portfolio Optimization And Portfolio Credit Risk. Hu, W., & Kercheval, A. (2007, September). Risk management with generalized hyperbolic distributions. In Proceedings of the Fourth IASTED International Conference on Financial Engineering and Applications (pp. 19-24). ACTA Press. University of Ghana http://ugspace.ug.edu.gh 85 Jehovanes, A. (2007). Monetary and Inflation Dynamics: A Lag between Change in Money Supply and the Corresponding Inflation Responding in Tanzania. Monetary and Financial Affairs, Department Bank of Tanzania, Dar Es Salaam. Jondeau, E., Poon, S. H., & Rockinger, M. (2007). Financial modeling under non- Gaussian distributions. Springer Science & Business Media. Jorion, P. (1996). Value at risk: a new benchmark for measuring derivatives risk. Irwin Professional Pub. Jorion, P. (1997). Value at risk: the new benchmark for controlling market risk. Irwin Professional Pub. Jorion, P. (2000). Value at risk: The new benchmark for managing market risk. Jorion, P. (2000). Value at risk: The new benchmark for managing market risk. Jorion, P. (2003). Portfolio optimization with constraints on tracking error. Financial Analysts Journal, 59(5), 70-82. Kast, R., Luciano, E., & Peccati, L. (1998, July). VaR and optimization. In 2nd International Workshop on Preferences and Decisions, Trento, July (Vol. 1, No. 3, p. 1998). Koenker, Roger and Gilbert Bassett. (1978). Regression Quantiles. Econometrica. January, 46 (1) 33–50. Konno, H. (1990). Piecewise linear risk function and portfolio optimization. Journal of the Operations Research Society of Japan, 33(2), 139-156. Kontonikas, A. (2004). Inflation and inflation uncertainty in the United Kingdom, evidence from GARCH modelling. Economic modelling, 21(3), 525-543. Krokhmal, P., Palmquist, J., & Uryasev, S. (2002). Portfolio optimization with conditional value-at-risk objective and constraints. Journal of risk, 4, 43-68. Kupiec, P. H. (1995). Techniques for verifying the accuracy of risk measurement models. THE J. OF DERIVATIVES, 3(2). Litterman, R. (1997). Hot spots and hedges (I). RISK-LONDON-RISK MAGAZINE LIMITED-, 10, 42-46. Litterman, R. (1997). Hot Spots and Hedges (II): A selection of tools that help locate the sources of portfolio risk. RISK-LONDON-RISK MAGAZINE LIMITED-, 10, 38-42. University of Ghana http://ugspace.ug.edu.gh 86 Lucas, A., & Klaassen, P. (1998). Extreme returns, downside risk, and optimal asset allocation. The Journal of Portfolio Management, 25(1), 71-79. Markowitz, H. (1952). Portfolio selection*. The journal of finance, 7(1), 77-91. Markowitz, H. (1959). Portfolio Selection: Efficient Diversification of Investments, Wiley, New York, New York. Markowitz, H. M. (1991). Foundations of portfolio theory. Journal of finance, 469-477. Markowitz, H. M., Todd, P., Xu, G., & Yamane, Y. (1992). Fast computation of mean- variance efficient sets using historical covariances. 
Journal of Financial Engineering, 1(2), 117-132. Markowitz, H., Sharpe, W. F., & Miller, M. H. (Eds.). (1991). The founders of modern finance: their prize-winning concepts and 1990 nobel lectures. Cfa Inst. Markowitz, H.M. (1952), Portfolio selection, The Journal of Finance 7, 77- 91. Mausser, H., & Rosen, D. (1999). Beyond VaR: From measuring risk to managing risk. In Computational Intelligence for Financial Engineering, 1999. (CIFEr) Proceedings of the IEEE/IAFE 1999 Conference on (pp. 163-178). IEEE. McFadden, Handbook of econometrics, Vol. 4 (Amsterdam: Elsevier). McNeil, A. J., & Frey, R. (2000). Estimation of tail-related risk measures for heteroscedastic financial time series: an extreme value approach. Journal of empirical finance, 7(3), 271-300. Mills, T. C. (1990). Time series techniques for economists Cambridge University Press. Cambridge, UK. Morita, H., Ishii, H., & Nishida, T. (1989). Stochastic linear knapsack programming problem and its application to a portfolio selection problem. European Journal of Operational Research, 40(3), 329-336. Nelson, Daniel B. (1991). Conditional Heteroscedasticity in Asset Returns: A New Approach. Econometrica. 59(2), 347–70. Odoom, S.I.K. (2007). Statistical Methods: Basic Concepts and Selected Applications. Dept. of Statistics, pp: 77-84, (Unpublished). Nolan, J. P. (2001). Maximum likelihood estimation of stable parameters. Levy processes: Theory and applications, 379-400. University of Ghana http://ugspace.ug.edu.gh 87 Patev, P., Kanaryan, N., & Lyroudi, K. (2009). Modelling and Forecasting the Volatility of Thin Emerging Stock Markets: the Case of Bulgaria. Comparative Economic Research, 12(4), 47-60. Pritsker, M. (1997). Evaluating value at risk methodologies: accuracy versus computational time. Journal of Financial Services Research, 12(2-3), 201-242. Rabemananjara, R. and J. M. Zakoian. (1993). “Threshold Arch Models and Asymmetries in Volatility.” Journal of Applied Econometrics. January/ March, 8:1, pp. 31–49. Rockafellar, R. T. (1970). Convex Analysis (Princeton Mathematical Series). Princeton University Press, 46, 49. Rockafellar, R. T., & Uryasev, S. (2000). Optimization of conditional value-at-risk. Journal of risk, 2, 21-42. Rockafellar, R. T., & Uryasev, S. (2002). Conditional value-at-risk for general loss distributions. Journal of banking & finance, 26(7), 1443-1471. Sharpe, W. F. (1964). Capital asset prices: A theory of market equilibrium under conditions of risk*. The journal of finance, 19(3), 425-442. Simons, K. (1996). Value at risk: new approaches to risk management. New England Economic Review, (Sep), 3-13. Tsay, R. S. (2002). Tsay, RS, 2002. Analysis of financial time series. Van den Goorbergh, R. W., & Vlaar, P. J. (1999). Value-at-Risk analysis of stock returns historical simulation, variance techniques or tail index estimation? De Nederlandsche Bank NV. Van den Goorbergh, R. W., & Vlaar, P. J. (1999). Value-at-Risk analysis of stock returns historical simulation, variance techniques or tail index estimation? De Nederlandsche Bank NV. Vlaar, P. J. (2000). Value at risk models for Dutch bond portfolios. Journal of banking & finance, 24(7), 1131-1154. Wenjing and Yiyu (2010). Comparison of Multivariate GARCH Models with Application to Zero-Coupon Bond Volatility. Lund University, Department of Statistics. University of Ghana http://ugspace.ug.edu.gh