ESTIMATING DEFAULT RISK OF BANK LOANS IN ZIMBABWE USING THE MOVER-STAYER MODEL
Department of Financial Engineering, Harare Institute of Technology, Belvedere, Harare, Zimbabwe
ABSTRACT
This paper estimates default probabilities of bank loans using a mover-stayer model applied to Zimbabwean data. Management of credit risk is an element of financial engineering that provides a safeguard against institutions' financial failure. To achieve its aim, the study compared the predictive power of the duration method and the cohort method in forecasting default risk of bank loans, tested for the presence of time homogeneity, and determined whether the mover-stayer model or the Markov chain model has the upper hand in gauging default risk. It was concluded that the cohort approach has the upper hand over the duration approach and that there was time inhomogeneity. There was also significant evidence that the mover-stayer model is a superior and effective way of estimating the risk of default, as shown through back-testing: the forecasts for 2014 were in line with the actual 2014 default results, hence the mover-stayer model effectively and competently predicted the default risk of bank loans and advances.
Keywords: Markov chain; Mover-stayer model; Default probabilities; Time homogeneity.
ARTICLE HISTORY: Received: 10 August 2017. Revised: 4 May 2018. Accepted: 10 May 2018. Published: 14 May 2018.
Contribution/Originality: The study contributes to the existing literature on estimating default probabilities of bank loans using a mover-stayer model, with a focus on Zimbabwe, a developing nation.
Internationally, non-performing loans (NPLs) have hindered the economic growth, development and stability of several nations. The recent credit crisis and its subsequent contagion have led to revisions of existing credit risk management practices and the need for new approaches to enhance the risk management function. For the banking industry, credit risk estimation and management is a key determinant of solvency and profitability, since it lies at the heart of the relationship between the bank and its clients. At times, credit defaults can cause financial institutions to become insolvent, and consequently the bank's long-term growth and competitiveness are negatively affected. Thus, in Zimbabwe the administration of credit risk has gained importance within the financial services sector in general and the banking industry in particular. This concern is mirrored in the activities of the Basel Committee on Banking Supervision, which has been influential in formalizing the worldwide approach to credit risk for financial institutions.
Credit risk supervision and management is a key concern for the finance industry. High levels of NPLs imply a high degree of credit risk. A situation in which Zimbabwean banks remain encumbered with large stocks of NPLs that persist at high levels for an extended period would result in prolonged sluggishness of the economy. According to Chinamasa (2014) the upward drift in non-performing loans and recent bank failures in Zimbabwe is a basis for concern. The level of NPLs rose from 1.62% as of June 2009 to 16% as of 31 December 2013 and further increased to 18.5% as at 30 June 2014. The surge in delinquencies and credit losses has inhibited banks' risk appetite; hence, banks have progressively adopted a risk-averse approach to lending. The Reserve Bank of Zimbabwe is conscious that exorbitant levels of NPLs, which surpass the global benchmark of up to 5%, can be a risk to financial stability and economic growth. Addressing the challenge of NPLs is therefore vital to revitalizing the economy of Zimbabwe: the resolution of non-performing loans is an essential condition for rejuvenating the nation's economic status and breaking the vicious circle of low economic growth, company closures and bank vulnerability.
To this end, the researcher evaluated the risk associated with the assets invested, that is, loans issued by banks to individuals and corporates, in terms of their level of default. The evolution of credit risk, as captured by credit ratings or transition matrices, was modelled using Markov chain stochastic processes. These models are increasingly useful in applications connected with risk management, including the valuation of portfolio risk, the pricing of credit derivatives, the modelling of the term structure of credit spreads, and the calculation of regulatory capital. Markov models are stochastic processes built on transition matrices and transition probabilities. More sophisticated models of risky bond pricing, as delineated by Jarrow and Stuart (1995) and Robert et al. (1997), take these matrices as an input.
A mover-stayer model and a Markov model were used to simulate the rating dynamics of a portfolio and derive transition probabilities. The transition probabilities estimated the risks of downgrading, upgrading and default from the current credit rating. Credit ratings assist in the pricing and hedging of corporate bonds that depend on credit rating. The probability of default is also obtained, which is important for financial institutions in setting aside provisions in case of default. An important contribution is the default prediction of consumer loans, an aspect most banks are not using.
The philosophy that Caouette et al. (2008) present is that the management of default risk is a form of financial engineering in which various models and structures are built that may either avoid financial failure or provide safeguards against it.
Caouette et al. (2008) point out that in financial institutions, default risk is taken for granted as a primary feature of the business: if an institution denies its intrinsic risk, then it is not in the industry. Wherever risk exists, its counterpart, risk management, will also exist to tackle it. Credit risk management, put simply, comprises the actions implemented by institutions with the intent of reducing or avoiding credit risk. In agreement, Njanike (2009) and Christopher (2002) note that while the major activities of bank administration are the mobilization of deposits and the issuance of credit, the management of risk is paramount. They outline that aggressive management of credit risk decreases client default risk, and both add that a bank's competitive advantage is based on its competence in handling credit strategically. Kuan and Chung-Yu (2012) point out that Basel III clearly places the responsibility on financial institutions to implement best practices of risk management to evaluate their Basel III requirements. In Zimbabwe, the Reserve Bank of Zimbabwe (RBZ) adopted the Risk Based Supervision (RBS) approach in cognizance of the limitations inherent in the traditional approach, which prescribed a common supervisory approach to all institutions irrespective of differences in business activities conducted and risk appetites adopted. In managing credit risk, the RBZ recommends that all banks gather adequate data to permit a wide-ranging appraisal of the true risk profile of the borrower.
Markov processes are usually assumed to be time-homogeneous. This implies that the migration matrices remain the same over time, which makes the estimates easy to interpret and to extrapolate. Nevertheless, there is evidence that rating migrations are time-inhomogeneous: a study from Standard & Poor's demonstrates that default rates vary considerably over long periods. Although evidence of time-inhomogeneity has existed in the literature for quite a long time, there is no standard way to mitigate it. In this research, the in-house data set was tested for the presence of time-inhomogeneity.
Many models have been put forward for the appraisal of risky securities. The models divide into two main classes: structural and reduced-form models. Structural models follow the methodology of Black and Myron (1973) and Merton (1974), where the process driving default is the value of the firm. The reduced-form models, by contrast, view default as an exogenously specified process rather than a predictable one. Even though structural methods are conceptually important because they offer a causal explanation of default, reduced-form methods are frequently more tractable mathematically, making them particularly valuable in industry. One notable difference between these classes of models is the implicit assumption they make about managerial decisions concerning capital structure. Reduced-form methods try to overcome the disadvantages of structural-form methods, for instance Duffie and Kenneth (1997); Robert et al. (1997) and David and Torben (2002). Reduced-form models make no assumptions about the capital structure of borrowers; the default probability is calibrated using data from rating agencies. The original Jarrow and Stuart (1995) model was built from matrices of historical transition probabilities from initial ratings and recovery values at each terminal state. Reduced-form models can extract credit risk from actual market data and are independent of asset value and leverage. As a result, parameters associated with the firm's value do not need to be estimated in order to apply them.
The idea of using a Markov chain is to describe the rating dynamics in terms of the transition probabilities among the different grades that rating agencies assign to an organisation's bonds. Several researchers, such as Hanson et al. (2007) established the existence of two distinct Markov regimes governing the rate at which credit ratings shift, suggesting a stochastic process combining two Markov chains, known as the mover-stayer model. The model assumes that the population comprises two unobserved groups: a stayer group with zero likelihood of transition, and a mover group following an ordinary Markov process.
Claudia and Carolin (2008) derived the limiting distribution for risk grades according to a time-homogeneous Markov chain. This distribution was viewed as an indicator of the risk trend of the bank's customers; it was then observed that if a financial institution maintained its current customer prospecting guidelines, most clients would probably become high-risk in the long run. To generate more efficient estimates of transition matrices, Lando and Skodeberg (2002) considered monthly rather than yearly transition rates. In a Markov chain, a transition probability depends exclusively on the current state, not on when that state was reached or how long the item has been in it. Thomas and Malik (2010) applied Markov theory to UK credit card data. The period chosen was a quarter and the state space comprised five score intervals together with states corresponding to default and closed accounts. They compared the performance of a first-order stationary model and a second-order stationary model against actual performance on an out-of-time sample, concluding that the second-order model is better at forecasting default levels, but the first-order model is better at forecasting the number who remain in the highest score band.
The purpose of this research was to make past information relevant by expanding on the work of Frydman and Schuermann (2008) on Markov mixture estimates that allow for obligor heterogeneity. Economic cycles and other factors affect rating performance over long periods, and the problem lies in understanding how to adjust transition ratings and default probability forecasts to mirror possible future financial conditions. Mover-stayer models allow investment companies and banks to predict how obligors' credit ratings are likely to evolve over time. The model is based on the history of rating information, from which transition patterns and accurate forecasts can be derived. Unlike other methodologies, mover-stayer models enable banks and investment companies to track credit migrations over time and to determine their credit risk at any point in time.
Owing to the sensitive nature of the internal dataset, the actual ratings were mapped to a 10-tier rating scale. This partially masks the original characteristics of the internal data set.
After cleaning, the data set was split temporally into an estimation set and a validation set for back-testing. The validation set was the data for 2014, while the remainder formed the estimation set. The rationale of the validation exercise was to ensure that estimates from one set had a satisfactory fit on an entirely different set of individual and corporate loans.
The actual estimation of migration matrices was done in Excel and R software, estimating confidence intervals via a bootstrap method. The estimation was done using the cohort approach and the duration approach.
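The two estimators can be sketched as follows. This is a minimal Python illustration (the study itself used Excel and R); the function names, the three-state example and the truncated-series matrix exponential are assumptions for illustration, not the bank's actual implementation.

```python
import numpy as np

def cohort_matrix(counts):
    """Cohort estimator: p_ij = n_ij / n_i, where n_ij is the number of
    obligors starting the year in state i and ending it in state j."""
    totals = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, totals,
                     out=np.zeros_like(counts, dtype=float),
                     where=totals > 0)

def taylor_expm(A, terms=40):
    """Matrix exponential via a truncated Taylor series; adequate for the
    small generator matrices here (scipy.linalg.expm is the robust choice)."""
    result = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        result = result + term
    return result

def duration_matrix(transition_counts, time_in_state, horizon=1.0):
    """Duration (hazard-rate) estimator: off-diagonal generator entries are
    lambda_ij = n_ij / (total time spent in i); the diagonal makes each row
    sum to zero; the matrix exponential gives P(t) = exp(t * Lambda)."""
    gen = transition_counts / time_in_state[:, None]
    np.fill_diagonal(gen, 0.0)
    np.fill_diagonal(gen, -gen.sum(axis=1))
    return taylor_expm(horizon * gen)
```

Unlike the cohort estimator, the duration estimator assigns positive probability to any transition reachable through intermediate states, which is why it avoids the embedding problem discussed later.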
Markov Chain
There is only a finite set of states 1, 2, 3, …, n, and the process can be in only one state at any given point in time.
Transition probabilities are worked out for every possible pair of states: pij is the transition probability from state i to state j. pij does not change over the period under consideration and is not influenced by how state i was reached, which is, by definition, the condition for applying Markov processes. To analyse the process, the initial state must be known. Under one-step stationarity, the transition probability matrix (pij), from i to j, is shown below.
Table-1. Transition Probability Matrix
The above matrix is called a stochastic matrix of transition probabilities: it is a square matrix with non-negative elements, and each row of probabilities sums to one. If S0 is the initial state, the initial state probabilities are:
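As a toy illustration of these definitions, the sketch below evolves an initial state distribution through a hypothetical three-state stochastic matrix (the paper's actual matrices have ten rating grades); it is written in Python rather than the R used in the study.

```python
import numpy as np

# Hypothetical 3-state rating chain (Good, Watch, Default); each row is a
# probability distribution over next-period states and sums to one.
P = np.array([
    [0.90, 0.08, 0.02],
    [0.10, 0.80, 0.10],
    [0.00, 0.00, 1.00],   # Default is absorbing
])

pi0 = np.array([1.0, 0.0, 0.0])  # initial state probabilities: start in Good

def distribution_after(pi0, P, t):
    """State distribution after t periods: pi_t = pi_0 @ P^t."""
    return pi0 @ np.linalg.matrix_power(P, t)
```

Because Default is absorbing, the last entry of the evolved distribution is non-decreasing in t: it is the cumulative probability of default.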
The mover-stayer model, which is continuous-time in nature, is a mixture of two independent Markov chains. The first chain is degenerate: its transition matrix is the identity matrix (the stayers). The second chain has a non-degenerate transition matrix (the movers).
The continuous-time mover-stayer transition matrix Pr(t) combines the stayers' identity matrix with the movers' chain, with a zero column vector from the default state from the initial time until the terminal time. Pr(t) is thus the transition matrix of the continuous-time mixture of Markov chains; that is, the mover-stayer model can be written as Pr(t) = diag(s) + diag(1 - s) M(t), where s is the vector of stayer proportions by state and M(t) is the movers' transition matrix.
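A discrete-time sketch of this mixture follows, using the matrix power M^t in place of the continuous-time matrix exponential for simplicity; the stayer fractions and the movers' matrix below are hypothetical.

```python
import numpy as np

def mover_stayer_matrix(s, M, t):
    """Mover-stayer transition matrix over t periods:
    P(t) = diag(s) + diag(1 - s) @ M^t,
    where s[i] is the fraction of stayers in state i (whose transition
    matrix is the identity) and M is the movers' one-period matrix."""
    Mt = np.linalg.matrix_power(M, t)
    return np.diag(s) + np.diag(1.0 - s) @ Mt

# Hypothetical inputs: 40% stayers in the top grade, none in default.
s = np.array([0.40, 0.20, 0.00])
M = np.array([
    [0.80, 0.15, 0.05],
    [0.10, 0.80, 0.10],
    [0.00, 0.00, 1.00],
])
```

Each row of P(t) still sums to one, and the diagonal is inflated by the stayer mass, P(t)[i, i] >= s[i], which is how the model captures obligors that never migrate.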
To test whether the mover-stayer model gauges credit risk better than the Markov chain model, a likelihood ratio statistic was used.
Two tests were used in this study: a chi-square test and a test based on bootstrapped confidence intervals. If the chain is homogeneous, every element in the migration matrix ought to be constant. The chi-square test investigates whether to reject the null hypothesis that the yearly migration matrices equal the average migration matrix; the test involves calculating the deviations of every element in the matrices.
The second test compares 95% confidence intervals of probability of default estimates to see whether adjacent years give significantly different estimates. If two estimates differ significantly, it is highly unlikely that they come from the same underlying parameter, and if two estimates cannot be the same, the chain cannot be time-homogeneous. Because comparing every element and its confidence interval across numerous matrices is burdensome, this test focused on the probability of default estimates.
When there is no analytical way to estimate confidence intervals, one alternative is a resampling method known as bootstrapping. The standard bootstrap procedure was used in this study to estimate confidence intervals for the probability of default. The bootstrap was implemented in R; in each confidence interval calculation, one thousand bootstrap replications were used. That is, each observed sample was resampled one thousand times (creating one thousand bootstrap replications), and from every bootstrap replication one migration matrix was estimated.
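The percentile bootstrap can be sketched as follows. In the study it was run in R over full migration matrices, whereas this hypothetical Python version bootstraps a single default indicator for brevity.

```python
import numpy as np

rng = np.random.default_rng(42)  # fixed seed so the sketch is reproducible

def bootstrap_pd_ci(outcomes, n_reps=1000, alpha=0.05):
    """Percentile bootstrap confidence interval for a probability of
    default. `outcomes` is a 0/1 array (1 = obligor defaulted within the
    year). Each replication resamples obligors with replacement and
    re-estimates the PD; the interval is read off the percentiles."""
    n = len(outcomes)
    pds = np.empty(n_reps)
    for r in range(n_reps):
        resample = rng.choice(outcomes, size=n, replace=True)
        pds[r] = resample.mean()
    lo, hi = np.percentile(pds, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lo, hi
```

The same resampling logic extends to matrices: resample obligor histories, re-estimate the migration matrix once per replication, and take percentiles cell by cell.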
In order to conclude whether there was significant evidence that the mover-stayer model is an effective way of estimating credit risk, the model was evaluated through back-testing and tests for discriminatory power.
The bank under study uses the Modified Standardised Approach, under which credit exposures are graded into the appropriate credit rating category from the Supervisory Regulatory Rating Scale mapped to the 10-tier scale, as shown in Table 2.
Table-2. The Supervisory Regulatory Rating Scale and 10-Tier Scale
Supervisory Rating Scale | 10-tier rating scale | Descriptive Classification | Risk Level |
1 | A | Prime Grade | Insignificant |
2 | B | Strong | Modest |
3 | C | Satisfactory | Average |
4 | D | Moderate | Acceptable |
5 | E | Fair | Acceptable with care |
6 | F | Speculative | Management Attention |
7 | G | Highly Speculative | Special Attention |
8 | H | Substandard | Vulnerable |
9 | I | Doubtful | High Default |
10 | Default | Loss | Bankrupt |
Source: RBZ
There are two credit scorecards, which are part of that system, a retail (individual) scorecard and a corporate scorecard. The retail scorecard is used for rating individual borrowers while the corporate scorecard is used for rating companies including small and medium enterprises.
The Cumulative Accuracy Profile (CAP), also known as the Lorenz or Power Curve, demonstrates graphically the discriminatory power of a model or rating system. All counterparties are ordered and grouped from low credit quality (rating class Default/Rehab) to high credit quality (rating A). The CAP curve is constructed by plotting the cumulative percentage of defaults against the cumulative percentage of counterparties.
In addition to the CAP curve, the Receiver Operating Characteristic (ROC) curve also graphically demonstrates the discriminatory power of the default risk model. The underlying idea is that for a perfect rating system, the distributions of defaulters and non-defaulters are separate and have no overlap. In light of the ROC, the researcher evaluated the performance of the model using statistical testing: the proportion of correct predictions was tested against a null hypothesis, which serves to refute or support the alternative hypothesis.
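The discriminatory power summarized by the ROC can be computed without plotting, via the rank statistic. A sketch follows; the score convention (higher score = riskier) and the function name are assumptions for illustration.

```python
import numpy as np

def auc_from_scores(scores, defaulted):
    """Area under the ROC curve via the Mann-Whitney interpretation:
    the probability that a randomly drawn defaulter has a higher risk
    score than a randomly drawn non-defaulter (ties count one half).
    The accuracy ratio of the CAP curve relates to it as AR = 2*AUC - 1."""
    d = scores[defaulted == 1]    # defaulters' risk scores
    nd = scores[defaulted == 0]   # non-defaulters' risk scores
    wins = (d[:, None] > nd[None, :]).sum()
    ties = (d[:, None] == nd[None, :]).sum()
    return (wins + 0.5 * ties) / (len(d) * len(nd))
```

An AUC of 0.5 corresponds to no discrimination (random ordering) and 1.0 to the perfect, non-overlapping case described above.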
These tests for discriminatory power were performed on both the duration and cohort approach methods.
Secondary data was used in this study, collected from a local bank's credit risk reports, credit reports, policy documents, the RBZ website and the internet. The bank is one of the country's largest by capital size and market share. The data was therefore deemed reliable and accurate, because the bank maintains best practices in data recording, presentation and storage, particularly owing to the various compliance regulations within the country's financial sector.
The dataset contains credit records of individual and corporate borrowers of a local bank who were on the books since January 2009, together with those who joined between the beginning of 2009 and the end of 2014. It comprises clients' semi-annual internal rating grades alongside information on the time since their account opened, the time to default, or the time when the account was closed within the above period. The rating philosophy is a hybrid, mixing through-the-cycle and point-in-time approaches. The database contains many fields, so a first round of sifting was performed to obtain what is referred to as "the original data" in this project. The target population for the study was all customers who have been offered loans by the bank.
A summary of the results is presented below. After cleansing and adjusting the data, the actual estimation of migration matrices was completed in R. Estimating their confidence intervals using a bootstrap method also required R, since it involved calculating several thousand migration matrices.
All the migration matrices were one-year migration matrices. In a migration matrix, the rows are the preceding ratings and the columns are the grades that counterparties migrate to. For example, in Table 3 the entry 0.08 in row D, column A shows that a client with rating D has a 0.08% likelihood of being in rating A in a year's time.
Table-3. Cohort Method Results for the Data Estimation set Showing the Migration Matrix in Percentages.
Ratings | A | B | C | D | E | F | G | H | I | Default |
A | 80.01 | 7.33 | 3.59 | 1.49 | 1.08 | 1.33 | 2.22 | 1.16 | 1.00 | 0.79 |
B | 3.07 | 76.13 | 12.32 | 5.93 | 0.53 | 0.84 | 0.01 | 0.1 | 0.02 | 1.05 |
C | 0.15 | 1.57 | 73.21 | 15.18 | 5.91 | 0.15 | 0.02 | 1.77 | 0.03 | 2.01 |
D | 0.08 | 3.42 | 7.55 | 73.25 | 10.11 | 4.21 | 0.08 | 0.29 | 0.3 | 0.71 |
E | 0.15 | 0.37 | 1.1 | 11.79 | 70.24 | 12.4 | 0.85 | 1.43 | 0.73 | 0.94 |
F | 0.14 | 0.14 | 0.36 | 2.53 | 15.77 | 66.98 | 5.3 | 3.08 | 3.52 | 2.18 |
G | 0.09 | 0.05 | 0.27 | 1.21 | 1.6 | 11.95 | 64.87 | 10.55 | 6.99 | 2.42 |
H | 0.09 | 0.09 | 0.34 | 1.63 | 0.28 | 0.93 | 2.34 | 62 | 2.33 | 29.97 |
I | 0.01 | 0.03 | 0.05 | 0.14 | 0.41 | 0.68 | 0.32 | 0.96 | 29.45 | 67.95 |
Default | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 100 |
Table-4. Cohort Method Results for the Validation/ Back Testing Data Set Showing the Migration Matrix in Percentages.
Ratings | A | B | C | D | E | F | G | H | I | Default |
A | 80.37 | 7.49 | 3.59 | 1.39 | 0.09 | 1.22 | 2.12 | 2.98 | 0 | 0.75 |
B | 3.21 | 77.2 | 11.81 | 5.56 | 0.52 | 0.81 | 0.01 | 0.01 | 0.02 | 0.85 |
C | 0.17 | 1.56 | 73.8 | 15.02 | 5.81 | 0.12 | 0.08 | 1.4 | 0.03 | 2.01 |
D | 0.06 | 2.52 | 7.43 | 74.16 | 10.45 | 4.09 | 0.15 | 0.22 | 0.21 | 0.71 |
E | 0.19 | 0.45 | 1.01 | 13.4 | 67.14 | 12.52 | 0.71 | 1.54 | 1.13 | 1.91 |
F | 0.15 | 0.06 | 0.29 | 2.11 | 17.13 | 66.41 | 5.04 | 2.92 | 3.87 | 2.02 |
G | 0.02 | 0.02 | 0.27 | 1.26 | 1.94 | 12.75 | 65.45 | 10.31 | 6.04 | 1.94 |
H | 0.01 | 0.07 | 0.22 | 1.52 | 0.31 | 0.81 | 2.52 | 63.2 | 1.81 | 29.53 |
I | 0.01 | 0.01 | 0.03 | 0.09 | 0.39 | 0.59 | 0.51 | 0.94 | 30.32 | 67.11 |
Default | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 100 |
This section presents the matrices for the estimation and validation data sets using the cohort method. The aim is to demonstrate whether the approach of estimating migration matrices to approximate the rating movements of companies and individuals appears reasonable.
In the vocabulary of Markov chains, both matrices treat the default state (Default) as absorbing; in the validation set matrix, all other states communicate. In the validation data set, there are no observed migrations of an A-rated corporation or individual directly to I, although I is accessible from A. This is one of the circumstances that gives rise to the embedding problem present in the cohort estimation: the estimated A-to-I probability is zero even though the validation set comprises several observations spread over six years.
In the estimation set migration matrix, all states except Default communicate with each other. The Markov chain is reducible because of the absorbing default state. This means that all states other than default are transient: there is a non-zero probability of never returning to a state, because of the non-zero probability of eventually ending in the absorbing state.
As the estimates show, the matrices are closely alike, which is precisely what was expected. On average, the estimates therefore depend on the rating class rather than on firm-specific features, which is the purpose of this mover-stayer estimation methodology.
The estimation and validation data sets were also used to compute migration matrices using the duration approach.
Table-5. Duration Method Results for the Estimation Data Set (2009-2013) Showing the Migration Matrix in Percentages.
Ratings | A | B | C | D | E | F | G | H | I | Default |
A | 79.85 | 7.21 | 3.87 | 1.63 | 1.32 | 1.5 | 2.28 | 1.5 | 0.01 | 0.83 |
B | 2.47 | 75.46 | 12.45 | 5.73 | 0.66 | 0.76 | 0.23 | 6.79 | 0.02 | 1.16 |
C | 0.18 | 1.52 | 73.11 | 14.91 | 5.98 | 0.29 | 0.78 | 0.66 | 0.04 | 2.53 |
D | 0.1 | 3.41 | 4.36 | 73.67 | 10.35 | 4.34 | 0.07 | 0.62 | 0.39 | 2.69 |
E | 0.16 | 0.31 | 1.57 | 13.2 | 69.36 | 8.94 | 0.74 | 1.63 | 1.07 | 3.02 |
F | 0.14 | 0.13 | 0.42 | 2.83 | 14.04 | 67.07 | 5.43 | 5.13 | 1.33 | 3.48 |
G | 0.06 | 0.13 | 0.56 | 1.18 | 1.08 | 11.45 | 62.21 | 11.23 | 8.14 | 3.96 |
H | 0.03 | 0.03 | 0.26 | 1.68 | 0.33 | 0.78 | 2.13 | 61.25 | 2.27 | 31.24 |
I | 0.02 | 0.02 | 0.09 | 0.17 | 0.63 | 0.75 | 0.99 | 0.93 | 25.06 | 71.34 |
Default | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 100 |
Table-6. Duration Method Results for the Back Testing Data Set (2014 Data) Showing The Migration Matrix In Percentages.
Ratings | A | B | C | D | E | F | G | H | I | Default |
A | 79.42 | 6.01 | 3.35 | 2.55 | 1.01 | 1.78 | 2.19 | 1.55 | 1.12 | 1.02 |
B | 2.51 | 75.65 | 10.91 | 7.89 | 0.49 | 1.02 | 0.15 | 0.07 | 0.1 | 1.21 |
C | 0.41 | 1.44 | 73.02 | 14 | 6.82 | 0.26 | 0.09 | 1.65 | 0.06 | 2.25 |
D | 0.09 | 1.45 | 5.41 | 72.96 | 9.41 | 8.46 | 0.15 | 0.65 | 0.39 | 1.03 |
E | 0.17 | 0.36 | 1.01 | 12.01 | 69.44 | 12.14 | 0.99 | 1.8 | 0.98 | 1.1 |
F | 0.13 | 0.42 | 0.18 | 3.02 | 13.95 | 66.9 | 4.89 | 3.48 | 4.21 | 2.82 |
G | 0.04 | 0.06 | 0.33 | 1.39 | 0.88 | 10.06 | 64.67 | 9.3 | 7.78 | 5.49 |
H | 0.01 | 0.03 | 0.25 | 1.59 | 0.64 | 3.12 | 2.17 | 34.45 | 2.39 | 55.35 |
I | 0.02 | 0.02 | 0.04 | 0.12 | 0.53 | 0.71 | 0.69 | 1.27 | 25.41 | 71.19 |
Default | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 100 |
Again, both matrices look similar. It can be noted that all states except the absorbing default state communicate and are transient. The circumstance that generates the embedding problem in the cohort method is absent when using the duration technique.
Since the matrices are almost the same for both the cohort and duration approaches, the data sets were then merged for further use. The average matrices shown below use the entire data set for both the cohort and duration methods. The two approaches give somewhat different estimates, as shown below.
Table-7. Cohort Method Results Where All Data Has Been Used Showing The Migration Matrix In Percentages.
Ratings | A | B | C | D | E | F | G | H | I | Default |
A | 80.03 | 7.31 | 3.51 | 1.46 | 1.02 | 1.33 | 2 | 1.48 | 1.04 | 0.82 |
B | 3.02 | 76 | 12.45 | 5.93 | 0.56 | 0.87 | 0.01 | 0.02 | 0.03 | 1.11 |
C | 0.15 | 1.5 | 73.46 | 15.11 | 5.87 | 0.16 | 0.02 | 1.68 | 0.04 | 2.01 |
D | 0.08 | 3.42 | 7.46 | 73.25 | 10 | 4.01 | 0.1 | 0.62 | 0.3 | 0.76 |
E | 0.17 | 0.36 | 1.09 | 13.4 | 68.24 | 12.5 | 0.69 | 1.63 | 0.98 | 0.94 |
F | 0.13 | 0.12 | 0.35 | 2.53 | 15.72 | 66.77 | 5.36 | 2.99 | 3.93 | 2.1 |
G | 0.04 | 0.05 | 0.3 | 1.28 | 1.84 | 11.75 | 64.21 | 10.55 | 7.6 | 2.38 |
H | 0.02 | 0.08 | 0.26 | 1.58 | 0.34 | 0.89 | 2.45 | 62.93 | 1.86 | 29.59 |
I | 0.01 | 0.02 | 0.04 | 0.1 | 0.48 | 0.63 | 0.48 | 0.89 | 30.06 | 67.29 |
Default | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 100 |
Table-8. Duration Method Results Where All Data Has Been Used Showing The Migration Matrix In Percentages
Ratings | A | B | C | D | E | F | G | H | I | Default |
A | 79.42 | 6.01 | 3.32 | 2.61 | 1.04 | 1.7 | 2.32 | 1.49 | 1.08 | 1.01 |
B | 2.53 | 75.68 | 10.81 | 8 | 0.59 | 0.97 | 0.11 | 0.07 | 0.05 | 1.19 |
C | 0.4 | 1.42 | 73.12 | 14.04 | 6.67 | 0.2 | 0.06 | 1.82 | 0.05 | 2.22 |
D | 0.12 | 1.48 | 5.43 | 72.96 | 9.43 | 8.46 | 0.13 | 0.65 | 0.38 | 0.96 |
E | 0.18 | 0.38 | 1.02 | 12.13 | 69 | 12.6 | 0.71 | 1.89 | 0.99 | 1.1 |
F | 0.15 | 0.44 | 0.27 | 2.98 | 13.71 | 66.7 | 5.29 | 3.42 | 4.2 | 2.84 |
G | 0.06 | 0.09 | 0.35 | 1.39 | 0.88 | 10.06 | 63.83 | 10.13 | 7.78 | 5.43 |
H | 0.02 | 0.1 | 0.3 | 1.59 | 0.63 | 3.14 | 2.18 | 34.93 | 1.83 | 55.28 |
I | 0.02 | 0.03 | 0.05 | 0.12 | 0.5 | 0.74 | 0.6 | 1.38 | 25.43 | 71.13 |
Default | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 100 |
It can be noted that the cohort method estimates have a heavier diagonal and lighter tails.
From the results, the off-diagonal movements differ markedly between the procedures, a clear signal of the duration method's more efficient use of the data: the number of off-diagonal movements it records is roughly 52% higher than under the cohort method.
As shown in the tables, the duration method consistently yields higher cumulative probability of default values for all grades. This is not fully in line with the research of Frydman and Schuermann (2008) who found this behaviour only at the lowest rating levels. Nevertheless, because the duration method provides higher estimates off the diagonal and records more movement overall than the cohort method, it appears logical that its probability of default should be greater.
Furthermore, the embedding problem found in several annual cohort matrices and in the cohort validation matrix was not seen when using the duration method. The duration technique was capable of estimating confidence intervals for all migration probabilities, since none of the non-default entries in the migration matrix were zero.
It can therefore be suggested that the duration method should preferably be used where possible.
This part of the study focuses on the core goal of the estimates: computing the likelihood of default. The estimated probabilities of default appear in the final column of the migration matrices. Tables 9 and 10 present the probability of default estimates and their bootstrapped 95% confidence intervals for the cohort and duration methods.
Table-9. Cohort Method: Probability of Defaults Estimates Including 95% Confidence Intervals’ Estimates and Lengths of Interval
Rating | Probability of Default estimate | Confidence Interval Bootstrapped | Confidence Interval Bootstrapped Length |
A | 0.82% | [0.80%, 0.84%] | 0.04% |
B | 1.11% | [1.09%, 1.12%] | 0.04% |
C | 2.01% | [2.00%, 2.04%] | 0.02% |
D | 0.76% | [0.74%, 0.78%] | 0.04% |
E | 0.94% | [0.90%, 0.98%] | 0.08% |
F | 2.10% | [1.89%, 2.18%] | 0.29% |
G | 2.38% | [2.22%, 2.57%] | 0.35% |
H | 29.59% | [29.34%, 29.86%] | 0.52% |
I | 67.29% | [67.22%, 67.94%] | 0.72% |
Table-10. Duration Method: Probability of Defaults Estimates Including 95% Confidence Intervals’ Estimates and Lengths of Interval
Rating | PD estimate | Confidence Interval Bootstrapped | Confidence Interval Bootstrapped length |
A | 1.01% | [0.99%,1.04%] | 0.03% |
B | 1.19% | [1.18%,1.20%] | 0.02% |
C | 2.22% | [2.20%,2.23%] | 0.03% |
D | 0.96% | [0.94%,0.97%] | 0.03% |
E | 1.1% | [1.07%,1.13%] | 0.06% |
F | 2.84% | [2.72%,2.98%] | 0.26% |
G | 5.43% | [5.22%,5.67%] | 0.45% |
H | 55.28% | [54.95%,55.62%] | 0.67% |
I | 71.13% | [70.89%,71.69%] | 0.80% |
As the PD tables indicate, the probability of default is greater under the duration method than under the cohort method, particularly for the lowest ratings G, H and I.
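The bootstrapped confidence intervals reported in Tables 9 and 10 can be obtained, in outline, by resampling the default indicators within a grade. Below is a minimal sketch assuming a simple nonparametric percentile bootstrap; the grade with 20 defaults in 1000 obligor-years is synthetic, not the study's data.

```python
import random

def bootstrap_pd_ci(outcomes, n_boot=2000, alpha=0.05, seed=1):
    # outcomes: 0/1 default indicators for the obligor-years of one grade;
    # returns a (1 - alpha) percentile confidence interval for the PD
    random.seed(seed)
    n = len(outcomes)
    pds = sorted(sum(random.choices(outcomes, k=n)) / n
                 for _ in range(n_boot))
    lo = pds[int(n_boot * alpha / 2)]
    hi = pds[int(n_boot * (1 - alpha / 2)) - 1]
    return lo, hi

# Hypothetical grade: 20 defaults out of 1000 obligor-years (point PD = 2%)
lo, hi = bootstrap_pd_ci([1] * 20 + [0] * 980)
```

The interval lengths in the tables shrink as the number of obligor-years in a grade grows, which is why the sparsely populated low grades show the widest intervals.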
This section presents results related to testing for time-homogeneity. The first test is a χ² test as devised by Goodman (1957). The second approach checks for time-homogeneity by examining the confidence intervals of estimates computed from two adjacent years' data.
Chi-square Test (χ²)
If the computed χ² value exceeds the tabulated value with 10 × 9 × 12 = 1080 degrees of freedom, then the null hypothesis, which states that the transition probabilities are constant over time, can be rejected. The observed values are compared against the tabulated value of the χ² distribution at the ninety-nine percent level with 1080 degrees of freedom.
Table-11. Observed χ² Values for the Comparison of the Average Matrix and the Annual Matrices
Calculation method | Cohort | Duration |
Observed value | 17403 | 18088 |
The values are calculated from matrices estimated with both methods using the full data sample. As illustrated in Table 11, the observed values are considerably greater than the tabulated critical value. Therefore, the null hypothesis that the migration matrices are constant over time can be rejected at the 99% level. Another observation is that the test statistic based on the duration estimates is somewhat larger.
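A Goodman-style χ² statistic of the kind used above can be sketched as follows: each annual matrix is compared cell by cell with the pooled (average) matrix, weighted by the number of obligors at risk. The two-grade, two-year data below are illustrative assumptions, not the study's.

```python
def goodman_chi2(annual, counts, avg):
    # annual[t][i][j]: estimated transition probability in year t
    # counts[t][i]:    obligors in grade i at the start of year t
    # avg[i][j]:       pooled (average) transition probability
    stat = 0.0
    for t, p in enumerate(annual):
        for i, row in enumerate(p):
            for j, pij in enumerate(row):
                if avg[i][j] > 0:
                    stat += counts[t][i] * (pij - avg[i][j]) ** 2 / avg[i][j]
    return stat

# Illustrative two-grade system observed over two years
avg = [[0.9, 0.1], [0.2, 0.8]]
annual = [[[0.95, 0.05], [0.1, 0.9]],
          [[0.85, 0.15], [0.3, 0.7]]]
counts = [[100, 50], [100, 50]]

# Compared against a chi-squared critical value at the chosen level
stat = goodman_chi2(annual, counts, avg)
```

If every annual matrix equalled the average matrix the statistic would be zero; the larger the year-to-year deviations, the larger the statistic, which is why the values in Table 11 exceed the tabulated critical value.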
In this section, bootstrapped 95% confidence intervals for the PD estimates in 2012 and 2013 are presented. Adjacent years are compared to reduce the influence of changes in the rating scale. Below are tables containing the bootstrapped confidence intervals together with a "Yes" or "No" indicating whether the two years' estimates are statistically different.
Table-12. Bootstrapped 95% Probability Of Default Confidence Intervals For 2012 And 2013
Rating | Cohort 2012 | Cohort 2013 | Diff | Duration 2012 | Duration 2013 | Diff |
A | [0.00%, 0.00%] | [0.00%, 0.09%] | No | [0.01%, 0.02%] | [0.01%, 0.13%] | No |
B | [0.00%, 0.00%] | [0.00%, 0.00%] | No | [0.07%, 0.11%] | [0.02%, 0.03%] | Yes |
C | [0.06%, 0.15%] | [0.01%, 0.05%] | Yes | [0.22%, 0.31%] | [0.05%, 0.10%] | Yes |
D | [0.61%, 0.80%] | [0.18%, 0.30%] | Yes | [0.95%, 1.12%] | [0.28%, 0.37%] | Yes |
E | [1.60%, 1.92%] | [0.83%, 1.06%] | Yes | [2.10%, 2.40%] | [1.06%, 1.26%] | Yes |
F | [4.83%, 6.00%] | [4.55%, 5.62%] | No | [7.51%, 8.68%] | [5.98%, 7.02%] | Yes |
G | [3.75%, 4.73%] | [5.36%, 6.46%] | Yes | [5.37%, 6.31%] | [9.52%, 10.88%] | Yes |
H | [6.80%, 8.90%] | [6.61%, 8.96%] | No | [9.08%, 11.24%] | [13.12%, 15.85%] | Yes |
I | [6.37%, 8.87%] | [4.72%, 6.63%] | No | [12.00%, 14.70%] | [8.88%, 10.92%] | Yes |
Table 12 shows comparisons of adjacent years' confidence intervals. The calculations are based on the complete sample and are tabulated for both the cohort and duration methods. The estimates can be said to be statistically different if the confidence intervals do not intersect. Statistically different estimates indicate that time-inhomogeneity is likely to be present. The table shows that both methods find statistically different estimates in the adjacent-year pairs considered.
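The non-overlap decision rule just described can be sketched directly. The interval endpoints below are taken from Table 12, expressed as fractions.

```python
def intervals_differ(a, b):
    # a, b: (low, high) confidence intervals; "statistically different"
    # here means the intervals do not overlap at all
    return a[1] < b[0] or b[1] < a[0]

# Grade G, cohort method, 2012 vs 2013: disjoint intervals -> "Yes"
g_2012, g_2013 = (0.0375, 0.0473), (0.0536, 0.0646)

# Grade F, cohort method, 2012 vs 2013: overlapping intervals -> "No"
f_2012, f_2013 = (0.0483, 0.0600), (0.0455, 0.0562)
```

This reproduces the "Yes"/"No" flags in the table: grade G's cohort intervals are disjoint while grade F's overlap.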
Another finding is that the duration method is better at identifying differences than the cohort method. This is reasonable, since the duration method is more sensitive to rating variations and uses the data more efficiently. Another noticeable trend is that the cohort method has difficulty identifying differences for the highest ratings A and B, while the duration method registers different estimates in most of those cases. To conclude the section on time-inhomogeneity, the χ² values and the confidence-interval investigations all point in the same direction: time-inhomogeneity is present.
4.6. Model Comparison
To determine whether the mover-stayer model is better suited than the Markov chain model for estimating credit risk, a likelihood ratio test was used. When all s_i are set equal to zero, the Markov chain is obtained from the mover-stayer model. Because the mover-stayer model reflects heterogeneity in the population, while the Markov chain model reflects only a single homogeneous population, the mover-stayer model was expected to fit better. The log-likelihood ratio statistic was therefore used to test whether the mover-stayer model is more suitable than the Markov chain model for assessing default risk. The computed log-likelihood test statistic is 437.56 with 1080 degrees of freedom, and the critical value obtained from R was 38; therefore the null hypothesis H0: s = 0, which assumes there are no stayers in the population, is rejected at below the 1% significance level because the computed statistic exceeds the critical value. It can therefore be concluded that the mover-stayer model is superior for determining credit risk.
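The mechanics of this likelihood ratio test can be sketched as follows. The two log-likelihood values below are illustrative assumptions chosen only to reproduce the reported statistic of 437.56; the critical value of 38 is the one quoted above. The multinomial helper shows how a fitted transition matrix would be scored against observed counts.

```python
import math

def chain_loglik(counts, p):
    # multinomial log-likelihood of observed transition counts under matrix p
    return sum(counts[i][j] * math.log(p[i][j])
               for i in range(len(p)) for j in range(len(p[i]))
               if counts[i][j] > 0)

def lr_test(loglik_restricted, loglik_full, crit):
    # LR statistic 2 * (logL_full - logL_restricted); reject H0 (no stayers,
    # i.e. a plain Markov chain) when the statistic exceeds the critical value
    stat = 2.0 * (loglik_full - loglik_restricted)
    return stat, stat > crit

# Illustrative (assumed) log-likelihoods reproducing the reported statistic:
# restricted = Markov chain, full = mover-stayer
stat, reject = lr_test(-5000.0, -4781.22, 38.0)
```

Because the Markov chain is nested inside the mover-stayer model (it is the special case s = 0), this one-sided comparison of log-likelihoods is the appropriate test of whether allowing stayers improves the fit.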
Table-13. Actual December 2014 Defaults Compared to the Estimated Ones
Rating Grade | Actual PD (%) | Forecast PD (%): Cohort | Forecast PD (%): Duration | Percentage Difference: Cohort | Percentage Difference: Duration |
A | 1.06 | 0.75 | 1.02 | 29% | 4% |
B | 1.17 | 0.85 | 1.21 | 27% | -3% |
C | 2.16 | 2.01 | 2.25 | 7% | -4% |
D | 1.14 | 0.71 | 1.03 | 38% | 10% |
E | 1.35 | 1.91 | 1.10 | -41% | 19% |
F | 2.54 | 2.02 | 2.82 | 20% | -11% |
G | 4.98 | 2.94 | 5.49 | 41% | -10% |
H | 55.99 | 29.53 | 55.35 | 47% | 1% |
I | 71.02 | 67.11 | 71.19 | 6% | 0% |
The above results show that the actual 2014 defaults are closely reflected by the duration approach, whose probability of default estimates are almost identical to the actual ones. Through this back-testing analysis we fail to reject the null hypothesis H0, which states that:
there is significant evidence that the mover-stayer model is an effective way of estimating default risk. This is fully supported by the analysis of the percentage differences between the actual and forecast values. Using the duration method, not a single rating has a difference exceeding 20%, while the cohort method shows differences of less than 50% in predicting the default risk of bank loans relative to the actual values.
From the above analysis it can therefore be concluded that there is significant evidence that the mover-stayer model is an effective way of estimating default risk, especially when combined with the duration method.
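The percentage differences in Table 13, and the back-test criterion applied to them, can be reproduced directly from the tabulated PDs:

```python
def pct_diff(actual, forecast):
    # signed percentage difference relative to the actual value, as in Table 13
    return round(100.0 * (actual - forecast) / actual)

# Actual December 2014 PDs and the two forecasts, grades A..I (Table 13)
actual   = [1.06, 1.17, 2.16, 1.14, 1.35, 2.54, 4.98, 55.99, 71.02]
cohort   = [0.75, 0.85, 2.01, 0.71, 1.91, 2.02, 2.94, 29.53, 67.11]
duration = [1.02, 1.21, 2.25, 1.03, 1.10, 2.82, 5.49, 55.35, 71.19]

coh_diffs = [pct_diff(a, f) for a, f in zip(actual, cohort)]
dur_diffs = [pct_diff(a, f) for a, f in zip(actual, duration)]
```

Every duration-method difference stays below 20% in absolute value and every cohort-method difference below 50%, which is exactly the evidence cited above.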
Table-14. Forecasted Percentage Probability of Default Using the Duration Method
Rating Grade | December 2014 | June 2015 | December 2015 |
A | 1.02 | 1.00 | 0.97 |
B | 1.21 | 1.12 | 1.17 |
C | 2.25 | 2.13 | 2.27 |
D | 1.03 | 1.12 | 1.15 |
E | 1.10 | 1.23 | 1.16 |
F | 2.82 | 2.59 | 2.78 |
G | 5.49 | 5.56 | 5.61 |
H | 55.35 | 55.40 | 55.52 |
I | 71.19 | 71.57 | 72.04 |
The cumulative accuracy profile (CAP) provides a way of visualizing discriminatory power. The key idea is the following: if a rating system discriminates well, defaults should occur mainly among borrowers with a bad rating. The CAP curve is built by plotting the cumulative proportion of defaults against the cumulative proportion of counterparties, and is compared against the curve of a random model. If the two are close to each other, then the rating system under investigation and the random model are not very different, and the resulting accuracy ratio, given in Table 15 below, will be close to zero; this would mean that the rating system simply allocates obligors randomly to risk grades. The discriminatory power of a rating system improves the farther its curve lies from the curve of the random model.
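The CAP construction described above, and the accuracy ratio derived from it, can be sketched generically as follows. The grade-level counts are hypothetical: grades are ordered from worst to best, and the area between the model curve and the diagonal is scaled by the corresponding area for a perfect model.

```python
def cap_accuracy_ratio(grades):
    # grades: (n_obligors, n_defaults) per rating grade, worst grade first
    total_n = sum(n for n, d in grades)
    total_d = sum(d for n, d in grades)
    x = y = 0.0
    area = 0.0  # area under the CAP curve (trapezoidal rule)
    for n, d in grades:
        x1, y1 = x + n / total_n, y + d / total_d
        area += (x1 - x) * (y + y1) / 2.0
        x, y = x1, y1
    pi = total_d / total_n  # overall default rate
    # accuracy ratio = (model area above diagonal) / (perfect-model area)
    return (area - 0.5) / ((1.0 - pi) / 2.0)

# A system that puts every default in the worst grade is perfect (AR = 1);
# one whose grades all share the same default rate is random (AR = 0).
ar_perfect = cap_accuracy_ratio([(10, 10), (90, 0)])
ar_random = cap_accuracy_ratio([(50, 5), (50, 5)])
```

An accuracy ratio around 75–77%, as reported in Tables 15 and 16, therefore sits well away from the random benchmark of zero.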
The results show that the rating system's curve is far from the random model's, indicating that the model does not allocate obligors to risk grades randomly. The extent of its power is given precisely by the accuracy ratio in Table 15.
Table-15. Area-Under-Curve and Accuracy Ratio (AR) For Duration
Area Under Curve | 0.877 |
AR | 75% |
Qualification | GOOD |
The qualification of the accuracy ratio states that the duration approach has good discriminatory power.
Table-16. Area-Under-Curve and Accuracy Ratio (AR) For Cohort.
Area Under Curve | 0.885 |
AR | 77% |
Qualification | GOOD |
It was shown that there is a significant difference between the random model and the cohort curve. The calculated Accuracy Ratio supports this view and demonstrates that the model does have good discriminatory power.
The ROC, as already discussed, also serves to test the discriminatory power of rating systems and models. It aims to show that if "goods" are separated from "bads", their distributions should differ and should not overlap. The area under the curve is 0.5 for a random model lacking discriminatory power and 1 for a perfect model; in practice, for any reasonable rating model, it lies between 0.5 and 1.0. The areas under the ROC for the cohort and duration curves are 0.928 and 0.951 respectively, which is close to the area under the ROC for a perfect model. Therefore, the retail and corporate scorecards have good discriminatory power under both the cohort and duration approaches.
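The ROC area can likewise be computed directly from grade-level counts of goods and bads, using the rank-statistic identity in which ties within a grade count one half. The counts below are hypothetical. Note also the standard link between the two measures, AR = 2 × AUC − 1, which is consistent with Tables 15 and 16 (2 × 0.877 − 1 ≈ 75% and 2 × 0.885 − 1 ≈ 77%).

```python
def roc_auc(grades):
    # grades: (n_goods, n_defaults) per rating grade, worst grade first;
    # AUC = P(a default is ranked riskier than a good) + 0.5 * P(tie)
    total_g = sum(g for g, d in grades)
    total_d = sum(d for g, d in grades)
    goods_seen = 0.0
    num = 0.0
    for g, d in grades:
        # defaults here outrank all goods sitting in better (later) grades,
        # and tie with the goods sharing this grade
        num += d * ((total_g - goods_seen - g) + 0.5 * g)
        goods_seen += g
    return num / (total_g * total_d)

auc_perfect = roc_auc([(0, 10), (90, 0)])  # all defaults in the worst grade
auc_single = roc_auc([(90, 10)])           # one grade only: no discrimination
```

An AUC of 0.5 corresponds to the random model and 1.0 to the perfect one, matching the bounds stated above.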
Speculative rating classes had higher default probabilities than other rating classes, and a greater proportion of them are movers, while the investment-grade rating classes had a high likelihood (more than 65%) of remaining at their original ratings, with a significantly low probability of default of less than 2.5%. There was also significant evidence of downgrade momentum and very few upgrade patterns. The time-homogeneity tests clearly showed that time-homogeneity is not present in the data set used; the data were time-inhomogeneous. This time-inhomogeneity result suggests that banks ought to manage their credit portfolios in line with the age of the obligation. The embedding problem observed in the cohort validation matrix does not arise when using the duration method. The duration method is also capable of estimating confidence intervals for all migration probabilities, it is simple to follow and implement, and its empirical results can enable banks and financial institutions to monitor their default risk quite closely. By using the duration method, banks and other financial institutions will be able to make more efficient lending decisions and meet the Basel Capital Accord requirements.
The cumulative PD results are vital, as the projected PDs determine the amount of capital buffer that must be put in reserve for, for example, expected losses. This knowledge is thus imperative for putting correct prices on transactions, such as advances and derivatives, that require buffer capital to be set aside for counterparty credit risk.
It can be concluded that the results support the use of a continuous-time model, since it produces tighter confidence intervals than a discrete-time model. This implies that the transition probabilities can be used to take a more aggressive position in an asset without incurring more risk. The results also showed that the mover-stayer model is superior to the Markov chain model in determining default risk, and there was indeed significant evidence that the mover-stayer model is an effective way of estimating default risk. The effectiveness was shown through the process of back testing. Overall, the researcher believes that this recommended model may well help financial institutions to measure their credit risk more effectively.
Funding: This study received no specific financial support. |
Competing Interests: The authors declare that they have no competing interests. |
Contributors/Acknowledgement: All authors contributed equally to the conception and design of the study. |
Black, F. and M. Scholes, 1973. The pricing of options and corporate liabilities. Journal of Political Economy, 81(3): 637-654.
Caouette, J.B., E.I. Altman, P. Narayanan and R. Nimmo, 2008. Managing credit risk: The great challenge for the global financial markets. 2nd Edn. John Wiley & Sons, Inc.
Chinamasa, P., 2014. Zimbabwe mid-year fiscal policy statement. Ministry of Finance and Economic Development, Zimbabwe, July 2014: 32-33.
Christopher, M., 2002. The fundamentals of risk measurement. 1st Edn. Britain: IRM, Airmic and Alarm.
Claudia, C. and P. Carolin, 2008. Modeling dependencies between rating categories and their effects on prediction in a credit risk portfolio. Applied Stochastic Models in Business and Industry, 24(3): 237-259.
Duffie, D. and K.J. Singleton, 1999. Modeling term structures of defaultable bonds. Review of Financial Studies, 12(4): 687-720.
Frydman, H. and T. Schuermann, 2008. Credit rating dynamics and Markov mixture models. Journal of Banking & Finance, 32(6): 1062-1075.
Goodman, A., 1957. Mathematical applications in real world situations. 2nd Edn. New York: John Wiley and Sons. pp: 86-87.
Hanson, S.G., M.H. Pesaran and T. Schuermann, 2008. Firm heterogeneity and credit risk diversification. Journal of Empirical Finance, 15(4): 583-612.
Jarrow, R.A. and S.M. Turnbull, 1995. Pricing derivatives on financial securities subject to credit risk. Journal of Finance, 50(1): 53-85.
Kuan, C.C. and P. Chung-Yu, 2012. An empirical study of credit risk efficiency of banking industry in Taiwan. Web Journal of Chinese Management Review, 15(1): 1-16.
Lando, D. and T. Skodeberg, 2002. Analyzing rating transitions and rating drift with continuous observations. Journal of Banking & Finance, 26(2-3): 423-444.
Merton, R.C., 1974. On the pricing of corporate debt: The risk structure of interest rates. Journal of Finance, 29(2): 449-470.
Njanike, K., 2009. The impact of effective credit risk management on bank survival. Annals of the University of Petrosani, Economics, 9(2): 173-184.
Jarrow, R.A., D. Lando and S.M. Turnbull, 1997. A Markov model for the term structure of credit risk spreads. Review of Financial Studies, 10(2): 481-523.
Thomas, L.C. and M. Malik, 2010. Comparison of credit risk models for portfolios of retail loans based on behavioural scores. In: Rösch, D. and Scheule, H. (eds.), Model Risk in Financial Crises. London, UK: Risk Books. pp: 209-232.