Portfolio credit risk modelling and CDO pricing: analytics and implied trees from CDO tranches

Publication Type:
Thesis
Issue Date:
2010
One of the most successful, and most controversial, financial innovations of recent years has been the collateralised debt obligation (CDO). The high-dimensional dependency embedded in a typical CDO structure poses great challenges for researchers: generating realistic default dynamics and correlation while at the same time achieving fast and accurate model calibration. The research presented in this thesis contributes to the class of bottom-up models, which, as opposed to top-down models, start by modelling the default process of each individual obligor and then aggregate these processes through a dependency structure to build the loss distribution at the portfolio level.

The Gaussian copula model (Li 2000) is a static copula model with only one correlation parameter, which can be calibrated to one CDO tranche at a time. Its simplicity has won it widespread industry adoption, even though it suffers from the 'correlation smile': it cannot fit the market in an arbitrage-free manner across the capital structure. The first contribution of this thesis is a sensitivity analysis of the expected losses of CDO tranches with respect to the model parameters of the Gaussian and NIG copula models. This study provides substantial insight into the nature of the dependency structure.

In addition, we apply the intensity approach to credit modelling in order to imply market loss distributions non-parametrically in the form of a binomial lattice. Within this framework we develop a series of three models. The static binomial model can be calibrated exactly to the CDS index tranches with a single set of parameters; it can be viewed as a non-parametric copula model that is arbitrage-free in the capital-structure dimension. Static models, however, are not suitable for pricing portfolio credit derivatives that are dynamic in nature. The static model extends naturally to a dynamic binomial model that satisfies the no-arbitrage conditions in the time dimension, but this setup reduces model flexibility and calibration speed. The computational burden stems from the non-Markovian character of the default process in the dynamic model. Inspired by Mortensen (2006), who conditions on the integrated intensity, we modify the dynamic model into a Markovian model by modelling the intensity integral directly, which greatly reduces computation time and improves the fit in calibration. We also show that, when stochastic recovery rates are involved, a third no-arbitrage condition on the expected loss process must be built into the Markovian model.

For all binomial models we adopt the Cross Entropy method as the optimisation algorithm for model calibration. It is particularly advantageous for large-scale non-linear optimisation problems with multiple local extrema, as encountered in our models.
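For readers unfamiliar with the one-factor Gaussian copula referenced above, the following Python sketch illustrates how a single correlation parameter drives tranche expected losses via Monte Carlo, and hence why one parameter cannot match every tranche at once. It is a minimal illustration under assumed inputs, not the thesis's implementation: the portfolio size, default probability, recovery rate and attachment points are hypothetical placeholders.

```python
import numpy as np
from scipy.stats import norm

def gaussian_copula_tranche_el(rho, p_default, n_names=125, recovery=0.4,
                               attach=0.03, detach=0.07, n_sims=50_000, seed=0):
    """Monte Carlo expected loss (as a fraction of tranche notional) of a CDO
    tranche under the one-factor Gaussian copula with correlation rho."""
    rng = np.random.default_rng(seed)
    threshold = norm.ppf(p_default)                  # default barrier per name
    m = rng.standard_normal((n_sims, 1))             # common (systematic) factor
    z = rng.standard_normal((n_sims, n_names))       # idiosyncratic factors
    x = np.sqrt(rho) * m + np.sqrt(1.0 - rho) * z    # latent asset values
    defaults = (x < threshold).mean(axis=1)          # fraction of names defaulted
    loss = (1.0 - recovery) * defaults               # portfolio loss fraction
    tranche_loss = np.clip(loss - attach, 0.0, detach - attach)
    return tranche_loss.mean() / (detach - attach)

# The 'correlation smile' in a nutshell: a single rho cannot reproduce the
# market prices of all tranches, so each tranche implies its own correlation.
for rho in (0.1, 0.3, 0.5):
    print(rho, gaussian_copula_tranche_el(rho, p_default=0.05))
```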
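The Cross Entropy method used for calibration can be sketched generically as follows: sample candidate parameter vectors from a parametric density, keep the elite fraction with the best objective values, refit the density to the elites, and iterate. The objective below (a Rastrigin test function), sample sizes and stopping rule are illustrative assumptions and say nothing about the thesis's actual calibration target.

```python
import numpy as np

def cross_entropy_minimise(objective, dim, n_samples=200, elite_frac=0.1,
                           n_iters=50, seed=0):
    """Generic Cross Entropy minimiser: sample candidates from a Gaussian,
    keep the lowest-scoring elite fraction, and refit the Gaussian to the
    elites until the sampling distribution concentrates."""
    rng = np.random.default_rng(seed)
    mean, std = np.zeros(dim), np.ones(dim) * 3.0    # initial sampling density
    n_elite = max(1, int(elite_frac * n_samples))
    for _ in range(n_iters):
        samples = rng.normal(mean, std, size=(n_samples, dim))
        scores = np.apply_along_axis(objective, 1, samples)
        elites = samples[np.argsort(scores)[:n_elite]]
        mean, std = elites.mean(axis=0), elites.std(axis=0) + 1e-6
    return mean

# Toy multimodal objective: the method's robustness to many local minima is
# the property that makes it attractive for large-scale calibration problems.
rastrigin = lambda x: 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))
print(cross_entropy_minimise(rastrigin, dim=3))
```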