Housing


February 6, 2024, by Cristian

Our latest preprint, titled “Attention-based dynamic multilayer graph neural networks for loan default prediction”, introduces a novel model that could enhance the accuracy of credit risk assessments.

Credit Scoring and Correlated Default

The inspiration for this work comes from our previous studies, which clearly show that borrowers are not isolated entities, but part of a complex web of connections that can influence their probability of default. This interconnectedness suggests that a borrower’s risk of default may be impacted not just by their financial situation, but also by the network of relationships they are part of.

Our study leverages these insights, proposing a model that combines Graph Neural Networks (GNNs) and Recurrent Neural Networks (RNNs) to assess credit risk. The model operates on dynamic multilayer networks, with each layer reflecting a different source of connection between borrowers, such as geographical location or choice of mortgage provider, and captures how these connections evolve over time.
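To make the multilayer idea concrete, here is a minimal sketch of how such a network could be assembled from borrower attributes. The attribute names, values, and the `build_layer` helper are illustrative assumptions, not the paper's actual data pipeline; each layer links borrowers who share a given attribute, and each time step would get its own snapshot.

```python
# Hypothetical sketch: build one layer of a multilayer borrower network
# by connecting borrowers who share an attribute value. Illustrative only.
from collections import defaultdict

def build_layer(borrowers, attribute):
    """Connect every pair of borrowers sharing the same attribute value."""
    groups = defaultdict(list)
    for borrower_id, attrs in borrowers.items():
        groups[attrs[attribute]].append(borrower_id)
    edges = set()
    for members in groups.values():
        for i, a in enumerate(members):
            for b in members[i + 1:]:
                edges.add((min(a, b), max(a, b)))  # undirected edge
    return edges

# One monthly snapshot: borrower id -> attributes (made-up values)
snapshot = {
    1: {"zip": "10001", "lender": "A"},
    2: {"zip": "10001", "lender": "B"},
    3: {"zip": "94105", "lender": "A"},
}

# Two layers over the same node set, one per connection type
geo_layer = build_layer(snapshot, "zip")        # borrowers in the same area
lender_layer = build_layer(snapshot, "lender")  # borrowers with the same provider
```

Repeating this over successive snapshots yields the dynamic sequence of multilayer graphs the model consumes.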

How It Works

GNNs are a class of deep learning models designed to operate on graphs — structures that represent relationships between entities. These models are adept at capturing the complex patterns inherent in networks of borrowers. On the other hand, RNNs excel at processing sequential data, making them ideal for analyzing the temporal dynamics of these borrower networks.
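As a rough illustration of these two building blocks, the sketch below implements one GNN message-passing step (mean aggregation over neighbours) and one vanilla RNN update in plain numpy. The shapes, update rules, and random weights are assumptions for exposition, not the architecture used in the paper.

```python
# Minimal sketch of the two components: a graph-convolution step and a
# recurrent step. Weights are random placeholders, not trained parameters.
import numpy as np

def gnn_step(H, A, W):
    """One message-passing step: average each node's neighbours
    (including itself), then apply a linear map and a nonlinearity."""
    A_hat = A + np.eye(A.shape[0])           # add self-loops
    deg = A_hat.sum(axis=1, keepdims=True)   # node degrees
    return np.tanh((A_hat / deg) @ H @ W)    # aggregate, then transform

def rnn_step(h_prev, x, W_h, W_x):
    """One recurrent update combining the previous state with new input."""
    return np.tanh(h_prev @ W_h + x @ W_x)

rng = np.random.default_rng(0)
n_nodes, d = 4, 3
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)    # a small chain graph
X = rng.normal(size=(n_nodes, d))            # node features at one time step
W = rng.normal(size=(d, d))

# Per time step: embed the graph with the GNN, then feed the node
# embeddings into the RNN so their evolution is tracked over time.
h = np.zeros((n_nodes, d))
for _ in range(3):                           # three illustrative time steps
    Z = gnn_step(X, A, W)
    h = rnn_step(h, Z, rng.normal(size=(d, d)), rng.normal(size=(d, d)))
```

The key design point is the division of labour: the GNN summarizes each borrower's neighbourhood at a given moment, and the RNN threads those summaries together across moments.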

The model introduced in our study, by our PhD student Sahab Zandi, in collaboration with Prof. Christophe Mues and Kamesh Korangi from the University of Southampton and Prof. María Óskarsdóttir from Reykjavík University, adds a layer of sophistication with an attention mechanism. This mechanism prioritizes certain time points over others, based on their relevance to the borrower’s default risk. Such an approach allows for a more nuanced analysis, distinguishing the model from traditional methods of credit scoring.
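A temporal attention mechanism of this kind can be sketched in a few lines: score each time point's hidden state, softmax the scores into weights, and pool. The dot-product scoring function below is one common choice, assumed here for illustration rather than taken from the paper.

```python
# Sketch of attention over time points: relevant time steps receive
# larger weights in the pooled summary. Scoring function is assumed.
import numpy as np

def temporal_attention(H, a):
    """H: (T, d) hidden states over T time points; a: (d,) score vector.
    Returns the attention-weighted summary and the weights."""
    scores = H @ a                          # one relevance score per time step
    weights = np.exp(scores - scores.max()) # numerically stable softmax
    weights /= weights.sum()
    return weights @ H, weights             # weighted pooling over time

rng = np.random.default_rng(1)
H = rng.normal(size=(5, 4))                 # hidden states for 5 time points
summary, w = temporal_attention(H, rng.normal(size=4))
# The weights form a distribution over time, so the most relevant time
# points dominate the summary passed to the default classifier.
```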

Empirical Evidence of Superior Performance

When tested against a dataset provided by U.S. mortgage financier Freddie Mac, the model not only outperformed traditional credit scoring methods but also offered more in-depth insights into the nature of default risk. We found that the model’s ability to account for the dynamic and multilayered nature of borrower connections enhanced its predictive accuracy. This suggests that the future of credit scoring lies in the ability to understand and model the complex web of relationships that influence financial behaviour.

Looking Ahead

The implications of this study are multifaceted. For lenders, adopting such models could help them understand how risk propagates across individual borrowers and entire portfolios in a high-stakes market. For borrowers, it could translate into greater access to credit by empowering so-called ‘Second Look’ models, which give thin-file borrowers a more detailed evaluation; our results can be part of such an evaluation. And for the field of operational research and finance at large, this study paves the way for further exploration of machine learning and network science in multilayered, dynamic environments.

As we move forward, the exploration of even more sophisticated models — incorporating additional layers to capture a broader array of connections or employing different types of GNNs and RNNs — promises to unlock new insights into credit risk and beyond. The journey towards a more interconnected and intelligent approach to credit scoring is just beginning, and its potential benefits for both lenders and borrowers are immense.

Interested in the topic? Read the working paper on arXiv!



April 17, 2023, by Cristian

I was on the CBC panel again this weekend! This week we spoke about the BoC’s decision to keep the monetary policy rate steady, the Mercer report on Millennial renters needing 50% more upon retirement (not a fan of the study) and Amazon’s Bedrock & Titan, although my producer cut me off because we were running out of time. I had a lot more to say about AI!

What I didn’t say on Saturday: I believe we will end up in a three-tiered world. A first tier of companies developing these models, having the technological capacity and data availability to train them properly. A second tier of companies that can take the outputs of these models, or available public models, and fine-tune them over either private or public infrastructure (BloombergGPT, for example, and several research projects I am working on). And a third tier of companies that will be technology takers, deploying these technologies either via live services (such as Amazon Bedrock) or via prepackaged assistants (such as LLM-powered Bing or Microsoft’s Copilot).

See the panel below.