Our research focuses on using reinforcement learning (RL) to address the credit limit modification problem for companies offering credit card products. This involves two main challenges: defining the RL problem for this specific task and training the RL agent without conducting online experiments with customers.
To define the RL problem, we build the state from the cardholder's financial history and weigh the expected losses from default when deciding whether to increase or maintain a credit limit. The action space contains two actions: increase the limit or keep it unchanged. The reward is the expected profit, computed so that it accounts for the revolving nature of credit card balances, an aspect that previous studies overlooked in their profit calculations.
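As a rough illustration of a profit-based reward that includes the revolving component, consider the sketch below. The decomposition into interchange revenue, revolving interest, and expected loss, along with every name and parameter, is an assumption for exposition; the paper defines its own reward formula.

```python
# Illustrative RL setup (NOT the paper's exact definitions).
# Two actions: keep the current limit or increase it.
ACTIONS = ("keep_limit", "increase_limit")

def expected_profit(purchase_volume: float,
                    interchange_rate: float,
                    revolving_balance: float,
                    monthly_rate: float,
                    expected_loss: float) -> float:
    """Hypothetical reward: interchange revenue on purchases, plus interest
    earned on the revolving balance, minus the expected loss from default."""
    return (purchase_volume * interchange_rate
            + revolving_balance * monthly_rate
            - expected_loss)

# A cardholder who spends 1,000, revolves 500 at a 3% monthly rate, and
# contributes 10 in expected loss yields a reward of 25.
r = expected_profit(1000.0, 0.02, 500.0, 0.03, 10.0)
```

The point of the decomposition is that a cardholder who revolves a balance generates interest income on top of interchange revenue, which is why ignoring the revolving component understates profit.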
To train the RL agent offline, we simulate the balance that follows each action with a two-stage model: a classifier first selects the balance type, and a regressor then predicts the balance amount. In our experiments, the trained Double Q-learning agent outperformed the alternative strategies, including the one used by Rappi, our collaborator in this research. Rappi is a Latin American fintech company known for its delivery and commerce services that has also moved into banking with its RappiCard credit card.
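The two-stage simulation can be sketched as follows. The balance types, the stand-in classifier and regressors, and the feature names are all hypothetical placeholders; in the paper both stages are fitted models trained on historical data.

```python
import random

def simulate_balance(features, classify, regressors, rng):
    """Two-stage balance simulator (illustrative sketch).

    Stage 1: `classify` maps features to a distribution over balance
    types, and one type is sampled from it.
    Stage 2: the per-type regressor for the sampled type predicts the
    balance amount.
    """
    probs = classify(features)                               # stage 1
    types = list(probs)
    btype = rng.choices(types, weights=[probs[t] for t in types], k=1)[0]
    amount = regressors[btype](features)                     # stage 2
    return btype, amount

# Toy stand-ins for the fitted models (purely hypothetical):
rng = random.Random(0)
classify = lambda f: {"zero": 0.0, "revolving": 1.0}
regressors = {"zero": lambda f: 0.0,
              "revolving": lambda f: 2.0 * f["spend"]}
btype, amount = simulate_balance({"spend": 50.0}, classify, regressors, rng)
```

Separating "which kind of balance" from "how much" lets the simulator capture the large mass of zero or fully paid balances that a single regressor would smear over.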
Our research contributes a conceptual framework for applying RL to credit limit adjustment and emphasizes data-driven decision-making over reliance on expert judgment alone. Furthermore, we found that incorporating additional predictors did not improve the simulator's performance, which suggests that fintech companies do not necessarily hold an advantage over traditional banking institutions in this specific task. Figure 1 provides an overview of the proposed methodology’s general workflow.
Figure 1: Methodology’s general workflow.
Link to the working paper: https://arxiv.org/abs/2306.15585