A reinforcement learning based collection approach

Publication Date

2023

Advisor

Institution Author

Gönen, Mehmet
Akgün, Barış

Co-Authors

Tozlu, Ibrahim
Vardarli, Elif
Pekey, Mert
Yavuzyilmaz, Ufuk
Aydin, Ugur
Koras, Murat


Publisher

IEEE

Type

Conference proceeding

Abstract

Reaching out to customers for debt collection is an important process for banks. The most commonly used channels are text messages and phone calls. While phone calls are more effective, they are subject to a daily capacity limit. Currently, the customers to be called are determined by a rule-based system whose rules depend on the customer's risk segment and the number of late days. It is anticipated that making customer-specific decisions is more efficient than relying on general segments. In this study, an offline reinforcement learning-based approach that uses existing data to make call decisions for individual customers has been developed. In this formulation, customer information, customer behaviors, and previous collection actions define the state space, while whether or not to call the customer defines the action space. Furthermore, a reward function based on call costs and late days is designed. This formulation is then used to learn a Q-value model with the random ensemble method. A call-decision approach is developed using the output of this model alongside the daily call capacity and additional rules. In live A/B tests, the developed method was observed to outperform the current rule-based method.
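The paper itself does not include code. As a minimal illustrative sketch (not the authors' implementation), the following shows how a random-ensemble-mixture-style Q model could drive capacity-limited call decisions: a convex combination of Q-heads scores each customer state, and customers with the largest advantage of calling over not calling are selected up to the daily capacity. The linear Q-heads, all function and variable names, and the Dirichlet mixing weights are assumptions made for illustration; the reward design and the offline training loop are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def rem_q_values(state, weight_list, bias_list, alphas):
    """Random-ensemble-mixture-style estimate: a convex combination of K
    Q-heads (here linear for simplicity). Each head maps a state vector
    to Q-values for the two actions (0 = do not call, 1 = call)."""
    q = np.zeros(2)
    for w, b, a in zip(weight_list, bias_list, alphas):
        q += a * (state @ w + b)
    return q

def select_customers_to_call(states, weight_list, bias_list, alphas, capacity):
    """Rank customers by the advantage of calling (Q_call - Q_no_call)
    and call the highest-advantage customers, up to the daily capacity.
    Only customers with a positive advantage are considered."""
    advantages = [rem_q_values(s, weight_list, bias_list, alphas)[1]
                  - rem_q_values(s, weight_list, bias_list, alphas)[0]
                  for s in states]
    order = np.argsort(advantages)[::-1]
    called = [int(i) for i in order if advantages[i] > 0][:capacity]
    return sorted(called)

# Toy usage with K = 4 random heads over a 3-dimensional state space.
K, d = 4, 3
weights = [rng.normal(size=(d, 2)) for _ in range(K)]
biases = [rng.normal(size=2) for _ in range(K)]
alphas = rng.dirichlet(np.ones(K))  # convex mixture over the heads
states = rng.normal(size=(10, d))   # 10 hypothetical customers
chosen = select_customers_to_call(states, weights, biases, alphas, capacity=3)
```

In practice the additional business rules mentioned in the abstract (e.g. segment- or lateness-based constraints) would filter the candidate set before or after this ranking step.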

Subject

Computer engineering, Electrical and electronic engineering
