A reinforcement learning based collection approach

dc.contributor.authorid: 0000-0002-2483-075X
dc.contributor.authorid: 0000-0002-4079-6889
dc.contributor.coauthor: Tozlu, Ibrahim
dc.contributor.coauthor: Vardarli, Elif
dc.contributor.coauthor: Pekey, Mert
dc.contributor.coauthor: Yavuzyilmaz, Ufuk
dc.contributor.coauthor: Aydin, Ugur
dc.contributor.coauthor: Koras, Murat
dc.contributor.department: Department of Industrial Engineering
dc.contributor.department: Department of Computer Engineering
dc.contributor.kuauthor: Gönen, Mehmet
dc.contributor.kuauthor: Akgün, Barış
dc.contributor.kuprofile: Faculty Member
dc.contributor.kuprofile: Faculty Member
dc.contributor.schoolcollegeinstitute: College of Engineering
dc.contributor.schoolcollegeinstitute: College of Engineering
dc.contributor.yokid: 237468
dc.contributor.yokid: 258784
dc.date.accessioned: 2025-01-19T10:31:11Z
dc.date.issued: 2023
dc.description.abstract: Reaching out to customers for debt collection is an important process for banks. The most commonly used channels are text messages and phone calls. While phone calls are more effective, there is a daily capacity limit on them. Currently, the customers to be called are determined by a rule-based system whose rules are based on the customer's risk segment and the number of late days. Making a customer-specific decision is anticipated to be more efficient than using general segments. In this study, an offline reinforcement learning-based approach that uses existing data to make call decisions for individual customers has been developed. In this approach, customer information, customer behaviors, and previous collection actions define the state space, and whether or not to call the customer defines the action space. Furthermore, a reward function based on call costs and late days is designed. This formulation is then used to learn a Q-value model with the random ensemble method. A call-decision approach is built on the output of this model together with the daily call capacity and additional rules. In live A/B tests, the developed method was observed to yield better results than the current rule-based method.
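
The abstract describes the formulation only at a high level; the following is a minimal, hypothetical sketch of such a pipeline, assuming the "random ensemble method" refers to a Random Ensemble Mixture (REM)-style Q-ensemble and using PyTorch. Every identifier (EnsembleQ, rem_loss, select_calls), the feature dimensions, and the reward weighting are illustrative assumptions, not the authors' implementation.

    # Hypothetical sketch (not the authors' code): a random ensemble of Q-heads
    # trained offline on logged collection data, then used to rank customers for
    # calls under a daily capacity limit. Dimensions and weights are placeholders.
    import torch
    import torch.nn as nn

    STATE_DIM = 16   # assumed size of customer info + behavior + action-history features
    N_ACTIONS = 2    # 0 = do not call, 1 = call
    N_HEADS = 4      # ensemble size (assumption)

    class EnsembleQ(nn.Module):
        """K independent Q-heads over a shared state encoder."""
        def __init__(self):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU())
            self.heads = nn.ModuleList(nn.Linear(64, N_ACTIONS) for _ in range(N_HEADS))

        def forward(self, s):                          # s: (batch, STATE_DIM)
            h = self.encoder(s)
            return torch.stack([head(h) for head in self.heads], dim=1)  # (batch, K, A)

    def reward(called, late_days_delta, call_cost=1.0):
        # Assumed shape of the paper's reward: penalize the call cost and any
        # growth in late days (the weighting is a placeholder).
        return -call_cost * called - late_days_delta

    def rem_loss(q_net, target_net, batch, gamma=0.99):
        # One offline TD step on a random convex mixture of the ensemble heads,
        # in the spirit of Random Ensemble Mixture (REM).
        s, a, r, s2, done = batch                      # a: int64, done: float in {0, 1}
        alphas = torch.distributions.Dirichlet(torch.ones(N_HEADS)).sample()
        mix = lambda q: (q * alphas.view(1, -1, 1)).sum(dim=1)   # (batch, A)
        q_sa = mix(q_net(s)).gather(1, a.view(-1, 1)).squeeze(1)
        with torch.no_grad():
            target = r + gamma * (1.0 - done) * mix(target_net(s2)).max(dim=1).values
        return nn.functional.mse_loss(q_sa, target)

    def select_calls(q_net, states, daily_capacity):
        # Rank customers by the Q-advantage of calling over not calling and keep
        # the top ones with positive advantage, respecting the daily capacity.
        with torch.no_grad():
            q = q_net(states).mean(dim=1)              # average the ensemble heads
            advantage = q[:, 1] - q[:, 0]
        top = torch.argsort(advantage, descending=True)[:daily_capacity]
        return top[advantage[top] > 0]

    if __name__ == "__main__":
        q_net = EnsembleQ()
        states = torch.randn(100, STATE_DIM)           # 100 synthetic customers
        print("calling", select_calls(q_net, states, daily_capacity=10).numel(), "customers")

In a production setting the selected set would additionally be filtered by the capacity and rule-based constraints the abstract mentions; those rules are not specified in the record, so they are omitted here.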
dc.description.indexedby: WoS
dc.description.indexedby: Scopus
dc.description.publisherscope: International
dc.identifier.doi: 10.1109/SIU59756.2023.10223927
dc.identifier.isbn: 979-8-3503-4355-7
dc.identifier.issn: 2165-0608
dc.identifier.quartile: N/A
dc.identifier.scopus: 2-s2.0-85173463035
dc.identifier.uri: https://doi.org/10.1109/SIU59756.2023.10223927
dc.identifier.uri: https://hdl.handle.net/20.500.14288/26176
dc.identifier.wos: 1062571000157
dc.keywords: Offline reinforcement learning
dc.keywords: Banking
dc.keywords: Decision aids
dc.keywords: Artificial intelligence
dc.language: en
dc.language: tr
dc.publisher: IEEE
dc.source: 2023 31st Signal Processing and Communications Applications Conference (SIU)
dc.subject: Computer engineering
dc.subject: Electrical and electronic engineering
dc.title: A reinforcement learning based collection approach
dc.title.alternative: Pekiştirmeli öğrenme tabanlı bir tahsilat yaklaşımı
dc.type: Conference proceeding
