Publication:
Event-triggered reinforcement learning based joint resource allocation for ultra-reliable low-latency V2X communications

dc.contributor.departmentDepartment of Electrical and Electronics Engineering
dc.contributor.kuauthorErgen, Sinem Çöleri
dc.contributor.kuauthorKhan, Nasir
dc.contributor.otherDepartment of Electrical and Electronics Engineering
dc.contributor.schoolcollegeinstituteCollege of Engineering
dc.contributor.schoolcollegeinstituteGraduate School of Sciences and Engineering
dc.date.accessioned2024-12-29T09:41:22Z
dc.date.issued2024
dc.description.abstractFuture 6G-enabled vehicular networks face the challenge of ensuring ultra-reliable low-latency communication (URLLC) for delivering safety-critical information in a timely manner. Existing resource allocation schemes for vehicle-to-everything (V2X) communication systems primarily rely on traditional optimization-based algorithms. However, these methods often fail to guarantee the strict reliability and latency requirements of URLLC applications in dynamic vehicular environments, owing to the high complexity and communication overhead of the solution methodologies. This paper proposes a novel deep reinforcement learning (DRL)-based framework for joint power and block length allocation to minimize the worst-case decoding-error probability in the finite block length (FBL) regime for a URLLC-based downlink V2X communication system. The problem is formulated as a non-convex mixed-integer nonlinear programming (MINLP) problem. Initially, an algorithm grounded in optimization theory is developed by deriving the joint convexity of the decoding error probability in the block length and transmit power variables within the region of interest. Subsequently, an efficient event-triggered DRL-based algorithm is proposed to solve the joint optimization problem. Incorporating event-triggered learning into the DRL framework enables assessing whether to initiate the DRL process, thereby reducing the number of DRL process executions while maintaining reasonable reliability performance. The DRL framework consists of a two-layered structure. In the first layer, multiple deep Q-networks (DQNs) are established at the central trainer for block length optimization. The second layer involves an actor-critic network and utilizes the deep deterministic policy gradient (DDPG)-based algorithm to optimize the power allocation.
Simulation results demonstrate that the proposed event-triggered DRL scheme achieves 95% of the performance of the joint optimization scheme while reducing the number of DRL executions by up to 24% across different network settings.
dc.description.indexedbyWoS
dc.description.indexedbyScopus
dc.description.publisherscopeInternational
dc.description.sponsoredbyTubitakEuTÜBİTAK
dc.description.sponsorsNasir Khan and Sinem Coleri are with the department of Electrical and Electronics Engineering, Koc University, Istanbul, Turkey, email: {nkhan20, scoleri}@ku.edu.tr. This work is supported by Scientific and Technological Research Council of Turkey Grant #119C058 and Ford Otosan.
dc.identifier.doi10.1109/TVT.2024.3424398
dc.identifier.issn0018-9545
dc.identifier.quartileN/A
dc.identifier.scopus2-s2.0-85198247239
dc.identifier.urihttps://doi.org/10.1109/TVT.2024.3424398
dc.identifier.urihttps://hdl.handle.net/20.500.14288/23608
dc.identifier.wos1359239100082
dc.keywords6G networks
dc.keywordsDeep reinforcement learning (DRL)
dc.keywordsError probability
dc.keywordsEvent-triggered learning
dc.keywordsFinite block length transmission
dc.keywordsOptimization
dc.keywordsReliability
dc.keywordsReliability engineering
dc.keywordsResource management
dc.keywordsUltra reliable low latency communication
dc.keywordsUltra-reliable and low-latency communications (URLLC)
dc.keywordsVehicle-to-everything
dc.keywordsvehicle-to-everything (V2X) communication
dc.keywordsVehicular networks
dc.languageen
dc.publisherInstitute of Electrical and Electronics Engineers Inc.
dc.sourceIEEE Transactions on Vehicular Technology
dc.subjectElectrical and electronics engineering
dc.titleEvent-triggered reinforcement learning based joint resource allocation for ultra-reliable low-latency V2X communications
dc.typeJournal article
dspace.entity.typePublication
local.contributor.kuauthorErgen, Sinem Çöleri
local.contributor.kuauthorKhan, Nasir
relation.isOrgUnitOfPublication21598063-a7c5-420d-91ba-0cc9b2db0ea0
relation.isOrgUnitOfPublication.latestForDiscovery21598063-a7c5-420d-91ba-0cc9b2db0ea0

Files