
Incentivizing Cooperative Merging Control: Insights from Multi-Agent Deep Reinforcement Learning

by Anye Zhou, Zejiang Wang, Joseph W Beck, Adian S Cook
Publication Type: Conference Paper
Journal Name: IFAC-PapersOnLine
Page Numbers: 103 to 108
Volume: 59
Issue: 3
Conference Name: Intelligent Autonomous Vehicles (IAV)
Conference Location: Phoenix, Arizona, United States of America
Conference Sponsor: AAAC; IFAC

Cooperative driving automation enables connected and automated vehicles (CAVs) to devise cooperative merging control, offering great potential to alleviate traffic congestion, reduce energy consumption, and enhance safety in highway on-ramp operations. Although numerous CAV cooperative merging algorithms have been developed to improve energy and traffic performance, agreement-seeking among CAV users and their local benefits have been understudied. This can lead to rejection of cooperative merging plans and jeopardize CAV performance, as cooperation may require certain CAVs to sacrifice their local benefits to achieve a system optimum. To address this issue, the study first leverages multi-agent deep reinforcement learning (MADRL) that factors in both local and regional rewards to demonstrate the discrepancies between CAV users’ local benefits and the system optimum. Next, the existence of a correlated equilibrium is proved to characterize the convergence of MADRL training. This further facilitates the incorporation of incentives (computed from the reward discrepancies) that compensate CAV users for their local benefits and facilitate system-optimal agreements in cooperative merging operations.
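The two ideas in the abstract can be sketched concretely: each CAV trains on a reward that blends its local reward with a regional (system) reward, and an incentive is then computed from the discrepancy between the local reward a CAV would earn acting selfishly and what it earns under the cooperative plan. The sketch below is illustrative only; the function names, the linear weighting, and the numeric values are assumptions, not the paper's actual formulation.

```python
# Hypothetical sketch of reward blending and incentive computation for
# cooperative merging. The weighting scheme and all values are assumed,
# not taken from the paper.

def blended_reward(local_reward: float, regional_reward: float, w: float = 0.5) -> float:
    """Mix a CAV's own (local) reward with the regional/system reward.

    w = 0 recovers purely selfish training; w = 1 trains on the system
    objective alone.
    """
    return (1.0 - w) * local_reward + w * regional_reward

def incentive(local_reward_selfish: float, local_reward_cooperative: float) -> float:
    """Compensation that makes a CAV no worse off under the cooperative plan.

    If the system-optimal plan costs the CAV local benefit, the incentive
    covers the gap; otherwise no transfer is needed.
    """
    return max(0.0, local_reward_selfish - local_reward_cooperative)

# Example: a merging CAV earns a local reward of 8.0 acting selfishly but
# only 5.5 under the system-optimal plan, so a transfer of 2.5 would be
# needed for it to agree.
print(blended_reward(5.5, 9.0, w=0.5))  # 7.25
print(incentive(8.0, 5.5))              # 2.5
```

In this framing, the correlated equilibrium mentioned in the abstract guarantees that MADRL training converges to a joint plan, and the incentive transfer closes the gap that would otherwise cause individual CAVs to reject it.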