This repository contains the implementation of the deep reinforcement learning (DRL) framework proposed in the paper "Deep Reinforcement Learning-Based Online Resource Management for UAV-Assisted Edge Computing With Dual Connectivity," published in IEEE/ACM Transactions on Networking (2023). The framework balances computational workloads between a terrestrial base station (the Master eNodeB) and a drone base station (the Secondary eNodeB) in a dynamic edge computing system with dual connectivity. It follows an actor-critic structure, where the actor module is implemented as a Convolutional Neural Network (CNN) and the critic module is guided by the Lyapunov optimization framework.
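As a rough illustration of this idea (not the authors' exact design in memoryTF2conv.py), the sketch below shows how a small CNN actor could map an observed per-user state to relaxed offloading decisions, which a Lyapunov drift-plus-penalty score then ranks after quantization. All layer sizes, the choice of per-user features, and the penalty term are assumptions made only for this example.

```python
import numpy as np
import tensorflow as tf

N_USERS = 10     # assumed number of ground users
N_FEATURES = 3   # assumed per-user features: backlog, gain to MeNB, gain to SeNB

def build_actor():
    """A small CNN 'actor' mapping the state to relaxed per-user offloading decisions."""
    return tf.keras.Sequential([
        tf.keras.layers.Conv1D(32, kernel_size=3, padding="same", activation="relu"),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(N_USERS, activation="sigmoid"),
    ])

def lyapunov_score(action, backlog, penalty_weight=1.0):
    """Illustrative drift-plus-penalty score acting as a 'critic': lower is better."""
    drift = float(np.sum(backlog * (1.0 - action)))  # un-served backlog keeps queues large
    penalty = 0.1 * float(np.sum(action))            # assumed cost of offloading (e.g., energy)
    return drift + penalty_weight * penalty

if __name__ == "__main__":
    actor = build_actor()
    state = np.random.rand(1, N_USERS, N_FEATURES).astype("float32")
    probs = actor(state).numpy().ravel()
    # Quantize the relaxed actor output into candidate binary decisions and let the
    # Lyapunov score pick the best one (the 'critic' step of the framework).
    candidates = [(probs > t).astype(np.float32) for t in (0.3, 0.5, 0.7)]
    best = min(candidates, key=lambda a: lyapunov_score(a, state[0, :, 0]))
    print("chosen offloading decision:", best)
```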
- system_paras.py: defines the simulation parameters; edit this file to set up the simulation
- main.py: the main script; run this file to start a simulation
- memoryTF2conv.py: implementation of the actor-critic DRL framework
- server.py: class definition of the MEC server (UAV)
- user.py: class definition of a user; use this script to generate user data
- arrival_task.py: implementation of the task arrival model; use this script to generate arrival traffic (a hypothetical traffic-generation sketch follows this list)
- utils.py: utility functions for unit conversion, data export, and plotting
- in_tasks: pickle files with the simulation's input data for the arriving tasks
- in_users: pickle files with the simulation's input data for the users' channel gains and locations
- trained_models: *.json and *.h5 files holding the neural network parameters after training (a loading sketch also follows this list)
- sim: after simulation, figures and numerical results are exported here
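For instance, arrival_task.py could generate arrival traffic along the lines of the minimal sketch below, which assumes i.i.d. Poisson task arrivals per user and per slot; the actual arrival model, parameter names, and pickle layout used by the repository may differ.

```python
import pickle
import numpy as np

# Assumed settings; the real values are defined in system_paras.py.
NUM_USERS = 10
NUM_SLOTS = 10_000
ARRIVAL_RATE = 2.0   # mean number of arriving tasks per user per slot

rng = np.random.default_rng(seed=0)
# arrivals[t, k] = number of tasks arriving at user k in time slot t
arrivals = rng.poisson(lam=ARRIVAL_RATE, size=(NUM_SLOTS, NUM_USERS))

# Store the generated traffic so a simulation run can replay it (hypothetical file name).
with open("in_tasks/arrivals_example.pkl", "wb") as f:
    pickle.dump(arrivals, f)
```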
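Similarly, a trained network stored in trained_models can typically be restored with the standard Keras JSON-plus-HDF5 pattern; the file names below are placeholders, not the repository's actual ones.

```python
import tensorflow as tf

# Hypothetical file names; use the actual *.json / *.h5 files in trained_models/.
with open("trained_models/actor.json") as f:
    model = tf.keras.models.model_from_json(f.read())
model.load_weights("trained_models/actor.h5")
model.summary()
```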
To run a simulation, please follow these steps:
- Configure the system parameters in system_paras.py (a hypothetical example of such settings is sketched after these steps)
- Run main.py to start the simulation
- After the simulation finishes, figures and numerical results will be exported to the sim folder.
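As a hypothetical example of the first step (the actual parameter names are defined in system_paras.py and may differ), a configuration could look like the excerpt below, after which the simulation is launched with `python main.py`:

```python
# system_paras.py (illustrative excerpt; names and values are assumptions)
NUM_USERS = 10           # number of ground users
ARRIVAL_RATE = 2.0       # mean task arrival rate per user per slot
NUM_TIME_SLOTS = 10_000  # simulation horizon
LYAPUNOV_V = 500.0       # drift-plus-penalty trade-off parameter

# Then, from the repository root:
#   python main.py
# Figures and numerical results appear in the sim/ folder afterwards.
```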
The following figures illustrate example outputs of a simulation run with a given task arrival rate:

Training Loss | User's Backlog Queue
---|---
@ARTICLE{10102429,
  author={Hoang, Linh T. and Nguyen, Chuyen T. and Pham, Anh T.},
  journal={IEEE/ACM Transactions on Networking},
  title={Deep Reinforcement Learning-Based Online Resource Management for UAV-Assisted Edge Computing With Dual Connectivity},
  year={2023},
  volume={},
  number={},
  pages={1-16},
  doi={10.1109/TNET.2023.3263538}
}
- Linh T. HOANG, d8232104 AT u-aizu.ac.jp
- Chuyen T. NGUYEN, chuyen.nguyenthanh AT hust.edu.vn
- Anh T. PHAM, pham AT u-aizu.ac.jp