Mao, N, Chen, Y, Guizani, M and Lee, GM (2021) Graph Mapping Offloading Model Based on Deep Reinforcement Learning With Dependent Task. In: 2021 International Wireless Communications and Mobile Computing (IWCMC). (17th International Wireless Communications & Mobile Computing Conference, IWCMC 2021, 28 June 2021 - 02 July 2021, Harbin, China).
Text: 1570699785.pdf - Accepted Version (301kB)
Abstract
To solve the problem of task offloading with dependent subtasks in mobile edge computing (MEC), we propose a graph mapping offloading model based on deep reinforcement learning (DRL). We model the user's computing task as a directed acyclic graph (DAG), called a DAG task. The DAG task is then converted into a topological sequence of task vectors according to a custom priority, and the proposed model maps this topological sequence to offloading decisions. The offloading problem is formulated as a Markov decision process (MDP) that minimizes the tradeoff between latency and energy consumption. The evaluation results demonstrate that our DRL-based graph mapping offloading model has better decision-making ability, which confirms the availability and effectiveness of the model.
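The pipeline described in the abstract (DAG task, priority-ordered topological sequence of task vectors, per-task offloading decision) can be illustrated with a minimal sketch. The `Task` fields, the cycles-based priority rule, and the random stand-in policy below are illustrative assumptions only; the paper's trained DRL policy and its exact priority definition are not reproduced in this record.

```python
# Minimal sketch: flatten a DAG task into a topological sequence by a custom
# priority, then map each task in the sequence to an offloading decision
# (0 = execute locally, 1 = offload to the MEC server).
import heapq
import random
from dataclasses import dataclass, field


@dataclass
class Task:
    tid: int
    cycles: float                                # required CPU cycles (assumed feature)
    data_in: float                               # input data size in bits (assumed feature)
    succ: list = field(default_factory=list)     # successor task ids
    pred_count: int = 0                          # number of unfinished predecessors


def topological_sequence(tasks, priority):
    """Kahn's algorithm with a priority queue: among the ready tasks,
    the one with the highest custom priority is placed next in the sequence."""
    ready = [(-priority(t), t.tid) for t in tasks.values() if t.pred_count == 0]
    heapq.heapify(ready)
    order = []
    while ready:
        _, tid = heapq.heappop(ready)
        order.append(tid)
        for s in tasks[tid].succ:
            tasks[s].pred_count -= 1
            if tasks[s].pred_count == 0:
                heapq.heappush(ready, (-priority(tasks[s]), s))
    return order


def offload_decisions(order, policy):
    """Map the topological sequence to a vector of offloading decisions."""
    return [policy(tid) for tid in order]


if __name__ == "__main__":
    # Toy 4-task DAG: 0 -> {1, 2} -> 3
    tasks = {
        0: Task(0, cycles=2e8, data_in=1e5, succ=[1, 2]),
        1: Task(1, cycles=3e8, data_in=5e4, succ=[3], pred_count=1),
        2: Task(2, cycles=1e8, data_in=5e4, succ=[3], pred_count=1),
        3: Task(3, cycles=2e8, data_in=4e4, succ=[], pred_count=2),
    }
    # Assumed priority: among ready tasks, heavier tasks come first.
    order = topological_sequence(tasks, priority=lambda t: t.cycles)
    # Stand-in for the trained DRL policy: a random local/offload choice.
    decisions = offload_decisions(order, policy=lambda tid: random.randint(0, 1))
    print("sequence:", order, "decisions:", decisions)
```

In the paper's setting the `policy` stand-in would be replaced by the DRL agent acting in the MDP, rewarded by the latency-energy tradeoff; the sketch only shows how a dependent task graph becomes a decision vector.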
| Item Type: | Conference or Workshop Item (Paper) |
|---|---|
| Additional Information: | © 2021 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. |
| Subjects: | Q Science > QA Mathematics > QA75 Electronic computers. Computer science |
| Divisions: | Computer Science & Mathematics |
| Publisher: | IEEE |
| Date Deposited: | 24 May 2021 11:21 |
| Last Modified: | 12 Jun 2024 12:25 |
| DOI or ID number: | 10.1109/IWCMC51323.2021.9498674 |
| URI: | https://researchonline.ljmu.ac.uk/id/eprint/15054 |