
Reinforcement R-learning model for time scheduling of on-demand fog placement

LAUR Repository


dc.contributor.author Farhat, Peter
dc.contributor.author Sami, Hani
dc.contributor.author Mourad, Azzam
dc.date.accessioned 2021-04-08T14:34:47Z
dc.date.available 2021-04-08T14:34:47Z
dc.date.copyright 2020 en_US
dc.date.issued 2021-04-08
dc.identifier.issn 0920-8542 en_US
dc.identifier.uri http://hdl.handle.net/10725/12676
dc.description.abstract On-the-fly deployment of fog nodes near users provides the flexibility to push services anywhere, whenever needed. Nevertheless, in a real-life scenario the cloud may limit the number of fogs it places, both to reduce the complexity of monitoring a large number of fogs and to limit the cost of paying volunteers who do not offer their resources for free. Choosing the right time and the best volunteer on which to create a fog is therefore essential. This choice requires studying the demand for services at a particular location in order to maximize the resource utilization of these fogs. A simple algorithm cannot track randomly changing user demand, so an intelligent model is needed that schedules fog placement based on users' requests. In this paper, we propose a Fog Scheduling Decision model based on reinforcement R-learning, which studies the behavior of service requesters and produces a suitable fog placement schedule based on the concept of average reward. Our model aims to decrease the cloud's load by maximally utilizing the available fog resources across different locations. An implementation of the proposed R-learning model is provided in the paper, followed by a series of experiments on a real dataset demonstrating its efficiency in utilizing fog resources and minimizing the cloud's load. We also demonstrate the ability of our model to improve over time by adapting to new user demand. Experiments comparing our model's decisions with two other potential fog placement approaches used for task/service scheduling (threshold-based and random-based) show that the share of requests processed by the cloud drops from 100% to 30% with a limited number of fogs to push.
These results demonstrate that the proposed Fog Scheduling Decision model plays a crucial role in placing on-demand fogs at the right location and the right time while taking users' needs into account. en_US
dc.language.iso en en_US
dc.title Reinforcement R-learning model for time scheduling of on-demand fog placement en_US
dc.type Article en_US
dc.description.version Published en_US
dc.author.school SAS en_US
dc.author.idnumber 200904853 en_US
dc.author.department Computer Science And Mathematics en_US
dc.relation.journal Journal of Supercomputing en_US
dc.journal.volume 76 en_US
dc.journal.issue 1 en_US
dc.article.pages 388–410 en_US
dc.keywords IoT en_US
dc.keywords On-demand fog en_US
dc.keywords Fog scheduling decision en_US
dc.keywords Average reward en_US
dc.keywords R-learning en_US
dc.keywords Q-learning en_US
dc.keywords Reinforcement learning en_US
dc.keywords Cloud computing en_US
dc.identifier.doi https://doi.org/10.1007/s11227-019-03032-z en_US
dc.identifier.citation Farhat, P., Sami, H., & Mourad, A. (2020). Reinforcement R-learning model for time scheduling of on-demand fog placement. The Journal of Supercomputing, 76(1), 388-410. en_US
dc.author.email azzam.mourad@lau.edu.lb
dc.identifier.tou http://libraries.lau.edu.lb/research/laur/terms-of-use/articles.php en_US
dc.identifier.url https://link.springer.com/article/10.1007/s11227-019-03032-z en_US
dc.orcid.id https://orcid.org/0000-0001-9434-5322 en_US
dc.author.affiliation Lebanese American University en_US
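The abstract's scheduling model builds on R-learning, the average-reward reinforcement learning method. The sketch below is only an illustrative toy of that update rule, not the paper's implementation: the states (busy/idle time slots), the actions ("place_fog"/"skip"), the reward function, and all constants are assumptions chosen to show how the average-reward estimate (rho) and the relative action values are learned together.

```python
import random
from collections import defaultdict

# Toy R-learning sketch (average-reward RL). All names and values below
# are illustrative assumptions, not the paper's actual setup.
ALPHA = 0.1        # learning rate for relative action values
BETA = 0.01        # learning rate for the average-reward estimate rho
EPSILON = 0.1      # exploration rate
ACTIONS = ("place_fog", "skip")

R = defaultdict(float)   # relative action values, keyed by (state, action)
rho = 0.0                # running estimate of the average reward per step

def choose_action(state):
    """Epsilon-greedy; also reports whether the choice was greedy,
    since classic R-learning updates rho only on greedy actions."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS), False
    return max(ACTIONS, key=lambda a: R[(state, a)]), True

def r_learning_step(state, action, reward, next_state, greedy):
    """One R-learning update of R and (on greedy steps) rho."""
    global rho
    best_next = max(R[(next_state, a)] for a in ACTIONS)
    R[(state, action)] += ALPHA * (reward - rho + best_next - R[(state, action)])
    if greedy:
        best_here = max(R[(state, a)] for a in ACTIONS)
        rho += BETA * (reward + best_next - best_here - rho)

def reward_of(state, action):
    # Hypothetical demand pattern: placing a fog pays off only in busy slots.
    return 1.0 if (state == "busy" and action == "place_fog") else 0.0

random.seed(0)
state = "busy"
for _ in range(5000):
    action, greedy = choose_action(state)
    r = reward_of(state, action)
    next_state = "idle" if state == "busy" else "busy"  # slots alternate
    r_learning_step(state, action, r, next_state, greedy)
    state = next_state
```

After training on this toy trace, the greedy policy prefers placing a fog in the busy slot and skipping in the idle one, while rho approaches the average reward of that policy, which is the "average reward" notion the abstract's scheduling decisions are based on.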


Files in this item


There are no files associated with this item.
