Design and Analysis of Buffer-Aided Cooperative Networks using Deterministic and Reinforcement Learning Techniques

LAUR Repository

dc.contributor.author El Zahr, Sawsan
dc.date.accessioned 2022-08-17T06:52:21Z
dc.date.available 2022-08-17T06:52:21Z
dc.date.copyright 2022 en_US
dc.date.issued 2022-04-29
dc.identifier.uri http://hdl.handle.net/10725/13943
dc.description.abstract Applications enabled by 5G technology have resulted in an unforeseen increase in data traffic, and data flows from a source to a destination are experiencing increased outages, delays, and drop-outs. Cooperative communication is one way to cope with this problem: relays are integrated into the system to help a given source communicate with its destination efficiently. Relays enable multiple shorter paths of better quality, and equipping relays with buffers further increases the degrees of freedom and allows for the mitigation of the fading effect. On the other hand, these benefits come at the expense of added system complexity. Relay selection is a challenging task that can account for multiple parameters such as the channel state information, the buffer states, and the positions of the relays. In this thesis, we address this problem in two ways: with deterministic techniques and with learning-based techniques. Multi-hop systems with buffer-aided half-duplex relays are considered. First, we propose a new dynamic relaying strategy that achieves multiple levels of trade-off between the average packet delay and the outage probability. The system is analyzed in a Markov chain framework, and all theoretical results are validated against simulation curves. Asymptotic analysis is the key approach used to derive closed-form expressions that depend solely on the adjustable parameters of the system. We prove the superiority of this scheme over benchmark schemes from the literature. Then, further relaying strategies are devised and compared, achieving additional performance levels that fit the requirements of different applications. Finally, for more complex setups where deterministic analysis becomes cumbersome, reinforcement learning techniques are used to efficiently boost the performance. A deep RL agent is trained with a joint reward until it converges to an optimum performance. We demonstrate the efficiency of this approach in further increasing the throughput of cooperative systems under different interference and design constraints. en_US
dc.language.iso en en_US
dc.subject Buffer storage (Computer science) en_US
dc.subject Reinforcement learning en_US
dc.subject Queuing theory en_US
dc.subject Lebanese American University -- Dissertations en_US
dc.subject Dissertations, Academic en_US
dc.title Design and Analysis of Buffer-Aided Cooperative Networks using Deterministic and Reinforcement Learning Techniques en_US
dc.type Thesis en_US
dc.term.submitted Spring en_US
dc.author.degree MS in Computer Engineering en_US
dc.author.school SOE en_US
dc.author.idnumber 201706764 en_US
dc.author.commembers Tannir, Dani
dc.author.commembers Fawaz, Wissam
dc.author.department Electrical And Computer Engineering en_US
dc.description.physdesc 1 online resource (xii, 79 leaves): col. ill. en_US
dc.author.advisor Abou Rjeily, Chadi
dc.keywords Relaying en_US
dc.keywords Cooperative Networks en_US
dc.keywords Multi-Hop en_US
dc.keywords Buffer en_US
dc.keywords Data Queue en_US
dc.keywords Performance Analysis en_US
dc.keywords Markov Chain en_US
dc.keywords Reinforcement Learning en_US
dc.keywords Outage Probability en_US
dc.keywords Queuing Delay en_US
dc.keywords Diversity Order en_US
dc.keywords Throughput en_US
dc.keywords Optimization en_US
dc.description.bibliographiccitations Includes bibliographical references (leaves 64-69). en_US
dc.identifier.doi https://doi.org/10.26756/th.2022.425
dc.author.email sawsan.elzahr@lau.edu en_US
dc.identifier.tou http://libraries.lau.edu.lb/research/laur/terms-of-use/thesis.php en_US
dc.publisher.institution Lebanese American University en_US
dc.author.affiliation Lebanese American University en_US
