  • Publication
    Implementation of Max-SNR opportunistic scheduling in cross layer design
    Spectrum efficiency in wireless networks is becoming increasingly important to satisfy the growing demand for wireless services, particularly affordable wireless Internet access. At any given time, different wireless users experience distinct channel conditions, which are time-varying and location-dependent. To exploit this time-varying nature of the wireless channel, a cross-layer design method called opportunistic scheduling is used. Opportunistic scheduling improves overall system performance while meeting user fairness requirements under given Quality of Service (QoS) constraints. The main idea behind opportunistic scheduling is to make use of the time-varying channel: the user with the best channel condition is scheduled at any given moment. Advances in wireless technology have made opportunistic scheduling a popular research topic in recent years. The growing demand for QoS provisioning cannot be satisfied by a scheme that allows only the users with the best channel conditions to transmit at high transmission power. The objective of this paper is to implement opportunistic scheduling while adhering to fairness and QoS constraints, using the Max-Min fair algorithm. In brief, a wireless network has been simulated using the QualNet simulator. An opportunistic scheduler that uses Max-Min fair scheduling has been implemented at the Media Access Control (MAC) layer. Here, the base station gathers the Signal to Noise Ratio (SNR) of all the nodes and then schedules the users with the Max-Min algorithm based on these SNR values. The same scenario is then implemented using Strict Priority, a non-opportunistic scheduling algorithm. The resulting throughput, fairness, delay and jitter of the two algorithms are then compared. © BEIESP.
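The Max-Min fair scheduling idea described in the abstract can be sketched in a few lines of Python. This is a minimal illustration, not the paper's QualNet implementation: the channel model (independent exponential SNR draws per slot, users with different mean SNRs) and the use of SNR as a rate proxy are assumptions made here for the sketch.

```python
import random

def max_snr_schedule(snr):
    """Opportunistic Max-SNR: pick the user with the best instantaneous channel."""
    return max(range(len(snr)), key=lambda i: snr[i])

def max_min_fair_schedule(snr, served):
    """Max-Min fairness: serve the user with the least accumulated throughput
    so far, breaking ties in favour of the higher instantaneous SNR."""
    return min(range(len(snr)), key=lambda i: (served[i], -snr[i]))

def simulate(n_users=4, n_slots=1000, seed=1):
    """Base station gathers per-slot SNRs and schedules one user per slot."""
    rng = random.Random(seed)
    served = [0.0] * n_users
    for _ in range(n_slots):
        # Time-varying, location-dependent channel: each user draws an
        # independent SNR each slot; users have different mean SNRs.
        snr = [rng.expovariate(1.0 / (10.0 * (i + 1))) for i in range(n_users)]
        u = max_min_fair_schedule(snr, served)
        served[u] += snr[u]  # crude proxy for the achieved rate
    return served
```

Under the Max-Min rule the users' accumulated service stays nearly equal (Jain's fairness index close to 1), at the cost of some of the raw throughput a pure Max-SNR pick would harvest, which is exactly the throughput/fairness trade-off the paper compares.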
  • Publication
    Distributed Deep Reinforcement Learning using TensorFlow
    (2018)
    Ajay Rao P; Navaneesh Kumar B; Cadabam S
    Deep reinforcement learning is the combination of reinforcement learning algorithms with deep neural networks, and has had recent success in learning complicated unknown environments. The trained model is a convolutional neural network trained with the Q-learning loss. At each step, the agent takes as input an observation, i.e. the raw pixel image, and a reward from the environment. The deep Q-learning algorithm outputs the optimal action for every observation-reward pair. The hyperparameters of the Deep Q-Network remain unchanged across environments. TensorFlow, an open-source machine learning and numerical computation library, is used to implement the deep Q-learning algorithm on a GPU. The distributed TensorFlow architecture is used to maximize hardware resource utilization and reduce training time. The use of Graphics Processing Units (GPUs) in the distributed environment accelerated the training of the deep Q-network. On implementing the deep Q-learning algorithm for many environments from OpenAI Gym, the agent outperforms a decent human reference player after a few days of training. © 2017 IEEE.
    Scopus© Citations 9
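The core of the deep Q-learning training described above is the Bellman target for the Q-learning loss, y = r + γ·max_a′ Q(s′, a′). A minimal NumPy sketch of that target and loss is shown below; the paper's actual implementation uses a convolutional network trained in distributed TensorFlow, so the array shapes and function names here are assumptions for illustration only.

```python
import numpy as np

def q_learning_targets(q_next, rewards, dones, gamma=0.99):
    """Bellman targets y = r + gamma * max_a' Q(s', a'),
    with the bootstrap term zeroed on terminal transitions (dones == 1)."""
    return rewards + gamma * (1.0 - dones) * q_next.max(axis=1)

def q_learning_loss(q_pred, actions, targets):
    """Mean squared error between Q(s, a) of the actions actually taken
    and the Bellman targets -- the loss the Q-network is trained on."""
    chosen = q_pred[np.arange(len(actions)), actions]
    return float(np.mean((chosen - targets) ** 2))
```

In a distributed setup this loss would be minimized by worker replicas sharing parameters (e.g. through parameter servers); distribution changes where the gradients are computed, not the numerics of the target.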
  • Publication
    Social-sine cosine algorithm-based cross layer resource allocation in wireless network
    Cross-layer resource allocation in wireless networks is traditionally approached from either communication networks or information theory. A major issue in networking is the allocation of limited resources among the users of the network. In a traditional layered network, resources are allocated at the medium access control (MAC) layer, and the network layer uses the communication links as bit pipes that deliver data at a fixed rate with occasional random errors. Hence, this paper presents cross-layer resource allocation in wireless networks based on the proposed social-sine cosine algorithm (SSCA). The proposed SSCA is designed by integrating the social ski-driver (SSD) and sine cosine algorithm (SCA). Also, to further refine the resource allocation scheme, the proposed SSCA uses a fitness function based on energy and fairness, in which max-min, hard fairness, proportional fairness, mixed-bias and maximum throughput are considered. Based on energy and fairness, the cross-layer optimization entity makes resource allocation decisions to maximize the sum rate of the network. The performance of resource allocation based on the proposed model is evaluated in terms of energy, throughput and fairness. The developed model achieves a maximal energy of 258213, a maximal throughput of 3.703, and a maximal fairness of 0.868. © 2021 Institute of Advanced Engineering and Science. All rights reserved.
    Scopus© Citations 3
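For orientation, the plain sine cosine algorithm (SCA) half of the hybrid can be sketched as follows. This is only the standard SCA position update (X ← X + r1·sin/cos(r2)·|r3·P − X| with a decaying amplitude r1) applied to a toy objective; the paper's SSCA additionally integrates the social ski-driver update and optimizes an energy/fairness fitness, neither of which is reproduced here.

```python
import math
import random

def sca_minimize(f, dim=2, n_agents=20, iters=200, lo=-5.0, hi=5.0, seed=0):
    """Minimal Sine Cosine Algorithm: agents move toward the best-known
    solution P along sine/cosine trajectories with a shrinking step."""
    rng = random.Random(seed)
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_agents)]
    best = min(X, key=f)[:]
    for t in range(iters):
        r1 = 2.0 * (1.0 - t / iters)  # amplitude decays linearly to 0
        for x in X:
            for j in range(dim):
                r2 = rng.uniform(0.0, 2.0 * math.pi)
                r3 = rng.uniform(0.0, 2.0)
                # r4 < 0.5 selects the sine branch, otherwise the cosine branch
                step = r1 * (math.sin(r2) if rng.random() < 0.5 else math.cos(r2))
                x[j] += step * abs(r3 * best[j] - x[j])
                x[j] = min(hi, max(lo, x[j]))  # keep agents inside the bounds
            if f(x) < f(best):
                best = x[:]
    return best, f(best)
```

A hybrid like the proposed SSCA would replace or interleave this position update with the SSD velocity rule and plug the cross-layer energy/fairness score in as `f`.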