Sensitivity Analysis of Reinforcement Learning to Schedule the Battery in a Grid-Tied Microgrid
This research paper explores the application of offline reinforcement learning (RL) to controlling battery operation in a grid-connected microgrid. The study investigates how parameters such as the number of discretization levels, the discount factor (gamma), and the learning rate (alpha) affect the performance of the RL algorithm. The results show that both the convergence time and the optimality of the RL algorithm depend on the choice of these parameters. The research concludes that carefully selecting the discretization levels of the state-action space and the RL hyperparameters is crucial for optimal RL algorithm performance. In future work, this offline sensitivity-analysis benchmark can be compared with other RL approaches, such as function approximation or deep RL (DRL) methods.
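As a minimal sketch of the kind of setup the abstract describes (not the paper's actual code), the snippet below shows tabular Q-learning with a discretized battery state of charge (SoC), exposing the alpha (learning rate) and gamma (discount factor) hyperparameters whose sensitivity the study analyzes. The environment, reward shape, and all numeric values here are illustrative assumptions.

```python
import random

# Hypothetical sketch: battery SoC discretized into a fixed number of levels,
# with the Q-learning hyperparameters (alpha, gamma, discretization count)
# that the sensitivity analysis varies. Toy environment, not the paper's model.

N_SOC_LEVELS = 10            # number of discretization levels for battery SoC
ACTIONS = [-1, 0, 1]         # discharge, idle, charge (one SoC level per step)
ALPHA = 0.1                  # learning rate (sensitivity parameter)
GAMMA = 0.95                 # discount factor (sensitivity parameter)
EPSILON = 0.1                # exploration rate for epsilon-greedy action choice

# Q-table: one row per discretized SoC level, one column per action.
Q = [[0.0 for _ in ACTIONS] for _ in range(N_SOC_LEVELS)]

def step(soc, action):
    """Toy transition: reward charging at low SoC and discharging at high SoC."""
    next_soc = max(0, min(N_SOC_LEVELS - 1, soc + action))
    reward = 1.0 if (action == 1 and soc < 5) or (action == -1 and soc > 5) else 0.0
    return next_soc, reward

random.seed(0)
soc = N_SOC_LEVELS // 2
for _ in range(5000):
    if random.random() < EPSILON:                              # explore
        a = random.randrange(len(ACTIONS))
    else:                                                      # exploit
        a = max(range(len(ACTIONS)), key=lambda i: Q[soc][i])
    next_soc, r = step(soc, ACTIONS[a])
    # Standard Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    Q[soc][a] += ALPHA * (r + GAMMA * max(Q[next_soc]) - Q[soc][a])
    soc = next_soc
```

Rerunning this loop with different `N_SOC_LEVELS`, `ALPHA`, and `GAMMA` values and comparing convergence is the essence of the sensitivity analysis the paper benchmarks.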
Copyright (c) 2022 University of Sindh Journal of Information and Communication Technology
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
University of Sindh Journal of Information and Communication Technology (USJICT) follows an Open Access Policy under the Attribution-NonCommercial CC BY-NC license. Researchers can copy and redistribute the material in any medium or format, for any purpose. Authors can self-archive the publisher's version of the accepted article in digital repositories and archives.
Upon acceptance, the author must transfer the copyright of this manuscript to the Journal for publication on paper, on data storage media, and online, with distribution rights to USJICT, University of Sindh, Jamshoro, Pakistan. Kindly download the copyright form below and attach it as a supplementary file during article submission.