
Efficient optimal power flow learning: A deep reinforcement learning with physics-driven critic model

The transition to decarbonized energy systems presents significant operational challenges due to increased uncertainties and complex dynamics. Deep reinforcement learning (DRL) has emerged as a powerful tool for optimizing power system operations. However, most existing DRL approaches rely on approximate data-driven critic networks, which require numerous risky interactions to explore the environment and often suffer from estimation errors. To address these limitations, this paper proposes an efficient DRL algorithm with a physics-driven critic model, namely a differentiable holomorphic embedding load flow model (D-HELM). This approach enables accurate policy gradient computation through a differentiable loss function based on system states under realized uncertainties, simplifying both the replay buffer and the learning process. By leveraging continuation power flow principles, D-HELM ensures operable, feasible solutions while accelerating gradient steps through simple matrix operations. Simulation results across various test systems demonstrate the computational superiority of the proposed approach, which outperforms state-of-the-art DRL algorithms during training and model-based solvers in online operation. This work represents a potential breakthrough in real-time energy system operations, with extensions to security-constrained decision-making, voltage control, unit commitment, and multi-energy systems.
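The core idea of a physics-driven critic can be illustrated with a minimal sketch: instead of training a critic network, the loss is evaluated through a differentiable physics model, so the policy gradient follows exactly from the chain rule via matrix operations. The sketch below is an illustrative stand-in, not the paper's D-HELM: it uses a hypothetical 3-bus DC power-flow approximation (reduced susceptance matrix `B`, assumed costs and loads) in place of the holomorphic embedding, and plain gradient descent on the dispatch.

```python
import numpy as np

# Hypothetical 3-bus DC power-flow stand-in for the differentiable physics model.
# Bus 0 is the slack; buses 1-2 carry controllable injections p (the policy output).
B = np.array([[ 2.0, -1.0],
              [-1.0,  2.0]])          # reduced susceptance matrix (slack removed)
Binv = np.linalg.inv(B)

cost_coeff = np.array([1.0, 3.0])     # assumed linear generation costs
demand = np.array([0.5, 0.8])         # assumed fixed loads at buses 1-2

def loss_and_grad(p):
    """Physics-driven loss: dispatch cost plus a quadratic feasibility penalty.
    The gradient is exact via the chain rule -- just matrix products, no critic net."""
    theta = Binv @ (p - demand)                   # DC power flow: angles from net injection
    cost = cost_coeff @ p                         # generation cost
    penalty = 10.0 * theta @ theta                # soft penalty on angle spread
    grad = cost_coeff + 20.0 * (Binv.T @ theta)   # d(cost+penalty)/dp by the chain rule
    return cost + penalty, grad

p = np.array([0.6, 0.7])              # initial dispatch (policy output)
for _ in range(200):                  # plain gradient descent on the dispatch
    L, g = loss_and_grad(p)
    p -= 0.01 * g
```

Because the loss is an analytic function of the dispatch, each gradient step costs only a couple of matrix-vector products, which is the efficiency argument the abstract makes for replacing a learned critic with a differentiable load-flow model.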
Keywords: Deep reinforcement learning, Operable power flow, Physics-driven policy gradient, Holomorphic embedding, Real-time economic control
Subject classification: TK1001-1841 (Production of electric energy or power. Powerplants. Central stations)
