
Coordinated active and reactive power dynamic dispatch strategy for wind farms to minimize levelized production cost considering system uncertainty: A soft actor-critic approach

With the rapid increase of wind power generation in power systems, the coordinated dispatch of active and reactive power for each wind turbine (WT) in a wind farm (WF) has become a critical issue for the safe and stable operation of the power grid. Given the time-varying characteristics of the WF, this can be regarded as a decision-making problem under uncertainty. To this end, this study formulates the active and reactive power dispatch problem of the WF as a Markov decision process (MDP) that accounts for system uncertainty, e.g., wind speed, reactive power demand, and the wake effect. Then, an agent is trained via a deep reinforcement learning (DRL) algorithm to solve the MDP and obtain the optimal dispatch policy with the objective of minimizing the levelized production cost (LPC). Finally, the proposed method is tested on an 80 MW WF, with several benchmark methods serving as comparison examples. Simulation results show that, compared with the other methods, the proposed dispatch strategy provides more appropriate active and reactive power references for each wind turbine, extending the lifetime of the WF and resulting in a lower LPC.
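The MDP framing described in the abstract can be sketched as a toy environment. This is a minimal illustration, not the paper's model: the state (wind speed, reactive power demand), action bounds, and reward are all assumed stand-ins, and the fabricated reward only mimics the spirit of the LPC objective (track the reactive demand, penalize uneven turbine loading as a crude proxy for fatigue cost).

```python
import random


class WindFarmDispatchEnv:
    """Toy MDP sketch of the WF dispatch problem (illustrative only).

    State: per-step wind speed and farm-level reactive power demand (assumed).
    Action: one (p_ref, q_ref) pair per wind turbine.
    Reward: a fabricated proxy for the per-step LPC contribution; the paper's
    LPC model includes lifetime/fatigue terms not reproduced here.
    """

    def __init__(self, n_turbines=16, rated_mw=5.0, seed=0):
        self.n = n_turbines          # e.g. 16 x 5 MW ~ the 80 MW farm in the study
        self.rated = rated_mw
        self.rng = random.Random(seed)
        self.state = None

    def reset(self):
        wind = self.rng.uniform(4.0, 14.0)        # wind speed, m/s (assumed range)
        q_demand = self.rng.uniform(-10.0, 10.0)  # farm Q demand, MVar (assumed)
        self.state = (wind, q_demand)
        return self.state

    def step(self, action):
        """action: list of (p_ref, q_ref) pairs, one per turbine."""
        wind, q_demand = self.state
        q_total = sum(q for _, q in action)
        mean_p = sum(p for p, _ in action) / self.n
        # Penalize reactive tracking error plus uneven active-power loading
        # (a crude stand-in for the fatigue-driven LPC objective).
        imbalance = sum((p - mean_p) ** 2 for p, _ in action)
        reward = -abs(q_total - q_demand) - 0.1 * imbalance
        next_state = self.reset()  # i.i.d. next state in this toy sketch
        return next_state, reward


env = WindFarmDispatchEnv()
state = env.reset()
action = [(3.0, 0.5)] * env.n   # a uniform dispatch, for illustration
next_state, reward = env.step(action)
```

In the paper's setting, the random policy above would be replaced by a soft actor-critic agent that learns the dispatch policy from interaction with a WF model capturing wake effects and turbine degradation.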
- Southwest University of Science and Technology China (People's Republic of)
- Aalborg University Library (AUB) Denmark
- Aalborg University Denmark
Deep reinforcement learning, Active and reactive power dispatch strategy, Wind power, Markov decision process, Levelized production cost
