Reads very similar to some blackouts we had in Australia. Weakly connected grids with vast geographical distances leading to oscillations that took down the grid.
You can also read numerous stories of how Australia's lithium-ion grid storage systems have prevented blackouts. https://www.teslarati.com/tesla-big-battery-south-australia-... The fact is that the batteries' responsiveness is the fastest of any system at correcting gaps like this. 50/60 Hz is nothing for a lithium-ion battery, nor are brief periods of multi-gigawatt draw/dumping as needed.
> Completely solved with lithium based grid storage at key locations btw.
That's because Australia has a moderate amount of renewables and prefers to burn fossil fuels. Right now, around 25% of the electricity in Australia is generated by solar or wind.
Spain is past 50% of renewable generation, and their problems are much bigger.
But Australia is tiny: 27M people. Spain alone is twice as big, and the European grid serves 500M people. We don't have the same problems, and probably can't solve them with the Australian solutions.
> The ultimate cause of the peninsular electrical zero on April 28th was a phenomenon of overvoltages in the form of a "chain reaction" in which high voltages cause generation disconnections, which in turn causes new increases in voltage and thus new disconnections, and so on.
> 1. The system showed insufficient dynamic voltage control capabilities sufficient to maintain stable voltage
> 2. A series of rhythmic oscillations significantly conditioned the system, modifying its configuration and increasing the difficulties for voltage stabilization.
If I understand it correctly (a pattern familiar from software, too), it was a positive feedback loop. Since there wasn't enough voltage control, another station had to pick up the slack but got overloaded instead and also shut off, and so on to the next station.
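As a toy sketch of that feedback loop (all names and numbers here are invented, not from the report): every plant that trips on overvoltage removes some reactive absorption, which pushes the voltage up and trips the next one.

```python
# Hypothetical illustration of the overvoltage chain reaction: losing one
# absorber raises the voltage, which trips the next absorber, and so on.
def cascade(voltage_pu, absorbers, trip_at=1.05, rise_per_trip=0.02):
    """Return the final voltage and the plants lost to the chain reaction."""
    tripped = []
    while absorbers and voltage_pu > trip_at:
        tripped.append(absorbers.pop())  # plant disconnects on overvoltage
        voltage_pu += rise_per_trip      # less absorption -> voltage rises
    return voltage_pu, tripped

# One small initial overvoltage is enough to take out every plant.
v, gone = cascade(1.06, ["plant_a", "plant_b", "plant_c"])
```

With these made-up constants the loop only stops when it runs out of plants, which is essentially the "unstoppable cascade" the report describes.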
Late addition: It was very helpful for me to read through the "ANNEX X. BRIEF BASICS OF THE ELECTRIC SYSTEM" (page 168) before trying to read the report itself, as it explains a lot of things that the rest of the report (rightly) assumes you already know.
I think your interpretation is correct. Voltage control is done at the high level of the grid, meaning the control covers the bigger generation stations and major substations. Even if it's a small generator, rotating machinery, you won't have strict voltage control other than its own AVR. The problem I see here is that we embed smaller individual generation at the lower level, where it pumps the generated power into the grid at the medium-voltage level. When you have the majority of your generation at this level, you won't have strict control over voltage, and perhaps not even frequency, I assume. I'm still digesting the report, but what I'm after is whether they really neglected this, and whether voltage control is even possible with 50% of generation coming from renewables through the medium-voltage, aka lower, level.
7.1(b) of the grid code (https://www.boe.es/buscar/doc.php?id=BOE-A-2000-5204) seems to be saying that generators connected at 220kV adjust their reactive power generation/absorption in real time according to the voltage they observe, based on a lookup table provided by the grid operator.
This seems sort of sensible according to my limited understanding of the theory of AC grids. You can write some differential equations and pretend everything is continuous (as opposed to being a LUT with 11 steps or so), and you can determine that the grid is stable.
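A minimal sketch of what such a LUT-driven control might look like; the breakpoints and setpoints here are invented, not the real table from the Spanish grid code.

```python
import bisect

# Invented per-unit voltage breakpoints and reactive power setpoints
# (positive = inject Q, negative = absorb Q); the real table differs.
VOLTAGE_STEPS = [0.95, 0.975, 1.0, 1.025, 1.05]
Q_SETPOINTS = [0.30, 0.15, 0.0, -0.15, -0.30, -0.45]

def q_setpoint(v_pu):
    """Piecewise-constant command: the higher the voltage, the more Q absorbed."""
    return Q_SETPOINTS[bisect.bisect_left(VOLTAGE_STEPS, v_pu)]
```

The discontinuous steps are exactly why the continuous differential-equation stability argument is only an approximation.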
However, check out this shorter report from red eléctrica: https://d1n1o4zeyfu21r.cloudfront.net/WEB_Incident_%2028A_Sp...
Apparently these 220kV plants are connected to the 400kV grid via transformers in substations that are not owned by the generator operators. And those transformers have “tap changers” that attempt to keep the 220kV secondary side at the correct voltage within some fairly large voltage range on the 400kV side. Won’t this defeat the voltage control that the 220kV generators are supposed to provide? If the grid voltage is high, then absorption of reactive power is needed [0], and the generators are supposed to determine that they need to absorb reactive power (which they can do), but if the tap changer changes its setting, then the generator will not react correctly to the voltage on the 400kV side.
In other words, one would like the generator to absorb reactive power according to P_reactive(primary voltage • 220/400), but the actual behavior is P_reactive(primary voltage • 220/400 • tap changer position), the tap changer position is presumably something like 400/primary voltage, and I don’t understand how the result is supposed to function in any useful way. Adding insult to injury, the red eléctrica report authors seem to be suggesting that a bunch of tap changer operators didn’t configure their tap changers well enough to even keep secondary voltages in range.
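The masking effect being described can be sketched numerically (a per-unit toy model with invented gains; `q_droop` is a hypothetical stand-in for the generator's control law): if the tap changer holds the generator's side near 1.0 pu, a droop controller there never sees the 400kV-side overvoltage it is supposed to respond to.

```python
def secondary_voltage(primary_pu, tap_ratio):
    """Voltage the generator actually observes, after the tap changer."""
    return primary_pu / tap_ratio

def q_droop(v_pu, gain=5.0):
    """Absorb reactive power (negative Q) when the local voltage is high."""
    return -gain * (v_pu - 1.0)

# Fixed tap: a 5% overvoltage upstream is visible, so the plant absorbs Q.
q_seen = q_droop(secondary_voltage(1.05, tap_ratio=1.0))
# Tap changer compensating: the local voltage reads 1.0 pu, no response at all.
q_masked = q_droop(secondary_voltage(1.05, tap_ratio=1.05))
```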
Does anyone with more familiarity with these systems know how they’re supposed to work?
[0] I can never remember the sign convention for reactive power.
While the overall reason for the mass failure you cite is correct - a cascading failure - the interesting bit here is the oscillations that led to it.
It looks very much like this was driven by algorithmic volatility trading of electricity spots: overproduction, price goes negative, buys are placed, price rises, production ramps in response to the rising price, sells are placed, price falls, and production falls with it. The period of the oscillations seen in the grid before the blackout suggests a relatively slow cycle, and what they describe in the report sounds very much like an interaction between price-driven supply and real-world supply.
It does speak to there being inadequate storage available on the grid to smooth demand and therefore pricing, but it also suggests that in certain conditions a harmonic can be set up between the market and price-driven production with catastrophic consequences.
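That market/supply harmonic can be sketched with a toy delayed-feedback model (every constant here is invented): production chases a stale price signal, and the delay alone is enough to sustain the oscillation.

```python
# Toy model: price reacts instantly to excess supply, producers react to
# a price that is `delay` steps old, and the loop rings indefinitely.
def simulate(steps=20, delay=3, kick=10.0):
    excess = [kick] + [0.0] * (delay - 1)   # MW of overproduction history
    for t in range(delay, steps):
        price = -0.5 * excess[t - delay]    # oversupply pushed the price down
        excess.append(2.0 * price)          # producers chase the stale price
    return excess
```

With a response gain above 1 the same loop diverges instead of merely ringing, which is the catastrophic case.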
It's pretty much their one and only chance to warn the authorities that there's a risk, so if they choose to ignore it, well, nobody can claim they weren't informed.
Page 130 is where the actual human readable summary is. Although the previous pages were pretty detailed in explaining the cumulative instabilities.
Sadly, some news outlets are probably only going to look at the recommendations and read "cybersecurity" and (even though they are common sense recommendations) assume there might be more to say about the matter.
Ed: Do I need a /s tag here or something? My point was that we shouldn't worry too much about the presentation of the report; its actual contents will be spun to suit any narrative regardless.
There's been a shit-ton of misinformation about cyberattacks within the first hour of the outage, and the public were unfortunately very receptive to it, so I guess they're trying to preempt those concerns?
Individual generators monitor Voltage, Frequency, and reactive power (≈ how much current is out of phase with voltage) to make decisions about injecting more or less power into the network. This is just historically how they've always been doing it.
Due to interactions between different generators, there can be instabilities causing voltage or frequency or reactive power to deviate outside of spec. A simple example might be two generators where one surges while the other drops back, then vice versa. The measurement (by the network operator) of these effects is poor for Spain - shown by the simple example that they have large oscillations that they couldn't explain.
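A simplified variant of that two-generator example in code (purely illustrative numbers): both controllers observe the same imbalance and each corrects for all of it, so together they overshoot and the imbalance flips sign every step instead of settling.

```python
def seesaw(steps=6, gain=1.0, demand=100.0):
    """Two generators that each try to cancel the whole shared imbalance."""
    p1, p2 = 70.0, 40.0           # start 10 MW over demand
    totals = []
    for _ in range(steps):
        error = p1 + p2 - demand  # both see the same surplus/deficit
        p1 -= gain * error        # ...and both correct for all of it,
        p2 -= gain * error        # so the pair overshoots every step
        totals.append(p1 + p2)
    return totals
```

With a gain below 0.5 the same loop converges; the instability comes purely from the uncoordinated interaction.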
There's path dependent healing and correction of problems by different generators, which overall leads to network stability. However the network operator here is not actually resolving cause and effect, and does not have the insight to manage their stability properly.
In this case you can see them trying a few things to inject changes that they hope will bring stability - e.g. tying more connections together, hoping that joining generators into one larger network will resolve to a stable outcome.
Are there countries that have a better design for their electricity network control systems?
Disclaimer: I don't design electricity networks nor electricity markets. And the above is ignoring loads (loads are mostly less problematic for control than generation).
I suppose other system operators might have a better state estimator and wide-area monitoring system. But real-time system operation is universally an engineer sitting behind a desk, looking at their screen, and trying to make the best decision with whatever data they have.
The actions that were taken did not strike me as out of the ordinary.
So, the problem was a local voltage oscillation, where the high voltages caused generators to shut off.
How do these oscillations start? I understand that voltage isn't necessarily equal across the network, whereas frequency is. But that only allows oscillation, it doesn't cause it. Is this a basic inductor-capacitor oscillation? Is it the small delay in inverters between measuring voltage and regulating their output? (Seems unlikely, given that renewables aren't blamed.) Or is there some other source of (delayed) feedback?
And why do generators cut off at a high voltage? Is it a signal of 'too much power'? Is it to protect the generator from some sort of damage?
For the oscillations, the European grid in general is large enough that the time it takes for the energy to flow (at some fraction of the speed of light!) from one side of it to the other is not negligible: it's not a case of delays at the power plant, but delays in the network itself, which can cause the various natural and artificial feedback loops in the circuit to become unstable and oscillate. In this specific incident, there's some implication in the report that the largest oscillation was unusual and may have been generated by a single plant essentially oscillating on its own, for reasons unknown.
In either case, the oscillations were not the direct cause of the blackout: they were controlled, but the steps taken to control them put the system into a more fragile state. This is because of reactive power. The voltage in the system is determined both by the 'real power', i.e. the power generated by the plants and consumed by consumers in the grid each cycle of the 50Hz AC, and by the 'reactive power', which is energy that is absorbed by the consumers and the grid itself (all the power lines and transformers) and then bounced back to the generators each cycle. This is the basic 'inductor-capacitor' oscillation. This reactive power is considered to be 'generated' by capacitance and 'consumed' by inductance, though the distinction is a matter of convention.
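As a quick numeric aside, the standard textbook relations (not specific to this report): real power is P = V·I·cos φ, reactive power is Q = V·I·sin φ, and together they make up the apparent power S = V·I.

```python
import math

V, I = 230.0, 10.0         # RMS voltage and current (arbitrary values)
phi = math.radians(30)     # current lags voltage by 30 degrees

P = V * I * math.cos(phi)  # watts actually delivered to the load
Q = V * I * math.sin(phi)  # volt-amps reactive, sloshing back each cycle
S = V * I                  # apparent power; P and Q combine as a right triangle
```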
So, after the grid operator had stopped the oscillations, the grid was 'generating' a lot more reactive power, because damping down the oscillations generally involves connecting more things together so they don't fight each other as much. It also _lowered_ the grid voltage on average, so various bits of equipment were essentially adjusting their transformer ratio with the high-voltage interconnect to try to adjust for it.
Apart from these measures, the generators on the grid are generally supposed to contribute towards voltage regulation, which helps both with damping these effects and with reducing the chance of the runaway spike that happened. But crucially, there's a difference in what they do (by regulation, not necessarily by technical capacity!). The traditional generators have active voltage control, which means they actively adjust how much reactive power they generate or absorb depending on the voltage on the lines. Renewable generators, by contrast, have a fixed ratio: they are set to generate or absorb reactive power at a certain percentage of the real power (a few percent usually), and they don't actively adjust this (they're not allowed to under the rules of the grid).
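The two regimes can be contrasted in a few lines (invented gain and ratio; hypothetical function names): the AVR-style plant responds to voltage, while the fixed-power-factor plant's Q only tracks its own real power output.

```python
def q_avr(v_pu, gain=8.0):
    """Active voltage control: absorb reactive power when voltage is high."""
    return -gain * (v_pu - 1.0)

def q_fixed_pf(p_mw, ratio=0.05):
    """Fixed power factor: Q is a set fraction of P, blind to grid voltage."""
    return ratio * p_mw

# A 3% overvoltage: the AVR plant starts absorbing, the renewable doesn't budge.
avr_response = q_avr(1.03)
renewable_response = q_fixed_pf(100.0)  # identical at 1.0 pu or 1.03 pu
```

This also makes the counterintuitive curtailment effect visible: when the fixed-power-factor plant's P drops, its Q contribution drops with it, regardless of what the voltage is doing.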
So, after the oscillation, the grid is generating a lot of reactive power and the power plants are absorbing it, but there's a lot of renewables around, which can't actively control voltage, they're just passively contributing a certain amount. Then there's a fairly rapid drop in real power output, which seems to be related to the energy market as some plants decide to curtail. This is expected, but renewables can do it pretty quickly compared to conventional plants. This means that the amount of reactive power being absorbed drops, i.e., counterintuitively a plant producing less power means the voltage rises.
In theory, there should be enough voltage control from conventional sources to deal with this, but in practice they proved not to absorb as much reactive power as they were expected to, and the report calls out one plant which seems not to have been doing any control at all; it's more or less just doing something random. So the voltage keeps rising, and, perhaps in part due to the adjustments in the transformer ratios, another plant trips off, at a lower voltage than it should have (the trip is basically for protection: the equipment can only take so much voltage before it's damaged, but there are rules about what level of voltage it should withstand and, in extreme cases, for how long). This makes the voltage rise further, and it's a fairly rapid cascade of failures from there: many plants kick offline in a matter of seconds, and only then does the frequency of the grid start to drop significantly, but by then it's already too late because there's too much demand for the supply.
The recommendations of the report basically boil down to:
- Figure out why the plants (renewable and conventional) didn't have the capabilities the grid operator thought they did (or why they were actively causing problems), and fix them.
- Fix the regulations so renewable plants are allowed to contribute to active voltage control, and incentivize them to do so.
- Adjust the market rules so that plants have to give more notice before increasing or decreasing supply in response to prices
- Improve the monitoring of the grid and add other tools to help with voltage control (including better interconnects with the rest of Europe)
> Non-confidential version of the report of the committee for the analysis of the circumstances surrounding the electricity crisis of April 28, 2025
Now I'm curious about what's in the confidential version of the report.
What is curious to me is that there's a possibility that a single plant in conjunction with natural oscillations caused enough trouble to start a doom scenario.
Oscillation -> damping -> possibly faulty equipment and possibly a lack of power plants to absorb the reactive load -> 0 voltage in two countries and some neighbouring regions
There's also the possibility that Portugal put too much demand on the market due to negative prices, but I'm not sure if it was explained how much that had an effect on the whole thing.
It doesn't look like this report really identifies the root causes...
I would like to see: "We have simulated the complete 220 and 400 kV grid of the Iberian peninsula and western Europe, and can reproduce the situation that occurred. Any one of the following changes would have prevented the issue, and we suggest implementing them all for redundancy. This simulation will be re-run every day from now on to identify future cases where similar incidents could occur."
When skimming through the report, I was reminded of the oscillation problem in the RIP routing protocol. It isn't the same thing, but it shows the complexity of the problem to anyone who thinks there is a single solution.
When the story of the 6 missing minutes broke out, I saw a map of the European grid with real-time and historic grid behaviour, showing over- and underproduction. Does someone have a link?
Why so many pages of "Recommendation: implement multi-factor authentication" and other IT security irrelevancies? Did they need to pad out the number of pages?
> In the systems with network traffic evaluation probes, no records consistent with unauthorized activity have been observed, such as lateral movements, network traces or file movements for vulnerability exploitation or privilege escalation, among others.
> However, as is common in networks and information systems in any sector, other risks have been identified, such as vulnerabilities, deficiencies or inadequate configurations of security measures, which may expose networks and systems to potential risks, for which a series of measures are proposed.
Infrastructure in general has pretty terrible security practices, so I won't bemoan someone finding a useful soapbox to remind them to shape up a bit, even if it isn't the core cause of this particular issue (and it's probably also a reaction to various rumours/speculation about a cyberattack).
It's a difficult read, so let me paste here what Claude Sonnet 4 summarised:
Based on this comprehensive report on the April 28, 2025 electrical blackout in Spain, I can summarize the key reasons why it happened:
## Primary Cause: Voltage Control Crisis
The blackout was fundamentally caused by *insufficient dynamic voltage control capacity* in the system, which led to a catastrophic "chain reaction" of overvoltages. Here's how it unfolded:
### The Perfect Storm of Contributing Factors
*1. Inadequate Voltage Control Resources*
- Only 11 thermal power plants were coupled with voltage control obligations (the lowest number recorded in 2025)
- One planned voltage control plant in the southwest failed the previous evening and wasn't replaced
- Several connected plants didn't provide expected reactive power absorption during critical moments
*2. System Oscillations Weakened the Grid*
- Multiple oscillations occurred throughout the morning (starting at 5:49 AM)
- Two major oscillations at 12:03 PM (0.6 Hz) and 12:19 PM (0.2 Hz) significantly stressed the system
- The first oscillation was traced to anomalous behavior at a specific photovoltaic installation
- Measures taken to dampen these oscillations (increased grid meshing, reduced interconnection flows) inadvertently contributed to voltage increases
*3. The Fatal Chain Reaction (Phase 2-3)*
Starting at 12:32 PM:
- Voltages began rising rapidly across the transmission network
- Generation facilities started disconnecting due to overvoltages, beginning with renewable plants
- Each disconnection removed reactive power absorption capacity and reduced line loading
- This caused further voltage increases, triggering more disconnections
- The process accelerated into an unstoppable cascade
### Key Timeline
- *12:32 PM*: Sustained voltage increases begin
- *12:32:57*: First major generation loss (355 MW at Granada)
- *12:33:16*: Second major loss (730 MW at Badajoz)
- *12:33:17*: Third major loss (550 MW at Sevilla)
- *12:33:30*: Complete system collapse to zero voltage
### Why Couldn't It Be Stopped?
Once the chain reaction began, stopping it would have required massive reactive power absorption capacity that simply wasn't available. The system's protective mechanisms (like demand disconnection) actually made the overvoltage problem worse by further reducing grid loading.
## Broader Context
The report emphasizes this was a *multifactorial event* - no single failure explains it entirely. Contributing factors included:
- Low electrical demand creating capacitive effects in the highly meshed grid
- Quarter-hourly market changes causing rapid generation adjustments
- Spain's weak interconnection with Europe (only 3% vs. 15% target)
- Complex renewable evacuation infrastructure with inadequate protection settings
The restoration process took until 7:00 AM the next day to reach 99.95% supply restoration, though it was considered exemplary by international standards.
As moderators we can only guess why people flag things, but there are other reasons why people may have flagged that comment, the foremost being that it broke the guidelines due to its inflammatory style.
On Hacker News we want to be able to discuss difficult topics involving arguments people may find counter to their assumptions, but you need to express things in a way that's persuasive rather than combative.
What I'm reading from that quote is that the issue wasn't renewables as such, but an issue of power generation reacting too quickly and too intensely to price fluctuations. "Renewables" only matter insofar as they're the sort of generation that, under the current regulatory regime, get to react to those pricing changes.
Should’ve said ‘not enough spinning mass’ and it’d be perfectly fine for the politically correct and mean the same thing. This was highlighted as a risk for years and it finally materialized.
People are having three different conversations at the same time:
– the concrete causes of this specific blackout;
– how the existing grid is not prepared to deal with the current energy mix;
– the energy policy of the past decades, from the nuclear moratorium in the 80s to the large subsidies for renewable generation of the past couple decades.
A person's strong opinion on any one of these issues will inevitably influence their opinion on the others.
It's worth pointing out that the worst aspect of the behaviour of renewables in this incident (a fixed power factor for managing reactive power) is currently mandated by the regulations in Spain, even though many of them are already equipped to do voltage control.
AnotherGoodName|8 months ago
https://en.wikipedia.org/wiki/2016_South_Australian_blackout
Completely solved with lithium based grid storage at key locations btw. This grid storage has also been massively profitable for its owners https://en.wikipedia.org/wiki/Hornsdale_Power_Reserve#Revenu...
Australia currently has 4 of the 5 largest battery storage systems under construction as a result of this profit opportunity; https://en.wikipedia.org/wiki/Battery_energy_storage_system#...
There are even articles arguing that if Europe had invested in battery storage systems like Australia, it would have avoided this. https://reneweconomy.com.au/no-batteries-no-flexibility-spai...
londons_explore|8 months ago
Actually this is typically an issue for grid batteries.
Spinning generators can easily briefly go to 10x the rated current for a second or so to smooth out big anomalies.
Stationary battery inverters can't do 10x current spikes at all - the max they can reach is more like 1.2x for a few seconds.
That means you end up needing a lot of batteries to provide the same spinning reserve as one regular power station.
tofflos|8 months ago
Cybersecurity and digital systems were not the issue, but they get thirteen pages of proposed measures. I feel this could have been left out.
Electric system operation was the issue and gets seven pages of proposed measures.
decimalenough|8 months ago
Oh wait, they already did: https://www.telegraph.co.uk/business/2025/06/18/renewable-en...
Ed: Do I need a /s tag here or something? My point was that we shouldn't worry too much about about the presentation of the report, its actual contents will be spun to suit any narrative regardless.
Nextgrid|8 months ago
robocat|8 months ago
Due to interactions between different generators, there can be instabilities causing voltage or frequency or reactive power to deviate outside of spec. A simple example might be two generators where one surges while the other drops back, then vice versa. The measurement (by the network operator) of these effects is poor for Spain - shown by the simple example that they have large oscillations that they couldn't explain.
There's path dependent healing and correction of problems by different generators, which overall leads to network stability. However the network operator here is not actually resolving cause and effect, and does not have the insight to manage their stability properly.
In this case you can see them trying a few things to inject changes that they hope will bring stability - e.g. tying many connections hoping that adding generators together into one network will resolve to a stable outcome.
Are there countries that have a better design for their electricity network control systems?
Disclaimer: I don't design electricity networks nor electricity markets. And the above is ignoring loads (loads are mostly less problematic for control than generation).
scrlk|8 months ago
The actions that were taken did not strike me as out of the ordinary.
rocqua|8 months ago
How do these oscilations start? I understand that voltage isn't necessarily equal across the network, where frequency is. But that only allows oscillating, it doesn't cause it. Is this a basis inductor capacitor oscillation? Is it the small delay in inverters between measuring voltage and regulating their output? (seems unlikely, given that renewables aren't blamed) or is there some other source of (delayed) feedback.
And why do generators cut off at a high voltage? Is it a signal of 'too much power'? Is it to protect the generator from some sort of damage?
rcxdude|8 months ago
For the oscillations, the European grid in general is large enough that the time it takes for the energy to flow (at some fraction of the speed of light!) from one side of it to the other is not negligible: it's not a case of delays at the power plant, but delays in the network itself which can cause the various natural and artificial feedback loops in the circuit to start to become unstable and oscillate. In this specific incident, there's some implication in the report that the largest oscillation was unusual and may have been generated by single plant essentially oscillating on its own, for reasons unknown.
In either case, the oscillations were not the direct cause of the blackout: they were controlled, but the steps to control them put the system into a more fragile state. This is because of reactive power. The voltage in the system is due to both the 'real power', i.e. the power generated by the plants and consumed by consumers in the grid each cycle of the 50Hz AC, but also 'reactive power', which is energy that is absorbed by the consumers and the grid itself (all the power lines and transformers) and then bounced back to the generators each cycle. This is the basic 'inductor-capacitor' oscillation. This reactive power is considered to be 'generated' by capacitance and 'consumed' by inductance, though this distinction is arbitrary.
So, after the grid operator had stopped the oscillations, the grid was 'generating' a lot more reactive power, because damping down the oscillations generally involves connecting more things together so they don't fight each other as much. It also _lowered_ the grid voltage on average, so various bits of equipment were essentially adjusting their transformer ratio with the high-voltage interconnect to try to adjust for it.
Apart from these measures, the generators on the grid are generally supposed to contribute towards voltage regulation, which helps with both damping these effects and reducing the chance of the runaway spike that happened. But crucially, there's a difference in what they do (by regulation, not necessarily by technical capacity!). The traditional generators have active voltage control, which means they actively adjust how much reactive power they generate or absorb depending on the voltage on the lines. Renewable generators, by contrast, have a fixed ratio: they are set to generate or absorb reactive power at a certain percentage of the real power (a few percent, usually), and they don't actively adjust this (they're not allowed to under the rules of the grid).
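The difference between the two regimes can be sketched in a few lines (the setpoint, gain, ratio, and limits below are made-up illustrative numbers, not values from any grid code):

```python
def fixed_ratio_q(p_real, ratio=0.05):
    """Renewable-style behaviour under the rules described above: reactive
    power is a fixed fraction of real output, regardless of grid voltage.
    The 5% ratio is an illustrative assumption."""
    return ratio * p_real

def active_q(v_pu, v_setpoint=1.0, gain=500.0, q_max=150.0):
    """Conventional-style active voltage control: absorb reactive power
    (negative Q) when voltage is above setpoint, inject when below,
    clamped to the machine's capability. Gain and limits are illustrative."""
    q = gain * (v_setpoint - v_pu)
    return max(-q_max, min(q_max, q))

# When grid voltage drifts high (1.05 per-unit), the active controller
# responds by absorbing reactive power; the fixed-ratio plant does not react.
print(active_q(1.05))        # negative: absorbing reactive power
print(fixed_ratio_q(100.0))  # same value at any grid voltage
```

The point is that fixed-ratio plants contribute a passive, voltage-blind amount of reactive power, so the burden of reacting to a voltage excursion falls entirely on the actively-controlled plants.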
So, after the oscillation, the grid is generating a lot of reactive power and the power plants are absorbing it, but there's a lot of renewables around, which can't actively control voltage; they're just passively contributing a certain amount. Then there's a fairly rapid drop in real power output, which seems to be related to the energy market as some plants decide to curtail. This is expected, but renewables can do it pretty quickly compared to conventional plants. This means that the amount of reactive power being absorbed drops; counterintuitively, a plant producing less power means the voltage rises.
In theory, there should be enough voltage control from conventional sources to deal with this, but in practice they proved not to absorb as much reactive power as they were expected to, and the report calls out one plant which seems to not be doing any control at all; it's more or less just doing something random. This means the voltage keeps rising and, perhaps in part due to the adjustments in the transformer ratios, another plant trips off, at a lower voltage than it should. (This tripping is, basically, for protection: the equipment can only take so much voltage before it's damaged, but there are rules about what level of voltage it should withstand and, in extreme cases, for how long.) This then makes the voltage rise further, and it's a fairly rapid cascade of failure from there: many plants kick offline in a matter of seconds, and only then does the frequency of the grid start to drop significantly, but by then it's already too late because there's too much demand for the supply.
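The positive feedback loop described above (trip → less reactive absorption → higher voltage → next trip) can be captured in a toy simulation. All the numbers here are illustrative assumptions; nothing is taken from the report:

```python
def simulate_cascade(v0, plants, trip_v, v_rise_per_trip, steps=20):
    """Toy model of the overvoltage chain reaction: each plant that trips
    on overvoltage stops absorbing reactive power, which pushes the voltage
    up further, which trips the next plant. Voltages are in per-unit and
    all parameters are illustrative, not real grid data."""
    v = v0
    online = plants
    for _ in range(steps):
        if online == 0 or v < trip_v:
            break                 # stable: nothing left to trip
        online -= 1               # one plant trips on overvoltage
        v += v_rise_per_trip      # less absorption -> voltage rises further
    return v, online

# Start just 1% above the (made-up) trip threshold: every plant cascades off,
# and each trip makes the overvoltage worse rather than better.
v, online = simulate_cascade(v0=1.06, plants=10, trip_v=1.05,
                             v_rise_per_trip=0.01)
print(f"final voltage: {v:.2f} pu, plants online: {online}")
```

Once the voltage crosses the first trip threshold, there is no stable point left in this loop, which matches the report's description of a chain reaction that could only have been stopped by reactive absorption capacity that wasn't there.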
The recommendations of the report basically boil down to:
- Figure out why the plants (renewable and conventional) didn't have the capabilities the grid operator thought they did (or why they were actively causing problems), and fix them.
- Fix the regulations so renewable plants are allowed to contribute to active voltage control, and incentivize them to do so.
- Adjust the market rules so that plants have to give more notice before increasing or decreasing supply in response to prices.
- Improve the monitoring of the grid and add other tools to help with voltage control (including better interconnects with the rest of Europe)
unknown|8 months ago
[deleted]
decimalenough|8 months ago
Now I'm curious about what's in the confidential version of the report.
londons_explore|8 months ago
ranguna|8 months ago
Oscillation -> damping -> possibly faulty equipment and possibly a lack of power plants to absorb the reactive load -> 0 voltage in two countries and some neighbouring regions
There's also the possibility that Portugal put too much demand on the market due to negative prices, but I'm not sure it was explained how much of an effect that had on the whole thing.
londons_explore|8 months ago
I would like to see: "We have simulated the complete 220 and 400 kV grid of the Iberian peninsula and western Europe, and can reproduce the situation that occurred. Any one of the following changes would have prevented the issue, and we suggest implementing them all for redundancy. This simulation will be re-run every day from now on to identify future cases where similar incidents could occur."
baq|8 months ago
unknown|8 months ago
[deleted]
JanneVee|8 months ago
hyperman1|8 months ago
gred|8 months ago
diggan|8 months ago
> However, as is common in networks and information systems in any sector, other risks have been identified, such as vulnerabilities, deficiencies or inadequate configurations of security measures, which may expose networks and systems to potential risks, for which a series of measures are proposed.
rcxdude|8 months ago
nraynaud|8 months ago
icar|8 months ago
Based on this comprehensive report on the April 28, 2025 electrical blackout in Spain, I can summarize the key reasons why it happened:
## Primary Cause: Voltage Control Crisis
The blackout was fundamentally caused by *insufficient dynamic voltage control capacity* in the system, which led to a catastrophic "chain reaction" of overvoltages. Here's how it unfolded:
### The Perfect Storm of Contributing Factors
*1. Inadequate Voltage Control Resources*
- Only 11 thermal power plants were coupled with voltage control obligations (the lowest number recorded in 2025)
- One planned voltage control plant in the southwest failed the previous evening and wasn't replaced
- Several connected plants didn't provide expected reactive power absorption during critical moments

*2. System Oscillations Weakened the Grid*
- Multiple oscillations occurred throughout the morning (starting at 5:49 AM)
- Two major oscillations at 12:03 PM (0.6 Hz) and 12:19 PM (0.2 Hz) significantly stressed the system
- The first oscillation was traced to anomalous behavior at a specific photovoltaic installation
- Measures taken to dampen these oscillations (increased grid meshing, reduced interconnection flows) inadvertently contributed to voltage increases

*3. The Fatal Chain Reaction (Phase 2-3)*
Starting at 12:32 PM:
- Voltages began rising rapidly across the transmission network
- Generation facilities started disconnecting due to overvoltages, beginning with renewable plants
- Each disconnection removed reactive power absorption capacity and reduced line loading
- This caused further voltage increases, triggering more disconnections
- The process accelerated into an unstoppable cascade

### Key Timeline
- *12:32 PM*: Sustained voltage increases begin
- *12:32:57*: First major generation loss (355 MW at Granada)
- *12:33:16*: Second major loss (730 MW at Badajoz)
- *12:33:17*: Third major loss (550 MW at Sevilla)
- *12:33:30*: Complete system collapse to zero voltage
### Why Couldn't It Be Stopped?
Once the chain reaction began, stopping it would have required massive reactive power absorption capacity that simply wasn't available. The system's protective mechanisms (like demand disconnection) actually made the overvoltage problem worse by further reducing grid loading.
## Broader Context
The report emphasizes this was a *multifactorial event* - no single failure explains it entirely. Contributing factors included:
- Low electrical demand creating capacitive effects in the highly meshed grid
- Quarter-hourly market changes causing rapid generation adjustments
- Spain's weak interconnection with Europe (only 3% vs. 15% target)
- Complex renewable evacuation infrastructure with inadequate protection settings
The restoration process took until 7:00 AM the next day to reach 99.95% supply restoration, though it was considered exemplary by international standards.
fuoqi|8 months ago
tomhow|8 months ago
As moderators we can only guess why people flag things, but there are other reasons why people may have flagged that comment, the foremost being that it broke the guidelines due to its inflammatory style.
On Hacker News we want to be able to discuss difficult topics involving arguments people may find counter to their assumptions, but you need to express things in a way that's persuasive rather than combative.
https://news.ycombinator.com/newsguidelines.html
pdpi|8 months ago
matsemann|8 months ago
Your other comment probably got flagged because it started with a huge straw man and had multiple unwarranted jabs in it.
baq|8 months ago
felipeerias|8 months ago
- the concrete causes of this specific blackout;
- how the existing grid is not prepared to deal with the current energy mix;
- the energy policy of the past decades, from the nuclear moratorium in the 80s to the large subsidies for renewable generation of the past couple decades.
A person's strong opinion on any one of these issues will inevitably influence their opinion on the others.
rcxdude|8 months ago
wavefunction|8 months ago
>the most plausible explanation is that it is due to market reasons (prices)
Seems to be market conditions or manipulations or inefficiencies in the market.