In the aftermath of Cyclone Vaianu, a predictable pattern emerged. Warnings were issued, preparations were made and states of emergency were declared in parts of the country. In many places, the worst-case scenarios did not come about. Almost immediately, the question followed: was it an overreaction? Or, more bluntly, did we all just cancel plans for nothing?
Much has been written attempting to answer this question, and it’s an understandable one. I think it is also, in many cases, the wrong question to ask.
Weather forecasting is not prediction in the way most people intuitively understand it. It does not tell us what will happen. It describes what might happen, based on evolving probabilities, which are then translated into decisions made under time pressure and with incomplete information.
A forecast of extreme rainfall is not a guarantee of flooding. It is an indication that the conditions exist for flooding to occur, potentially with significant consequences. Acting on that information is not about being right or wrong in hindsight. It is about managing risk in advance.
Behind every forecast sits a set of numerical models running multiple scenarios at once. Rather than producing a single outcome, they generate a range of possibilities – often referred to as an ensemble – each slightly different depending on initial conditions. When forecasters talk about a high likelihood of heavy rainfall, they are drawing on how many of those model runs converge on a similar outcome.
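The counting logic behind an ensemble probability can be sketched in a few lines of code. Everything below is invented for illustration – real ensembles perturb the physics and initial conditions of full atmospheric models, not a single Gaussian number – but the final step is genuinely this simple: the stated probability is the fraction of runs that cross a threshold.

```python
import random

random.seed(42)  # fixed seed so the toy example is repeatable

def toy_ensemble_rain_probability(n_runs=50, base_rain=80.0,
                                  spread=30.0, threshold=100.0):
    """Toy stand-in for an ensemble forecast: each 'run' perturbs the
    starting conditions, and the forecast probability is the fraction
    of runs whose simulated rainfall meets or exceeds the threshold."""
    exceedances = 0
    for _ in range(n_runs):
        # A crude perturbation; real models vary far more than one number.
        rainfall_mm = random.gauss(base_rain, spread)
        if rainfall_mm >= threshold:
            exceedances += 1
    return exceedances / n_runs

prob = toy_ensemble_rain_probability()
print(f"{prob:.0%} of ensemble members exceed the flood threshold")
```

Note what the sketch does not say: even when most runs cross the threshold, some do not – a "70% chance" carries the 30% of runs where nothing much happens, and neither outcome proves the count was wrong.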
The difficulty is that this probabilistic reasoning rarely survives intact once it reaches the public. A “70% chance” is easily heard as near-certainty – right up until it doesn’t happen, at which point it becomes evidence that it never was.
When the most severe impacts do not eventuate, warnings are often judged against what actually happened, rather than what could reasonably have happened. The absence of disaster becomes evidence that the risk was overstated. It is a comforting conclusion. It also allows us to believe we can discount the next warning a little more easily. Part of the issue lies in how uncertainty is translated once it moves beyond technical circles.
Terms like “atmospheric river” or “extreme weather event” have specific meanings within meteorology (unlike the more informal “weather bomb,” which sounds like something designed by a marketing team). But in public conversation, they can begin to sound less like descriptions of conditions and more like descriptions of outcomes.
The distinction matters. Language intended to convey heightened risk can easily be interpreted as a prediction of severity. When that severity is not experienced directly, it can feel as though the warning itself was exaggerated, rather than recognising that the underlying message was probabilistic all along.
A recent report in Stuff on a potential El Niño shift for our winter ahead illustrates how easily this translation occurs in practice.
The language is careful but layered: a “formidable” event, a “greater than 60% probability” of becoming strong, and repeated references to what “could” happen in the months ahead. None of this is incorrect. It reflects the underlying science. But read as a whole, it carries a different weight.
“Formidable” does not sound like a probability. It sounds like something you might definitely want to cancel a weekend for. A “60% chance” begins to feel less like uncertainty and more like something already in motion. And the accumulation of “coulds” creates a general sense that something significant is expected, even if the specifics remain unclear. In any case, what is someone expected to do with a “could”?
This is not a failure of science. It is a function of translation. Individually, the elements attempt to describe uncertainty. Together, they begin to imply an outcome.
In any probabilistic system, there will be times when high-risk scenarios do not fully unfold. That does not invalidate the assessment. It is an inherent feature of the system. If every severe warning resulted in severe outcomes, it would not be risk management; it would be certainty – and we do not live in a world which offers that, however much we sometimes expect to.
There is also an asymmetry in how we judge these decisions. Over-preparation is visible and often inconvenient: cancelled plans, empty supermarket shelves, a sense that perhaps it was all a bit much. Under-preparation, when the worst does not occur, is invisible. It leaves no trace. It is difficult to credit a decision for something that never happened.
This dynamic is not unique to weather. Economic forecasting offers a close parallel. Predictions about inflation, interest rates or the likelihood of a recession are often interpreted as definitive, even though they are based on incomplete and evolving data. When those forecasts shift, the narrative becomes one of error or inconsistency. But the underlying reality is that the system itself is uncertain.
The issue is not that the data is wrong. Again, it is that probability is being treated as prediction, and then judged accordingly.
More broadly, we are uncomfortable with uncertainty. Faced with probabilistic information, we tend to resolve it into something more definite, if only to make it easier to live with. Either the risk was real, or it was not. Either the response was justified, or it was excessive. The more difficult middle ground – that the risk was credible, the response proportionate and the outcome ultimately less severe than it might have been – is harder to hold. It is also where most real-world decision-making takes place.
If we continue to treat risk warnings as overreactions whenever the worst does not occur, we create an incentive for decision-makers to hesitate the next time. And that is precisely when the consequences are likely to be most serious. The challenge is not simply to improve forecasts or refine communication. It is to develop a more mature relationship with uncertainty itself, one that recognises that the absence of disaster is not proof that the risk was never there.

