Why demand forecasting is a bit like ‘tracking the storm’

Getting your forecasts right is a complicated business and a continuous endeavour, writes Tom Bacon

The fall is a wonderful time of year here in Denver, but we could get a snowstorm as early as October. So, we pay close attention to weather forecasts, even though weather forecasters here are known to get it wrong (of ten snowstorms forecast over one recent stretch, only two materialised). There are two reasons forecasts are often hyped up:

1. Meteorologists can only correctly predict storms if nothing else changes during the time it takes for the storm to move into the region – which simply doesn’t happen. There is considerable uncertainty (statistical variance) around any prediction.
2. The prognosticators are biased toward warning us about a possible storm rather than forecasting sunny skies and being wrong.

Demand forecasting has some similarities to forecasting the weather. Like meteorologists, we use sophisticated statistical models to predict demand – ceteris paribus, as if nothing else changes. And then we update those forecasts regularly as new information arrives. When the most recent data says demand is strong, we raise the forecast, only to bring it back down if the positive momentum stalls.

Our models are designed to be robust. We set parameters to capture trends more or less quickly, but the default normally requires relatively consistent demand strength over time before the longer-term forecast changes. Still, ceteris paribus rarely prevails, in weather forecasting or in market demand, so the forecasts change continuously.

And, in demand forecasting, we predict much further out than meteorologists dare. For long-term weather forecasting there is the almanac, which predicts how cold next winter will be; it does not, however, venture to forecast on which day next winter it will snow over six inches. With revenue management, on the other hand, we attempt quite precise forecasting – by flight, by date, by time of day, by fare level – up to 360 days in advance.

Flight 102, March 4, 2015, leaving at 6 pm
Forecast Demand by Fare Level

$50 – $100 …… 72 passengers
$101 – $120 …… 34 passengers
$121 – $150 …… 21 passengers
$151 – $180 …… 10 passengers
$181 – $210 …… 6 passengers
$211 – $275 …… 5 passengers
$276 – $400 …… 2 passengers

This seems ridiculously precise. Such precision is necessary, nonetheless, to restrict the <$100 fares even ten months out: if there is a surge in low fare bookings, we do not want to fill the plane when higher fare demand is expected later on. Of course, we don't rely on our initial, perhaps fairly crude, forecast for long; we adjust it based on bookings versus forecast – potentially daily, up to 359 times before the flight departs.
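One way to picture this continual re-forecasting is simple exponential smoothing. The article doesn't specify the algorithm a revenue management system actually uses, so the update rule, the `alpha` weight, and the numbers below are illustrative assumptions only:

```python
# Illustrative sketch only: a simple exponential-smoothing update, standing
# in for whatever proprietary model a real RM system uses. The alpha weight
# and all booking numbers here are hypothetical.

def update_forecast(forecast: float, observed: float, alpha: float = 0.1) -> float:
    """Blend the latest observed booking pace into the running forecast.

    A small alpha largely ignores one-day blips; a large alpha lets
    recent observations move the forecast quickly.
    """
    return (1 - alpha) * forecast + alpha * observed

# The $50-$100 band is forecast at 72 passengers, but today's pace implies 90.
f = update_forecast(72.0, 90.0)  # one strong day barely moves it: 73.8
# If the surge persists day after day, the forecast is gradually pulled up.
for _ in range(30):
    f = update_forecast(f, 90.0)
```

The point of the small default weight is exactly the robustness described above: a single surge of low-fare bookings nudges the forecast, while only sustained strength re-anchors it.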

This helps explain how fares can change so suddenly. The $100 fare could be offered one day and then, in response to strong demand, closed out, driving the published fare up to $120 – and then brought back down as the velocity of $120 fare demand softens.

There is nothing wrong with this process – airlines should use whatever information they have to construct the best forecast at each point in time. 

Along with any demand forecast, the system also calculates the statistical variance in demand – and thus the relative uncertainty – around that forecast.

Just as the meteorologist attaches a probability to his forecast (30% chance of rain), the forecast algorithm calculates a probability of demand for each fare level.

In fact, just as weather forecasters place more emphasis on extremes (the probability of snow), demand forecasters do as well. Is there historical data that supports much higher than average premium demand for a particular flight? Is there a high probability of such premium demand?

In the above example, the forecast for passengers who pay more than $275 is two passengers. If this forecast is robust – with two such passengers consistently flying on that trip – then two seats should be held for this high fare, close-in demand. If, on the other hand, the two-passenger forecast carries considerable uncertainty – it could be four passengers or it could be zero – the system will favour the lower fare demand that wants those seats: 'a bird in the hand'. How many seats to set aside for the high fare demand depends on the forecast demand, the uncertainty around it, and the fare difference (a larger fare difference makes holding out for that demand more valuable).
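This trade-off can be sketched with something in the spirit of Littlewood's rule from the revenue-management literature. The article doesn't name a method, so the normally distributed demand model, the fares, and the standard deviations below are all assumptions for illustration:

```python
# Hedged sketch: protect high-fare seats while the expected revenue from the
# next protected seat still beats selling it now at the low fare. The normal
# demand model and every number here are illustrative assumptions.
from math import erf, sqrt

def prob_demand_at_least(k: int, mean: float, stdev: float) -> float:
    """P(high-fare demand >= k), assuming normally distributed demand."""
    z = (k - mean) / stdev
    return 0.5 * (1.0 - erf(z / sqrt(2.0)))

def seats_to_protect(high_fare: float, low_fare: float,
                     mean: float, stdev: float) -> int:
    """Protect another seat while its expected high-fare revenue still
    beats the certain low fare on offer today ('a bird in the hand')."""
    k = 0
    while prob_demand_at_least(k + 1, mean, stdev) * high_fare > low_fare:
        k += 1
    return k

# A robust two-passenger forecast and a wide fare gap: hold both seats.
seats_to_protect(high_fare=300, low_fare=100, mean=2.0, stdev=0.3)   # -> 2
# A narrow fare gap: protect less...
seats_to_protect(high_fare=300, low_fare=250, mean=2.0, stdev=0.3)   # -> 1
# ...and with an uncertain forecast (could be zero, could be four),
# take the bird in the hand and protect nothing.
seats_to_protect(high_fare=300, low_fare=250, mean=2.0, stdev=2.0)   # -> 0
```

Note how all three inputs named in the paragraph – the forecast, its uncertainty, and the fare difference – each move the answer.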

Revenue managers can set a parameter to adjust the forecast more or less quickly based on observed demand. If the analyst wants the forecast to adjust faster – because, for example, he is aware of a recent structural change in the marketplace – he can set the parameter accordingly; with such a setting, three to five observations of strong demand could completely change the forecasted outlook. By default, however, the system will largely ignore sudden blips and only slowly incorporate them into the forecast as consistent strength is observed.
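To make that concrete, here is a toy comparison of a slow (default) versus fast (analyst-set) responsiveness parameter. The smoothing form, the alpha settings, and the demand figures are all assumptions, not any vendor's actual algorithm:

```python
# Hypothetical: the same five strong observations, absorbed under a slow
# default parameter versus a fast analyst-set one.

def run_forecast(start: float, observations, alpha: float) -> float:
    forecast = start
    for obs in observations:
        # Each day, blend the new observation into the running forecast.
        forecast = (1 - alpha) * forecast + alpha * obs
    return forecast

strong_week = [110, 115, 112, 118, 120]   # five strong demand observations

slow = run_forecast(72.0, strong_week, alpha=0.05)  # ~81.8: blips mostly ignored
fast = run_forecast(72.0, strong_week, alpha=0.6)   # ~117.9: outlook transformed
```

Under the fast setting, five observations are enough to move the forecast from 72 nearly all the way to the new level – the "complete change in outlook" an analyst would want after a structural shift.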

Perhaps if you live in southern California, you can find highly accurate meteorologists (it’s another sunny day here) but many of the rest of us stay tuned to the weather channel for regular updates. Demand forecasting for an airline is similarly a continuous endeavour. 

Tom Bacon is a 25-year airline veteran and industry consultant in revenue optimisation. 

Questions? Contact Tom at tom.bacon@yahoo.com or visit his website http://makeairlineprofitssoar.wordpress.com/
