4 metrics for measuring airline revenue performance

Complicated revenue management drives unsatisfactory results, but it doesn’t have to be that way, writes Tom Bacon

Let me start with a quote from Scott Kirby, then President of United Airlines, speaking on the airline’s third-quarter earnings call in October 2016.

“In July as United was seeing stronger demand and bookings were coming in strong at the domestic entity, [revenue management] tried to boost the demand [forecast]… And so, we took a set of markets and didn’t do anything, and we took another set of markets and increased local demand by a 100% in the forecast, another set of markets and increased both the local and the flow demand by 20%. Those were massive, massive changes in demand, and you would expect that to lead to massive changes in output. And at the end of the experiment, those three sets of entities, those three different experimental groups were 0.2% different in outcomes. What that tells you is - and what that really is, we have a big complicated system.”

Having a big, complicated revenue management (RM) system is true for most airlines. Indeed, RM systems are often difficult for analysts to understand and properly maintain.

But Kirby’s example is counterintuitive – changing forecast demand should drive different inventory allocations and thus different revenue results. More specifically: higher local demand should lead the system to set aside fewer seats for lower-yield flow passengers; higher local and flow demand should drive the system to allocate fewer seats to low-fare passengers in either category. If demand really were higher than history suggested, the third scenario should produce the highest revenue, since it keeps sufficient seats available for higher-fare demand. If underlying demand hadn’t actually changed as much, on the other hand, saving seats for higher-fare passengers would leave more seats empty, and the base case should produce the most revenue. For all three strategies to produce roughly the same revenue means the system is operating in a way that is hard to understand.

Most airlines adjust demand at a market level just as Kirby describes in response to observed booking trends. But most airlines do not conduct such tests on subsets of the system and they do not review outcomes in a disciplined way. Thus, most airlines do not really know if their RM systems are working properly; most do not know if such forecast changes drive additional revenue. Of course, most airline revenue management departments would resist a conclusion that they, the analytical experts, did not understand their systems.


In general, RM systems use demand forecasts as inputs and then they rely on highly sophisticated optimisation models (essentially ‘black boxes’) to allocate seats across both passenger itineraries and fare levels. Since the models are highly complex, RM analysts are coached to focus on the inputs – the forecasts – and to let the sophistication kick in thereafter.
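To make the forecast-to-allocation link concrete, here is a minimal sketch of Littlewood’s rule, the simplest two-class form of the optimisation such systems perform. The fares and demand figures are invented for illustration; real RM systems use far more elaborate network-level models:

```python
from statistics import NormalDist

def littlewood_protection(high_fare: float, low_fare: float,
                          mean_high_demand: float, sd_high_demand: float) -> int:
    """Seats to protect for the high fare class under Littlewood's rule.

    Keep protecting a seat as long as its expected high-fare value beats
    the low fare:  high_fare * P(high-fare demand > protection) >= low_fare.
    High-fare demand is modelled here as normally distributed.
    """
    critical_prob = 1 - low_fare / high_fare
    protection = NormalDist(mean_high_demand, sd_high_demand).inv_cdf(critical_prob)
    return max(0, round(protection))

# Illustrative fares: high fare 400, low fare 150.
base = littlewood_protection(400, 150, mean_high_demand=50, sd_high_demand=15)
boosted = littlewood_protection(400, 150, mean_high_demand=100, sd_high_demand=30)
```

Raising the mean demand forecast directly raises the protection level, closing low-fare buckets earlier – exactly the mechanical response that Kirby’s experiment was designed to detect.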

Kirby’s example, however, suggests this isn’t enough. Greater understanding can only be achieved by greater focus on actual revenue performance.

Kirby’s example reminds us that actual revenue performance depends as much on inventory controls – inventory-related metrics like bucket closings, empty seats and sell-up – as on forecast accuracy. In his example, the inventory controls quite possibly did not change as much as the demand forecasts did. Contrary to the conventional hands-off approach, analysts need to understand how the system adjusts inventory in response to demand.

Reports need to be built around the allocation process and results. Sample metrics include:

1. Bucket closings. Which fare buckets were closed historically? Did the closings result in empty seats or in additional sell-up? Did the plane still go out full? What do closings look like going forward? How does the change in closings relate to the change in the demand forecast?

2. Fare mix on full flights. Airlines are constantly frustrated by full flights that lose money – the prevailing fare just isn’t high enough, or not enough passengers are paying full fare. Potentially, the airline isn’t tapping into underlying high-fare demand because of a downward spiral in availability.

3. Local versus flow. Although O&D revenue management has been shown to drive incremental revenue through better allocation of seats on connecting flights, it adds considerable complexity to the RM process. O&D RM will not allocate more seats to the local market, no matter how much local demand increases – up 10%, up 20%, up 50% – if the local fares are too low. Likewise, empty feeder flights will attract more connecting availability relative to local demand. Although these allocations are ‘optimal’ given the local fares or the lack of demand on the feeders, they can have a spiralling-down effect on performance: as less local demand is accommodated, the system sets aside fewer and fewer seats for local passengers, potentially making overall network profitability harder to achieve.

4. Bottom-line revenue. Revenue management accountability means bottom-line revenue – not just forecast accuracy, not just competitive fares, not just reduced denied boardings. Rather than separately managing pricing, forecasting and inventory controls – three different functions at many airlines – RM must be accountable for total revenue. All functions need to work together, cognizant of their role in overall revenue.
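Several of these metrics can be pulled straight from post-departure booking data. The sketch below computes load factor, average fare and fare mix for two invented flights – the flight numbers, buckets, fares and closings are purely illustrative:

```python
from collections import Counter

# Toy post-departure records; all fields and values are invented.
flights = [
    {"flight": "AB100", "capacity": 150, "closed_buckets": ["Q", "L"],
     "bookings": [("Y", 400)] * 20 + [("M", 150)] * 130},
    {"flight": "AB200", "capacity": 150, "closed_buckets": [],
     "bookings": [("M", 150)] * 150},
]

def summarise(flight):
    """Inventory-focused metrics for one departed flight."""
    boarded = len(flight["bookings"])
    revenue = sum(fare for _, fare in flight["bookings"])
    return {
        "flight": flight["flight"],
        "load_factor": boarded / flight["capacity"],
        "average_fare": revenue / boarded,
        "fare_mix": Counter(bucket for bucket, _ in flight["bookings"]),
        "closed_buckets": flight["closed_buckets"],
    }

summaries = [summarise(f) for f in flights]
for s in summaries:
    # Both flights go out full, but the second earns a far lower average
    # fare -- the "full flight that loses money" pattern of metric 2.
    print(s["flight"], f"LF={s['load_factor']:.0%}",
          f"avg fare={s['average_fare']:.0f}", dict(s["fare_mix"]))
```

The point is not the arithmetic but the habit: putting closings, load factor and fare mix side by side in routine reports makes it visible whether inventory controls actually responded to a forecast change.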

Tom Bacon is a 25-year airline veteran and now an industry consultant in revenue optimisation.
