
AI for revenue growth: using ML to drive more valuable pricing

How a national fitness brand used machine learning to optimize pricing and grow revenue by 11%

Daniel Burke

Pricing optimization is a powerful lever for revenue growth, yet too many companies consign it to the too-hard basket.

This is because traditional pricing optimization methods can be complex to implement and limited in their ability to capture the full range of factors that influence price.

Machine learning (ML) is well-suited to pricing optimization problems, both for its ability to handle complex feature interactions and for its ability to generalize to new situations. Moreover, recent advances in managed services have put these ML solutions within reach of virtually any organization.

In this anonymized example we explore how a company with no data science expertise was able to use managed ML services to implement an ML-powered pricing strategy that performed 2x above traditional approaches and resulted in estimated revenue growth of 11%.

FitCo is a premium fitness brand, based in Los Angeles, that operates a portfolio of over 600 gym and fitness center locations across the United States.

Having grown rapidly by acquisition over the past several years, management had now turned its attention to boosting organic revenue growth, which had been stubbornly flat on a per-studio basis.

FitCo had identified FitClass — its suite of specialty fitness classes — as a prime source of organic growth. Specifically, it had identified pricing of these classes as a major potential area of improvement.

FitClasses are a popular offering across FitCo’s brands. They are premium experiences catering to niche fitness demand and sold on a pay-per-class basis on top of standard memberships.

Whilst FitCo had ensured a consistent user experience across its portfolio, local operators were still able to set schedules and prices for FitClasses in their studios with nearly total independence. As a result, prices varied widely between classes and locations.

Whilst FitCo understood that some of this variance reflected local conditions, they also suspected there was considerable room for improvement in the way prices were set across its portfolio.

FitCo had undertaken a pricing exercise two years earlier in which prior management had opted to centralize FitClass pricing and institute a blanket price increase of 10% to 20%.

This blunt approach had not been successful. It failed to take into account the price elasticity of demand of FitCo's customer base across the wide range of classes and locations, and the price increases actually produced an overall revenue decline of 2% as the resulting drop in demand in many classes outstripped the price gains. FitCo was forced to unwind the changes a couple of months later.

Though painful, that experience had at least yielded a solid dataset on the price elasticity of FitCo's FitClass customer base. FitCo could chart how class demand changed in response to price increases for each of three utilization bands: high (>85%), medium (50–85%) and low (<50%). Modeling the various impacts of price increases on demand, FitCo estimated a full revenue potential of 15% from more efficient pricing.

To capture that higher revenue potential, FitCo now needed only to predict future demand for classes, whether current or new, accurately enough to model the revenue impact of price changes. This would enable FitCo to determine whether, and by how much, each class could profitably sustain a price increase.
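To make that trade-off concrete, here is a minimal sketch of the underlying arithmetic; the price and demand figures are hypothetical placeholders, not FitCo's measured data:

```python
# Hypothetical illustration of the price/demand trade-off; the figures
# below are placeholders, not FitCo's measured elasticity data.

def revenue_change(price_increase: float, demand_decline: float) -> float:
    """Fractional revenue change after a price rise and the demand drop it causes."""
    return (1 + price_increase) * (1 - demand_decline) - 1

# A resilient (inelastic) class: +15% price, only -5% demand -> roughly +9% revenue
print(f"{revenue_change(0.15, 0.05):+.1%}")

# A price-sensitive (elastic) class: +15% price, -25% demand -> roughly -14% revenue
print(f"{revenue_change(0.15, 0.25):+.1%}")
```

The same price increase helps or hurts depending entirely on the demand response, which is why accurate demand prediction is the crux of the problem.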

FitCo had initially attempted this using traditional rules-based methods, effectively a series of if/then statements that assigned bands based on certain conditions. With extensive trial and error they had managed to write a function estimated to generate about 5% in additional revenue. This wasn't bad, but the approach had two primary limitations: (1) it failed to fully account for the interrelationships among the wide variety of factors relevant to each class, placing too many classes in the wrong band and causing a decline in usage; and (2) it failed to generalize to new class schedules or details at any given location, because it couldn't adequately account for new combinations of factors or scenarios.
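For illustration, that kind of rule function might have looked something like the sketch below; the conditions, thresholds and names are hypothetical, not FitCo's actual rules:

```python
# Hypothetical sketch of a rules-based demand classifier; the conditions
# and thresholds are illustrative only, not FitCo's actual rules.

def predict_band(class_type: str, day_of_week: str, time_of_day: float,
                 is_public_holiday: bool) -> str:
    """Assign a demand band via hand-tuned if/then rules."""
    if is_public_holiday:
        return "low"
    if class_type == "spin" and day_of_week in ("Monday", "Tuesday"):
        return "high"
    if 17.0 <= time_of_day <= 19.0:  # evening peak window
        return "high"
    if day_of_week in ("Saturday", "Sunday"):
        return "medium"
    return "low"
```

Rules like these are easy to read but brittle: every new factor multiplies the number of combinations the author has to anticipate by hand.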

Looking for alternative approaches, FitCo turned to ML.

ML is well-suited to these types of classification problems precisely because of its ability to process a wide range of factors and generalize to unseen or new situations.

However, like most organizations of its size and in its industry, FitCo did not have ML capabilities or a team of data scientists on hand to design, build and deploy an ML solution. This had previously been a major barrier to ML adoption. Today though, the availability of ML managed services has largely democratized access to ML capabilities.

For their solution, FitCo chose Amazon SageMaker, which featured, among other things, an AutoML capability called AutoPilot that could take a simple tabular dataset and automate the process of building an ML workload around it.

AutoPilot takes a simple tabular dataset and builds an ML workload around it

With AutoPilot, FitCo no longer needed a team of data scientists to get the benefit of ML. Instead, they were able to drive the initiative with a three-person project team: the CFO as business owner, the CTO as technology owner, and a single back-end developer responsible for building and integrating the solution.

Training data

To build their training dataset, FitCo gathered historical utilization data for each of their classes over the past two years.

Utilization for each class was expressed as a percentage of total places filled. FitCo converted this column into ‘high’, ‘medium’ and ‘low’ labels based on the utilization bands above and named the resulting column ‘target’. This would be the column predicted by the ML model.

They then combined this data with a set of internal features they thought likely to indicate utilization, and added a range of external data they expected to be relevant. The result was a dataset of 800K instances containing the following features:

  • type of class (categorical)
  • location (categorical)
  • day of week (categorical)
  • time of day (numerical)
  • instructor (categorical)
  • studio brand (categorical)
  • is a public holiday (binary)
  • is a school holiday (binary)
  • external temperature (numerical)
  • target (categorical)

FitCo did some basic feature engineering to better organize and format this dataset, converted it to CSV format and saved it to an S3 bucket. They now had a dataset with which they could train their ML model.
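A minimal sketch of that preparation step, assuming a pandas DataFrame, a hypothetical utilization column expressed as a fraction, and hypothetical file and bucket names:

```python
import boto3
import pandas as pd

df = pd.read_parquet("fitclass_history.parquet")  # hypothetical source file

# Convert raw utilization (fraction of places filled) into the three bands
def to_band(utilization: float) -> str:
    if utilization > 0.85:
        return "high"
    if utilization >= 0.50:
        return "medium"
    return "low"

df["target"] = df["utilization"].apply(to_band)
df = df.drop(columns=["utilization"])  # keep only the features plus the label

# Save as CSV and upload to S3 for training (bucket/key are hypothetical)
df.to_csv("fitclass_train.csv", index=False)
boto3.client("s3").upload_file("fitclass_train.csv",
                               "fitco-fitclass-data", "train/fitclass.csv")
```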

Amazon SageMaker AutoPilot

FitCo chose AutoPilot for its ability to simplify and streamline the core components of the machine learning process. AutoPilot automates the process of exploring data, engineering features, testing different algorithms, and selecting the best model. All it requires is that you provide a tabular dataset.

In addition, it automatically surfaces the code base it used, adding visibility and reproducibility to the process. This was an important differentiator for FitCo's CTO because it gave FitCo the opportunity to explore, and learn from, the steps that had been taken to generate the model, as well as a code base it could modify and optimize in the future.

To start the AutoPilot process, FitCo used the no-code interface available within Amazon SageMaker Studio. This required three key steps:

  1. Name their experiment
  2. Point SageMaker to the S3 bucket where their training file was located, and
  3. Define the variable to be predicted
AutoPilot’s no-code interface puts ML within reach of any organization
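For teams preferring code over the console, the same experiment could also be launched with the SageMaker Python SDK. A minimal sketch, using hypothetical bucket, path and job names:

```python
# Minimal sketch of launching an AutoPilot job via the SageMaker Python SDK;
# the bucket, path and job names are hypothetical.
import sagemaker
from sagemaker.automl.automl import AutoML

session = sagemaker.Session()
role = sagemaker.get_execution_role()  # assumes a SageMaker Studio/notebook role

automl = AutoML(
    role=role,
    target_attribute_name="target",  # the column to be predicted
    sagemaker_session=session,
    max_candidates=250,              # cap the number of candidate trials
)

automl.fit(inputs="s3://fitco-fitclass-data/train/fitclass.csv",
           job_name="fitclass-automl")
```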

Once these details were entered, they simply hit Create Experiment and FitCo’s ML model build was underway, running a range of trials to determine the best performing ML approach.

Complete list of trials run by AutoPilot to determine best performing model

This process took about an hour to complete. Once it had concluded, FitCo could simply sort the list of trials to find the best performing model. They could generate the notebook containing that model's code, and deploy the model to a SageMaker endpoint to further test its inferences (predictions) on new data, or even put it into production.
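Deploying the best candidate and requesting a prediction from the endpoint might look like the sketch below; the job name, instance type and sample row are illustrative, carrying on from the SDK sketch above:

```python
from sagemaker.automl.automl import AutoML
from sagemaker.serializers import CSVSerializer

# Re-attach to the completed AutoPilot job (hypothetical job name)
automl = AutoML.attach(auto_ml_job_name="fitclass-automl")

# Deploy the best candidate to a real-time endpoint
predictor = automl.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.large",
    serializer=CSVSerializer(),
)

# One unlabeled row in the training CSV's column order (values illustrative)
row = "yoga,studio-042,Tuesday,18.5,instr-117,BrandA,0,0,21.0"
print(predictor.predict(row))  # expected: 'high', 'medium' or 'low'
```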

When modeled against FitCo’s test dataset, the ML model outperformed their rules-based approach by 2x, increasing revenue by an estimated 11% overall.

The ML model outperformed rules-based approaches by 2x

This performance improvement stemmed primarily from the higher precision of the ML predictions versus the rules-based approach. In a multi-class classification problem, the challenge is not only to predict the right class, but to minimize the cost of inaccuracies. For example, it is less costly to inaccurately predict a low demand class as ‘medium’ than as ‘high’; given FitCo's price elasticity profile, the cost of the latter error, in the form of lost demand, was approximately 4x greater.

A comparison of performance between the traditional and ML approaches, shown below, illustrates that the traditional approach actually outperformed ML in predicting high demand classes. The issue is that it was unable to do so while also accurately predicting medium and low demand classes; moreover, it made costly errors in predicting low demand classes as high. The ML model mapped the shape of the data more holistically, predicting all three bands with consistent accuracy.

Rules-based approaches incorrectly predicted low classes as high in 25% of cases

These errors came at a considerable cost to revenue for the rules-based approach, a cost the ML model could avoid. For example, inaccurately predicting a low demand class as high demand resulted in a demand decline of nearly 50%, far outweighing the 30% higher price paid by the remaining members (0.5 × 1.3 ≈ 0.65, a net revenue decline of roughly 35%).
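One way to quantify such asymmetric costs is to weight the confusion matrix by the revenue impact of each actual/predicted pair. The sketch below uses hypothetical impact figures, not FitCo's actual matrix:

```python
# Cost-weighted evaluation sketch; the impact values are hypothetical.
import numpy as np
from sklearn.metrics import confusion_matrix

labels = ["low", "medium", "high"]

# Per-instance revenue impact (fraction) for each (actual, predicted) pair,
# e.g. a low demand class priced as high loses roughly 35% of its revenue.
impact = np.array([
    [ 0.05,  0.00, -0.35],   # actual low
    [ 0.00,  0.05, -0.15],   # actual medium
    [ 0.00,  0.02,  0.10],   # actual high
])

def revenue_impact(y_true, y_pred) -> float:
    """Average revenue impact implied by a model's band predictions."""
    cm = confusion_matrix(y_true, y_pred, labels=labels)  # rows = actual
    return float((cm * impact).sum() / cm.sum())

# Usage: compare revenue_impact(y_test, rules_preds)
#             vs revenue_impact(y_test, ml_preds)
```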

The matrix below shows the revenue growth impact of each type of prediction, expressed as the difference in performance between traditional and ML methods.

ML model was able to generate 5.5% higher revenue by more consistent accuracy overall

Though the traditional method managed to correctly label a higher proportion of high classes as high (a metric known as ‘recall’), it did so by also inaccurately labeling many more ‘medium’ and ‘low’ demand classes as high, hurting its ‘precision’.

As a result, although its accurate high predictions generated 1.4% higher revenue, that came at a cost of losing 2.8% of revenue to demand declines from incorrectly charging higher prices to more price-elastic medium and low classes. A similar pattern emerged in low predictions: the ML model's greater precision meant it only reduced prices for classes where the resulting lift in demand could offset the lower price.

As a result of this higher precision (the more accurate prediction of both high and low classes), the ML model was able to generate 5.5% higher revenue overall, more than double that of the rules-based approach.

Pricing optimization is a powerful lever for revenue growth, and ML provides a solution that can often outperform traditional approaches.

In FitCo’s case, the application of ML to their challenge generated a 2x uplift in revenue growth vs their best performing rules-based alternative, and produced an estimated 11% uplift in revenue.

FitCo's example helps demonstrate both how ML can be applied to optimize pricing and how managed services like SageMaker AutoPilot can put these powerful ML solutions within reach of virtually any organization.