Machine Learning for Sales Forecasting: A Capstone Project with Columbia University


This past semester we collaborated on a machine learning capstone project with Columbia University's Master of Science in Applied Analytics program. Capstone projects are applied, experimental projects in which students take what they have learned throughout their graduate program and apply it to examine a specific area of study.

 

Capstone projects are specifically designed to encourage students to think critically, solve challenging data science problems, and develop analytical skills. Two groups of students built an end-to-end data science solution using Azure Machine Learning to accurately forecast sales. Azure Machine Learning is a cloud-based environment that you can use to train, deploy, automate, manage, and track ML models.

 

Azure Machine Learning can be used for any kind of machine learning, from classical machine learning to deep learning, and for both supervised and unsupervised learning. Whether you prefer to write Python or R code or to use zero-code/low-code options such as the designer, you can build, train, and track highly accurate machine learning and deep-learning models in an Azure Machine Learning Workspace.

 

To explore the solution developed by the students at Columbia University, you can look at their Time-Series-Prediction repository on GitHub. In this article, we use the same approach the students used, Automated Machine Learning (Automated ML or AutoML), to train, select, and operationalize a time-series forecasting model for multiple time-series. If you want to run the code below yourself, make sure you have executed the Azure ML configuration notebook first.

 

Automated ML (as illustrated in Figure 1 below) is the process of automating the time-consuming, iterative tasks of machine learning model development. It allows data scientists, analysts, and developers to build ML models at high scale, efficiency, and productivity, all while sustaining model quality:

[Figure 1 - Automated ML process on Azure - Source: www.aka.ms/AutomatedMLDocs]

 

The examples below use the Dominick's Finer Foods data set from the James M. Kilts Center, University of Chicago Booth School of Business, to forecast orange juice sales. In the rest of this article, we will go through the following steps:

 


 

1. Set up Azure ML and create a machine learning experiment

 

 

import azureml.core
import pandas as pd
import numpy as np
import logging

from azureml.core.workspace import Workspace
from azureml.core.experiment import Experiment
from azureml.train.automl import AutoMLConfig

 

 

 

As part of the setup you have already created a Workspace. To run AutoML, you also need to create an Experiment. An Experiment corresponds to a prediction problem you are trying to solve, while a Run corresponds to a specific approach to the problem:

 

 

 

ws = Workspace.from_config()

# Choose a name for the run history container in the workspace.
experiment_name = 'automl-ojforecasting'
experiment = Experiment(ws, experiment_name)

output = {}
output['SDK version'] = azureml.core.VERSION
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['SKU'] = ws.sku
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Run History Name'] = experiment_name

pd.set_option('display.max_colwidth', -1)
outputDf = pd.DataFrame(data=output, index=[''])
outputDf.T

 

 

2. Create a compute target

You will need to create a compute target for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource.

If an AmlCompute cluster with that name already exists in your workspace, this code will skip the creation process. As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service:

 

 

from azureml.core.compute import AmlCompute
from azureml.core.compute import ComputeTarget

# Choose a name for your cluster.
amlcompute_cluster_name = "cpu-cluster-oj"

found = False
# Check if this compute target already exists in the workspace.
cts = ws.compute_targets
if amlcompute_cluster_name in cts and cts[amlcompute_cluster_name].type == 'AmlCompute':
    found = True
    print('Found existing compute target.')
    compute_target = cts[amlcompute_cluster_name]

if not found:
    print('Creating a new compute target...')
    provisioning_config = AmlCompute.provisioning_configuration(vm_size="STANDARD_D2_V2",  # for GPU, use "STANDARD_NC6"
                                                                # vm_priority='lowpriority',  # optional
                                                                max_nodes=6)
    # Create the cluster.
    compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, provisioning_config)

print('Checking cluster status...')
# Can poll for a minimum number of nodes and for a specific timeout.
# If no min_node_count is provided, it will use the scale settings for the cluster.
compute_target.wait_for_completion(show_output=True, min_node_count=None, timeout_in_minutes=20)

# For a more detailed view of current AmlCompute status, use get_status().

 

 

3. Loading and handling historical data

You are now ready to load the historical orange juice sales data. We will load the CSV file into a plain pandas DataFrame; the time column in the CSV is called WeekStarting, so it will be specially parsed into the datetime type:

 

 

time_column_name = 'WeekStarting'
data = pd.read_csv("dominicks_OJ.csv", parse_dates=[time_column_name])
data.head()

 

 

Each row in the DataFrame holds a quantity of weekly sales for an OJ brand at a single store. The data also includes the sales price, a flag indicating if the OJ brand was advertised in the store that week, and some customer demographic information based on the store location. For historical reasons, the data also includes the logarithm of the sales quantity; the Dominick's grocery data is commonly used to illustrate econometric modeling techniques, where logarithms of quantities are generally preferred.

The task is now to build a time-series model for the Quantity column. It is important to note that this dataset is comprised of many individual time-series - one for each unique combination of Store and Brand. To distinguish the individual time-series, we thus define the grain - the columns whose values determine the boundaries between time-series:

 

 

grain_column_names = ['Store', 'Brand']
nseries = data.groupby(grain_column_names).ngroups
print('Data contains {0} individual time-series.'.format(nseries))

 

 

For demonstration purposes, we extract sales time-series for just a few of the stores:

 

 

use_stores = [2, 5, 8]
data_subset = data[data.Store.isin(use_stores)]
nseries = data_subset.groupby(grain_column_names).ngroups
print('Data subset contains {0} individual time-series.'.format(nseries))

 

 

4. Splitting data and uploading it to the data store

We now split the data into a training and a testing set for later forecast evaluation. The test set will contain the final 20 weeks of observed sales for each time-series. The splits should be stratified by series, so we use a group-by statement on the grain columns:

 

 

n_test_periods = 20

def split_last_n_by_grain(df, n):
    """Group df by grain and split on last n rows for each group."""
    df_grouped = (df.sort_values(time_column_name)  # Sort by ascending time
                    .groupby(grain_column_names, group_keys=False))
    df_head = df_grouped.apply(lambda dfg: dfg.iloc[:-n])
    df_tail = df_grouped.apply(lambda dfg: dfg.iloc[-n:])
    return df_head, df_tail

train, test = split_last_n_by_grain(data_subset, n_test_periods)

 

 

The Machine Learning service workspace is paired with a storage account, which contains the default data store. We will use it to upload the train and test data and create tabular datasets for training and testing:

 

 

train.to_csv(r'./dominicks_OJ_train.csv', index=None, header=True)
test.to_csv(r'./dominicks_OJ_test.csv', index=None, header=True)

datastore = ws.get_default_datastore()
datastore.upload_files(files=['./dominicks_OJ_train.csv', './dominicks_OJ_test.csv'],
                       target_path='dataset/',
                       overwrite=True,
                       show_progress=True)

# Let's now create the dataset that we will use for our training part:
from azureml.core.dataset import Dataset
train_dataset = Dataset.Tabular.from_delimited_files(path=datastore.path('dataset/dominicks_OJ_train.csv'))
train_dataset.to_pandas_dataframe().tail()

 

 

5. Modeling and training

For forecasting tasks, AutoML uses pre-processing and estimation steps that are specific to time-series. Automated ML will undertake the following pre-processing steps (a rough sketch of the first two follows the list):

  • Detect time-series sample frequency (e.g. hourly, daily, weekly) and create new records for absent time points to make the series regular. A regular time series has a well-defined frequency and has a value at every sample point in a contiguous time span
  • Impute missing values in the target (via forward-fill) and feature columns (using median column values)
  • Create grain-based features to enable fixed effects across different series
  • Create time-based features to assist in learning seasonal patterns
  • Encode categorical variables to numeric quantities
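To make the first two steps concrete, here is a rough pandas sketch of frequency regularization and imputation for a single series. This is only an illustration of the idea, not AutoML's internal implementation; the brand label 'tropicana' and the Thursday-anchored weekly frequency are assumptions about the data:

# Illustrative sketch only - AutoML performs these steps internally.
one_series = (data_subset[(data_subset.Store == 2) & (data_subset.Brand == 'tropicana')]
              .set_index(time_column_name)
              .sort_index())

# Regularize to a weekly frequency, creating NaN rows for absent weeks.
# 'W-THU' assumes Thursday-anchored weeks; adjust to your data.
regular = one_series.resample('W-THU').first()

# Forward-fill the target and median-impute a feature column.
regular['Quantity'] = regular['Quantity'].ffill()
regular['Price'] = regular['Price'].fillna(regular['Price'].median())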

In this notebook, AutoML will train a single, regression-type model across all time-series in a given training set. This allows the model to generalize across related series. If you're looking to train multiple models for different time-series, check out the forecasting grouping notebook.

You are almost ready to start an AutoML training job. First, we need to separate the target column from the rest of the DataFrame:

 

 

target_column_name = 'Quantity'

 

 

The AutoMLConfig object defines the settings and data for an AutoML training job. Here, we set necessary inputs like the task type, the number of AutoML iterations to try, the training data, and cross-validation parameters.

For forecasting tasks, there are some additional parameters that can be set: the name of the column holding the date/time, the grain column names, and the maximum forecast horizon. A time column is required for forecasting, while the grain is optional. If a grain is not given, AutoML assumes that the whole data set is a single time-series. We also pass a list of columns to drop prior to modeling.

The logQuantity column is completely correlated with the target quantity, so it must be removed to prevent a target leak.
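If you want to verify the leak yourself, a quick sanity check (assuming logQuantity is the natural log of Quantity, as in the Dominick's data) is to compute the correlation:

# Hedged sanity check: a correlation of ~1.0 between log(Quantity) and
# logQuantity confirms the column leaks the target.
print(np.corrcoef(np.log(data_subset['Quantity']), data_subset['logQuantity'])[0, 1])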

The forecast horizon is given in units of the time-series frequency; for instance, the OJ series frequency is weekly, so a horizon of 20 means that a trained model will estimate sales up to 20 weeks beyond the latest date in the training data for each series. In this example, we set the maximum horizon to the number of samples per series in the test set (n_test_periods). Generally, the value of this parameter will be dictated by business needs. For example, a demand planning organization that needs to estimate the next month of sales would set the horizon accordingly. 
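For intuition, the dates covered by the 20-week horizon can be listed directly; a small illustrative snippet, assuming the regular 7-day spacing of the OJ series:

# Illustrative only: the week-starting dates the model will be asked to
# forecast, assuming a regular 7-day frequency across all series.
last_train_date = train[time_column_name].max()
horizon_dates = pd.date_range(start=last_train_date, periods=n_test_periods + 1, freq='7D')[1:]
print(horizon_dates)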

Finally, a note about the cross-validation (CV) procedure for time-series data. AutoML uses out-of-sample error estimates to select a best pipeline/model, so it is important that the CV fold splitting is done correctly. Time-series can violate the basic statistical assumptions of the canonical K-Fold CV strategy, so AutoML implements a specific procedure to create CV folds for time-series data. To use this procedure, you just need to specify the desired number of CV folds in the AutoMLConfig object. It is also possible to bypass CV and use your own validation set by setting the validation_data parameter of AutoMLConfig:

 

 

time_series_settings = {
    'time_column_name': time_column_name,
    'grain_column_names': grain_column_names,
    'drop_column_names': ['logQuantity'],  # 'logQuantity' is a leaky feature, so we remove it.
    'max_horizon': n_test_periods
}

automl_config = AutoMLConfig(task='forecasting',
                             debug_log='automl_oj_sales_errors.log',
                             primary_metric='normalized_mean_absolute_error',
                             experiment_timeout_minutes=15,
                             training_data=train_dataset,
                             label_column_name=target_column_name,
                             compute_target=compute_target,
                             enable_early_stopping=True,
                             n_cross_validations=3,
                             verbosity=logging.INFO,
                             **time_series_settings)
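Before submitting the run, it may help to build intuition for the rolling-origin style of time-series CV described above, where each validation fold always lies after its training window in time. A minimal sketch of the idea (AutoML's internal fold logic may differ):

# Sketch of rolling-origin CV splits for a series of length n_samples:
# each fold trains on an expanding window and validates on the next block.
def rolling_origin_folds(n_samples, n_folds, horizon):
    for k in range(n_folds):
        train_end = n_samples - (n_folds - k) * horizon
        yield range(0, train_end), range(train_end, train_end + horizon)

for train_idx, valid_idx in rolling_origin_folds(n_samples=100, n_folds=3, horizon=20):
    print('train: 0..{}, validate: {}..{}'.format(train_idx[-1], valid_idx[0], valid_idx[-1]))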

 

 

You can now submit a new training run. Depending on the data and number of iterations this operation may take several minutes. Information from each iteration will be printed to the console:

 

 

remote_run = experiment.submit(automl_config, show_output=False)
remote_run
remote_run.wait_for_completion()

 

 

6. Retrieve the best model and forecast

Each run within an Experiment stores serialized (i.e. pickled) pipelines from the AutoML iterations. We can now retrieve the pipeline with the best performance on the validation data set:

 

 

best_run, fitted_model = remote_run.get_output()
print(fitted_model.steps)
model_name = best_run.properties['model_name']

 

 

Now that we have retrieved the best pipeline/model, it can be used to make predictions on test data. First, we remove the target values from the test set:

 

 

X_test = test
y_test = X_test.pop(target_column_name).values
X_test.head()

 

 

To produce predictions on the test set, we need to know the feature values at all dates in the test set. This requirement is somewhat reasonable for the OJ sales data, since the features mainly consist of price, which is usually set in advance, and customer demographics, which are approximately constant for each store over the 20-week forecast horizon in the testing data:

 

 

# The featurized data, aligned to y, will also be returned.
# This contains the assumptions that were made in the forecast
# and helps align the forecast to the original data.
y_predictions, X_trans = fitted_model.forecast(X_test)

 

 

If you are used to scikit-learn pipelines, perhaps you expected predict(X_test). However, forecasting requires a more general interface that also supplies the past target y values. Use forecast(X, y), as predict(X) is reserved for internal purposes on forecasting models.
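Concretely, a minimal sketch of that interface looks like the following, where NaN entries in y mark the periods to be predicted (the same y_query convention the web service uses in step 9):

# Sketch: pass the feature frame plus a y vector in which NaN marks
# the periods to forecast.
y_query = np.full(len(X_test), np.nan)
y_predictions, X_trans = fitted_model.forecast(X_test, y_query)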

 

7. Evaluate the model

To evaluate the accuracy of the forecast, we'll compare against the actual sales quantities using a selection of metrics, including the mean absolute percentage error (MAPE).
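For reference, MAPE is just the mean absolute error expressed relative to the actuals; a minimal NumPy version (assuming aligned arrays with no zero actuals):

def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent; assumes no zero actuals."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return np.mean(np.abs((y_true - y_pred) / y_true)) * 100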

It is a good practice to always align the output explicitly to the input, as the count and order of the rows may have changed during transformations that span multiple rows:

 

 

from forecasting_helper import align_outputs

df_all = align_outputs(y_predictions, X_trans, X_test, y_test, target_column_name)

from azureml.automl.core._vendor.automl.client.core.common import metrics
from matplotlib import pyplot as plt
from automl.client.core.common import constants

# Use the AutoML metrics module.
scores = metrics.compute_metrics_regression(
    df_all['predicted'],
    df_all[target_column_name],
    list(constants.Metric.SCALAR_REGRESSION_SET),
    None, None, None)

print("[Test data scores]\n")
for key, value in scores.items():
    print('{}: {:.3f}'.format(key, value))

# Plot outputs.
%matplotlib inline
test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b')
test_test = plt.scatter(df_all[target_column_name], df_all[target_column_name], color='g')
plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8)
plt.show()

 

 

8. Operationalization: deploy the model as a Web Service on Azure Container Instance

Operationalization means getting the model into the cloud so that others can run it after you close the notebook. We will create a Docker container running on Azure Container Instances with the model:

 

 

description = 'AutoML OJ forecaster'
tags = None
model = remote_run.register_model(model_name=model_name, description=description, tags=tags)
print(remote_run.model_id)

 

 

For the deployment, we need a scoring script that will run the forecast on serialized data; it can be obtained from the best_run:

 

 

script_file_name = 'score_fcast.py'
best_run.download_file('outputs/scoring_file_v_1_0_0.py', script_file_name)

from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice
from azureml.core.webservice import Webservice
from azureml.core.model import Model

inference_config = InferenceConfig(environment=best_run.get_environment(),
                                   entry_script=script_file_name)

aciconfig = AciWebservice.deploy_configuration(cpu_cores=1,
                                               memory_gb=2,
                                               tags={'type': "automl-forecasting"},
                                               description="Automl forecasting sample service")

aci_service_name = 'automl-oj-forecast-01'
print(aci_service_name)

aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)
aci_service.wait_for_deployment(True)
print(aci_service.state)
aci_service.get_logs()

 

 

9. Call the service and consume the model

Finally, in order to call the service and consume your machine learning model, you can run the following script:

 

 

import json

# The request data frame needs to have a y_query column, which corresponds to the query.
X_query = X_test.copy()
X_query['y_query'] = np.NaN

# We have to convert datetime to string, because Timestamps cannot be serialized to JSON.
X_query[time_column_name] = X_query[time_column_name].astype(str)

# The service object accepts a dictionary, which is internally converted to a JSON string.
# The 'data' section contains the data frame in the form of a dictionary.
test_sample = json.dumps({'data': X_query.to_dict(orient='records')})
response = aci_service.run(input_data=test_sample)

# Translate the JSON response back into a data frame.
try:
    res_dict = json.loads(response)
    y_fcst_all = pd.DataFrame(res_dict['index'])
    y_fcst_all[time_column_name] = pd.to_datetime(y_fcst_all[time_column_name], unit='ms')
    y_fcst_all['forecast'] = res_dict['forecast']
except Exception:
    print(response)  # If parsing fails, inspect the raw response.

y_fcst_all.head()

 

 

10. Final resources to learn more

To learn more, see the Automated ML documentation (www.aka.ms/AutomatedMLDocs) and the students' Time-Series-Prediction repository on GitHub.
