Learn how to implement a fully automated forecast analysis system with model comparison within SAP Advanced Planning & Optimization demand planning.
Key Concept
A process chain is a series of processes that wait in the background for an event. Some of these processes can trigger a separate event that starts other processes, thus forming a chain. Using a process chain in SAP Advanced Planning & Optimization (SAP APO) enables you to put business logic into these background processes. SAP delivers a set of standard process chain functions for special tasks in SAP APO demand planning, such as executing macros in the background, and the use of ABAP programs allows any kind of program logic. You can schedule process chains like normal background jobs for batch processing.
Many people have asked me, “Why don’t you use the SAP Advanced Planning & Optimization (SAP APO) functionality for automatic model selection?” This question is easy to answer. SAP APO offers the forecast strategy 50 (automated model selection), but a planner cannot influence the system by choosing a certain strategy since the system decides by itself which strategy to use. In my experience this strategy doesn’t find the best fitting model as it checks only a small set of possibilities and parameters. Wouldn’t it be better if you could select a set of best fitting models in an early project phase and have the system check which model fits the best?
SAP APO macros, forecast errors, and process chains can help you solve this problem. First, predefined SAP APO forecast models help you calculate statistical forecasts for all relevant planning objects. Then, the macros allow you to calculate the mean absolute percentage error (MAPE) for all these possible forecasts. Later you compare the different forecasts by checking the MAPE for each model. Finally, you can release the whole calculation as a background calculation by using SAP APO process chains. These process chains can then calculate the forecast models, compare the MAPEs, and find the best fitting model.
While there are several ways to approach this situation, this article describes the solution I prefer, which offers several advantages. First, for the user, the whole calculation and the model comparison are done in the background, so you don’t have to worry about finding the best forecast model. Then there are the technical benefits. In SAP APO each key figure is stored in SAP liveCache. For each combination of planning object, key figure, and time bucket, one object must be reserved in the liveCache. These objects are known as Time Series objects.
My method helps to reduce the number of Time Series objects in the SAP liveCache. Here’s how. Demand planning (DP) often takes place on different aggregation levels. These could be, for example, a material group (a bundle of materials with similar characteristics, such as products that are produced on the same capacity line) and the planning material itself (e.g., the different products of a company). One main technical recommendation for SAP APO systems is to keep the number of planning objects, and therefore the number of Time Series objects, as low as possible. Thus, my recommendation is to do the planning on an aggregated level wherever possible. The process chain functionality I describe here requires at least SAP SCM 4.0.
An Example of the Process
The reduction of Time Series objects is realized by using two specialized planning areas with different planning object structures (POS). POS 1 includes all objects as main characteristics (in this example: material, plant, and planning group). POS 2 includes only the main characteristics relevant for planning on an aggregated level (plant and planning group). In my example, I refer to each bundle of planning area and planning object structure simply as a planning area (PA). PA 1 is used for planning on the detailed level in the standard planning area. PA 2 is used for forecasting on an aggregated level in the forecast planning area.
Let me give you an example of how two specialized planning areas help to reduce the number of planning objects in your scenario. The following calculation shows the main advantage of planning on an aggregated level: the reduction of Time Series objects. Scenario 1 shows what happens if you use only one planning area (not recommended). Scenario 2 shows what happens when you use the proposed method of planning on an aggregated level.
Scenario 1
All key figures for forecasting are calculated in only one planning area:
Number of characteristic value combinations (CVCs): | 20,000 |
Number of key figures: | 20 |
Number of periods: 36 months in the past and 18 months in the future → | 54 |
→ Total number of Time Series objects: 20,000 x 20 x 54 = | 21,600,000 |
Scenario 2
Key figures for forecasting are calculated in a separate forecast planning area:
Number of CVCs planning area 1 (same as Scenario 1): | 20,000 |
Number of standard key figures: | 10 |
Periods: | 54 |
→ Number of Time Series objects: 20,000 x 10 x 54 = | 10,800,000 |
| |
Number of CVCs planning area 2 (due to planning on an aggregated level): | 5,000 |
Number of forecasting key figures: | 10 |
Periods: | 54 |
→ Number of Time Series objects: 5,000 x 10 x 54 = | 2,700,000 |
| |
→ Total result in liveCache: 10,800,000 + 2,700,000 = | 13,500,000 |
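The arithmetic behind the two scenarios is simple multiplication. As a quick sanity check, here is a minimal sketch in Python, using the figures from the scenarios above:

```python
# Sketch: verify the liveCache Time Series counts from the two scenarios.
# One Time Series object is needed per CVC x key figure x time bucket.

def time_series_objects(cvcs: int, key_figures: int, periods: int) -> int:
    """Number of liveCache Time Series objects for one planning area."""
    return cvcs * key_figures * periods

# Scenario 1: a single planning area holds all 20 key figures.
scenario_1 = time_series_objects(cvcs=20_000, key_figures=20, periods=54)

# Scenario 2: standard planning area (detailed level) plus a separate
# forecast planning area on an aggregated level with far fewer CVCs.
scenario_2 = (time_series_objects(cvcs=20_000, key_figures=10, periods=54)
              + time_series_objects(cvcs=5_000, key_figures=10, periods=54))

print(f"Scenario 1: {scenario_1:,}")              # 21,600,000
print(f"Scenario 2: {scenario_2:,}")              # 13,500,000
print(f"Reduction:  {scenario_1 - scenario_2:,}") # 8,100,000 fewer objects
```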
Example data: | |
Material 1: | 1100001 |
Plant: | P01 |
Planning Group: | 1100001P00
| |
Material 2: | 1100002 |
Plant: | P01 |
Planning Group: | 1100001P00 |
In Scenario 2, these two CVCs are compressed into one combination when planning takes place on an aggregated level (in the example: P01 - 1100001P00). The individual materials are no longer necessary if you plan on an aggregated level.
In Scenario 1, 21,600,000 liveCache objects are necessary for 20,000 CVCs, 20 key figures, and 54 periods. In Scenario 2, 20,000 CVCs are created in planning area 1 and only 5,000 in planning area 2. In each planning area there are only 10 key figures with 54 periods each. The main reduction of Time Series objects results from the far smaller number of CVCs in planning area 2, achieved by planning on an aggregated level.
After examining the two scenarios, you can see how significantly the number of planning objects drops: the Time Series objects in the SAP liveCache are reduced when you use two specialized planning areas. To realize this scenario, simply build a second planning object structure with only two instead of three main characteristics (aggregated planning, as shown above). Once you’ve finished the basic work of setting up the two planning areas and planning object structures, the next step is to define planning books and data views for these two planning areas.
Two SAP APO process chains are also necessary. The first chain calculates the best fitting forecast model in the forecast planning area while a second process chain copies the best fitting forecast key figure from the forecast planning area to the standard planning area. (Details for these two process chains are described later in the “System Setup and Forecast Preparation” section).
You have to set up the model selection manually, but only once during the initial buildup of the scenario. My proposal is to select representative planning objects and analyze a set of forecast models with different parameters. If you're not sure which to use, start with strategy 11 (first-order exponential smoothing) or strategy 20 (forecast with trend models). Croston’s method, a forecast method developed specifically for sporadic demand, can also be very interesting if the demand is strongly intermittent. In short, Croston’s method uses first-order exponential smoothing to calculate the mean time interval between the values in the Time Series that are not equal to zero. This method is available as SAP APO standard forecast strategy 80.
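To make the idea behind Croston’s method concrete, here is a minimal sketch in Python. This is an illustration of the technique, not the SAP APO implementation; the smoothing factor and initialization are simplifying assumptions:

```python
def croston(history, alpha=0.1):
    """Minimal sketch of Croston's method for sporadic demand.

    Smooths the non-zero demand sizes and the intervals between them
    separately with first-order exponential smoothing, then forecasts
    the average demand per period as size / interval.
    """
    size = None      # smoothed non-zero demand quantity
    interval = None  # smoothed interval between non-zero demands
    periods_since_demand = 1
    for qty in history:
        if qty > 0:
            if size is None:  # initialize with the first observation
                size, interval = qty, periods_since_demand
            else:
                size += alpha * (qty - size)
                interval += alpha * (periods_since_demand - interval)
            periods_since_demand = 1
        else:
            periods_since_demand += 1
    if size is None:
        return 0.0           # no demand observed at all
    return size / interval   # expected demand per period

# Sporadic history: a demand of 6 occurs roughly every third month.
print(croston([0, 0, 6, 0, 0, 6, 0, 0, 6], alpha=0.1))  # → 2.0
```

The division by the smoothed interval is what distinguishes Croston’s method from plain exponential smoothing: it avoids the forecast collapsing toward zero in the many periods without demand.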
System Setup and Forecast Preparation
I will now describe the detailed system setup in two sections. This first section explains the basic system work and forecast settings; in it, the necessary planning areas, forecast models, and macros are built up. I will not describe in detail how to configure planning areas and forecast models, but in the “System Setup – Details About Important Macros” section I give detailed descriptions of the macros that are important for the forecast calculation.
Step 1. Create two planning areas. The first thing to do is to create two planning areas with the relevant key figures (transaction code /SAPAPO/MSDPADMIN). Use the function copy planning area data (transaction code /SAPAPO/TSCOPY) later to copy data from the forecast planning area to the standard planning area according to the selection criterion.
Step 2. Create a table for forecast model control in the SAP Data Dictionary. The necessary values are:
- Planning group (for aggregated planning)
- Plant
- Assigned forecast model
This table is used later in a user function macro to compare the results and create an alert if a better fitting model has been found.
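Conceptually, the steering table is a mapping from plant and planning group to the assigned forecast model. The following sketch illustrates that idea only; the structure names are hypothetical stand-ins, not the actual Data Dictionary definition (the model codes ESF001 and CM01 follow the naming used later in this article):

```python
# Hypothetical in-memory stand-in for the Data Dictionary steering table.
# Key: (plant, planning_group); value: assigned forecast model.
steering_table = {
    ("P01", "1100001P00"): "ESF001",  # exp. smoothing 1st order, alpha = 0.1
    ("P01", "1100002P00"): "CM01",    # Croston's method
}

def assigned_model(plant, planning_group):
    """Look up which forecast model result should be used for a CVC."""
    return steering_table.get((plant, planning_group))

print(assigned_model("P01", "1100001P00"))  # → ESF001
```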
Step 3. Analyze the best fitting forecast models. I suggest using a set of about 30 different forecast models and parameters. In most scenarios you should get, as a result, a set of about 10 best fitting models that you can use later for automating your forecast. After you have identified the best fitting set of standard models for your planning data, you customize the SAP APO forecast profiles with transaction code /SAPAPO/MC96B.
For the later comparison of the different forecast results, you must identify the most meaningful forecast error. I suggest using the MAPE, because positive and negative deviations between the actual sales quantity and the predicted quantity cannot cancel each other out (the formula uses absolute values). The second advantage of this measure is that single outliers in the time history have a lower influence than with other forecast errors such as the Mean Square Error (MSE) or the Root Mean Square Error (RMSE). This error is later used to find the best fitting forecast model for each CVC in the batch run.
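A minimal sketch of the MAPE calculation makes the non-cancellation property visible. The SAP macro function MAPE( ) encapsulates this computation; the exact SAP formula may differ in details such as the handling of zero actuals, which this sketch simply skips:

```python
def mape(actuals, forecasts):
    """Mean absolute percentage error, in percent.

    The absolute values mean that positive and negative deviations
    cannot cancel each other out; periods with zero actuals are skipped.
    """
    terms = [abs(a - f) / a for a, f in zip(actuals, forecasts) if a != 0]
    return 100.0 * sum(terms) / len(terms)

# Over- and under-forecasting by the same amount does not cancel:
actuals   = [100, 100]
forecasts = [110,  90]   # the signed mean error would be 0
print(mape(actuals, forecasts))  # → 10.0 (percent)
```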
In this step, for each of the most relevant products, you calculate an interactive ex-post forecast (so called because past values are used to calculate a forecast that can be compared with actual history) with several forecast models in order to find out which models deliver the best results. An interactive ex-post forecast is executed manually for a single product in the (forecast) planning book (transaction /SAPAPO/SDP94). The results are stored in an external Microsoft Excel sheet for comparison and identification of the best fitting forecast models.
The identified forecast models have to be included as single steps in the process chain and deliver forecast values for each model.
Step 4. Set up two process chains. The chains perform the forecasting and data copy between the two planning areas. Let me describe the chains in further detail.
Process Chain 1
Process Chain 1 performs the forecasting and the model comparison according to the MAPE values in the specialized forecast planning area. Schedule this chain to run once a month; in my experience this is sufficient, because history values are only complete once a month has finished. It is best to schedule the chain at the beginning of a new month.
Use these steps for setup:
Step 1. Create Time Series objects for the planning area (program /SAPAPO/TS_PAREA_INITIALIZE). I normally do the initialization of the planning area only according to the planning horizon: for example, 36 months in the past and 18 months in the future. This guarantees that only the time horizon that is really necessary is initialized (remember the reduction of Time Series objects).
Step 2. Delete all key figures for the forecast analysis by using a macro. This prevents old values from being stored in the planning area. I discuss this further in the System Setup – Details About Important Macros section.
Step 3. Copy the key figure for forecasting from the standard planning area to the forecasting planning area. I’ve used the sales quantity for creating the forecast. That means that the calculation makes sense only for a finished month. Otherwise the result would change daily according to the changes in sales quantity. Copy the key figure by using transaction /SAPAPO/TSCOPY for all materials.
Step 4. Calculate all created forecast models. For this purpose, create single activities for mass processing with transaction /SAPAPO/MC8T (Figure 1). These activities can be used in SAP APO to run macros or calculate forecast models. After you create an activity, assign it to a background job using transaction code /SAPAPO/MC8D. This SAP APO DP background job is later placed into an SAP APO process chain.

Figure 1
Create an activity for mass processing
After the creation of an activity, the forecast models are assigned to it in transaction /SAPAPO/MC8T (Figure 2). All you have to do is click the Create button in Figure 1. In this example the Master Profile SO03 is assigned the Univariate Profile SO03.

Figure 2
Assign the forecast model to activity for mass processing
Step 5. Calculate the statistical forecast error. This is done by performing a macro that is integrated in Process Chain 1 and therefore starts automatically when the chain run is triggered. I discuss this further in the “System Setup – Details About Important Macros” section.
Step 6. Write the best fitting forecast model. Based on the calculation result, write the model you think fits best into the steering table. This steering table for copy control is used in the second process chain to control which forecast model result is copied into the main planning area.
Process Chain 2
Process Chain 2 copies the assigned forecast model for each CVC daily according to the entry in the steering table from the forecast planning area into the standard planning area. For this purpose the SAP APO standard transaction /SAPAPO/TSCOPY is used. The Selection Condition selects the relevant data to copy (Figure 3).

Figure 3
Select the data
The copy is then made according to the attribute Assigned FC Model, in this example CM01 in Figure 4. This means that the system checks, for each CVC, which model was identified as the best and then copies that forecast result into the standard planning area.

Figure 4
Select the best forecast
By using the transaction /SAPAPO/TSCOPY, the process chain performs the forecast calculation for the CVC in the forecast planning area, and only the relevant forecast result (exactly the best fitting model) is copied back into the standard planning area.
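The copy-control idea behind Process Chain 2 can be sketched as follows. This is an illustration only, not the /SAPAPO/TSCOPY implementation; the CVC keys and key figure values are invented example data, while the model codes ESF001 and CM01 follow the naming used in this article:

```python
# For each CVC, only the result key figure of the forecast model
# assigned in the steering table is copied to the standard planning area.

forecast_area = {
    # CVC -> {forecast model -> forecast result values}
    "P01/1100001P00": {"ESF001": [120, 125], "CM01": [100, 110]},
    "P01/1100002P00": {"ESF001": [80, 82],   "CM01": [90, 95]},
}
steering = {  # steering table: CVC -> assigned (best fitting) model
    "P01/1100001P00": "ESF001",
    "P01/1100002P00": "CM01",
}

# The "copy": keep exactly one forecast result per CVC.
standard_area = {cvc: forecast_area[cvc][model]
                 for cvc, model in steering.items()}
print(standard_area["P01/1100001P00"])  # → [120, 125]
```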
System Setup – Details About Important Macros
In this section I’ll explain how to set up the relevant macros for forecast calculation and error analysis.
Step 1. Delete all old key figures in the planning book. The macro in Figure 5 simply deletes all old key figures in the forecast planning book, except the row that contains the history key figure, over 42 iterations from December 2006 (M 12.2006) until May 2010 (M 05.2010). Don’t delete the history key figure; otherwise, you can’t perform any forecast. The intention behind this macro is to eliminate any remaining data relics of previous calculations that could cause calculation errors.

Figure 5
Deletion macro for all calculated key figures
Step 2. Analyze the statistical MAPE. The macro in Figure 6 calculates the MAPE using a standard macro function for all periods and for each of the used forecast models. Figure 6 shows the part of the macro that calculates the MAPE for exponential smoothing first order with alpha = 0.1 using the standard macro function MAPE( ).

Figure 6
Calculation of MAPE with macro function MAPE( )
The calculated values for each period are written in row Temp.MAPE (Figure 7).

Figure 7
Store the MAPE for each forecast model in key figure Temp.MAPE
In the next step the macro calculates the average MAPE value using all values of row Temp.MAPE. This result is then written into a specific cell of the Prognosis result row. After these two steps, the system has calculated the MAPE and the average MAPE for all eight implemented forecast models, so in my example you find the average MAPE for each used model in the Prognosis result row. The first cell contains the average MAPE for first-order exponential smoothing (alpha = 0.1), and so on through the eighth cell, which contains the average MAPE of Croston’s method with alpha = 0.9 (this is an example set of forecast models that can be used). In the last step the macro identifies the minimal value among these average MAPEs in the Prognosis result row and writes it to row Min. MAPE. This is done by using the standard macro function MIN( ).
Step 3. Fill the steering table for model comparison. In this step the model that delivers the lowest forecast error is identified and written to the material steering table. For this purpose the macro works in two steps, illustrated in Figure 8. First, the value in the first cell of the Prognosis result row, which in my example represents the result of first-order exponential smoothing (alpha = 0.1), is checked against the value contained in row Min. MAPE. If these two values are equal, first-order exponential smoothing is identified as the best fitting forecast model. Then, in a second step, with the help of the custom function module Z_X_Z4PROAL( ), the string ESF001 (first-order exponential smoothing, alpha = 0.1) is written into the steering data table as the best fitting model for the corresponding CVC.
If the two compared values are not equal, the macro logic jumps to the second value in the Prognosis result row (representing first-order exponential smoothing with alpha = 0.2) and compares it with the lowest MAPE value. This step is repeated in the same way for all forecast results in the Prognosis result row until the system identifies the model that delivers the lowest forecast error.
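The macro logic of Steps 2 and 3 — compute the minimum of the average MAPEs and walk through the Prognosis result cells until the matching model is found — can be sketched as follows. The model codes and MAPE values are illustrative; in SAP APO the final write happens via the custom function module Z_X_Z4PROAL( ):

```python
def best_fitting_model(avg_mape_by_model):
    """Return the model whose average MAPE equals the minimum.

    Mirrors the macro: first determine the minimum (MIN( ) in the
    macro), then compare each cell of the "Prognosis result" row
    against it, stopping at the first match.
    """
    min_mape = min(avg_mape_by_model.values())
    for model, avg_mape in avg_mape_by_model.items():
        if avg_mape == min_mape:
            return model  # in SAP APO: written via Z_X_Z4PROAL( )

# Average MAPE per forecast model for one CVC (illustrative values):
results = {
    "ESF001": 18.4,  # exp. smoothing 1st order, alpha = 0.1
    "ESF002": 17.1,  # exp. smoothing 1st order, alpha = 0.2
    "CM09":   21.8,  # Croston's method, alpha = 0.9
}
print(best_fitting_model(results))  # → ESF002
```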
After this step you have found the best fitting model for each characteristic value combination according to the forecast error. This model is assigned to the planning combination in the steering table and is used for copy control in transaction /SAPAPO/TSCOPY.

Figure 8
Fill control table with best fitting forecast model
Interesting SAP Notes for user function macros:
- 631499 – Using the user function Macro
- 418801 – Creating a user exit Macro
Andreas Gründobler
Andreas Gründobler is an IT analyst at OSRAM Opto Semiconductors GmbH Regensburg, Germany. After finishing his studies of information technologies at the University of Applied Sciences in Regensburg, he was project leader in charge of implementing several modules of SAP APO. He has five years of experience with SAP SCM planning purposes, especially demand planning, and is a certified SAP Solution Consultant in Planning & Manufacturing.