Learn how to avoid the mistakes, rooted in flawed testing approaches, that plague many SAP implementations. Take away valuable information that you can use as a baseline either for enhancing the status quo at your respective projects or for avoiding potential mistakes before testing is initiated.
Key Concept
Initial implementations and major system upgrades usually include several different types of testing. Unit testing is the lowest level of testing, occurring at the transaction code level. This type of testing verifies transaction functionality and the master data. Scenario testing verifies smaller processes within an individual module and may be tested with interfaced or converted data. This type of testing can also include role-based testing if end-user roles are defined. User Acceptance Testing (UAT) allows end users to evaluate the system and training materials prior to deploying the application. You can incorporate feedback from UAT as enhancements or fixes for future system releases. Other common types of testing include development, integration, regression, and performance testing.
In an SAP ERP global rollout, an implementation team creates an initial template for one division and uses it as the standard for other divisions. Companies conduct testing activities related to test planning, test execution, test reporting, and defect management on the initial template and repeat them at other divisions or units.
However, it’s important to understand how successful these testing activities are in the initial template and how well coordinated the activities are. Important questions to consider include:
- What is the quality of the test deliverables?
- How are scenarios developed to ensure that all requirements are verified prior to going live?
- What is the quality of the test scripts and how can these be reused?
- How much time was allocated to testing? What heuristics, if any, were used to estimate how much time was needed in the project schedule to conduct testing activities?
- How well did the project team members, consisting of functional resources, developers, and team leads, understand their testing tasks and activities?
- What about indirect project resources such as external auditors — were they able to successfully validate the test results?
- How can the testing timeline be shrunk without affecting the end-user experience or creating more production support tickets for the production team?
These are questions that few SAP ERP projects take the time to document and address. Even when companies allocate at least 50% of their time to testing activities, they often neglect to document the lessons learned and to develop action items that address them. Therefore, I provide lessons learned and testing practices collected from various SAP ERP global rollouts to help you improve testing tasks and activities in your next project.
The lessons learned and best testing practices for SAP ERP global rollouts are for project managers, cutover managers, and experienced SAP functional and technical team members. The eight main topic areas that I cover are:
- Reusable components
- Requirements traceability
- Project management visibility
- User Acceptance Testing (UAT)
- Test management tools
- Role-based testing
- Negative role testing
- Test beds for interfaces
Reusable Components
Reusable components allow you to create and then recycle modular individual test scripts “as is” or with modifications from one test event to the next. These individual test scripts are the building blocks and foundations for subsequent test events. The key to reusable components is minimizing the number of test scripts that you must develop from scratch.
Employing reusable components works optimally in the early stages of testing, such as unit testing, when you write test scripts to represent the transaction codes within the Business Process Master List (BPML) and the individual Reports, Interfaces, Conversions, and Extensions (RICE) objects. Reusable components streamline testing by providing users with a test-script repository, which can then lead to automated test scripts. Furthermore, reusable components allow the test team to construct and recombine functions and processes to create numerous test scenarios for integration, load/stress, and regression testing.
For a project implementing one or more components — such as SAP Customer Relationship Management (SAP CRM), SAP Supplier Relationship Management (SAP SRM), SAP NetWeaver Business Warehouse (SAP NetWeaver BW), and SAP Supply Chain Management (SAP SCM) — you need to create a hierarchical library of unit test scripts. You also need to store this information in a secured central repository test tool, such as SAP Quality Center by HP or SAP Solution Manager, either of which you can use to create reusable test components. At the unit testing level, you want the test scripts representing the reusable components decomposed at the lowest level of testing to verify master data, transaction functionality, error-handling conditions, reconciliations, record count, data transfer rules, and customized online help features.
You need to assign the creation of the reusable component test scripts to individual testers by team to avoid duplication of work.
Figure 1 illustrates a hierarchical library of test scripts in the test management tool that uses individual test scripts from sales and distribution (SD) and SAP ERP Financials for the order-to-cash end-to-end scenario. Creating a test script entails establishing test steps with test data, preconditions, and expected results for each transaction code and its corresponding variation.
Figure 1
Hierarchy of reusable test scripts available to other project members and stored within a test management tool
For instance, transaction CJ20 from Project System (PS) can have many variations that should be identified within the Business Process Procedures (BPPs). You can assign each variation so that no two individuals are working on the same test script for the same transaction code or variation.
Initially, you can store the developed test scripts (once peer-reviewed and approved) in a central repository folder by team, such as the Logistics Team Sandbox folder. This gives all team members visibility and access to these test scripts for reuse within their specific test scenarios. After the team has developed all the test scripts for the assigned transaction codes and RICE objects and stored them within the test management tool repository, others can reuse and modify the test scripts to create multiple combinations of end-to-end scenarios.
Note
In SAP systems, you can execute, navigate, and populate the same transaction in different formats based on business rules that may constitute a process variation. For instance, transaction code VA01 for creating a sales order may have a different process, business rule validations, and fields that need to be populated when creating a consignment order versus creating an international order.
Figure 2 shows the usefulness of reusable components. The example of the end-to-end scenario for order-to-cash consists of multiple SAP transaction codes and interfaces to outside applications for lockbox, shipments, inventory updates, receiving advanced shipment notifications (ASNs), and conducting customer credit checks with a credit agency. The transaction codes for order-to-cash come mainly from SD and SAP ERP Financials.
Figure 2
Example of an order-to-cash process that includes multiple transaction codes from SD and SAP ERP Financials
The assigned test team members can develop individual test scripts for each of the transaction codes and interfaces in a test management tool. Once the test scripts are written during unit testing, the SAP functional analysts and subject matter experts (SMEs) can combine the constructed test scripts or modify them as needed to create new scenarios for other test events, such as integration testing.
This practice enables the project team to move from unit to integration testing. For instance, the order-to-cash team can expand individual unit test scripts from the test management repository for tasks such as creating sales orders, deliveries, and invoices and rearrange them in a logical sequence to form a larger end-to-end scenario. The larger end-to-end scenarios consist of previously designed test scripts for transaction codes, RICE, security, and workflow testing. With this approach, the order-to-cash team ends up with a modular end-to-end scenario built mostly from the reusable components designed during unit testing, which is easier to maintain and execute than a massive end-to-end scenario with hundreds of test steps that the test team needs to execute.
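The idea of assembling reusable unit scripts into a larger scenario can be sketched in Python. The `TestScript` and `Scenario` classes, the step lists, and the script contents below are hypothetical stand-ins for records in a test management tool, not the API of any actual tool; the transaction codes (VA01, VL01N, VF01) are standard SD order-to-cash transactions.

```python
from dataclasses import dataclass, field

@dataclass
class TestScript:
    """A reusable unit-level test script (hypothetical structure)."""
    tcode: str          # e.g., "VA01"
    description: str
    steps: list = field(default_factory=list)  # ordered test steps

@dataclass
class Scenario:
    """An end-to-end scenario assembled from reusable unit scripts."""
    name: str
    scripts: list = field(default_factory=list)

    def add(self, script: TestScript) -> "Scenario":
        # Append in execution order; return self to allow chaining
        self.scripts.append(script)
        return self

    def total_steps(self) -> int:
        return sum(len(s.steps) for s in self.scripts)

# Reuse unit scripts already in the repository to build order-to-cash
create_order = TestScript("VA01", "Create sales order",
                          ["enter header", "enter items", "save"])
create_delivery = TestScript("VL01N", "Create delivery",
                             ["reference order", "pick", "post goods issue"])
create_invoice = TestScript("VF01", "Create invoice",
                            ["reference delivery", "save"])

order_to_cash = (Scenario("Order-to-Cash")
                 .add(create_order)
                 .add(create_delivery)
                 .add(create_invoice))
print(order_to_cash.total_steps())  # 8
```

Because the scenario is only an ordered list of references to unit scripts, a change to one unit script is picked up by every scenario that reuses it, which is the maintenance benefit described above.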
Note
The order-to-cash scenario can have multiple variations, such as the acceptance of partial incoming payments, contract creation, multiple sales order types, partial deliveries, and customer inspections. Therefore, reusable unit test scripts may need to be modified to account for all of these scenario variations.
Requirements Traceability
Requirements for an SAP ERP application are best captured as end-user tasks in workshops with functional team members, SMEs, and system architects during the blueprint or gap analysis phase. These workshops document the necessary tasks that the end user is expected to perform in the SAP ERP system production environment. Necessary tasks might involve system performance, security, segregation of duties, internal controls, functionality, business processes, business rules, converted data, and industry compliance.
The requirements capturing phase allows the functional project team members and SMEs to define the end-to-end scenarios, business rules, and flow process diagrams that represent how the end users will interact with the SAP ERP system. The requirements gathering effort is also instrumental for scope verification during testing because the project manager and system integrator are typically expected to confirm that all in-scope requirements and business rules have been successfully demonstrated prior to going live. Requirements mapping and validation is a critical quality assurance function because today’s definition of quality is conformance to requirements.
You can present individual requirements in a Requirements Traceability Matrix (RTM) using a tool such as SAP Enterprise Modeling by IDS Scheer, subject to version control and with each individual requirement mapped to a test script. The approach of mapping individual requirements to test scripts is well suited for unit testing, but you need to modify it for scenario and integration testing. Although you may be able to successfully validate an individual requirement in isolation, the requirement may fail when taken in an end-to-end scenario — such as order-to-cash — that has multiple requirements grouped together to form an end-to-end process.
During scenario and integration testing, I recommend that you group all the requirements that logically form an end-to-end scenario. This approach validates all the requirements with all the necessary touch points, data integrations points, business rules, workflow, and end-user roles prior to making the go or no go decision for go-live. Typical examples of end-to-end scenarios for an ERP application include hire-to-retire, order-to-cash, procure-to-pay, acquire-to-require, make-to-order, and make-to-stock, which can be executed with multiple variations. The end-to-end scenario for order-to-cash may have variations based on the order type, such as a domestic, international, or consignment order.
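A minimal sketch of this grouping in Python follows. The requirement and test-script IDs are invented for illustration; in practice the RTM would live in the test management tool, but the coverage check is the same: every in-scope requirement in a scenario must map to at least one test script before go-live.

```python
# Map each requirement ID to the test scripts that verify it (IDs are illustrative)
rtm = {
    "REQ-SD-001": ["TS_VA01_standard", "TS_VA01_consignment"],
    "REQ-SD-002": ["TS_VL01N_partial_delivery"],
    "REQ-FI-001": [],  # not yet covered by any test script
}

# Group the requirements that logically form one end-to-end scenario
order_to_cash_reqs = ["REQ-SD-001", "REQ-SD-002", "REQ-FI-001"]

# Scope verification: flag any in-scope requirement with no mapped script
uncovered = [r for r in order_to_cash_reqs if not rtm.get(r)]
print(uncovered)  # ['REQ-FI-001'] -> a coverage gap to close before go-live
```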
Create and Enforce Standards: Project Management Visibility
Quality assurance (QA) standards allow for the deployment of a system with quality as opposed to testing for quality. QA standards also enforce project discipline, ensure that all the project members have the same approach to testing and creating test deliverables — such as documenting test results and ensuring the creation of consistent testing artifacts — and help meet compliance with external and internal auditors. Project members need to be educated on what the QA standards are and how they will be enforced and monitored throughout the project’s various test events.
You have many opportunities for creating QA standards that will pay big dividends for the overall quality and consistency of the system design. These include:
- Test defects: How are these to be reported and routed with workflow? Are screen captures needed, and if so, what are the criteria for screen captures? Who is authorized to close a defect? What are the audit trails for a defect?
- Test script design: What are the controls for designing test scripts? How will test scripts be peer-reviewed and approved? How granular do the test steps need to be, and should they be recorded at a high level or at the keystroke level? Which fields need to be populated? How do you plan to name the test scripts? How are negative test scripts denoted? Examples of test script controls include defining all fields that need to be populated for a functional test script and defining the level of granularity for designing a test script. For example, you usually write the test script at the keystroke level for anyone to follow, or at a high level for someone with in-depth knowledge of the application. Other controls require that all interfaces be tested for reconciliations, transfer rules, record counts, and performance; that each test script map to at least one testable requirement; and that test scripts be role based.
- Test script template: What is the format for developing test scripts? Is there a data dictionary describing all the fields on the template? You can convert a test script template that is Microsoft Excel- or Word-based to a test script within a test management tool or upload it as an attachment. However, testers need to know which fields they need to test, what the allowed values are, which fields are mandatory or optional, and which output data from one test script may feed as the input data for another test script.
- Project schedule: Have testing activities been identified and recorded on the project’s schedule? Have the activities been recognized and approved by the Project Management Office (PMO) and accepted by the task owners? How are duration times estimated for testing tasks? What are the dependencies? Addressing these issues is a critical project management activity that affects the project schedule and budget. Many project hours are allocated to large testing efforts, such as UAT or integration testing, whether the testing is conducted manually or with automated test tools. You need to gather actual versus expected times to complete the design and execution of test scripts, measure the variance, and use this information as a guideline for estimating the time needed to complete future test events.
- Project metrics: Which test metrics are to be reported for each test event? What are the contents and format of the metrics, and are these to include graphs or charts? How is the data to be entered for gathering metrics? Even if there is a tool for test management available, the tool will not provide valuable metrics unless users actively update the test management tool to reflect the executed test scenarios. For each project, you need to decide which test reports will be shown, how they will be used for trend analysis, and how they will support the exit criteria, identify resource bottlenecks, measure earned value, or re-baseline the project’s schedule.
- Modifications: What is the procedure for changing a test script that has been previously approved in the middle of testing? What happens if a previously approved test script needs to be changed during integration testing? Who is authorized to make changes and under what conditions will the changes be authorized? How are system changes to be transported from one environment to the next when the test environment is supposed to be frozen? Is there a change control board in place, and if so, who are the participants? How is test documentation such as requirements, test scenarios, flow process diagrams, training documentation, and configuration notes updated during testing to reflect a system change that has been approved to go into the frozen test environment?
- Audits: What documentation do the auditors or external agencies require from the test events to consider the testing complete? How is this documentation to be captured and delivered to the auditors?
- Test plans: Do you have an approved test plan or strategy document that shows the test roles, assumptions, risks, scope of testing, schedule, definitions of test events, exit and entrance criteria, data refresh strategy, and testing resources? If not, how will testing take place? Will everyone test according to their own notion of what constitutes testing, in a chaotic way, or will testing follow a disciplined, auditable approach based on a written and approved test plan? If a test plan has not been previously written for an initial global template, then it may be necessary to draft and approve one before commencing further testing activities for any templates that are part of the global rollout.
- Test script maintenance: In a global rollout, you need to maintain and update test scripts that you want to reuse in different divisions to compensate for changes to the system configuration settings, master data, security settings, and regulatory compliance. Test scripts consist of test steps, input data, pre-conditions, log-on roles, and expected results that may be subject to change when the global rollout expands to other sites. The test manager needs to coordinate these changes with the functional team leads, the change management team lead, and the requirements manager.
A test management tool that includes version control, lockdown, workflow, and the approval process provides enhanced transparency and audit trails for updating the test scripts. You’ll need to update the test scripts for documentation and for automation (if they have been automated with a test tool). Changes to the automated test scripts may include changes in programming logic, error handling, conditional statements, verification points, synchronization points, and data values.
UAT Preparation
UAT participants need to receive SAP ERP training and attend a kick-off meeting prior to engaging in UAT test execution. The SMEs, project managers, system architects, and functional team leads work together to determine the scope of UAT testing and how the end users will participate in the UAT sessions.
The details of how to conduct UAT are often a matter of contention among the various project stakeholders. Some system integrators want to give UAT participants prototypes or Microsoft PowerPoint presentations of the application. The clients that hired the system integrators, on the other hand, may demand that the UAT participants take a hands-on approach, actively executing the test scenarios that were previously executed during integration testing.
UAT gives the end users the opportunity to verify that the requirements that were provided during the blueprint phase fulfill their expectations. UAT also gives end users the opportunity to raise critical system problems that should be addressed during the go/no go decision, which makes them the de facto gatekeepers for the application prior to deploying it. End users might raise system exceptions that are out of scope or simply enhancements, but even these should be recorded for a future system release or gap analysis.
The test manager needs to ensure that all defects from integration testing (with the exception of cosmetic defects) are resolved prior to entering UAT, that all in-scope requirements have been successfully demonstrated, and that at least 95% of all scenarios have been executed during integration testing. Integration testing can also serve as a platform for evaluating the training documents. The test manager also needs to develop cheat sheets for the UAT testers. This documentation should provide details on a wide range of tasks, guidelines, and specifications, including:
- Procedures and directions about how to record defects on a tool such as the SAP Solution Manager Test Workbench or SAP Quality Center by HP
- Standards for recording test results
- List of repositories for storing test data during UAT testing
- Announcements of known problems and system limitations
- A test schedule or test calendar
- Printouts of test scripts
- Setup instructions for UAT testing facilities
- A list of functional experts available to UAT participants for assistance as needed during test script execution
- Confirmation that the UAT participants understand the procedures for testing
Test Management Tools
Test management and automated test tools involve a learning curve that is often ignored on many projects, which reduces their usefulness and return on investment. Most test tools require a background in coding to maintain the automated test scripts when the application undergoes system changes. Furthermore, most testers already have so many other tasks — such as conducting gap analysis, adding system enhancements, and validating data — that they have little or no time to learn a new automated test tool.
Test tools — whether used for performance, functional, or test management purposes — need an established owner within the project to avoid gathering dust after they have been acquired. Test tools initially require training, setup, installation, and customization. After that period, someone within the project needs to maintain the test tool, apply vendor patches, answer questions on usage, train new project members, and ultimately use the tools for test automation. These are all full-time tasks that cannot be simply given to a functional user who has no prior experience with test tools.
Depending on the tool used for test automation, it may require knowledge of a programming language such as C or Visual Basic and of techniques associated with automation, such as:
- Data correlation
- Addition of verification points
- Working with external data sheets
- Insertion of loops
- Error handling
- Creation of synchronization points
- Object recognition when test tools do not recognize objects on the screen
- Identification of hidden custom objects, such as rows in an array
- Construction of function files
These are all tasks that one can reasonably expect to perform when automating processes in an SAP ERP environment, particularly for critical end-to-end scenarios such as order-to-cash. This makes the need to have experienced test tool resources quite compelling.
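For readers unfamiliar with these techniques, the following Python sketch shows what synchronization points, verification points, and a data-driven loop over an external data sheet look like in principle. The function names, the stubbed order status, and the data values are illustrative only, not the API of any particular test tool; a real tool would drive the SAP GUI where the stub appears.

```python
import time

def wait_for(condition, timeout=30.0, poll=0.5):
    """Synchronization point: poll until the screen/object is ready."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        if condition():
            return True
        time.sleep(poll)
    return False  # timed out

def verify(actual, expected, label):
    """Verification point: record pass/fail instead of aborting the run."""
    status = "PASS" if actual == expected else "FAIL"
    print(f"{label}: {status} (actual={actual!r}, expected={expected!r})")
    return status == "PASS"

# Data-driven loop over an external data sheet (a list of dicts stands in
# for a spreadsheet of order types and expected outcomes)
data_sheet = [{"order_type": "OR", "expected_status": "Open"},
              {"order_type": "KB", "expected_status": "Open"}]

results = []
for row in data_sheet:
    # A real test tool would create the order via the GUI here; stubbed
    order_status = "Open"
    wait_for(lambda: order_status is not None, timeout=5)
    results.append(verify(order_status, row["expected_status"],
                          row["order_type"]))

print(all(results))  # True only if every verification point passed
```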
In global rollouts, the initial template may first be tested mostly manually, which may prove time consuming. For future releases, test automation of core and repetitive processes may be needed to reduce the overall project schedule and allow functional resources to focus on enhancing and configuring the application. The project manager should hire or contract resources with in-depth knowledge of test tools who can create an automation pilot for stable yet critical processes within a project function, such as distribution, finance, or logistics. Depending on the success of this pilot, this resource may continue to automate processes to reduce the total time spent manually testing the application.
Role-Based Testing
For most SAP ERP implementations, it is critical to maintain segregation of duties and assign users privileges and permissions consistent with their end-user production roles to avoid employee fraud and waste. An example of segregation of duties could be that the associates responsible for approving purchase orders do not have security rights to receive and accept goods.
Test scripts need to reflect role-based testing as soon as the change management team identifies the roles and the security or Basis team implements them. If the roles have not yet been determined, you can add placeholders in the test scripts to represent those roles so the test scripts can still be used in a test event. Testers executing test scripts need to log on with the roles that represent the end user in production. For instance, a warehouse manager may use a test script to test the goods receipt process in the production environment. In this case, the test script should have a log-on step for the warehouse manager.
You should not discontinue role testing once the system is deployed in the production environment. Regression testing for production support, whether it is manual or automated, also needs to be role based. Test tools allow incorporation of test roles, and thus a process may be played back multiple times with different sets of end-user roles. In a global rollout, a company may have different end-user roles for the same process that vary by region, country, or division, so it may be necessary for you to update the test scripts from the initial global template to reflect the new user roles from the continued global rollout to other sites.
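Playing back one process under several role sets can be sketched as a simple data-driven loop. The region names, role names, script name, and the `play_back` stub below are all hypothetical; a test tool would substitute its own playback call.

```python
# Region-specific role sets for the same goods-receipt process (placeholders)
roles_by_region = {
    "EMEA": ["EMEA_WH_MANAGER", "EMEA_WH_CLERK"],
    "APAC": ["APAC_WH_MANAGER"],
}

def play_back(script_name, role):
    """Stub for a test tool replaying a recorded script under a role."""
    return f"{script_name} executed as {role}"

# One playback per role: the same script is reused across every role set
runs = [play_back("TS_MIGO_goods_receipt", role)
        for roles in roles_by_region.values()
        for role in roles]
print(len(runs))  # 3
```

When the global rollout adds a region, only the role table changes; the recorded script itself is untouched.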
Negative Testing
Role testing is not limited to verifying the tasks that a user can perform based on the privileges and permissions assigned to the role. You can also carry out negative testing to verify that the system denies access to unauthorized users. Negative role testing prevents violations of the segregation of duties and unauthorized system access. For example, based on her role, an HR representative may not be able to approve employee pay raises, approve vacations, or execute an interface that shows employees’ Social Security numbers.
Negative testing is an essential area of testing that companies often omit due to time constraints. However, omitting it can cause problems in the production environment when inexperienced end users commit errors or trigger business-rule violations that the system should correctly identify and handle.
You should conduct negative testing during the early test events, such as unit testing. With the help of flow process diagrams, you can detect negative conditions that you can use to design negative test scripts. You should carry out negative testing first on custom fields, security roles, custom development, and configuration settings. Basically, any functionality that you create in-house, such as a user exit, that’s not delivered out of the box from the vendor should be negative tested first.
You need to document scripts for negative testing conditions to ensure that:
- No violations occur for defined business logic
- Custom SAP transaction codes and fields only accept valid input
- Data integrity is maintained
- End users are allowed system access based strictly on their assigned roles
- Messages are displayed when an end user commits an error
- You compensate for end-user errors in the production environment
Negative testing goes beyond trying to purposely insert the wrong data format on a field. Here are some negative testing examples for global rollout scenarios:
- Trying to post costs to a closed project
- Trying to post a journal entry to an account assigned to an incorrect business unit
- Trying to make primary and remittance vendors the same
- Leaving a primary vendor field empty
- Attempting to post a material to an invalid profit center
- Trying to approve a work order in the incorrect status
- Having the same person who creates a vendor pay the vendor
Failure to negative test the examples above could prove disastrous to end users and the production support team because the application may experience short dumps or abrupt terminations, leaving end users in a state of confusion. You can reuse and share negative testing conditions from the first global template for subsequent system releases to validate the built-in logic for error handling and to account for end-user errors.
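The pattern behind these negative tests — the system must reject the invalid action cleanly rather than short dump — can be sketched in Python. The permission table, exception types, and business rules below are invented for illustration; the point is that the test passes when the system raises a controlled error, and fails when the action goes through or crashes uncontrolled.

```python
class AuthorizationError(Exception):
    """Raised when a role lacks permission for an action."""

class ProjectClosedError(Exception):
    """Raised when posting costs to a closed project."""

# Hypothetical authorization table: role -> allowed actions
permissions = {
    "HR_REP": {"view_employee"},
    "HR_MANAGER": {"view_employee", "approve_raise"},
}

def approve_raise(role):
    if "approve_raise" not in permissions.get(role, set()):
        raise AuthorizationError(f"{role} may not approve raises")
    return "approved"

def post_costs(project_status):
    if project_status == "CLOSED":
        raise ProjectClosedError("cannot post costs to a closed project")
    return "posted"

def expect_rejection(fn, *args):
    """Negative test: PASS only if the system rejects the action cleanly."""
    try:
        fn(*args)
    except (AuthorizationError, ProjectClosedError):
        return "PASS"   # rejected with a handled, controlled error
    return "FAIL"       # action succeeded (or crashed uncontrolled)

print(expect_rejection(approve_raise, "HR_REP"))     # PASS
print(expect_rejection(post_costs, "CLOSED"))        # PASS
print(expect_rejection(approve_raise, "HR_MANAGER")) # FAIL: positive path,
                                                     # belongs in a positive test
```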
Test Beds for Interfaces
You need to test interfaces and conversions as if they are expected to run in a production environment. You might need to simulate the testing of interfaces during early test events, such as unit and string testing, to compensate for missing test beds or scheduling tools. However, this is no guarantee that the interfaces will behave as expected in the production environment.
An example of an interface simulation during unit or string testing is sending a flat file by email to a legacy system when you expect an automated scheduling tool in the production environment to automatically send the flat file. To ensure that the interface works as expected, you should test the interface using a scheduling tool (if one will be used in the production environment), such as AutoSys or CONTROL-M.
Note
Simulating an interface does not test whether it will work as expected in the production environment with a scheduling tool connected to a legacy system. This can create a risk to the cutover team or the production team after the go-live event because the interface was not proven to work as expected in the production environment with a scheduling tool.
As an example, an end-to-end scenario — such as the order-to-cash scenario in Figure 1 — may depend on interfaced data from multiple legacy systems or third-party vendors, with each interface running through a scheduling tool. The interfaces shown in Figure 2 for the order-to-cash scenario may have audit logs, error-handling procedures, and data transformation rules. These require rigorous interface testing during integration testing and prior to go-live, with test beds that closely resemble the settings and connections in the production environment.
Interfaces are critical for maintaining referential integrity for an end-to-end scenario composed of multiple data handoffs from one process to the next. Suppose the order-to-cash scenario depends on four interfaces that are independent of one another and that each has a 90% probability of executing correctly. The probability of success for the entire scenario is then only 65.6%, because the scenario succeeds only if every independent interface executes successfully (0.90 × 0.90 × 0.90 × 0.90 = 65.6%), as shown in Figure 3. This assumes that all the transactional master data and security processes have been configured correctly for the individual tasks, such as create sales order, create invoice, create billing documents, and create deliveries. The project manager and the project’s sponsoring committee may find it unacceptable that a critical end-to-end scenario such as order-to-cash works as expected less than two-thirds of the time.
Figure 3
Probability of success for entire order-to-cash scenario based on interfaces
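The arithmetic behind Figure 3 is a simple product of independent probabilities, sketched here in Python. The 99% figure is an assumed target, added only to show how sensitive the scenario is to per-interface reliability.

```python
# Probability that all four independent interfaces succeed in one run
p_interface = 0.90
p_scenario = p_interface ** 4
print(round(p_scenario, 4))  # 0.6561, i.e., about 65.6%

# If rigorous testing raised each interface to an assumed 99% reliability,
# the whole scenario would succeed about 96% of the time instead
print(round(0.99 ** 4, 4))  # 0.9606
```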
The practices and lessons learned provided above are examples of successful techniques from different SAP projects. No single project has ever implemented all of these practices; instead, each has picked the practices that align with the project’s deadlines, current resource workload, and the project’s cultural or contractual obligations. The practices above provide a framework to build upon when deploying and testing an SAP application.
Jose Fajardo
Jose Fajardo (PMP, M.S., CSCP, and SAP certified) has worked as a test manager for various companies using a wide array of automated testing tools. Throughout his career, Jose has helped clients create testing standards and test plans, mentored junior programmers, audited testing results, implemented automated testing strategies, and managed test teams. He has written numerous articles on testing SAP and authored the book Testing SAP R/3: A Manager’s Step-by-Step Guide (Wiley, April 2007). Jose lives in the Washington, DC area.
You may contact the author at support@saptesting.net.