Data at Rest
Data at rest, both in production and nonproduction environments, is often overlooked as an area of
cybersecurity concern. Examples of data at rest include archives, online backups, client copies for quality assurance and development environments, and disaster recovery sites.
The growth of virtualization, both in-house and in the cloud, makes it cheap and easy to create full-system copies of data for hot failover sites, testing, and development environments. The most commonly addressed area of concern is the set of connections from nonproduction systems out to the rest of the world.
The most common protective tasks are removing connectivity from copied nonproduction systems to external environments, deactivating connectors and external-facing communications, locking user IDs, and closing RFC connections. Though these steps are critical, it is also important to understand the inherent risks to master data and how to protect it.
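A useful first step in closing RFC connections is simply knowing which destinations exist in the copied system. The sketch below is a minimal illustration, assuming the open-source pyrfc connector, a service account with display access, and the standard RFC_READ_TABLE function module; the connection parameters are placeholders, and the actual deactivation of destinations is still done in the landscape (for example, via SM59).

```python
# Minimal sketch: list the RFC destinations defined in a copied (nonproduction)
# system so they can be reviewed and deactivated. Assumes the open-source pyrfc
# connector and a service account authorized for RFC_READ_TABLE.
from pyrfc import Connection

# Connection parameters are placeholders for your own nonproduction system.
conn = Connection(ashost="qas-host", sysnr="00", client="300",
                  user="AUDIT_SVC", passwd="********")

# RFCDES holds the defined RFC destinations; read destination name and type.
result = conn.call(
    "RFC_READ_TABLE",
    QUERY_TABLE="RFCDES",
    DELIMITER="|",
    FIELDS=[{"FIELDNAME": "RFCDEST"}, {"FIELDNAME": "RFCTYPE"}],
)

print("RFC destinations found in the copied system (review and deactivate in SM59):")
for row in result["DATA"]:
    dest, rfc_type = (field.strip() for field in row["WA"].split("|"))
    print(f"  {dest:<32} type {rfc_type}")

conn.close()
```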
Master data is one of the most important company assets. It covers specific data types that, when combined, are the core of business profitability and risk. Management of the quality of the individual master data elements is a key business initiative. Securing that master data requires a skilled team.
What Is Master Data?
- Vendor Information: Vendor contact data, purchase order history, pricing, contracts, and bank account information
- Customer Information: Customer contact data, sales order history, prices, contracts, and bank account information
- Employee Information: Employee contact data, HIPAA data, benefits information, dependent information, and bank account information
- Corporate Product Data: Formulations, bills of materials (BOMs), recipes, project data, material lists, and process documents
Threats
There are many threat vectors for master data. They include hackers, nation-state actors, corporate espionage, employee fraud, disgruntled employees, outsourced resources, and contractors. Attackers know the value of the master data held in an SAP datastore. If they can gain access, regardless of how old the data may appear, they will copy as much of it as they can.
Their goal is to gather bank account information, identities, corporate intellectual property, processes, and trade secrets. Unfortunately, all of these data types are included in one way or another in a client copy or a copy from backup.
In many cases, these nonproduction systems are not monitored to record who accesses them and what they are doing on those systems. A lack of data obfuscation leaves bank accounts, vendor and customer data, and employee personally identifiable information (PII) accessible.
Production user IDs, including those of contractors and outsourced resources, often remain active in nonproduction environments, and testing access is frequently expanded in quality assurance and development systems.
Solutions
Organizations have tools and techniques available to govern and protect data at rest. They fall into three categories: awareness and policy, data categorization, and automation.
Awareness and policy solutions start with creating a policy for managing data at rest. This policy details what data should be included in a copy for testing or development environments, how all nonproduction environments should be monitored, and what the user provisioning and access review processes should include.
The policy should include performing the same risk analysis for data at rest as is done for production systems. This mandates evaluation of systems, locations, access points, and users accessing these nonproduction systems.
Key Questions:
- What are the systems containing data at rest—online archives, copied systems, disaster recovery sites, online backups, offline backups?
- What external access is there to these systems—cloud application test systems, external data transfer points, RFC?
- What are these systems used for (for example, testing), who has access, and is the existence of these systems common knowledge?
- Is there an access management process in place for these systems?
- Who maintains key password data, and is this monitored?
- Is there a data retention policy for the duration and retirement of old data/systems?
Categorization
Data categorization is the process of identifying the critical master data elements to be secured, who makes decisions about those elements, and what the data is worth to the business. Categorization can range from simple (choosing fields for data obfuscation) to complex (classifying data within a full confidentiality model); a simple example is sketched after the questions below.
Key Questions:
- Who needs to be involved from the business to determine which data elements are critical to secure? This includes speaking with finance, sales, product design and build, and human resources.
- How do you build awareness of the problem and build an engagement process for new projects?
- What is required to implement a data categorization project to prioritize the level of security?
- Who should have access to the different categorizations of data?
- What is the purpose of securing this data, and who should maintain what levels of access?
- What data requires obfuscation in offline scenarios?
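The sketch below illustrates what a simple categorization register might look like. It is illustrative only: the data elements, confidentiality tiers, and owners shown are assumptions to be replaced by the output of your own business workshops.

```python
# Illustrative sketch of a simple data categorization register. The elements,
# tiers, and owners are assumptions; the real register comes from workshops with
# finance, sales, product design and build, and human resources.
from dataclasses import dataclass

@dataclass
class DataElement:
    name: str                  # master data element to be secured
    category: str              # vendor, customer, employee, or product data
    owner: str                 # business function that decides access
    confidentiality: str       # e.g., INTERNAL, CONFIDENTIAL, RESTRICTED
    obfuscate_in_copies: bool  # mask/scramble when copied to nonproduction?

REGISTER = [
    DataElement("Vendor bank account",       "vendor",   "Finance",     "RESTRICTED",   True),
    DataElement("Customer contract price",   "customer", "Sales",       "CONFIDENTIAL", True),
    DataElement("Employee dependent data",   "employee", "HR",          "RESTRICTED",   True),
    DataElement("Product formulation / BOM", "product",  "Engineering", "CONFIDENTIAL", True),
    DataElement("Material description",      "product",  "Engineering", "INTERNAL",     False),
]

# Anything flagged for obfuscation feeds the masking/scrambling scope for client
# copies; RESTRICTED elements also get the tightest access reviews.
copy_scope = [element.name for element in REGISTER if element.obfuscate_in_copies]
print("Obfuscate before use in nonproduction copies:", copy_scope)
```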
Automation
Protecting data at rest can be automated with the same tools used for production systems.
Key Questions:
- What are we using for securing the production data?
- How are we monitoring production environments?
- What tools do we already have to leverage?
Some common tools that can secure and monitor data at rest include SAP GRC Access Control, SAP GRC Process Control, field masking, and SAP Solution Manager.
Practitioners should use SAP GRC Access Control for production and nonproduction environments. It can lock out users, validate tester access, approve expanded access provisioning in nonproduction environments, and monitor transaction usage and access in nonproduction systems.
Emergency Access Management (Firefighter) can be used to control and log the most critical data access in nonproduction systems.
SAP GRC Process Control can manage the documentation around system copy requests and approvals, disaster recovery records, and test signoffs.
Field masking, whether done with SAP's field masking tools or a third-party product, provides data obfuscation at the field level. The process can be configured so that field-level visibility is tied to a user's authorizations, and it can be used in both production and nonproduction systems.
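As a conceptual illustration of authorization-driven masking (not the SAP product's actual configuration), the sketch below returns a bank account in the clear only when the viewing user holds a required authorization; the authorization and field values are assumptions.

```python
# Conceptual sketch of authorization-driven field masking, not SAP's actual
# configuration: the field is shown in the clear only when the viewing user
# holds the required authorization; otherwise all but the last four characters
# are replaced. The authorization names and values are illustrative assumptions.

REQUIRED_AUTH = "DISPLAY_BANK_DETAILS"

def mask_value(value: str, keep_last: int = 4) -> str:
    """Replace all but the last `keep_last` characters with asterisks."""
    if len(value) <= keep_last:
        return "*" * len(value)
    return "*" * (len(value) - keep_last) + value[-keep_last:]

def display_field(value: str, user_auths: set[str]) -> str:
    """Return the clear value only for users holding the required authorization."""
    return value if REQUIRED_AUTH in user_auths else mask_value(value)

# A tester in a copied QA client without the authorization sees a masked value;
# a finance user holding the authorization sees the full account number.
print(display_field("DE89370400440532013000", {"DISPLAY_MATERIAL"}))
print(display_field("DE89370400440532013000", {REQUIRED_AUTH}))
```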
SAP Solution Manager monitors Notes, patching, and security update requirements, and its EarlyWatch reports flag outstanding Notes and security updates while also providing reporting on system risk management and transaction execution. Setting this up for nonproduction systems can generate alerts for excessive use of a nonproduction system and help monitor cyber risk.
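The "excessive use" idea can also be approximated as a simple review outside the tool. The sketch below flags users whose transaction executions on a nonproduction system exceed a threshold; the usage extract, threshold, and user names are assumptions, and in practice the counts would come from the system's workload or usage statistics.

```python
# Sketch of an "excessive use" check for a nonproduction system: flag any user
# whose transaction executions exceed a threshold over the review period.
# The usage extract, threshold, and user names are illustrative assumptions.
from collections import Counter

EXCESSIVE_THRESHOLD = 2  # demo value; a real per-period threshold would be far higher

# (user, transaction) tuples extracted from the nonproduction system's usage log
usage_log = [
    ("TESTER01", "VA01"), ("TESTER01", "VA01"),
    ("CONTRACTOR7", "SE16"), ("CONTRACTOR7", "SE16"), ("CONTRACTOR7", "FK03"),
    # ...thousands more rows in a real extract
]

counts = Counter(user for user, _tx in usage_log)
alerts = {user: n for user, n in counts.items() if n > EXCESSIVE_THRESHOLD}

for user, n in sorted(alerts.items(), key=lambda item: -item[1]):
    print(f"ALERT: {user} executed {n} transactions on the QA system this period")
```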
Conclusion
Using a methodical approach to securing data, as shown, will bring data at rest into the holistic data security model. The first step is awareness. From there, use the tools and policies already in place for production systems to manage nonproduction systems.
Data classification is key to knowing what data to obfuscate and what to consider confidential. The business must identify these data points and determine what data to retire from all systems and when.
It is important to understand that just because users have access to the production data does not mean they should have access to copied systems. The mitigating controls and monitoring processes used for production systems are not present in nonproduction systems, so the risk is greater.