Expert advice on Access Control 10.0: Q&A with SAP GRC expert Simon Persin (transcript)

Reading time: 10 mins

What’s the relationship between the four components of SAP BusinessObjects Access Control 10.0? What are the implementation and migration best practices, what are your configuration options, and how can the Access Control components best work together?

On March 27, I moderated an exclusive one-hour online Q&A with expert, consultant, and GRC 2012 speaker Simon Persin of Turnkey Consulting in Insider Learning Network’s Compliance Forum. You can also meet Simon at GRC 2012 in Milan, June 6-8, where he will be presenting a session on “Expert guidelines for implementing and integrating the four components of SAP BusinessObjects Access Control.”

Follow the full Q&A discussion in the Compliance forum, or read the edited transcript here:

Allison Martin: Welcome to today’s forum on Access Control 10.0 with Simon Persin of Turnkey Consulting. Simon is a featured speaker at GRC 2012 in Milan, coming up June 6-8, and will be presenting a session on today’s topic: Integrating & implementing the 4 components of SAP BusinessObjects Access Control 10.0.

This is an opportunity to ask your questions about Access Control 10.0, its functionality, and how to integrate this functionality for optimal security and controls in your SAP systems.  

Welcome, Simon, and thank you for joining us today! I know there are already some questions posted for you, so we can get started.

Perla Priscila: Hi Simon,

What would be the recommended sequence and frequency to run the ABAP jobs (programs GRAC_REPOSITORY_OBJECT_SYNC and GRAC_BATCH_RISK_ANALYSIS) that update the Reports & Analytics results in SAP GRC Access Control 10.0 for Access Management?

Best regards,

Perla S.

Simon Persin: Hi Perla,

With the synchronisation jobs, I would tend to have them all as scheduled periodic background jobs and would suggest the following frequency:

– Authorisation Sync – Perhaps weekly or even less frequently depending on the volume of changes to the core authorisations in the Target system
– Repository Object sync – Hourly
– Action usage – Daily
– Role usage – Daily

I would then run an incremental Batch Risk Analysis on a daily basis after the jobs above.
I would also recommend a monthly or weekly full sync to make sure that everything is up to date (ideally outside of core business hours).
Simon
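
For anyone who prefers to script this schedule rather than set each job up manually in SM36, a minimal ABAP sketch is shown below. The program name comes from Perla’s question above; the job name, the 'INCREMENTAL' variant, and the hourly period are illustrative placeholders rather than values from the discussion.

* Minimal sketch: schedule GRAC_REPOSITORY_OBJECT_SYNC as an hourly
* periodic background job. Job name and variant are placeholders.
DATA: lv_jobname  TYPE btcjob VALUE 'GRC_REPO_OBJECT_SYNC_HOURLY',
      lv_jobcount TYPE btcjobcnt.

* Open a new background job definition.
CALL FUNCTION 'JOB_OPEN'
  EXPORTING
    jobname  = lv_jobname
  IMPORTING
    jobcount = lv_jobcount.

* Add the sync program as a job step, using a pre-saved variant.
SUBMIT grac_repository_object_sync
  USING SELECTION-SET 'INCREMENTAL'
  VIA JOB lv_jobname NUMBER lv_jobcount
  AND RETURN.

* Release the job: start immediately and repeat every hour.
CALL FUNCTION 'JOB_CLOSE'
  EXPORTING
    jobcount  = lv_jobcount
    jobname   = lv_jobname
    strtimmed = abap_true
    prdhours  = '01'.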

 

Perla Priscila: Thank you for the recommendation, Simon. We will take it into account and schedule that frequency in our upgraded environment.

Sandy: Hi Simon.

– Which involves less effort and less risk: upgrading from GRC 5.2 to GRC 10.0, or doing a fresh installation?

– If we have to do a fresh installation, can we export the configuration from 5.2 and import it into GRC 10.0?

Cheers,

Sandy

Simon Persin: Hi Sandy

The official line is that there is no direct upgrade/migration path from 5.2 to 10. You will need to upgrade to 5.3 first and then do the migration of data across. 

To be honest, given the significant technical shift, you spend so much time validating and revalidating the migration that I think it’s easier to think of it as a re-implementation with some accelerators on the ruleset front.

I’m not actually sure that exporting and importing the configuration is of much value since you’ll more often want to re-assess the key design decisions anyway. Especially with workflow, I would re-implement it directly within GRC 10.

Simon

jdeloren: Can role assignment conflicts be identified during Risk Analysis and Remediation, or only in Compliant User Provisioning? If possible in RAR, how?

Simon Persin: You can run risk analysis at numerous levels:

– The organisation unit

– Profile level

– Role level, and

– User level

These reports are all in RAR, or ARA (Access Risk Analysis) as it’s now known. If you want the role assignment conflicts, I would lean towards user-level analysis as that will advise you on the conflicts arising between roles.

You can also simulate potential risks from changes to roles or users as well. 

Using Access Request Management (GRC 10’s equivalent of the CUP module), you can assess the impacts as an integrated check in the request process.

jdeloren: Many thanks, Simon!

JurgendeKok: Simon,

What’s your best practice on updating the ruleset? Of course you can perform this using NWBC, but what’s your best practice when performing mass maintenance?
And if you use the upload functionality (like in 5.3), how can you make sure that the upload is done for the correct system (logical/physical)?

Regards,

Jurgen.

Simon Persin: Hi Jurgen, 

I actually quite like using the global upload functionality for mass maintenance, especially in GRC10. 

I think that it aligns more easily with audit requirements to support strong change management as you can then cite transports and effective testing in support of your processes. It is also easier to chunk up the data into business processes so that each business can support their own data outside of the system. This also allows you to remove change access to the ruleset in production and avoid a clear SoD issue within your SoD tool! 

There is an argument for allowing direct changes to the rules in production using NWBC and the mass maintenance options, as it keeps the controls within a single repository. However, you then have more and more people interacting with the system and increase the chance of mistakes being made. If this is your preference, I would definitely configure the approval workflow for function and risk changes.

Regarding your systems question, this is where connector groups help massively. In GRC 10, you can choose to assign the rules to a single system or to a logical connector group. Plus the ability to append or overwrite really helps you to manage the upload more effectively.

Simon

KesavanJagadheesan: Simon,

In GRC 10, is there any restriction on the number of SoD rules within a risk, as there was in GRC 5.3?

Simon Persin: Hi there, 

Within 5.3 I think there was a restriction somewhere around the 47k mark for rules per risk. I have not seen that in 10.0 as yet, but that might just mean that the threshold is higher. It is good practice to split your functions into manageable chunks so that you reduce the load on the system when evaluating them. If you have very broad functions, then you might get performance issues during analysis.

Looking at the database tables and using EarlyWatch reporting should allow you to guard against such issues.

Simon
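
As a rough way of acting on the “look at the database tables” suggestion, a sketch like the one below counts generated action rules per risk so that unusually broad risks stand out before they hurt analysis performance. The table name GRACACTRULE and the field name RISKID are assumptions here and should be verified in SE11 for your GRC release.

* Rough sketch: count generated action rules per access risk.
* GRACACTRULE and RISKID are assumed names - verify them in SE11.
TYPES: BEGIN OF ty_risk_count,
         riskid TYPE c LENGTH 20,  " assumed width
         cnt    TYPE i,
       END OF ty_risk_count.

DATA: lt_counts TYPE STANDARD TABLE OF ty_risk_count,
      ls_count  TYPE ty_risk_count.

SELECT riskid COUNT( * )
  FROM gracactrule
  INTO TABLE lt_counts
  GROUP BY riskid.

* List the broadest risks first.
SORT lt_counts BY cnt DESCENDING.
LOOP AT lt_counts INTO ls_count.
  WRITE: / ls_count-riskid, ls_count-cnt.
ENDLOOP.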

JurgendeKok: Simon,

What is your experience with creating workflows with BRF+? Do you think that the MSMP workflow templates in GRC AC are sufficient, or do you use BRF+ most of the time?

Best Regards,
Jurgen

Simon Persin: Hi Jurgen, 

I have used MSMP and BRF+ throughout all of my GRC10.0 projects thus far. 

However the extent to which I have had to use BRF+ differs depending on the Process ID. 

I have always created at least a custom initiator rule for Access Requests because the default one doesn’t cover the requirements. 

I have also had to create agent rules and routing rules. I’ve not really had to do much with notification rules though.

Most of the other processes are specific to certain use cases and therefore are simple in nature. For these (e.g. Function, Risk approval or Firefighter Log report) the standard settings seem fine. 

Simon
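
MSMP calls your BRF+ initiator, agent, and routing rules automatically, but when building a custom rule it can help to call the underlying BRF+ function directly from ABAP as a quick unit test. The sketch below assumes that approach; the function GUID and the context parameter name 'REQTYPE' are placeholders for your own rule.

* Minimal sketch: execute a custom BRF+ function (e.g. an MSMP
* initiator rule) from ABAP for unit testing. The GUID and the
* context parameter 'REQTYPE' are placeholders.
DATA: lv_function_id TYPE if_fdt_types=>id
                     VALUE '00000000000000000000000000000000',
      lo_function    TYPE REF TO if_fdt_function,
      lo_context     TYPE REF TO if_fdt_context,
      lo_result      TYPE REF TO if_fdt_result,
      lv_reqtype     TYPE string VALUE '001',
      lv_path_id     TYPE string.

TRY.
    " Look up the BRF+ function and prepare its processing context.
    lo_function ?= cl_fdt_factory=>if_fdt_factory~get_instance(
                     )->get_function( iv_id = lv_function_id ).
    lo_context = lo_function->get_process_context( ).
    lo_context->set_value( iv_name  = 'REQTYPE'
                           ia_value = lv_reqtype ).

    " Execute the rule and read back the result (e.g. the path ID).
    lo_function->process( EXPORTING io_context = lo_context
                          IMPORTING eo_result  = lo_result ).
    lo_result->get_value( IMPORTING ea_value = lv_path_id ).
  CATCH cx_fdt.
    " Handle lookup or processing errors as appropriate.
ENDTRY.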

 

JurgendeKok: Simon,

Did you ever encounter problems, or even find yourself unable to install the plug-ins in a certain remote system? For example, problems caused by online connections to other systems that need to be taken offline before the plug-in software can be installed? Any experience with similar problems?

Best Regards,

Jurgen

 

Simon Persin: Hi, 

We had a couple of interdependencies to manage, especially with BW systems, whereby the import queue had to be completed prior to implementation.

Other than that, the plug-ins went in very smoothly with minimal fuss or disruption to the system or business.

Understanding the compatible GRCPI* version required is the important thing for me, as you need to have the correct version for your Basis level. If you have the wrong one, then SAP will reject it.

However, this applies to the standard SAP systems; using other applications with GRC will require additional middleware components and adapters, either from SAP GRC or through Greenlight Technologies.

Simon

JurgendeKok: Thanks for your quick and detailed answers!

Perla Priscila: Hi Simon,

Would there be a functionality consideration (maybe CUP) in connecting the complete backend ERP landscape to the GRC AC production environment? Or could all the non-production environments from the backend landscape be connected to a non-production GRC AC instance, with production systems connected to GRC AC production?

Best regards,

Perla S.

 

Simon Persin: Hi Perla, 

This is a question which is coming up a lot during the early scoping phases of projects and also on the training courses I’ve taught. It is important to understand what you want to do with your GRC system and then look at how best to architect it.

I think GRC has a massive role to play in supporting both production and non-production systems from the perspective of controls and efficiency. However, it can create massive complexity in your connector configuration to support that and actually open up a different set of risks. 

The main advantage of connecting GRC prod to SAP pre-prod is in role build, as you can then track all of the compliant role build processes throughout the landscape.

You can do risk analysis against a productive ruleset from anywhere in your SAP landscape. You can also check for critical development access (developer and transport activities) from your GRC production system. It is also much more efficient to have a single user management workflow process for all systems. 

It gets complicated when you factor in the GRC pre-production environments. You need to validate your GRC changes against a system somewhere, and you need to be clear which system is the one you want to rely on and which is supporting testing. For example, you could actually provision access from GRC dev for unit testing, from GRC QA for UAT, and from GRC prod for actual users, which then massively complicates your user provisioning standards. How can you check whether it was a proper user change (subject to production-like controls) or just a test?

A combination of connector config and naming conventions as well as compensating controls might be the answer here! 

Simon

 

Perla Priscila: Thanks, Simon. I really appreciate the useful explanation of these decisions that you are able to share.

 

Scott Wallask: Hi Simon — I’m not too familiar yet with Access Control 10.0. What are the monitoring options available with 10.0?

Thanks…

 

Simon Persin: Hi Scott, that’s quite a wide question really, as there are lots of different monitoring options. Put simply, you can do the following:

 

– Use the analytical reports to periodically monitor the risk exposure from your business rulesets

– Use Emergency Access Management (Firefighter) to monitor and control elevated access

– Use the management information dashboards to track and monitor trends on access related risks

– Use the security reports to perform regular compliance checks

– Use the service level and provisioning related logs to track user access requests and monitor the quality of those from inception through approval to provisioning.

– Use the usage and alerts logs to track the actual use of authorisations in the target systems

 

Within the GRC 10.0 architecture, you also have the Process Control and Risk Management modules to look at automated monitoring of configured controls within the system.

If you’ve got more specific requirements, please let me know so that I can suggest something more useful to you. 

Simon

 

jdeloren: Hello Simon,

Do you have any experience with an implementation where MSMP workflow was not the existing means to approve access? Have you worked on a project where MSMP approvals were not employed, or do customers typically adopt this approval method?

Thanks,

James

 

Simon Persin: Hi James,

I’ve been to some organisations where they use MSMP in combination with a few manual checks. 

It is not always practical to have all of the process steps configured in the system. In this case, the end users submitted the requests to a key user community who did some off-system checks for pre-requisites, non-SAP system access, etc. (I did point out that this was possible within the tool, but hey!). The key users then submitted the request on behalf of the end users (therefore effectively providing the first approval of the request), and then MSMP was there to explicitly approve the requested access.

This also works very well when the role design is complex and remediation has not allowed for the roles to be transparent to end user communities. In that way, the users do not need to understand the complexity of the technical roles to be requested.

Simon

jdeloren: Great information. Thank you.

 

Allison: Thanks to all who posted questions and followed the discussion! And thank you to Turnkey Consulting’s Simon Persin for taking the time to respond to these questions.

 

A full summary of all the questions will be available here in the Compliance Forum and the Compliance Group on Insider Learning Network, and you can post your Compliance questions at any time in the Compliance Forum.

I encourage you to join these groups for ongoing information and additional resources, and to read Simon’s GRC Expert article on 10.0 configuration.

You can also meet Simon this spring at SAPinsider’s annual GRC 2012 conference in Milan, June 6-8. We hope to see you there!
