
RSM’s Guide to Overcoming Privacy Concerns For AI



Key Takeaways

⇨ Many companies are now turning their attention toward solutions like AI and automation.

⇨ Yet security and privacy risks can be a barrier to implementing these solutions.

⇨ RSM released several key considerations about AI models that can help organizations keep their data safe.

How SAP users can overcome AI privacy concerns

Organizations are always searching for new technological solutions to help them stand out in a crowded marketplace. Many companies are now turning their attention toward evolving innovations like artificial intelligence (AI) and automation.

However, leveraging these solutions is not without its stumbling blocks. For instance, the recent SAPinsider AI and Automation in Finance report found that security and privacy risks were a barrier to implementing AI and automation technologies for over 50% of survey respondents.

These concerns are particularly prevalent for companies in industries that handle sensitive data, such as finance, legal and health care. These companies worry that using data in an AI context will somehow expose them to malicious actors, leading to data breaches or potential sanctions. Yet neglecting these emerging technologies can put companies behind their competitors.

Overcoming AI and automation data concerns

As long as organizations understand the potential concerns around data use in AI and how to avoid the pitfalls, they should be able to leverage these technologies safely. RSM recently released several key considerations about AI models that can help keep the data of SAP users safe. Those considerations include:

  • Where the model lives—whether an AI model is hosted by a third party or built internally shapes the data privacy risks that can emerge. However, SAP companies can leverage internal controls to ensure no sensitive information moves outside the organization.
  • The information the model can access—if companies upload a document to a large language model like ChatGPT, that data is subject to the provider’s privacy terms and can be used by the solution. Yet as long as users keep their models in-house and do not share data with an open, external application, they should be able to keep important information secure.
  • Development or operation of the model—AI has numerous use cases for improving workflows, but companies must first train their AI solutions to perform those tasks. Some training approaches require companies to upload documents as reference material, which can lead to data leakage. However, companies can use APIs that do not rely on specific internal data to train models.
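
One way to act on the points above—keeping sensitive information out of prompts that leave the organization—is to redact it programmatically before any external API call. The following is a minimal, hypothetical Python sketch; the patterns and placeholder names are illustrative only, and a production deployment would use a dedicated data loss prevention (DLP) tool rather than a few regular expressions:

```python
import re

# Illustrative patterns only; real deployments should use a proper DLP tool
# and patterns tuned to the organization's own data.
SENSITIVE_PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),  # email addresses
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),      # US SSN format
    (re.compile(r"\b\d{13,16}\b"), "[CARD]"),             # long digit runs
]

def redact(text: str) -> str:
    """Replace sensitive substrings with placeholders before the text
    leaves the organization (e.g., in a prompt to an external model)."""
    for pattern, placeholder in SENSITIVE_PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

prompt = "Summarize: contact jane.doe@example.com, SSN 123-45-6789."
print(redact(prompt))  # → Summarize: contact [EMAIL], SSN [SSN].
```

A filter like this can sit in front of any outbound integration, so the same control applies regardless of which external model the organization calls.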


Using new technologies like AI can be daunting from a data privacy perspective. Yet as long as companies adhere to their typical security protocols of not sharing data outside the organization, they can safely harness these technologies.

“It’s the same as a user emailing sensitive data outside of the organization or accidentally sharing it with a client that shouldn’t have access. That’s not an AI problem. That’s a data loss problem that already existed,” said RSM Director Dave Mahoney.

Companies should ensure that their data management practices are equipped to handle new technologies and that users are trained to observe these controls.
