Charting a Path to Responsible AI

It appears that 2025 may be the year of AI. More and more organizations are beginning to understand the transformative power of AI and are starting to integrate it throughout their essential workflows. Yet as organizations adopt AI, they may be unaware of the potential issues that come with it.

To help organizations understand and overcome these issues, EY has compiled a list of the top potential issues that companies might face in their AI implementation initiatives.

Potential Issues

  1. Data: Businesses that do not prepare their data properly risk exposing it to external parties, such as outside collaborators and sources.
  2. Performance: Companies need policies that restrict access to AI tools until governance and training modules can be developed and shared.
  3. Algorithms: Complex technologies can produce unique AI outputs for individual prompts; without tracking and retaining AI training data, this can be problematic for organizations.
  4. Design: Companies should ensure that the design of their algorithms is clear and transparent and that the algorithms do not alter the underlying data.
  5. Training: Organizations that do not hold training sessions on GenAI and its associated risks invite inadvertent misuse.

“One of the trickiest parts of channeling AI for business is the human bias factor. AI absorbs everything it receives without judgment, so individual user choices, however innocent or inadvertent, can easily affect AI outcomes, becoming a permanent part of the AI universe. That’s why it is advisable to create a protected internal space for practice and experimentation before applying AI capabilities to external business deliverables,” said Samta Kapoor, EY Americas AI Leader.

Overcoming Issues

Though these potential issues may be concerning, EY also offers a framework of five key considerations to help companies mitigate them.

Considerations

  1. Define a responsible AI framework to validate compliance of AI models from design to implementation with a rigorous feedback mechanism.
  2. Establish an AI operating model, a multidisciplinary AI organization structure comprising business, technology and various compliance functions to implement AI at scale responsibly.
  3. Employ specialized cybersecurity controls to meet the unique challenges presented by AI systems and mitigate risk to your organization.
  4. Prepare your data. AI requires vast amounts of unstructured and structured data, and a mature data management program with robust governance is necessary to deploy transformative AI solutions.
  5. Activate your enterprise across functions, defining roles and responsibilities for each and establishing a process to educate, train and inspire users.

What This Means for SAPinsiders

Data is the foundation. AI is only as powerful as the data it is trained on. Companies cannot expect accurate or helpful results if their data is inaccurate or duplicated. Those wanting to adopt AI should bolster their data management as a first step.

Remember the human element. Though AI is promising, it is still a capability that people need to wield. Organizations should specify who is using which AI tools, and how, to ensure that their AI initiatives are successful. They should also hold training sessions to help these users better harness AI.

Set up guardrails. AI is a powerful tool when used correctly. Though companies may be new to using it, there are commonsense frameworks they should adopt to reduce risk and maximize output. Many organizations are turning to expert partners like EY to better understand what these guardrails are and which should be emphasized for their specific organization.
