The C-suite’s dilemma: Who’s in charge of Artificial Intelligence risk?

Who owns AI? It’s become the existential problem to solve as adoption skyrockets against a backdrop of uncertainty. The solution starts here.

Every successful company understands the importance of risk management—from cybersecurity and data privacy to regulatory compliance. But one emerging threat is not getting the attention it deserves: the risk associated with artificial intelligence (AI).

Call it the paradox of progress. AI solutions are booming in part because so much of the underlying technology is open source, owned by no one. But leveraging AI to drive insights, automation, and innovation within an organization, while limiting risk at the same time, requires clear ownership and accountability.

This is a massive missing piece and an emerging threat, as we detail in our 2023 KPMG U.S. AI Risk Survey Report. At most organizations, there is not yet a role dedicated to AI risk management. But these risks, and the potential new regulatory requirements meant to help mitigate them, demand a thoughtful new framework to put a tight ring around what will otherwise become an AI circus, and that work needs to start today.

On top of that, the vast majority of the leaders we surveyed expect mandatory AI audits within a few years. And while there’s still much excitement and enthusiasm from company leaders about AI and its vast potential on many fronts, there’s also clear concern about the many unknowns ahead.

A blind spot in the C-suite

To better understand how businesses are approaching AI risk, KPMG asked 140 executives from various industries for their views about the threats associated with their AI initiatives. The big three, they agreed, are data integrity, statistical validity, and model accuracy.

Given that consensus, it’s surprising to discover that relatively few C-suite executives have a seat at the risk mitigation table:

  • Only 44 percent help create new AI-related processes.
  • Just 33 percent develop or implement governance to reduce AI risk.
  • A scant 23 percent review AI risks.

Our survey also found that while many C-suiters are actively involved in providing direction on goals and analytics, they are mostly delegating the equally important rubber-meets-the-road parts: implementation, refinement, and risk review. This suggests that organizations recognize AI-related risks, but may not be bringing enough executive firepower and gravitas to the table to fully address them.

There’s also a potentially concerning gap in understanding of exactly how AI models are defined. For example, 82 percent of survey respondents said their organization has a clear definition of AI and the related predictive models, and overall concern about AI transparency ranked a distant fourth. But with most companies using at least some third-party data and analytics “black box” solutions, which by definition lack that transparency, where is that confidence coming from?

Why AI ownership matters

The lack of AI ownership is exacerbated by emerging technology approaches such as data lakes, which centralize data for convenient AI access and insight mining but also risk disconnecting the data from its source, eroding ownership and domain-specific knowledge. Respondents in our survey named data integrity as their top concern, which raises a hard question: how do you identify intentional errors introduced by malicious actors at the data’s source?

Accelerating government oversight is making leaders sweat as well: 73 percent of respondents reported some level of regulatory oversight of their AI models. In addition, 84 percent believe independent AI model audits will become a requirement within the next one to four years. A patchwork of government agencies is already circling AI model audits in the United States, and the EU is proposing regulations to govern AI model usage, with potential fines for noncompliance.

However, most organizations lack the expertise to conduct these audits internally, with only 19 percent saying they have the necessary skills to do that today. In other words, AI adoption and maturity are outpacing organizations’ ability to assess and manage associated risks effectively. 

Responsible AI: A new way to manage risk

How can you address these threats? One answer is responsible AI: an approach to designing, building, and deploying AI systems in a safe, trustworthy, and ethical manner. To establish a responsible AI platform, organizations can start with eight guiding principles:

1. Fairness: AI-powered products meet expectations set by the Fairness Maturity Framework, ensuring they serve diverse groups of people.

2. Explainability: AI products are easily understood, transparent, and open for review.

3. Accountability: Mechanisms are in place to ensure responsibility throughout the planning, development, deployment, and use of AI products.

4. Data integrity: Trustworthy data quality, governance, and enrichment measures are implemented.

5. Reliability: AI-powered products perform accurately and consistently at the desired level.

6. Security: AI products have safeguards to protect against unauthorized access, corruption, or adversarial attacks.

7. Privacy: AI-powered products respect privacy expectations and safeguard user data.

8. Safety: AI products work as intended and do not cause harm to humans, property, or the environment.

To explore these and other insights, read more from the 2023 KPMG U.S. AI Risk Survey Report.

Explore more insights and opportunities around generative AI

Meet our team

Emily Frolick
KPMG Trusted Leader, Principal, Advisory, KPMG US
Kelly Combs
US Trusted AI Development and Deployment Leader, KPMG LLP
