
Artificial Intelligence and Trust

A brief look at a report by KPMG International Data & Analytics

--

Since starting my internship with KPMG, I decided to spend a few days looking at some of the reports they have published on topics related to AI safety over the last few years. The closest match I could find was a report on trust in analytics: part of its focus is on artificial intelligence (AI), and the question of trust seemed relevant. So how, according to KPMG, can this trust issue in analytics be resolved? I will briefly touch on the foundations of trust the report proposes and on how it discusses AI.

The report, Guardians of Trust, was published in 2018 and is based on a study KPMG International commissioned from Forrester Consulting, surveying almost 2,200 global information technology (IT) and business decision makers involved in strategy for data initiatives. The survey found that just 35 percent of them have a high level of trust in their own organisation’s analytics; trust in analytics, in other words, is low. According to KPMG International, trust underpins reputation, customer satisfaction, loyalty and other intangible assets, which now represent nearly 85 percent of the total value of companies in the S&P 500.


What is the foundation of trust in this report?

The report argues that the governance of machines should not be fundamentally different from the governance of humans, and that it should be integrated into the structure of the entire enterprise. In an age of digital transformation, they argue, trust:

  • Influences reputation
  • Drives customer satisfaction and loyalty
  • Inspires employees
  • Enables global markets to function

To address this lack of trust, it is argued there needs to be a foundation, which the report presents in a diagram (figure credit: KPMG International).

In this regard, the report offers a list of heuristic rules (‘key takeaways’) to abide by:

  1. If you can’t measure it, you can’t manage it
  2. Prioritize risks
  3. Create trust-impact personas
  4. Create a buddy system
  5. Checklist manifesto for data and analytics
  6. Don’t let the board off the hook
  7. Be flexible with horses for courses
  8. Create a mesh governance framework

At a time when machines are working in parallel with people, this study points to a clear need for proactive governance of analytics in order to build trust.

How is AI discussed in the report?

The report clearly treats AI as part of the trust issue and as a potential risk going forward.

“The widespread use of AI will make it imperative — and more difficult — to ensure trusted analytics.”

According to the report, AI can both disrupt and create trust, depending on how it is used: “The age of AI also offers new ways of protecting public trust as we shift from humans towards machines. In audit, for example, cognitive systems can analyze millions of records and identify patterns to create more insights on a company’s processes, controls and reporting. Algorithms, meanwhile, can be designed to reduce human biases in decision making, and blockchain can offer greater data security and new distributed trust models.” They describe this digital shift as a double-edged sword, and there are several issues in this regard:

  • AI systems may be seen as a ‘black box’, making important decisions that few people can fully understand.
  • The ‘superhuman’ behaviour problem: sometimes performance is almost ‘too good’, and we find ourselves unable to predict the consequences.
  • The ‘subhuman’ behaviour problem: people have, for example, been ‘injured by GPS’ when following directions that were outdated and wrong, and visual recognition is great in some areas but less so in others.
  • The ‘bad-human’ behaviour problem: algorithms that use machine learning can pick up bad habits or biases from the human behaviour they seek to emulate.

Who is responsible?

There is a looming question of accountability, and the report argues: “While we may like to blame our machines, they are simply machines and, as such, cannot be held accountable for the decisions or insights they produce.” Asked who is responsible when, for example, a driverless vehicle causes an accident, most respondents (technical decision-makers) pointed to technology functions and service providers: the “…organization that developed the software, ahead of the manufacturer, the passenger and regulators.”

It is therefore said to be important to proactively govern analytics in ways that build trust, resilience, integrity, quality and effectiveness. The person regarded as having the primary responsibility is the Chief Information Officer (CIO). However, it seems few have the resources, or the desire, to take on greater responsibility for the governance of AI and analytics across the core business.

The report includes an interview with Emma Williams, a General Manager at Microsoft. She mentions a current focus on blending EQ, or ‘emotional intelligence’, with traditional IQ. They use an approach called AI FATE: a broader context that includes fairness, accountability, transparency and ethics. Emma mentions that her team includes anthropologists, experts in cognitive behaviour, ethicists, PhDs in human psychology, UX designers and psychologists. A wide range of skills, in other words, is necessary to ensure responsible and trustworthy AI.

Governance of AI

Five top steps are outlined in the report, and they may be a good place to start if you are unsure where to begin with AI governance.

  1. Develop standards to provide guardrails for all organizations
  2. Modernize regulations to build confidence in data and analytics (D&A)
  3. Increase transparency of algorithms and methodologies
  4. Create professional codes for data scientists
  5. Strengthen assurance mechanisms, both internal and external

“AI can increasingly allow auditors to obtain and analyse information from non-traditional sources, such as all forms of media — print, digital and social — and, combined with other information, draw a deeper, more robust understanding of potential business risks.”

KPMG International describes a few examples of essential controls to inspire the management of AI in an analytical enterprise. I have selected a few of their suggestions: (1) partnering and ‘parenting’ algorithms with a nominated human partner; (2) explainable AI: although a technical explanation can be produced, it needs to be understood by teams or even the organisation as a whole; (3) ethics boards to develop standards; and (4) human-centered machine learning. A small sketch of what point (2) can look like in practice follows below.
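To make point (2) a little more concrete, here is a minimal, illustrative Python sketch of one common explainability technique, permutation importance, which estimates how much a trained model relies on each input feature. This example is not from the report; the dataset, model and use of scikit-learn are my own assumptions for illustration.

```python
# Illustrative sketch (not from the KPMG report): surfacing which inputs
# drive a model's predictions via permutation importance, a common
# starting point for explainable AI.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops:
# a large drop means the model leans heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Print the five most influential features.
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"{data.feature_names[i]}: mean importance {result.importances_mean[i]:.3f}")
```

Output like this is only the technical half of explainability; as the report stresses, the explanation still has to be communicated in a form that teams, and the wider organisation, can understand.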

This is day 67 of #500daysofAI. My current focus for days 50–100 is AI Safety. If you enjoy this article, please give me a response, as I want to improve my writing and discover new research, companies and projects.

As mentioned, I do work with KPMG; however, I am using this personal project partly to review some of their reports. Of course, all views here are my own, though you will not find many opinions, as this can be considered more of a summary.

