Humans in the AI loop

  • Dr Giles Cuthbert
  • 10 September 2020
  • Blog | Fintech and Innovation in Banking

In this series I’ve been looking at the very useful publication from the Information Commissioner’s Office and the Alan Turing Institute, Explaining Decisions made with AI: a collection of three guides that set out to help organisations explain AI-assisted decisions.

So far, I’ve covered my sense that this is an important document for establishing a common vocabulary around a difficult topic. In doing so, it shines a light on the need for all of us, collectively, to understand how AI is, and might be, used within our own organisations.

As we move into the remaining guides, Explaining AI in practice and Explaining what AI means for your organisation, we start to get into slightly more technical areas, focused on those within project teams (design and implementation), and even compliance. So, for many readers I expect this is like opening up the casing to see the wiring. However, many of you might be asked to support early-stage design: for example, you might be asked ‘How do you perform this task?’ or ‘Typically, what sort of discussions do you have with clients or colleagues?’, all of which might inform the development of an AI system. And the questions asked at the beginning of that process are incredibly important.

You may even be part of what the Guides refer to as ‘a human decision maker’. But whatever your role, the key takeaway from the second Guide, Explaining AI in practice, might be the process overview.

It’s this second Guide I’ll cover here, and it’s perhaps the toughest to summarise. It is about structuring the system, which certainly sounds very technical. However, as I mention above, I feel it is important to bring it to your attention, as you may easily become ‘someone involved’ in the development of a process supported by AI decision-making.

This is broken down into six key tasks, each covered in detail, including checklists and such practical matters as recruitment and training. For example, ‘Task 4’ highlights the need to be able to translate the rationale of a system’s results into usable and easily understandable reasons. Automated software may come into play here, but in many cases it may be a person, the implementer, who is responsible for translating results into reasons. Here it is noted that ‘it is important to remember that the technical rationale behind an AI model’s output is only one component of the decision-making and explanation process. It reveals the statistical inferences (correlations) that your implementers must then incorporate into their wider deliberation before they reach their ultimate conclusions and explanations.’ Again, the system gets you so far, but ultimately people are responsible for making sure the right decisions are made and are understood.

So, no surprises that this task list also includes an emphasis on training. Indeed, for most readers with only a passing interest in AI, ‘Task 5’ of this guide might be of most interest. It looks at ‘preparing the implementers’, and one never knows when one’s role might be expanded to include the use of AI systems! Here, the basic premise is clear: training should help identify the benefits and risks of using these systems to assist decision-making, and in particular how they help humans come to judgements rather than replacing that judgement. Beyond this, it provides a fairly comprehensive list of what training should cover, including the basics of how machine learning works and the limitations of the AI and automated decision-support technologies used. There is also some useful writing on managing bias, and on the value of exploring both AI-related and human biases during training. For most readers, bias is perhaps one of those ‘hot topics’ they are already aware of, and, since it is a complex one, I shall return to it later in this series.

The final task focuses on presenting an explanation. Here it suggests thinking of the explanation as a conversation, rather than a one-way process. The point being made is that people should be able to discuss a decision with a competent human being. And because that competent human could be you, dear reader, I recommend you take a look at this guide, if only to make sure that you get the training that experts in this field deem necessary if organisations are to get it right when explaining decisions made using AI.

Other blogs in this series:

How to speak AI

AI needs an interest in explanations; not interesting explanations


Author

Dr Giles Cuthbert

Chartered Banker Institute | Managing Director


Giles leads the Institute’s thought-leadership on ethical banking, particularly around digital ethics. He has around 20 years’ experience in professional education and professional standards, and has led a wide range of education and professionalism projects to significantly diversify the work of the Chartered Banker Institute, developing major strategic programmes for diverse areas of the banking industry around the globe. In particular, he has specialised in developing bespoke professionalisation programmes for banks, coupled with highly innovative accreditation services. These projects have reached many tens of thousands of banking professionals.

Giles holds a doctorate in AI and Professional Ethics, a degree in law, a Master’s in education, and a Master’s in applied professional ethics.