Monday 27 May 2024

AI in the charity and healthcare sectors and not leaving people behind

A couple of weeks ago I attended the CIO Digital Enterprise forum and spoke on AI in healthcare and the charity sector. Everyone knows AI is absolutely everywhere and is apparently the solution to every problem in the known universe, and while we are clearly in the upper parts of a crazy hype cycle, unlike some recent tech revolutions this one might actually deliver on some of its promise to change the game. In this world, it is very important that we consider all of society and do not leave people behind, and this was the topic of my fireside chat with Timandra Harkness, who did a wonderful job interviewing me (I was rather nervous!). I thought I'd recap some of what I said here, although I'm not going to write much about efficiency - everyone knows about that by now.

Charities and the public sector need to think about customers differently from a business. Where a company like Amazon can focus down to its most profitable users and decide, after analysis of the return on investment, to simply ignore anyone who doesn't own a modern smartphone or a high-speed internet connection, that isn't really an option for us. Our mission is to reach everyone, so we need to avoid decisions that cut out or degrade service to subsets of the population.

Fundamentally, charities exist on trust both for income and service delivery. Income is predominantly donations from people who want to support the cause, and fairly obviously people will not donate to an organisation they do not trust to be good stewards of their money. Similarly, people will only reach out for a service to an organisation they trust. This naturally leads to a more risk-averse approach to anything that can damage that trust.

At Macmillan, we are trying to reach people who are going through one of the worst experiences of their lives, when they are at their most vulnerable. This is a tremendous privilege and responsibility, and we have to take it very seriously, understand where people are coming from and meet them at their place of need. We work with people from all manner of backgrounds. Some are highly educated in the medical field. Some come from cultures where speaking of any illness, let alone cancer, is taboo. Some will reach out to a doctor when feeling unwell. Some mistrust doctors and the wider establishment and will talk to a local community or spiritual leader instead. All these groups, and many more besides, deserve access to the best healthcare available when they need it, and for many of these people we'll have perhaps one chance to engage and build a connection before we're written off as "not for them".

Looking at technology, this means we have to be very, very careful when putting in anything that can be a barrier to engagement, and that does not sit well with many of the end-user deployments of AI at the moment. Although the potential is far wider, the discussions around AI usually end up being about cost saving - doing more with less. When talking about user interaction, an obvious option is the chat bot, either web chat or an automated phone responder. These tend to communicate in a very particular way, which works for simple information retrieval but lacks warmth and certainly isn't all things to all people. I know I've been turned off from services by being presented with chat bots (in fact, I wrote a post about this some years ago), and that's as someone who works in this field and wasn't looking for potentially terrifying medical advice. Chat bots are getting better all the time, but at the moment they certainly do not replace the personal connection you get from a well-trained call responder.

That said, call responders are expensive and their capacity scales linearly, so they need to be deployed carefully. Behind the scenes, there is plenty of scope for data-driven (and therefore potentially AI-driven) optimisation of their time, ensuring good stewardship of donations by making sure phone lines are staffed without being over-staffed. As real-time translation improves, it will also make a huge difference to us. There are a lot of languages spoken in the UK and we cannot possibly maintain a workforce that allows people to speak to us in whatever language they choose. However, if and when we can run live translation between our users and our call centre staff, we will be able to communicate in each caller's preferred language, again reaching them in their place of need.
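To make the staffing point concrete, here is a minimal sketch of the kind of calculation involved, using the standard Erlang C queueing formula. The call volumes, handle times and service level targets are made up for illustration; this is not anything we actually run.

    from math import exp, factorial

    def erlang_c(agents: int, offered_load: float) -> float:
        """Probability that a caller has to wait (standard Erlang C formula)."""
        if agents <= offered_load:
            return 1.0  # not enough agents: the queue grows without bound
        top = (offered_load ** agents / factorial(agents)) * agents / (agents - offered_load)
        bottom = sum(offered_load ** k / factorial(k) for k in range(agents)) + top
        return top / bottom

    def agents_needed(calls_per_hour: float, avg_handle_minutes: float,
                      target_answer_seconds: float, service_level: float) -> int:
        """Smallest team size meeting e.g. '80% of calls answered within 20 seconds'."""
        load = calls_per_hour * avg_handle_minutes / 60  # offered load in Erlangs
        handle_seconds = avg_handle_minutes * 60
        agents = int(load) + 1
        while True:
            p_wait = erlang_c(agents, load)
            answered_in_time = 1 - p_wait * exp(-(agents - load) * target_answer_seconds / handle_seconds)
            if answered_in_time >= service_level:
                return agents
            agents += 1

    # Illustrative forecast: 120 calls/hour, 9-minute average handle time
    print(agents_needed(120, 9, target_answer_seconds=20, service_level=0.8))

Feed something like this with forecast call volumes by hour of day and you get a staffing plan driven by demand rather than guesswork.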

In a similar way, using AI for semantic site search is an opportunity to let people communicate with us how they choose. In the earlier days of the internet, everyone knew someone who was "good at finding things with Google" - meaning they could phrase their searches in a way the search engine understood. Any good site tries to make finding content easier through good information architecture and a decent search function, and this can be significantly enhanced with AI. Again, closing the gap with users rather than expecting them to come to us.
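For the curious, here is a minimal sketch of what embedding-based semantic search looks like, using the open-source sentence-transformers library. The model name and the page titles are purely illustrative.

    # Minimal sketch of embedding-based semantic search (illustrative only).
    # Requires: pip install sentence-transformers numpy
    import numpy as np
    from sentence_transformers import SentenceTransformer

    model = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose model

    # Hypothetical page titles standing in for real site content
    pages = [
        "Financial help if you have cancer",
        "Talking to children about a cancer diagnosis",
        "Side effects of chemotherapy and how to manage them",
        "Finding local support groups",
    ]
    page_vectors = model.encode(pages, normalize_embeddings=True)

    def search(query: str, top_k: int = 2):
        """Return pages ranked by cosine similarity to the query."""
        q = model.encode([query], normalize_embeddings=True)[0]
        scores = page_vectors @ q  # cosine similarity (vectors are normalised)
        best = np.argsort(scores)[::-1][:top_k]
        return [(pages[i], float(scores[i])) for i in best]

    # A query phrased in the user's own words, not the site's vocabulary
    print(search("I'm worried about money now I can't work"))

The point is that the query is matched on meaning rather than keywords, so the user's own phrasing is good enough.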

Of course, AI-driven chat bots do have a place working out of hours. As long as it is very clear that you are speaking to a machine rather than a person, and there is clear signposting to when a human is available, they provide a "better than nothing" option for when the phone lines are closed.

This theme also comes through when considering personalisation. In theory, personalisation lets us serve content suited to you and your needs, which is a great way of helping you find what you want. However, promoting some content inherently means demoting other content. Is that the right tradeoff? Ideally, yes, and I'm sure we can tune the site to behave better for a high percentage of visitors. But we're trying to reach everyone, and now we're doing maths that trades some people away. If we can provide good personalisation for 99% of our visitors, then in a period where we see 100,000 visitors we're actively hiding the content 1,000 of them need. In all likelihood, the people with "unusual" needs will correlate with the people about whom we have the least data - and guess which of the groups above that represents...
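Spelling out that arithmetic, with the same illustrative figures rather than real traffic numbers:

    visitors = 100_000
    well_served_rate = 0.99   # personalisation works well for 99% of visitors

    poorly_served = round(visitors * (1 - well_served_rate))
    print(f"Visitors whose content is effectively hidden: {poorly_served}")  # 1000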

This is the fundamental danger of data-driven organisations and service design. The underlying data must be understood, including its weaknesses. We know there are many, MANY holes in research data across healthcare. You may well have equal access to medical care, but the medical care itself was almost certainly not developed equally, and its effectiveness will vary accordingly. There is a lot of work going on to correct this problem (although not enough!), but in the meantime we need to be very alert to not compounding it.

This is a useful segue to the last thing I want to put down. We were talking about the future AI takes us to. I had a couple of things to say, but the one I want to replicate here is about the change I hope we will see across the sector. Currently, charities cooperate with other organisations, but each is fairly standalone. Given the rich but incomplete (see above) data we are collecting, and resources that are tiny compared with the big tech firms, I hope we see "big data" collaboration across charity groups to help spread the costs and fill in the data gaps. We need to deliberately find and occupy our places in a wider ecosystem, so we can work together, share and signpost to each other, acting more as a single organism than as overlapping entities. What that looks like specifically remains to be seen, but this has to be the future and I'm hoping to be a part of it.

And let's close with a picture of me pretending to be smart...

Photo credit to CIO Digital Enterprise forum

