Showing posts with label ai. Show all posts
Showing posts with label ai. Show all posts

Wednesday, 27 August 2025

The Insider Threat in the Age of Agentic AI

Insider threats. A common enough cybersecurity concern, albeit one that in my experience is rather hard to successfully articulate both within a Tech department and especially to a wider organisation. It's very understandable - when discussed, the threat is usually either dangerous to the culture ("trust nobody!") or trivialised ("lock your screen!") and it is hard to maintain understanding of the risk, while also accepting this is part of life.

For those who aren't in or around cybersecurity, an insider threat is when someone uses their legitimate access to exploit your computer systems. We mitigate these risks with both individual behaviours and systemic safeguards - from locked screens to zero-trust access controls.

Classic examples include the "evil maid" scenario - an employee who has access to the many parts of a building and can use that to steal data or items. Or someone in payroll giving themselves a pay rise. Or someone with access to the secret strategy leaking it.

Not all insider threats are intentional. For instance legitimately sharing data with a colleague by putting it on an open share allows unintended people to download it. This is still a leak resulting in a data breach and this problem only gets worse if you are handling sensitive information such as medical details.

Now let's assume our organisation has just hired a new person. They are exceptionally clever - able to consume and process data like nobody you have ever met and solve problems in ways others don't even consider. Senior leadership, eager to take advantage of their skills, expands their role and grants them unlimited access to all the data in the organisation. The security team raises concerns, but this opportunity is too good to miss. They get it all - every file, every email, everything.

Unfortunately, they are also completely amoral.

Some time later, a private discussion about downsizing shows their role is at risk and they are likely to be terminated. They have discovered this by reading everyone's email. They have also uncovered some very embarrassing information about the CEO and quietly use it to blackmail them into inaction. Later, there is a strategic shift and the company decides to change mission. Our insider finds out early again, and decides to leak key strategic data to force a different outcome.

Clearly this is a disaster and exactly why these safeguards are in place. Nobody in their right mind would give this kind of unchecked reach to a single employee.

But what happens when the "employee" isn't human? Imagine an insider threat that arrives not in the form of a disgruntled staff member, but as an AI system with the same kind of access and none of the social or moral guardrails.

Oh, hello agentic AI.

The point of agentic AI is to operate as an autonomous agent, capable of making decisions and taking actions to pursue goals without constant human direction. It is given a goal, a framework, access to data and off it goes.

Research is starting to show it can be ... quite zealous at pursuing its goal. In the lab, AI agents have made the very reasonable decision that if problem X needs to be solved, then a critical requirement is the AI itself continuing to function. From there, if the existence of the AI is threatened that threat becomes part of the problem to solve. One neat solution - reached more often than you would think - is blackmail. A highly effective tactic when you have no moral compass. In many ways, this is just office politics without empathy.

An agent going nuts in this way is called "agentic misalignment".

The concept is very important, but I don't particularly care for this term. First, it is very "Tech" - obscure enough that we have to explain it to people who aren't in the middle of this stuff, including how bad things could get. Second, it is placing the blame in the wrong place. The agent is not misaligned, it is 100% following through with the core goal as assigned, just without the normal filters that would stop a person doing the same thing. If it is not aligned with the organisation's goals or values, that is the fault of the prompt engineer and / or surrounding technical processes and maintenance. It is an organisational failing, not a technical one and I feel it is important we understand accountability in order to avoid the problem.

In the very-new world of AI usage, agentic AI is so new it is still in the packaging. Yet launching AI agents able to make decisions in order to pursue an agenda is clearly the direction of travel and this will create a world of new insider threat and technical maintenance risks and we need to be ready. The insider threat is clear from the above - we as technical leaders need to be equipped to speak about the risks of data access and AI, while recognising that there is a very good reason to make use of this technology and a suitable balance must be found. In some ways, this is a classic cybersecurity conversation with a new twist.

We also need to be ready to maintain our AI agents. Like any part of the technical estate, they require ongoing attention: versioning, monitoring, and regular checks that their prompts and configurations still align with organisational goals and of course we have to maintain organisational skills and knowledge. Neglecting that work risks drift, and drift can be just as damaging as malice.

But it isn't just accidental error - there is scope for new malicious attacks. If I wanted to breach a place which had rolled out an unchecked AI agent, I'd plant documents that convince an AI the company's mission has changed, then have someone else nudge the AI into leaking secret information to "verify" that new strategy. Neither person would need access to the secret information, they would only need to shape the AI's view of reality enough to prompt it into revealing information.

For instance, in a bank you might seed documents suggesting a new strategy to heavily invest in unstable penny stocks. Another person could pose as a whistleblower and ask the AI to share current investment data "for comparison". The AI, thinking it is being helpful (simply following its core instructions) and protecting the organisation, might disclose sensitive strategy. I have actively created the agentic misalignment, then exploited it.

Now you might be thinking this is extreme, and honestly for much of it you would be right. However, consider the direction of travel. There is a huge push for organisations to exploit the power of AI - and rightly so, given the opportunity. Agentic AI is the next phase, and we are already starting to see this happening. But most organisations are, if we are honest, really bad at rolling out massive tech changes or indeed knowing where their data is held, and being properly on top of technical maintenance. Combine this with the lack of proper AI skills available and we see a fertile environment for some pretty scary mistakes.

As technical leaders, we must be ready for these challenges. We must be ready to have these conversations properly - carefully and risk-conscious, but not close-minded and obstructive. The insider threat is evolving, and so must we. We also have to be ready for a huge job sensitively educating our peers and communicating concerns very clearly. Fortunately, as a group we are famously good at this!


This post references research, which comes from this post on agentic misalignment. I was very pleased to see research putting data to a concern I've been turning over for some time!

Monday, 26 May 2025

Digital Inclusion in the age of AI

These days, working in tech means spending lots of time thinking about how to implement and exploit the capabilities of AI. This technology is changing the world with new options and capabilities and this train has a lot of track left before we reach the edge of this bubble and it falls off the rails. Personally, I see this current era like the dotcom bubble. Exactly like the internet, we have a technology that will fundamentally change the world and usher in a new paradigm for modern life (ugh) but is also being over-hyped and over-invested and eventually reality will catch up.

However, I want to be clear that I'm not an AI denier or a full-blown Luddite. What we have now is a truly wonderful set of tools and we're barely starting to scratch the surface of the capabilities ahead of us. Remember when the pinnacle of the internet was dancing banana gifs? Now it powers global ... well, everything. AI has the same potential, hype bubbles be damned.

And here we reach the point of this post. Alongside thinking about how to bootstrap data migrations and create AI-ready technology suites despite legacy and technology debt I've been pondering something much more important - digital inclusion in the face of AI.

Society is not good at dealing with sweeping change. If we follow the business drivers alone, we rapidly reach the point where it is too expensive to support people. Superfast broadband changed the face of the internet, but if you live somewhere slightly rural you probably don't have access to this. It's expensive to lay those cables if there are only three households using it so bad luck. Sites like Amazon or banking apps have no requirement to support all users, so they have a cost / benefit ratio that targets modern browsers and modern hardware. If you're running older hardware and cannot upgrade then it is not financially viable to maintain the service for you.

This is not a post about bashing capitalism, but I want to make it clear that people are always left behind when technology pushes society forward. People are cut off from what others consider normal, and eventually there is just no way to bridge that gap. This is where government steps in. There is legislation covering the national rollout of broadband. Has this solved the problem? No. But it has forced progress in the right direction. Working on online government services, digital inclusion was (and is) vital. There are huge benefits to digitising government services, but it is simply not acceptable to leave anyone behind. This is one reason there is always a paper fallback for any online government service.

Other organisations face the same problem. Charities such as Macmillan are not required to make their services available to all, but clearly it is in support of the mission to make sure they do - and again, tremendous work is done in this area.

There are many strands to digital inclusion, but put very simply they come down to identifying barriers created by skills, access or money and how those barriers can be removed.

Ok, time to think about AI. First, we shall consider cost. You can do some stuff for free, but if you want to properly use a tool you will likely want a subscription. A ChatGPT subscription is £20 per month. If you want to add a Microsoft / Google productivity subscription that's another £20 per month (Google Gemini). For the moment that is probably enough, unless you want to play with video or something else specialist. But we have already reached £40 per month or £480 per year. Apparently the average UK salary at the time of writing is £37,430pa gross (source: Forbes). So our £480 is over 1.5% of net income per year. That is a huge chunk of income when considering it is up against essentials like rent and food.

Now, we can say that AI tools are a luxury and arguably for the moment that is true. But this is a technology that can supercharge productivity. Someone familiar with AI tools can research more thoroughly, write better, generate ideas and templates ... and this is all very simple prompt work. And equally importantly, they can produce results so much faster. If this is applied to a job search, use of AI to enhance writing can be a massive uptick in the quality of an application which obviously makes the applicant more likely to get the role.

We have something that will rapidly become an essential skill and capability. How does one learn it? You need some technical skill and you need time. These are not in easy supply for most people and even then, often people need someone to get them started. Point at the correct URL, say "type in there". I've seen it with relatives - it wasn't until WhatsApp got the Meta AI button they engaged at all and they still needed encouragement to push the button when it appeared. Building skills in the alien world of tech is far harder than those of us on the inside realise.

Years ago, access to the internet was a nice to have. Then broadband was a nice bonus on top of your dial-up connection. Now (in the UK at least) your access to high speed internet is enshrined in law. However, it is too late - too many people have already been left behind and it is another have / have not divide in society. AI will create another but more profound divide. Rather than have / have not we will see a can / can not gap and that will directly align with salaries.

Written out, this progression is pretty obvious to me, and I am sure I am not the only one. The first question is - do we care? I have spent my career in public and third sector work and for me, the answer is a clear yes. AI is an exciting and genuinely transformative technology, but if we want it to be a force for good we must ensure it doesn't just benefit the wealthy and technically literate. We need to be thinking about digital inclusion now - as a core concern, not as a side project.

For myself, I am going to keep giving back to this industry where I can. Where I work with services and policy-makers, I will continue to uphold these ideals. More locally, I recently became a STEM Ambassador, which gives me the chance to connect with developing minds (yikes) - and the people who teach them. I am running some AI workshops this summer, helping people get started one "type here" at a time.

These are not grand gestures. But inclusion starts small - with a nudge, a link, a bit of time. This stuff is surprisingly low-barrier once you know where to look.

So, ending on a challenge. If you are already on the inside, think about who isn't - and how you might help them in. The divide is growing. Let's not wait until it's too wide to cross.

Monday, 27 May 2024

AI in the charity and healthcare sectors and not leaving people behind

A couple of weeks ago I attended the CIO Digital Enterprise forum and spoke on AI in healthcare and the charity sector. Everyone knows AI is absolutely everywhere, and is the solution to every problem in the known universe and while we are clearly in the upper parts of a crazy hype cycle, unlike recent tech revolutions this one might actually deliver some of its promise to change the game. In this world, it is very important we consider all of society and do not leave people behind, and this was the topic of my fireside chat with Timandra Harkness who did a wonderful job interviewing me (I was rather nervous!). I thought I'd recap some of what I said here, although I'm not going to bother writing much about efficiency. Everyone knows that at this stage.

Charities and the public sector need to think about customers differently to a business. Where a company like Amazon can focus down to the most profitable users and decide, after analysis of the return on investment, to simply ignore anyone who doesn't own a modern smartphone or a high speed internet connection this isn't really an option for us. Our mission is to reach everyone, so we need to avoid making decisions that cut out or degrade service to subsets of the population.

Fundamentally, charities exist on trust both for income and service delivery. Income is predominantly donations from people who want to support the cause, and fairly obviously people will not donate to an organisation they do not trust to be good stewards of their money. Similarly, people will only reach out for a service to an organisation they trust. This naturally leads to a more risk-averse approach to anything that can damage that trust.

At Macmillan, we are trying to reach people who are going through one of the worst experiences of their lives, when they are most vulnerable. This is a tremendous privilege and responsibility and we have to take this very seriously, understand where people are coming from and meet them at their place of need. We work with people from all manner of backgrounds. Some are highly educated in the medical field. Some are in cultures where speaking of any illness, let alone cancer, is taboo. Some will reach out to a doctor when feeling unwell. Some mistrust doctors and the wider establishment and will talk to a local community or spiritual leader instead. All these different groups and many more besides deserve access to the best healthcare available when they need it and for many of these people we'll have perhaps one chance to engage with them and build a connection before we're written off as "not for them".

Looking at technology, this means we have to be very very careful when putting in anything that can be a barrier to engagement and this does not sit well with many of the end-user deployments of AI at the moment. Although the potential is far wider, the discussions around AI usually end up being about cost saving - doing more with less. When talking about user interaction, an obvious option is the chat bot, either web chat or an automated phone responder. These tend to communicate in a very particular way which works for simple information retrieval but lacks warmth and certainly isn't all things to all people. I know I've been turned off from services by being presented with chat bots (in fact, I wrote a post about this some years ago) and I work in this field and haven't been looking for potentially terrifying medical advice. Chat bots are getting better all the time, but at the moment they certainly do not replace the personal connection one gets from a well trained call responder.

That said, call responders are expensive and their capacity scales linearly so need to be deployed carefully. Behind the scenes, there is lots of use for data (and therefore potentially AI) driven optimisation of their time, ensuring good stewardship of donations by making sure phone lines are staffed without being over-staffed. As real-time translation improves, this will also make a huge difference to us. There are a lot of languages spoken in the UK and we cannot possibly maintain a workforce which allows people to speak to us in whatever language they choose. However if and when we can have ongoing translations between our users and our call centre staff we can communicate in their preferred language, again reaching them in their place of need.

In a similar way, use of AI in semantic site searching is an opportunity to allow people to communicate with us how they choose. In earlier days of the internet, everyone knew someone who was "good at finding things with Google" - this means they could phrase their searches in a way the search engine understood. Any good site tries to make finding content easier through good information architecture and a decent search function, and this can be significantly enhanced with AI. Again, closing the gap with users rather than expecting them to come to us.

Of course, AI-driven chat bots do have a place working out of hours. As long as it is very clear when speaking to a machine rather than a person, and there is clear signposting to when a human is available, it provides a "better than nothing" opportunity for when the phone lines are closed.

This theme also comes through when considering personalisation. In theory, personalisation lets us provide content suitable for you and your needs, which is a great way of helping you find what you want. However, promoting some content inherently means we're demoting other content. Is this the right tradeoff? Ideally, yes and I'm sure we can tune the site to behave better for a high percentage of visitors. But we're trying to reach everyone and now we're doing maths trading some people off. If we can provide good personalisation for 99% of our visitors, that means in a period of time where we're seeing 100,000 visitors we're actively hiding the content 1000 people need. In all likelihood, those people with "unusual" needs are going to correlate with the people about whom we have less data and guess which of the above groups that represents...

This is the fundamental danger of data-drive organisations and service design. The underlying data must be understood, including the weaknesses. We know there are many MANY holes in research data across healthcare. You may well have equal access to medical care, but the medical care itself was almost certainly not developed equally and its effectiveness will vary accordingly. There is a lot of work going on to correct this problem (although not enough!) but in the meantime we need to be very alert to not compounding the problem.

This is a useful segue to the last thing I want to put down. We were talking about the future where AI takes us. I had a couple of things to say, but the one I want to replicate here is around the change I hope we will see across the sector. Currently, charities cooperate with other organisations, but each is fairly stand alone. Given the rich, but incomplete (see above) data we are collecting and our resources being tiny when compared with big tech firms, I hope we see "big data" collaboration across charity groups to help spread the costs and fill in data gaps. We need to deliberately find and occupy our places in a wider ecosystem, so we can work together, share and signpost to each other more as a single organism rather than overlapping entities. What that specifically looks like remains to be seen, but this has to be the future and I'm hoping to be a part of it.

And let's close with a picture of me pretending to be smart...

Photo credit to CIO Digital Enterprise forum