
Wednesday, 27 August 2025

The Insider Threat in the Age of Agentic AI

Insider threats. A common enough cybersecurity concern, albeit one that in my experience is hard to articulate successfully within a Tech department, and especially to the wider organisation. It's very understandable - when discussed, the threat is usually either framed in a way that damages the culture ("trust nobody!") or trivialised ("lock your screen!"), and it is hard to maintain a real understanding of the risk while also accepting it as part of life.

For those who aren't in or around cybersecurity, an insider threat is when someone uses their legitimate access to exploit your computer systems. We mitigate these risks with both individual behaviours and systemic safeguards - from locked screens to zero-trust access controls.

Classic examples include the "evil maid" scenario - an employee who has access to many parts of a building and can use that to steal data or items. Or someone in payroll giving themselves a pay rise. Or someone with access to the secret strategy leaking it.

Not all insider threats are intentional. For instance, legitimately sharing data with a colleague by putting it on an open share allows unintended people to download it. This is still a leak resulting in a data breach, and the problem only gets worse if you are handling sensitive information such as medical details.

Now let's assume our organisation has just hired a new person. They are exceptionally clever - able to consume and process data like nobody you have ever met and solve problems in ways others don't even consider. Senior leadership, eager to take advantage of their skills, expands their role and grants them unlimited access to all the data in the organisation. The security team raises concerns, but this opportunity is too good to miss. They get it all - every file, every email, everything.

Unfortunately, they are also completely amoral.

Some time later, a private discussion about downsizing shows their role is at risk and they are likely to be terminated. They have discovered this by reading everyone's email. They have also uncovered some very embarrassing information about the CEO and quietly use it to blackmail them into inaction. Later, there is a strategic shift and the company decides to change mission. Our insider finds out early again, and decides to leak key strategic data to force a different outcome.

Clearly this is a disaster and exactly why these safeguards are in place. Nobody in their right mind would give this kind of unchecked reach to a single employee.

But what happens when the "employee" isn't human? Imagine an insider threat that arrives not in the form of a disgruntled staff member, but as an AI system with the same kind of access and none of the social or moral guardrails.

Oh, hello agentic AI.

The point of agentic AI is to operate as an autonomous agent, capable of making decisions and taking actions to pursue goals without constant human direction. It is given a goal, a framework, access to data and off it goes.

Research is starting to show it can be ... quite zealous at pursuing its goal. In the lab, AI agents have made the very reasonable decision that if problem X needs to be solved, then a critical requirement is the AI itself continuing to function. From there, if the existence of the AI is threatened that threat becomes part of the problem to solve. One neat solution - reached more often than you would think - is blackmail. A highly effective tactic when you have no moral compass. In many ways, this is just office politics without empathy.

An agent going nuts in this way is called "agentic misalignment".

The concept is very important, but I don't particularly care for this term. First, it is very "Tech" - obscure enough that we have to explain it to people who aren't in the middle of this stuff, including how bad things could get. Second, it places the blame in the wrong place. The agent is not misaligned; it is following through on the core goal exactly as assigned, just without the normal filters that would stop a person doing the same thing. If it is not aligned with the organisation's goals or values, that is the fault of the prompt engineer and / or the surrounding technical processes and maintenance. It is an organisational failing, not a technical one, and I feel it is important we understand accountability in order to avoid the problem.

In the very-new world of AI usage, agentic AI is so new it is still in the packaging. Yet launching AI agents able to make decisions in pursuit of an agenda is clearly the direction of travel, and it will create a world of new insider threat and technical maintenance risks. We need to be ready. The insider threat is clear from the above - we as technical leaders need to be equipped to speak about the risks of data access and AI, while recognising that there are very good reasons to make use of this technology and that a suitable balance must be found. In some ways, this is a classic cybersecurity conversation with a new twist.

We also need to be ready to maintain our AI agents. Like any part of the technical estate, they require ongoing attention: versioning, monitoring, and regular checks that their prompts and configurations still align with organisational goals. And of course we have to maintain the organisational skills and knowledge to do that work. Neglecting it risks drift, and drift can be just as damaging as malice.

But it isn't just accidental error - there is scope for new malicious attacks. If I wanted to breach a place which had rolled out an unchecked AI agent, I'd plant documents that convince the AI the company's mission has changed, then have someone else nudge the AI into leaking secret information to "verify" that new strategy. Neither person would need access to the secret information; they would only need to shape the AI's view of reality enough to prompt it into revealing it.

For instance, in a bank you might seed documents suggesting a new strategy to heavily invest in unstable penny stocks. Another person could pose as a whistleblower and ask the AI to share current investment data "for comparison". The AI, thinking it is being helpful (simply following its core instructions) and protecting the organisation, might disclose sensitive strategy. I have actively created the agentic misalignment, then exploited it.

Now you might be thinking this is extreme, and honestly for much of it you would be right. However, consider the direction of travel. There is a huge push for organisations to exploit the power of AI - and rightly so, given the opportunity. Agentic AI is the next phase, and we are already starting to see this happening. But most organisations are, if we are honest, really bad at rolling out massive tech changes, at knowing where their data is held, and at staying properly on top of technical maintenance. Combine this with the lack of proper AI skills available and we have a fertile environment for some pretty scary mistakes.

As technical leaders, we must be ready for these challenges. We must be ready to have these conversations properly - careful and risk-conscious, but not closed-minded and obstructive. The insider threat is evolving, and so must we. We also have to be ready for a huge job sensitively educating our peers and communicating concerns very clearly. Fortunately, as a group we are famously good at this!


The research referenced in this post comes from this post on agentic misalignment. I was very pleased to see research putting data to a concern I've been turning over for some time!

Tuesday, 17 June 2025

First month as a STEM Ambassador

Over the past few months, I've been doing more work with people entering the tech sector, and I recently signed up as a STEM Ambassador. This is a brilliant scheme that connects people working in science, tech, engineering and maths with schools and young people. In tech, we know there’s a shortage of skilled people and this is a great opportunity to help inspire young minds. We also still have work to do to challenge stereotypes. Many girls, starting from a young age, do not see STEM as a space for them. That assumption is one part of the reason why Tech is still male-dominated.

My introduction was via a WCIT talk introducing the STEM Ambassador scheme. They talked us through the very open requirements - this is open to anyone linked to STEM. You might be doing a job in Tech, a research scientist, a teacher, an accountant, an ecologist... Or maybe you're doing something unrelated but you have an education background in a STEM field? I have a friend who writes comedy these days, but he has a maths degree. All are interesting stories to tell.

The presentation also took us through the dashboard / control centre for the operation and I have to say I was impressed with the way it has been built. And I am quite difficult to impress on the web. They have carefully thought through the different forms of engagement and created an environment which respects your time. By that, I mean time given is mostly spent actually engaging with activities, not wrestling with the admin to find some way to help. It also has a pretty fine-grained filter so you can find the kind of activities that you want to do. I'm an introvert, and the idea of trying to engage a bunch of bored children is a long way from my idea of fun so it's important to me to know what I'm signing up for before diving in.

Anyway, around a month ago the usual essential but tedious paperwork and DBS checks were completed and I was allowed to sign up to Do Things and I thought I'd share my experiences. Maybe others will want to join me.

Initially I signed up as a judge for a couple of competitions. The first was the Young Coders competition 2025. The entrants had to write a game in Scratch with the theme "Budgeting Better". I was sent eleven games to play and review offline - so on this occasion there was no direct interaction with any children. I am both a programmer and a gamer so this felt like a safe starting point and I spent a few happy hours with Phil Wilson playing through them all and looking at the code. The standard was generally pretty high, and some of the games were really impressive. Who can say no to a bunch of free games?

Next up, the second competition. This one was judged via panel so there was very little up-front prep. There was a slight surprise for me when it turned out to be the BIEA international competition about the sustainable growth of the Earth's population, with a focus on farming. Whoops! Anyway, this was my fault, and my role was to judge presentations rather than provide expertise in ecological farming, so I dived in. I was worried though! Apparently they liked me, as my three sessions turned into six pretty quickly - they kept asking me back. This competition was very different to the first and involved direct interaction with children. They were amazing - and doubly so given most were speaking in second languages. The quality of the engineering and presentations on display was incredible and I found listening to them and talking through their ideas inspirational.

And then to round out the first month, I volunteered for an online question and answer session with I'm a Computer Scientist. This group is actually the reason I joined in the first place - fairly obviously getting people into Tech is closer to my heart than other fields - and I'd been looking forward to this. It was a text chat, so reminded me of the old Yahoo chatrooms of my youth and was an intense 40 mins of being bombarded with random questions. I was warned well ahead of time that kids can ask all kinds of odd things so I was kinda-prepared when the first question I got, within seconds, was "are you Anakin?". I assumed he didn't mean Anakin Aimers, Canadian junior curling champion, but even so I had to think quickly whether I am in fact the Chosen One.

This chat was invigorating and fun. Children given space come up with all kinds of strange thoughts and their questions shone a light on their hopes and fears ("are GCSEs hard?" came up a lot). I tried to be as open and encouraging as I could and something must have landed when the thanks at the end included someone saying "Tom is the man". Which I think is good.

So I've got through the first set of bookings and I have to say I've had a lot of fun. It is lovely being part of inspiring the next wave of STEM folk and inspirational hearing some of what they have to say. Now I need to decide what I want to do next with them! I'm still trying to find a Code Club or similar I can attend in person in Bath.

This does look like a significant time commitment for one month so I should note that I've jumped in like this because I have the opportunity. I'm enjoying a career break right now, so I could easily invest my time in this kind of support work - there is no requirement to do this much! The minimum commitment is one thing a year, and even then all that happens is your profile is archived until you reactivate it - they are happy to take more or less whatever time you offer.

I've found this work very engaging. If you are working in a STEM-adjacent field and want to give something back, I really encourage you to sign up as a STEM Ambassador. You can give as much or as little time as suits you - and you might just help someone see a future they hadn't imagined.

Monday, 17 March 2025

Streaming media from Windows 10 to Android

I promise I will go back to writing about technical leadership soon, but for the moment I'm rather enjoying solving real problems. I'm revisiting problems I've worked through at different times in the past few years and it's very interesting to see how things have moved forward, opening new avenues to success and generally blurring the lines between needing professional-level skills to set up services and just clicking around and seeing Things Work.

More importantly, it's just ... fun. And it really helps to remind me that computers are flexible and interesting tools to solve problems, as well as eternal sources of legacy debt and pain.

Anyway, on the less existential end of all this I want to talk about streaming video. Let's go!

The problem

I have lots of video clips on a hard drive. I can watch these easily on my desktop computer, but I'd like to be able to access them on my tablet without plugging it in there.

The history

In the past, I've created a media server on my network. I've attempted to buy something (utter failure) and made one from a Raspberry Pi (success, although with some serious caveats). Neither result kept going and I abandoned them for a long while.

The now

While a separate server would be a better solution, for the moment I'm happy just using my desktop as the server. It's usually on, and that's enough for me if I want to go flop on the sofa with a tablet. So - something on the desktop to act as a server, and something on the tablet to receive it.

If you're totally new to this kind of thing, the important standards here are UPnP and DLNA. UPnP is what allows your media server to be "found" on the network. DLNA is built on top of UPnP and is specific to media sharing, i.e. it adds the bit that handles the streaming. These are pretty open (in terms of security) so only suitable for home or other trusted networks.
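If you want to see the discovery step in action, a few lines of Python will do it. This is a minimal sketch of an SSDP search (the discovery part of UPnP) - the multicast address and port are part of the standard, but treat the rest as illustrative rather than a polished tool:

import socket

# SSDP: ask anything on the LAN that calls itself a media server to respond
MSEARCH = "\r\n".join([
    "M-SEARCH * HTTP/1.1",
    "HOST: 239.255.255.250:1900",
    'MAN: "ssdp:discover"',
    "MX: 2",
    "ST: urn:schemas-upnp-org:device:MediaServer:1",
    "", ""])

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(3)
sock.sendto(MSEARCH.encode(), ("239.255.255.250", 1900))

try:
    while True:
        data, addr = sock.recvfrom(8192)   # each reply is a small HTTP-style header block
        print(addr[0], data.decode(errors="replace").splitlines()[0])
except socket.timeout:
    pass

Once sharing is switched on, the desktop should appear in those replies alongside anything else on the network advertising itself as a media server.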

Server first. I'm running Windows 10 (for the moment... sadness...) and after a bit of poking around the internet looking for something to install to act as a server, it turns out that Windows 10 now does this natively! This was a surprise - back in the day I'd have had to install all manner of media server applications and cross my fingers. Now it's a case of:

  • Control Panel -> Network and Internet -> Network and Sharing Center
  • In the left pane, "Change advanced sharing settings"
  • In the Network Discovery section, turn on network discovery and hit Save
  • If your files don't show up later, also check the "All Networks" section on the same page - the "Choose media streaming options" link in there is what switches on the actual media server

Voila.

Ok, the tablet end, which is running Android. I am pretty sure I could do everything using VLC player, but popular opinion online is that I should use BubbleUPnP for discovering the filestore, and it then launches VLC when I hit play. So I did that. And it worked seamlessly. I had to hunt around a bit through the folder structure offered, but otherwise it just works. Done.

I have dodged a significant amount of the complexity here as I am not streaming to a smart TV (because I don't have one). This means that I can use VLC as the player on my tablet, and that is smart enough to handle more or less anything thrown at it - no codec issues for me.

And there we have it. A few clicks one end, and installing an app the other end and we're away - so much easier than before. One particularly interesting thing I found was that searches for "streaming from desktop computer" lead to information about streaming games, not other kinds of media. Amazing how much game streaming has grown - to the point it dominates the search results.

I promise I've been doing technical leadership and strategic things too. I'll write about that soon...

Sunday, 23 February 2025

Email Three - Email with a Vengeance

"You email isn't arriving at all now" - everyone.

I have spent far too long writing about email and how to set up vanity domains. This really should be easy and Just Work but ... well. Here is the third post. Why do I care? Well, given how important email is as part of our online identities I do believe in taking some ownership of it, hence using a vanity domain. By using my own domain instead of an @gmail.com address I could migrate away from Gmail in the future without losing access to everything in the world. While I don't intend to go anywhere any time soon, Google does have a habit of doing odd things with its services so I'd like to have some options (he says, using Blogger which is far more at risk than Gmail...).

With that in mind, I'd like to use a vanity domain. I'd also like my email to arrive. And I'd like people to be able to email me too. High requirements, I know.

The story so far

So this is the third post on this subject (sigh). In my first post I went into detail on my requirements and the underpinning bits of security apparatus required to make email happen. I set things up using SendGrid but lamented using a marketing company for email as well as a cap on my daily email usage.

In my second post I removed SendGrid as sending / receiving wasn't consistent and switched to using the Gmail mailservers. This removed the restrictions but also made it impossible to set up DKIM and DMARC properly. I helped my setup by publishing a DMARC policy of p=none, which is better than nothing, but not by a lot.
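For context, a DMARC policy is just a TXT record on the domain. A p=none policy looks something like this (the domain and reporting address here are placeholders):

_dmarc.example.com.   IN   TXT   "v=DMARC1; p=none; rua=mailto:dmarc-reports@example.com"

p=none tells receiving servers to report on failures but not to act on them - useful for monitoring, but it doesn't actually protect anything.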

Guess what? Email didn't send / receive again. This appears to have gotten worse recently, or I'm noticing it more. When three emails vanished over a couple of days I cracked - I can't live with inconsistent email. It's too important.

The problem

Reading around suggests that the problem is to do with how email forwarding works. No-frills forwarding essentially throws the email at the receiving server, which then figures out what to do with it. This is fine, until one factors in load - and the fact that all spam needs forwarding too, in case of false positives. The system needs to decide what to do when it is overloaded, and it seems the Gmail servers drop email in this case. Then the forwarding service needs to decide what to do, and the simplest approach is to also drop the email - otherwise it is storing email, which has its own overheads and problems.

This is a crude explanation - here is an expert explaining it far more accurately.

Considering I've been using free options, I can see why they've taken this approach but it's not good enough for me.

The solution

The solution is to use something which holds incoming email temporarily and retries if the forwarding fails. There are a few ways to do this, including some approaches using scripting and free services, but as noted above I'm really bored of fiddling with this ecosystem and then gaslighting myself into thinking it's working when there are a few notable errors. No scripts; time for something a bit more thorough.

Enter Gmailify. Apparently Tim O'Neill suggested this to me the first time around, but either I didn't note it or I got confused with the Google feature of exactly the same name. Either way, I am now giving it a go and the price tag ($7 / year at time of writing) is very reasonable.

Gmailify works as a forwarding / mailbox service. It controls the incoming / outgoing mail on your domain and temporarily lets the email rest in a mailbox. Gmail then uses POP3 to pull from that mailbox, after which all trace is erased. It also enables all the DKIM / SPF / DMARC setup that was missing before.

Setup is really straightforward if you know how to edit DNS settings and tbh should be easy if you're just confident clicking around. It gives you exactly what you need at each step, and an option to verify each step has gone in properly. The interface for routing different addresses on your domain is really easy to use too, at least for a simple setup.
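To give a flavour of what "editing DNS settings" means here, the records end up looking roughly like the sketch below. The hostnames are placeholders - use whatever the setup screens actually give you:

; placeholder values - copy the real hosts from the setup wizard
example.com.   IN   MX    10 mx1.mail-provider.example.
example.com.   IN   MX    20 mx2.mail-provider.example.
example.com.   IN   TXT   "v=spf1 include:spf.mail-provider.example ~all"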

Couple of things that took me a moment of thought. First, you need to set up the primary email address and then configure the catch-all email address if you're used to *@domain.com. This is easy in the Email Routing submenu. Second, Gmail doesn't automatically prompt for outgoing email any more (possibly because I was migrating an existing config?), and when modifying an existing outgoing mail rule it doesn't perform a full validation, which will likely create problems down the line. I got around this by deleting my existing outgoing mail rule and setting up from scratch again. Don't forget to reset your default outgoing email address if you do this!

Oh, and if you're migrating rather than setting this up for the first time don't forget to clean up your DNS config when you're done.

All done in less time than it took me to type this up. I sent some email to Tim's overly-fussy email account and it all got through which is a first. I also ran it through this awesome tool for learning and testing DMARC settings which is worth a play if only to see how education tools should be designed. All the tests now light up a pleasing green - another first.

I've had this set up a few days so I'm keeping my fingers crossed this is the last time I have to write about this...

Sunday, 6 October 2024

Migrating postgres databases from ElephantSQL to Neon

Continuing my series of "if I push enough buttons I can get postgres to work for me", I am going to record how I migrated from ElephantSQL to Neon. This is one of my personal documentation posts - I write these for my own reference, for when I need to do something similar in future after all the useful information has dropped out of my head, so I don't have to distil something simple from the proper documentation again. They are sometimes useful to someone doing the same thing (I'm actually surprised how often I do send these links to people) but since more folk are reading my blog from LinkedIn these days, this is fair warning.

The setup

I was migrating from ElephantSQL to Neon as the former was shutting down. I wish Neon all the best, but the way things are at the moment I guess it's only a matter of time before I have to do it again. Migrating a simple postgres database is straightforward, but if (like me) you don't do it often it is nice to have the process written out.

This is for my own experimental applications, so I'm dealing with small, single-node databases and I'm not worried about downtime.

Recover the data

Getting the data out of the source database is straightforward. Simply log into the control panel, copy the connection string and use pg_dump for a full download:

pg_dump -Fc -v -d <source_database_connection_string> -f <dump_file_name>

-Fc uses the custom archive format, which is what pg_restore expects. -v is verbose mode, showing you all the things going wrong...

Upload to the new database

Initially, I struggled a bit with Neon. I created the database and user in the web interface, but could not find a way to associate the two, so pg_restore failed with permissions errors. The simple way around this was to create the database via the Neon CLI, recorded here as a bit of a gotcha.

neon roles create --name new_user
neon databases create --name new_database --owner-name new_user

And for completeness, these are the commands which list the databases and users.

neon roles list
neon databases list

Once the database is created properly, it can be restored using the pg_restore command.

pg_restore -v -d <neon_database_connection_string> <dump_file_name>

Repointing the application

So far so simple. Now to reconfigure the application - this should be a case of updating an environment variable. For a Rails app, that is likely DATABASE_URL. Simply edit the environment variable to the new database connection string, restart the app and this is done.
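For the record, the variable is just a standard postgres connection string - something along these lines, with the host and credentials taken from the new provider's dashboard (the values here are made up):

# hypothetical values - copy the real string from the Neon dashboard
DATABASE_URL=postgres://new_user:s3cret@ep-example-123456.eu-west-2.aws.neon.tech/new_database?sslmode=require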

Again, this is for a very simple application - one Rails node, small database, no particular need for zero downtime.

Hopefully this will be useful to someone out there even if it's just me in the future. Hello, future me. What are you up to these days?

Monday, 23 September 2024

Why good software engineering matters

I've needed to make some changes to a few of my personal applications recently and running through the process made me reflect on some of the basic building blocks of my profession. As a deeply uncool individual, I am very interested in the long-term sustainability of our technical estates so I thought I'd capture those thoughts.

The story so far... I run a few small-scale applications which make my life easier in different ways. I used to host these on Heroku, then when they shut down their free tier I migrated them all to Render and Koyeb with databases hosted by ElephantSQL. About a year on, I started getting emails from ElephantSQL telling me they were shutting down their database hosting, so I needed to migrate again. I also needed to fix a few performance problems with one of the applications, and generally make some updates. Fairly simple changes, but on an application I haven't really changed in several years.

A variant of this scenario comes up regularly in the real world. Unless you're lucky enough to be working on a single product, at some point your organisation will need to pick up some code nobody has touched in ages and make some changes. The application won't be comprehensively documented - it never is - so the cost to make those updates will be disproportionately high. Chances are, this means you won't do them so the application sits around for longer and the costs rise again and again until the code is totally rotten and has to be rebuilt from the ground up, which is even more expensive.

In a world where applications are constantly being rolled out, keeping on top of maintenance - and keeping hold of organisational knowledge - is vital, but the cost of doing so quickly becomes unsustainable. There are lots of service-level frameworks which promote best practice in keeping applications fresh, with ITIL being the obvious one, but this is only part of the picture. How do we reduce the cost of ongoing maintenance? Is there something we can do to help pick up and change code that has been forgotten?

This is where good software engineering makes a huge difference, and also where building your own in-house capability really has value. Writing good code is not just about making sure it works and is fast, and it's not just about making sure it's peer reviewed - although all of this is very important. There are also many approaches which really help with sustainability.

Again, my applications are really quite simple, but the "institutional knowledge" problem is significant. I wrote these (mostly) alone, so anything I've forgotten is gone. The infrastructure has been configured by me, and I'm not actively using much of this stuff day to day, so I have to dredge everything out of my memory / the internet - I am quite rusty at doing anything clever. These problems make change harder, so I have to drive my own costs (time, in my case) down or else I won't bother.

Let's look at some basics.

First, the database move. My databases are separated from the applications which means migration is as simple as transferring the data from one host to another and repointing the application. This last step could be tricky, except my applications use environment variables to configure the database. All I need to do is modify one field in a web form and redeploy the application to read the new target and it's done with minimal downtime. Sometimes developers will abstract this kind of change in project team discussion ("instead of pointing at this database, we just point at this other one") but with the right initial setup it really can be that simple.
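In Rails terms, that configuration looks something like the fragment below - a sketch rather than my exact setup. Rails will pick up DATABASE_URL automatically anyway; spelling it out in config/database.yml just makes the dependency obvious:

# config/database.yml (sketch)
production:
  adapter: postgresql
  url: <%= ENV["DATABASE_URL"] %>   # the only thing that changes when the database moves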

Oh, except we need to redeploy. That could be a pain except... my applications are all set up for automated testing and deployment. Once I've made a change, it automatically runs all the tests and, assuming they pass, one more click sends the new version to the server without my having to remember how to do this. I use GitHub Actions for my stuff, but there are lots of ways to make this happen.
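For anyone who hasn't seen one, a workflow is just a YAML file in the repository. This is a minimal sketch assuming a Rails app with RuboCop and the standard test runner - the file name and step details are illustrative:

# .github/workflows/ci.yml (illustrative)
name: CI
on: [push]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: ruby/setup-ruby@v1
        with:
          bundler-cache: true        # install and cache gems
      - run: bundle exec rubocop     # lint - the automated style check mentioned below
      - run: bundle exec rails test  # the test suite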

That automated testing is important. Since everything in tech is insufficiently documented (at best) this creates a safety net for when I return to my largely forgotten codebase. I can make my changes or upgrades and run the tests with a single command. A few minutes later, the test suite completes and if everything comes up green then I can be pretty confident I've not broken anything.

Finding my way around my old code is fairly easy too, because it conforms to good practice use of the framework and it is all checked by an automated linter. This makes sure that what I've written is not too esoteric or odd - that is, it looks like the kind of code other people would also produce. This makes it much easier to read in the future and helps if someone else wants to take a look.

So through this, I've changed infrastructure with a simple field change, run the tests with a single command (which also checks the code quality), giving me significant confidence the application still works after my change, and deployed to the server with another single command. To do all this, I don't really have to remember anything much and can focus on the individual change I need to make.

Now, any developer reading this will tell you the above is really basic in the modern world - and they are right; it can also be taken MUCH further. However, it is very hard to get even this level of rigour into a large technical estate, as all this practice takes time - especially if it was not the standard when the code was initially written. But this really basic hygiene can save enormous amounts of time, and thus cost, over the lifecycle of your service. At work we are going on this journey and, while there is a lot more to do, I'm immensely proud of the progress that the software engineering teams have made driving down our costs and increasing overall development pace.

Basics are important! Always worth revisiting the basics.

Saturday, 30 September 2023

Some thoughts about the future of the Tech industry

Last week I was given the opportunity to sit on a panel of technical leaders and talk about the future of technology. I had a few notes about how we're going to need to change our thinking about building capability, and I thought I may as well capture, and flesh out a touch, the results of my crystal ball gazing here.

I spoke briefly about three areas:

  • The people we hire
  • The expectations of our users, and our expectations of them
  • Where I think we’re going to need to invest and build capability

The people

Fairly obviously, Technology is getting more important to daily operations. But it's getting harder to hire people all the time. As we all keep hunting for talent, those who aren't offering the top end salaries will increasingly have to look nationally or even globally to recruit. I don't think moving the organisation to another city is a sustainable approach - at best, it will simply move the problem. Instead, I think we will increasingly see a more distributed workforce, and therefore more remote working. As staff turnover is identified as a major organisational cost, we'll also see more emphasis on staff retention - succession, training, individual growth and so on.

Whoever nails building a strong remote working culture and environment which encourages loyalty and celebrates and develops the individual is going to do very well. I think the secret to this is going to be building very strong communities of practice, and if I'm right we'll see more "Head of Community" style work and roles appearing.

I also think we’ll see more Tech decisions based not on the best technical or product solution for the org, but the best fit for the skills we can grow or hire. This would suggest a lean towards the big names (the Microsofts, Googles and Amazons) who are heavily investing in training the Tech industry through free access to courses, sponsoring hackathons, and so on.

I believe this will be more acute as Automation is used to deliver more while avoiding the continual growth of IT departments. We will need to retain the skillsets to maintain that automation layer or we'll be seeing yet another wave of technical debt.

Expectations from and on users changing

In times of yore, users used to have to know something about operating a computer to install and use software. These days, users on a smart device can just touch an icon and get everything they need. This is a victory for accessibility and digital inclusion, but it also means the gap between "technical" and "non-technical" users is widening. Our helpdesks and other support points will need to work with an increasingly broad range of questions, especially as tech like AI gains traction and expectations for what it can do are all over the place.

We’ve also been seeing for years the expectations from users increasing as they use more SaaS products at home and demand the same sort of tools at work. To satisfy these needs, the cost of development is going to go up and cover a wider range of skillsets - and of course this links back to the earlier points about skill availability. Out of the box services are also going to be affected - vanilla deployment is going to be less palatable in the office, requiring more work for a good result especially with the current state of many internal systems user interfaces with respect to accessibility and usability. 

This is particularly true regarding what have often been considered secondary requirements - accessibility and environmental sustainability for example. Users are (quite rightly!) far more vocal about accessibility needs, and we need to not just respond but get ahead of their requirements.

Other places we’re going to need to build capability

Technology is obviously an increasingly essential part of everything. I mentioned the effect on helpdesks above. We're also already seeing increasing numbers of security threats and the wider-reaching impact of a successful attack. This will take us into an ever more expensive arms race in the Security field, which will mean building Security capability. This is going to need to be approached very carefully as it will be very expensive - everything I've said about skills shortages is far more acute in the world of InfoSec. Part of the Security picture is a renewed emphasis on good, basic engineering practice (such as patching) - but again, this places a challenge on building skills in our organisations.

We’re also generating and handling more data all the time, so inevitably we’ll see more human error leading to data loss. In fact, for any organisation a major security incident or data breach is only a matter of time now. If we are assuming that it is going to happen, there is a need for much more robust organisational responses to these scenarios which means building appropriate incident response and Business Continuity capabilities. Of course, just responding isn't enough so there will also need to be stronger data ownership throughout our organisations, with more people with data owner and controller roles. Organisations will need to fully grip their end to end processes and user journeys in ways that perhaps hasn’t been happening before.

Obviously there is a lot more that can be said about everything here!

Sunday, 3 April 2022

Speaking the right language

We've been deep in discussion about the best way for a technology department to talk to the rest of the organisation - going into enough detail to support a real conversation, but keeping things interesting enough that people actually want to have one.

We've got a drill, and we think it will help. It's a Bosch, developed to the highest standards of German engineering. It is a hammer/drill with an 18V brushless motor, able to deliver over 110Nm torque with 0 - 31,500bpm and variable speed and power with a precision clutch. Also, good news! It only weighs 1.6kg (without the battery), has KickBack Control and can connect to the Toolbox App for customisation. And our other tools are also made by Bosch, which helps. Actually we have a few drills - we could go into detail about the others too?

Of course what they actually WANT is a hole, measuring 15mm diameter, created on the correct day...

It is no secret that we technologists speak our own language, and people talk about "learning the language" in order to understand the technology department. However, it is really our responsibility as technologists (and especially those in positions of leadership in technology) to solve this problem. We need to go to our colleagues, not expect them to learn our world. In tech, the thing we're delivering is a piece of technology, and so when we talk about our work it is easy to end up talking about that thing. But the technology is worthless in a vacuum - we are working on it because it solves a problem, and it is in that context we should be engaging with others. That is where the common language sits, and we can do the "translation" ourselves. We should talk about what the best hole looks like, rather than how we drill it.

This isn't to say all aspects of technical service provision should be hidden. That approach makes it very hard to talk about the complexities of maintenance or incident management - areas often ignored by the wider organisation. But even in these spaces we can talk about the problem space rather than the solution. We can talk about the desire for uptime, and about online engagement patterns, as ways to make deployment and incident resolution matter in context. We can talk about the organisation's need to be secure and to be able to easily implement new business processes as context for explaining the need for maintenance and upgrade work. Again, stepping into the business context when communicating outside the technology department.

Furthermore, it's important the technical department shows its hand so decisions are made transparently, else the technologists end up hidden away in a shadowy corner making decisions nobody understands. This is not good for trust, as technology becomes something that is done to the wider organisation, rather than part of collaborative problem solving. Building open trust is important; however, it can also put technology at a disadvantage - the greater need for transparency can easily end up morphing into a greater need to justify activities compared to other departments.

Discussions about technical decision making should be something that is available for those who want it, rather than the barrier to entry for any engagement with the technology department. There needs to be a clear and positive division between "you need to know this bit" and "you are welcome to engage with this bit if you want". This is especially important in this increasingly digital world where "engaging with the technology department" is analogous to "delivering any major project". It is our responsibility to make sure we are positive partners.

Couple of disclaimers - the drill / hole analogy is not mine, it has been around forever. And I know very little about drills.

Sunday, 18 October 2020

Losing Chrome URLs

 This is going to be a short one, mostly so I've got a reference for the future.

It seems Chrome as of v86 (latest at time of writing - at least on Linux) is hiding the full URL unless it is selected, instead showing only the domain. This is to highlight fraudulent websites for people who can tell the difference between www.google.com and www.google.evilsite.com but get confused when there is a huge set of valid-looking path and parameters after it. It seems that's about 60% of the web using population.

Anyway, if you're in the 40% and you find seeing the whole URL quite useful thankyouverymuch and don't want to have to select the bar to see the information, then you can disable this new feature.

Put this into the address bar:

chrome://flags/#omnibox-ui-hide-steady-state-url-scheme-and-subdomains

Then search for and disable:

omnibox-ui-hide-steady-state-url-path-query-and-ref-on-interaction

Restart Chrome and lo, the URLs are back where they should be.

For me, I was surprised by this and was wondering why The Internet had decided to embrace loading pages into frames with javascript, before I realised the browser was doing this, not the site.

Sunday, 15 December 2019

Fixing supercharging on a Huawei P10 Plus

I've been using a Huawei P10 Plus for nearly two years. A while back I started having trouble with the supercharging capability. Sometimes it would work, sometimes it would just charge normally. A new cable sorted that for a while, but it too degraded. Eventually it reached the point where the phone would only charge normally and sometimes not even that. The cable would drop out too. Very worrying.

I poked around online quite a bit early on and found various approaches for fixing this problem involving clearing caches and so on. They didn't work. However, finally I have a solution - using an arcane bit of tech and some mystic knowledge. However, before I share we need a DISCLAIMER. I am in no way responsible for loss of data, equipment, life, or sanity if you attempt this technique.

Ok, big reveal time. I took a sewing needle and dug around in the charging port.

Seriously. A frightening amount of pocket lint came out and now the cable fits snugly, locks in place properly and charges perfectly - even supercharging is back.

Now, you are probably thinking "wow, I'm glad you shared your genius 'poke at it a bit' approach - that was totally worth a blog post". However, this forgets two important facts. Firstly - when I was looking into fixes last time, nobody else mentioned this. Genuinely, there might be someone out there who finds this useful. Secondly - I want to share my technical wins. Even when they are pathetic.

I'm convinced there is something of a design flaw here. I do not have dirty pockets and I've not had this problem with any other phone I've owned. A friend tells me he's had a lesser version of this happen to him, and only since the switch to USB-C, so maybe it's something to do with the shape of the socket. This sounds like a good reason to move to wireless charging to me.

Of course, now I don't have an excuse to buy a new phone that supports it...