Tuesday, 19 November 2024

Resurrecting a Pixel C

I'm putting off the post I need to write so I'm going to ramble a bit about resurrecting an old Pixel C tablet by sideloading a custom operating system. This is something that sounds scary but really isn't, so I thought I'd share in case anyone else has hardware in the cupboard and no desire to buy new, expensive kit.

Setting the scene

My tablet usage is pretty modest. For anything significant, I'll use a laptop / desktop or my mobile phone. My tablet is mostly used for video streaming (YouTube, Netflix, etc) and a bit of web browsing when it's the closest thing to hand. I am not a power user.

Many years ago (2016), I bought a Google Pixel C tablet. This was originally released in 2015 and was Google's first tablet under the Pixel brand. Other than having a name that is really annoying to search for, it was a fabulous piece of kit - feeling very solid and chunky. However for boring reasons, when COVID hit I moved and my Pixel C fell out of use.

Fast forward many years, and my iPad battery is dying so I thought I'd see about resurrecting the Pixel. Firstly, I hadn't realised how long it had been since it was last used - like a lot of people I know, my sense of time has been utterly smashed by two years of lockdown - so it was a bit of a surprise to discover Google stopped supporting it nearly six years ago (end of 2018). Consequently, although it fired up fine, it was hideously out of date with no path to catching up. It seems older versions of Android (8 in this case) have a problem connecting to modern 5GHz Wi-Fi networks, so my lovely hardware was dead unless I wanted to run a slower Wi-Fi network.

Why take the easy solution, eh?

So, with my brain telling me "this is only a few years old" (it's nine years old?!) I thought I'd look at bringing it back to life via the medium of a custom ROM. For the uninitiated, this is basically installing a new operating system package - pre-built with the proper drivers and so on - however it is more complicated than on a PC because tablets and phones tend to lock all this down to stop people fiddling and bricking the device. However, I'm (apparently) a grown adult so I'll fiddle if I damn well please. Plus, it doesn't work now so it's not going to get any less functional - perfect for some learning.

Get on with it

The hard part was really all the prep. I needed to find a good ROM, which I did via the application of Google and reading. LineageOS is the gold standard for community-run operating systems, but even they have dropped support for the Pixel C. However there are some intrepid folk out there who are keeping the dream alive on the XDA forums and a helpful dev going by npjohnson is pushing out builds for the Sphynx project, via his forum thread. Sphynx is an unofficial build of LineageOS tailored for the Pixel C - perfect for me.

The instructions are good - I read them a few times before giving this a go because I was scared - and basically there are four stages:

  1. Download adb and fastboot to your computer - this laptop is Ubuntu Linux so that was as simple as sudo apt install adb fastboot, but Windows options are also available
  2. Learn how to boot your tablet into the recovery mode menu (for the Pixel C, with the device off hold Power and Volume Down)
  3. Download the desired image - I just took the latest (lineage-21.0-20241019-UNOFFICIAL-sphynx.zip at the time) and it worked fine
  4. Follow the instructions very closely

Extremely minor gotcha - I extracted the downloaded zip file to get a recovery.img file for step 3, then used the original zip for sideloading in step 4. Other than a couple of slightly alarming beeping sounds from the device, this was the only time I really needed to think once I got going. The whole process took about an hour, going very slowly and carefully, then additional time to set up the tablet (obviously it is wiped so you'll lose anything on it from before).
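For the curious, the computer-side of the process boils down to two commands. This is a sketch of my run, not a substitute for the real instructions - follow the XDA thread closely, and note the filename will change with newer builds:

```shell
# With the tablet in the bootloader menu (off, then hold Power + Volume Down),
# flash the recovery image extracted from the downloaded zip:
fastboot flash recovery recovery.img

# Then reboot into that recovery, choose "Apply update" -> "Apply from ADB",
# and sideload the original (unextracted) zip:
adb sideload lineage-21.0-20241019-UNOFFICIAL-sphynx.zip
```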

Behold

And it's working! There are some known issues - the camera doesn't work properly, apparently bluetooth is a mixed bag and the rotational sensors also don't work - but these haven't impacted my needs. If you like running your tablet in portrait mode, apparently this can be a pain. However, for me I have a shiny refurbed tablet that plays videos and doesn't keep turning off mid-video. Hurrah!

Overall, this is scary. But it turns out it is also easy. Hopefully others will give it a go and bring their devices back to life.

Return of the Pixel C

Thursday, 14 November 2024

Sending email - redux

It feels like forever since I wrote my last blog post on sending email via a vanity domain but it has actually only been a year. In that post, I noted that sending via SendGrid was optional and it should all be doable using Google servers. The SendGrid config has been mostly ok, but hasn't been perfect and I wanted to remove this free third party from my email toolchain, so I've revisited the setup and got it working through the Google mail server.

Will this work long term? Hopefully, but I'm not convinced for reasons I've laid out below. Let's do this.

Sending email

First, I needed an app password for my Google account, which is a bit buried in the security interfaces. You can find the admin console for app passwords here.

This is also reachable by going to Google account settings, under "How you sign in to Google" select "2-Step Verification" and the option for App Passwords is at the bottom of the page.

Next, to configure Gmail to send via the Google mail server, I needed to set the outgoing mail server, found in:

Settings -> See all settings -> Accounts and import -> Send mail as

Then adding / editing my intended outgoing email address with these details on the second page ("Send emails through your SMTP server"):

SMTP server: smtp.gmail.com
Port: 465
Username: Your Google account (blah@gmail.com)
Password: Your app password from earlier
Secured connection using SSL ticked

Email security

This is a minefield and something I'm going to have to monitor to see how horribly I have broken things. First, this tool from Red Sift is great for analysing email security settings.

Deep breath...

SPF

To the DNS record, add:

include:_spf.google.com

and remove references to SendGrid - SPF permits only a limited number of DNS lookups (ten), and exceeding that flags the record as insecure.
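Put together, my understanding is the final TXT record on the root domain ends up looking something like this (the domain and the ~all softfail policy are illustrative):

```
example.com.  TXT  "v=spf1 include:_spf.google.com ~all"
```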

DKIM

Without a full Google Workspace account, I can't upload a private key to Gmail so DKIM isn't going to work. Hopefully SPF will be enough. We'll see...

I think this also scuppers MTA-STS, but happy to be corrected.

DMARC

This is tricky. DMARC requires one of DKIM or SPF to pass its tests properly in order to pass, then it suggests the receiving server take a specified action (via the p flag). In this case, my DKIM check is going to fail so that is out. The SPF check passes the initial test, however there is a further check to make sure the Sender and From headers are the same. In my case they are not, since Sender is gmail and From is my domain, so the check fails with a no-alignment error - thus the DMARC check itself fails.

I've "solved" this by setting the p flag in my DMARC DNS entry to "none" which just tells the receiving email server to deliver it anyway. It seems to work for my very limited testing sample, but obviously I'm not happy about this approach.
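For illustration, the resulting DMARC record looks something like this (the domain and rua reporting address are placeholders - reports go wherever you choose):

```
_dmarc.example.com.  TXT  "v=DMARC1; p=none; rua=mailto:dmarc-reports@example.com"
```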

What is next?

I hope the SPF config will be enough to make my email work happily again, however I'm clearly hitting the limits of the free options in Gmail. If this doesn't work well enough, I think I'll move away from free options and look at something like Amazon SES which (from a quick read) will let me configure everything I need and charge me $0.10 per 1000 outgoing emails. This is probably the ultimate "correct" solution (unless I want to pay for a Google Workspace account) but is a lot more work and right now I don't wanna.

Sunday, 6 October 2024

Migrating postgres databases from ElephantSQL to Neon

Continuing my series of "if I push enough buttons I can get postgres to work for me" I am going to record how I migrated from ElephantSQL to Neon. This is one of my personal documentation posts - I write these for my own reference for when I need to do something similar in future but all useful information has dropped out of my head so I don't have to distil something simple from proper documentation again. They are sometimes useful to someone doing the same thing (I'm actually surprised how often I do send these links to people) but since more folk are reading my blog from LinkedIn these days this is fair warning.

The setup

I was migrating from ElephantSQL to Neon as the former was shutting down. I wish Neon all the best, but the way things are at the moment I guess it's only a matter of time before I have to do it again. Migrating a simple postgres database is straightforward, but if (like me) you don't do it often it is nice to have the process written out.

This is for my own experimental applications, so I'm dealing with small, single-node databases and I'm not worried about downtime.

Recover the data

Getting the data out of the source database is straightforward. Simply log into the control panel, copy the connection string and use pg_dump for a full download:

pg_dump -Fc -v -d <source_database_connection_string> -f <dump_file_name>

-Fc makes the output format suitable for pg_restore. -v is verbose mode, showing you all the things going wrong...
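As a quick sanity check before uploading anything, pg_restore can list the contents of a custom-format dump without touching any database (assuming the Postgres client tools are installed):

```shell
# Print the dump's table of contents - a quick way to confirm the dump
# actually captured your tables before restoring it anywhere.
pg_restore --list <dump_file_name>
```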

Upload to the new database

Initially, I struggled a bit with Neon. I created the database and user in the web interface, but could not find a way to associate the two so consequently pg_restore failed with permissions errors. The simple way around this was to create the database via the Neon CLI, recorded here as a bit of a gotcha.

neon roles create --name new_user
neon databases create --name new_database --owner-name new_user

And for completeness, these are the commands which list the databases and users.

neon roles list
neon databases list

Once the database is created properly, it can be restored using the pg_restore command.

pg_restore -v -d <neon_database_connection_string> <dump_file_name>

Repointing the database

So far so simple. Now to reconfigure the application - this should be a case of updating an environment variable. For a Rails app, that is likely DATABASE_URL. Simply edit the environment variable to the new database connection string, restart the app and this is done.
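As a minimal sketch (the connection string below is a made-up placeholder, not a real host), repointing really is just one variable:

```shell
# Point the app at the new database - the URL here is a placeholder.
export DATABASE_URL="postgres://new_user:secret@ep-example-123456.eu-west-2.aws.neon.tech/new_database?sslmode=require"

# Confirm what the app will see before restarting it.
echo "$DATABASE_URL"
```

On a host like Render or Koyeb the same change is made through the dashboard's environment settings rather than a shell, but the principle is identical.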

Again, this is for a very simple application - one Rails node, small database, no particular need for zero downtime.

Hopefully this will be useful to someone out there even if it's just me in the future. Hello, future me. What are you up to these days?

Monday, 23 September 2024

Why good software engineering matters

I've needed to make some changes to a few of my personal applications recently and running through the process made me reflect on some of the basic building blocks of my profession. As a deeply uncool individual, I am very interested in the long-term sustainability of our technical estates so I thought I'd capture those thoughts.

The story so far... I run a few small-scale applications which make my life easier in different ways. I used to host these on Heroku, then when they shut down their free tier I migrated them all to Render and Koyeb with databases hosted by ElephantSQL. About a year on, I started getting emails from ElephantSQL telling me they are shutting down their database hosting so I needed to migrate again. I also needed to fix a few performance problems with one of the applications, and generally make some updates. Fairly simple changes but this is on an application I haven't really changed in several years.

A variant of this scenario comes up regularly in the real world. Unless you're lucky enough to be working on a single product, at some point your organisation will need to pick up some code nobody has touched in ages and make some changes. The application won't be comprehensively documented - it never is - so the cost to make those updates will be disproportionately high. Chances are, this means you won't do them so the application sits around for longer and the costs rise again and again until the code is totally rotten and has to be rebuilt from the ground up, which is even more expensive.

In a world where applications are constantly being rolled out, keeping on top of maintenance - and keeping organisational knowledge - is vital, but also definitely not sustainable. There are lots of service-level frameworks which promote best practice in keeping applications fresh, with ITIL being the obvious one, but this is only part of the picture. How do we reduce the cost of ongoing maintenance? Is there something we can do to help pick up and change code that has been forgotten?

This is where good software engineering makes a huge difference, and also where building your own in-house capability really has value. Writing good code is not just about making sure it works and is fast, and it's not just about making sure it's peer reviewed - although all of this is very important. But there are many approaches which really help with sustainability.

Again, my applications are really quite simple but also the "institutional knowledge" problem is significant. I wrote these (mostly) alone so anything I've forgotten is gone. The infrastructure has been configured by me, and I'm not actively using much of this stuff day to day so I have to dredge everything out of my memory / the internet - I am quite rusty at doing anything clever. These problems make change harder, so I have to drive my own costs (time in my case) down else I won't bother.

Let's look at some basics.

First, the database move. My databases are separated from the applications which means migration is as simple as transferring the data from one host to another and repointing the application. This last step could be tricky, except my applications use environment variables to configure the database. All I need to do is modify one field in a web form and redeploy the application to read the new target and it's done with minimal downtime. Sometimes developers will abstract this kind of change in project team discussion ("instead of pointing at this database, we just point at this other one") but with the right initial setup it really can be that simple.

Oh, except we need to redeploy. That could be a pain except... my applications are all set up for automated testing and deployment. Once I've made a change, it automatically runs all the tests and, assuming they pass, one more click and the new version goes to the server without my having to remember how to do this. I use GitHub Actions for my stuff, but there are lots of ways to make this happen.

That automated testing is important. Since everything in tech is insufficiently documented (at best) this creates a safety net for when I return to my largely forgotten codebase. I can make my changes or upgrades and run the tests with a single command. A few minutes later, the test suite completes and if everything comes up green then I can be pretty confident I've not broken anything.

Finding my way around my old code is fairly easy too, because it conforms to good practice use of the framework and it is all checked by an automated linter. This makes sure that what I've written is not too esoteric or odd - that is, it looks like the kind of code other people would also produce. This makes it much easier to read in the future and helps if someone else wants to take a look.

So through this, I've changed infrastructure with a simple field change, run tests giving me significant confidence the application is working after I've made a change with a single command (which also checks the code quality) and deployed to the server with another single command. To do all this, I don't really have to remember anything much and can focus on the individual change I need to make.

Now, any developer reading this will tell you the above is really basic in the modern world - and they are right, and also can be taken MUCH further. However, it is very hard to get even this level of rigour into a large technical estate as all this practice takes time - especially if it was not the standard when the code was initially written. But this really basic hygiene can save enormous amounts of time and thus costs over the lifecycle of your service. At work we are going on this journey and, while there is a lot more to do, I'm immensely proud of the progress that the software engineering teams have made driving down our costs and increasing overall development pace.

Basics are important! Always worth revisiting the basics.

Saturday, 31 August 2024

Moving office

I don't often directly talk about events at work, but for once I'm going to celebrate something rather cool that's happened. We've moved offices!

Despite the valiant efforts of our estates people, the Macmillan offices in Vauxhall were ... well, terrible. Vauxhall itself is a roundabout with delusions of grandeur and the building was slowly falling apart around us. I do not frequent the office too often, so I was rather surprised during a trustee meeting when the whole building started to shake like the apocalypse had come. Nobody else blinked - this was "normal" to the point of it happening several times a day. The rest of the day gave a glorious demonstration of the problem and I have no idea how anyone copes, frankly.

So for this and various other reasons it was time to Be Elsewhere. However for us in Tech this meant we had to face the (kinda literal) elephant in the closet - the server room on the premises. This was not a comms cupboard, but a proper server room, with ageing steam-powered servers bolted to the floor, powering the organisation. But this was not a time for panic and fear - instead, we had a fantastic opportunity to take a big step modernising our systems. A golden opportunity to spend a chunk of time significantly moving the dial on our legacy tech debt in the service of a hard deadline which the org needed to hit. We grasped this opportunity, with months of work spent virtualising, reconfiguring, and rebuilding to bring things into a much better state in preparation for the move.

To actually move, we initially had to plan for disabling everything. However, with every week of work spent cleaning up dependencies and updating our overall configuration, we decoupled more systems, and by the time we came to do the move the only services we actually disabled were the ones hosted on the machines we had to turn off. This in itself was a huge win, but the move weekend itself was exceptional. I've been involved in lots of tech projects over the years, and many releases, and something always goes wrong and needs correcting at the last minute. We had our share of challenges, but for the week before the move we were having daily meetings in which we were looking at each other wondering what we had missed - things were calm. Then the weekend was so well executed it was almost unsettling. The team didn't exactly stick to published timings, but only because they were so far ahead.

Overall, it was incredibly smooth and not only did this enable our office move, we have finished with our systems in a much better place, either in the cloud or in a proper data centre and better understood, and run by a team with a great deal of (very much earned) confidence. An exceptional result on the back of a lot of hard work - really knocked it out the park.

The second, and far more visible, part is the new office itself. This was clearly much wider than the Technology group, but we had a crucial role in making sure the new premises had an internet connection (which it didn't until quite literally the 11th hour ... worrying times!) and working AV, door controls, room bookings, etc etc. The wider team did an excellent job bringing everything together on time and it is lovely being in a modern office which doesn't shake when the trains go by. In particular (for my post!) I'm going to say the technology is working rather well. The new meeting equipment is very easy to use, with great sound quality and scary cameras which track motion to zoom in on speakers. I wonder if I can mount a nerf gun on one of them...

So yes. Some excellent work here and well worth recording. A great result for Macmillan. For the Technology group, not only did we play our part in the move, we also managed to modernise, increase knowledge, improve resilience and more across our server infrastructure, AND we removed a load of problems with the office AV. As I said at the top, I don't often write about specifics here but I thought I'd make an exception.

And to close, here are some pictures of the new place including the most important part of the new office building - a button which gives hot chocolate milk...

The Forge, Macmillan

Congratulations everyone!

Sunday, 28 July 2024

When to mentor

I've been thinking again about mentoring. When is the right point to consider the challenge of mentoring someone? When does one know enough? When should one offer oneself as a mentor, without it coming across as seriously arrogant?

The answer is, of course, never. A mentor is calm, wise, and has seen it all before. They can easily understand everything that could possibly come up, have a very clear plan in place immediately and be able to take a mentee forward through any situation. Does this sound like me / you? Really? Plus, let's face it, if I / you know it then it's pretty obvious and can't possibly be worth offering to someone else.

Or at least that's what The Voices say to me every time I think about this. This is, of course, nonsense.

So what is the real answer? When is the right time to help those with less experience? Now. It doesn't matter what experience you have - it is more than some people. Sure, over time that number will increase and more folk will benefit from hearing from you, but you already know something that is unique and worth telling others. Mid-level developer? Plenty of people coming through the junior levels who need to learn from you. "Only" a junior? Well, there are plenty of people who are just starting out and have no experience at all.

This is before we get to the value of mentoring to you. Similarly to writing a blog, there is a discipline in structuring ideas and then talking through concepts clearly in a way someone else can understand, and like any form of teaching, one needs an extra level of understanding to be able to discuss a concept in this way. It is essential for a leader to be able to articulate their thinking clearly in order to bring others along with them. It is also very important to be able to think clearly on the fly - such as when people drop awkward topics of discussion on you at any time.

As an aside, I really don't like the term "mentor" - or rather I don't like thinking of someone as "my mentee" because of the implied power dynamic there. I would say I don't have any mentees, but there are plenty of people who would disagree with that.

Ok, so how does one offer mentoring without sounding deeply arrogant? The easiest route, I think, is by offering to a group who are already in a place to be receptive, and maybe linked to individual topics in which you know you can claim some expertise. I've recently seen someone I respect offering consultation around salary negotiations. This is a form of targeted mentoring, and in a field where she is visibly knowledgeable.

As I said above, I already do some mentoring, however my new year resolutions included giving more back to this industry. So I'm going to do two things.

Firstly, if anyone is reading this and wants a chat about the tech industry - in particular technical leadership, moving from a technical job to a leadership role, the role of technical knowledge in the strategic / leadership space or similar - then please do reach out on LinkedIn or Twitter. I am also open to speaking to groups (which is a whole different post).

Secondly, I'm going to make this same offer in an engineering leadership Slack which is filled with people I don't know. That idea scares me ... we'll see what happens.

An important caveat here. I know there are qualified coaches, mentors and so on. I am not that. I am simply someone who has been around a bit.

Anyway, I'm going to do something here and I challenge you, dear reader, to do so too. The important thing is that there is always something one can offer to others who are looking to learn. And there is always something one can learn from someone else. We all can find value by listening to and learning from each other.

Sunday, 23 June 2024

Behold the art

Sometimes I just want to write a post on this blog to mark something that made me happy. This is one of those, so feel free to tune out if you're looking for something something data technology management leadership.

Anyone left? Cool.

For reasons that escape me, I was asked to show some of my photographs at a local exhibition of creativity and art. I have been a keen photographer for many years, but I've never really thought my pictures rise to any kind of display standard. However, others did not share that opinion so I was pushed into creating a display.

It went well!

My photos at the St Swithins art exhibition 2024

And my photos of the whole event are here.

I actually got some really good comments. People loved the theme coming through the writeup and apparently some people got quite emotional when I wasn't there. In addition to the disease itself, the COVID lockdown has left some deep wounds and it does seem weird to me that, unlike other national emergencies (eg the war) we don't really talk about it much. Some of life has changed, some of life has reverted to as it was before. But in the main we're just carrying on. Certainly in my head unless I properly think about it, lockdown was ages ago now and something that happened over a couple of weeks. Clearly, that is absolutely not true, and I find it weird how keen we are collectively to put it to the side. Perhaps this is our way of collectively dealing with the trauma? It seems we should have a national memorial day or something?

Anyway, before this becomes a post about lockdown or COVID, just a few notes on how this was pulled together.

Obviously, the photos were taken years ago. They were part of a wider set (which was on the projector above, and is in the embedded carousel below) depicting light in the darkness of those times. This set is also on Flickr and that creates a slideshow which could be put up on screen. These pics were made into a book years ago, so I got one of the owners of that book (my parents) to pick their favourites and, after a spot of measuring and checking the DPI I calculated they would work at A3 size. Sadly, the local print store has shut down so after a spot of Googling, I took my flash drive to Ryman and was very pleasantly surprised that their photo printing is (to my amateur eye at least) really very good. If you are in Bath and need something printed well, you can do a lot worse!

One Flickr link, six pictures and a short write up combined into what you see above. I was really quite pleased with the outcome and it makes me think I would like to do a bit more of this kind of thing. Of course that means I need to do some more creative work.

Since this blog is usually about the tech industry - I also met someone who is a developer looking at their future in the industry and gave them some sage advice (lol). Seriously, no idea if I said anything valuable or not, but I am always open to opportunities to help those coming through and give back. In fact, there is a post on this coming soon...

And to sign off - here is the full set of images from the display.

Light book - lockdown