
Sunday, 23 February 2025

Email Three - Email with a Vengeance

"You email isn't arriving at all now" - everyone.

I have spent far too long writing about email and how to set up vanity domains. This really should be easy and Just Work but ... well. Here is the third post. Why do I care? Well, given how important email is as part of our online identities I do believe in taking some ownership of it, hence using a vanity domain. By using my own domain instead of an @gmail.com address I could migrate away from Gmail in the future without losing access to everything in the world. While I don't intend to go anywhere any time soon, Google does have a habit of doing odd things with its services so I'd like to have some options (he says, using Blogger which is far more at risk than Gmail...).

With that in mind, I'd like to use a vanity domain. I'd also like my email to arrive. And I'd like people to be able to email me too. High requirements, I know.

The story so far

So this is the third post on this subject (sigh). In my first post I went into detail on my requirements and the underpinning bits of security apparatus required to make email happen. I set things up using SendGrid but lamented both using a marketing company for email and the cap on my daily email usage.

In my second post I removed SendGrid as sending / receiving wasn't consistent and switched to using the Gmail mailservers. This removed the restrictions but also made it impossible to set up DKIM and DMARC properly. I helped my setup by setting p=none which is better than nothing, but not by a lot.

Guess what? Email didn't send / receive again. This appears to have gotten worse recently, or I'm noticing it more. When three emails vanished over a couple of days I cracked - I can't live with inconsistent email. It's too important.

The problem

Reading around suggests that the problem is to do with how email forwarding works. No-frills forwarding essentially throws the email at the receiving server. The receiving server then figures out what to do with it. This is fine, until one factors in load - and that all spam needs forwarding in case of false positives. The system needs to decide what to do when it is overloaded, and it seems the Gmail servers drop email in this case. Then the forwarding service needs to decide what to do and the simplest approach is to also drop the email - else they are then storing email which has its own overheads and problems.

This is a crude explanation - here is an expert explaining it far more accurately.

Considering I've been using free options, I can see why they've taken this approach but it's not good enough for me.

The solution

The solution is to use something which holds incoming email temporarily and retries if the forwarding fails. There are a few ways to do this, including some approaches using scripting and free services but as noted above I'm really bored of fiddling with this ecosystem then gaslighting myself into thinking it's working when there are a few, but notable, errors. No scripts, time for something a bit more thorough.

Enter Gmailify. Apparently Tim O'Neill suggested this to me the first time around, but either I didn't note it or I got confused with the Google feature of exactly the same name. Either way, I am now giving it a go and the pricetag ($7 / year at time of writing) is very reasonable.

Gmailify works as a forwarding / mailbox service. It controls the incoming / outgoing mail on your domain and temporarily lets the email rest in a mailbox. Gmail then uses POP3 to pull from that mailbox which then erases all trace. It also enables all the DKIM / SPF / DMARC setup that was missing before.

Setup is really straightforward if you know how to edit DNS settings and tbh should be easy if you're just confident clicking around. It gives you exactly what you need at each step, and an option to verify each step has gone in properly. The interface for routing different addresses on your domain is really easy to use too, at least for a simple setup.

A couple of things took me a moment of thought. First, you need to set up the primary email address and then configure the catch-all address if you're used to *@domain.com - this is easy in the Email Routing submenu. Second, Gmail doesn't automatically prompt for outgoing email any more (possibly because I was migrating a config?), and when modifying an existing outgoing mail rule it doesn't perform a full validation, which will likely create problems down the line. I got around this by deleting my existing outgoing mail rule and setting it up from scratch again. Don't forget to reset your default outgoing email address if you do this!

Oh, and if you're migrating rather than setting this up for the first time don't forget to clean up your DNS config when you're done.

All done in less time than it took me to type this up. I sent some email to Tim's overly-fussy email account and it all got through, which is a first. I also ran it through this awesome tool for learning and testing DMARC settings, which is worth a play if only to see how educational tools should be designed. All the tests now light up a pleasing green - another first.

I've had this set up a few days so I'm keeping my fingers crossed this is the last time I have to write about this...

Tuesday, 19 November 2024

Resurrecting a Pixel C

I'm putting off the post I need to write so I'm going to ramble a bit about resurrecting an old Pixel C tablet by sideloading a custom operating system. This is something that sounds scary but really isn't, so I thought I'd share in case anyone else has hardware in the cupboard and no desire to buy new, expensive kit.

Setting the scene

My tablet usage is pretty modest. For anything significant, I'll use a laptop / desktop or my mobile phone. My tablet is mostly used for video streaming (YouTube, Netflix, etc) and a bit of web browsing when it's the closest thing to hand. I am not a power user.

Many years ago (2016), I bought a Google Pixel C tablet. This was originally released in 2015 and was Google's first tablet under the Pixel brand. Other than having a name that is really annoying to search for, it was a fabulous piece of kit - feeling very solid and chunky. However, for boring reasons, when COVID hit I moved and my Pixel C fell out of use.

Fast forward many years, and my iPad battery is dying so I thought I'd see about resurrecting the Pixel. Firstly, I hadn't realised how long it had been since it was last used - like a lot of people I know, my sense of time has been utterly smashed by two years of lockdown - so it was a bit of a surprise to discover Google stopped supporting it nearly six years ago (end of 2018). Consequently, although it fired up fine, it was hideously out of date with no path to catching up. It seems older versions of Android (8 in this case) have a problem connecting to modern 5GHz wifi connections and so my lovely hardware was dead unless I wanted to run a lower-speed wifi network.

Why take the easy solution, eh?

So, with my brain telling me "this is only a few years old" (it's nine years old?!) I thought I'd look at bringing it back to life via the medium of a custom ROM. For the uninitiated, this is basically installing a new operating system package - pre-built with proper drivers and so on - however it is more complicated because tablets and phones tend to lock all this down to stop people fiddling and bricking the device. However, I'm (apparently) a grown adult so I'll fiddle if I damn well please. Plus, it doesn't work now so it's not going to get any less functional - perfect for some learning.

Get on with it

The hard part was really all the prep. I needed to find a good ROM, which I did via the application of Google and reading. LineageOS is the gold standard for community-run operating systems, but even they have dropped support for the Pixel C. However there are some intrepid folk out there keeping the dream alive on the XDA forums, and a helpful dev going by npjohnson is pushing out builds for the Sphynx project via his forum thread. Sphynx is an unofficial build of LineageOS tailored for the Pixel C - perfect for me.

The instructions are good - I read them a few times before giving this a go because I was scared - and basically there are four stages:

  1. Download adb and fastboot to your computer - this laptop is Ubuntu Linux so that was as simple as sudo apt install adb fastboot but Windows options are also available
  2. Learn how to boot your tablet into the recovery mode menu (for the Pixel C, with the device off hold Power and Volume Down)
  3. Download the desired image - I just took the latest (lineage-21.0-20241019-UNOFFICIAL-sphynx.zip at the time) and it worked fine
  4. Follow the instructions very closely

Extremely minor gotcha - I extracted the downloaded zip file to get a recovery.img file for step 3, then used the original zip for sideloading in step 4. Other than a couple of slightly alarming beeping sounds from the device, this was the only time I really needed to think once I got going. The whole process took about an hour, going very slowly and carefully, then additional time to set up the tablet (obviously it is wiped so you'll lose anything on it from before).
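For my own future reference, the core of steps 2-4 boils down to a handful of commands. This is only a sketch using the filenames above - the exact flash target and any unlocking steps are whatever the forum instructions say, not this:

# flash the recovery image extracted from the zip (partition name per the instructions)
fastboot flash recovery recovery.img

# reboot into the new recovery, choose Apply update -> Apply from ADB, then push the original zip
adb sideload lineage-21.0-20241019-UNOFFICIAL-sphynx.zip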

Behold

And it's working! There are some known issues - the camera doesn't work properly, apparently Bluetooth is a mixed bag and the rotational sensors also don't work - but these haven't impacted my needs. If you like running your tablet in portrait mode, apparently this can be a pain. However, I now have a shiny refurbed tablet that plays videos and doesn't keep turning off mid-video. Hurrah!

Overall, this is scary. But it turns out it is also easy. Hopefully others will give it a go and bring their devices back to life.

Return of the Pixel C

Thursday, 14 November 2024

Sending email - redux

It feels like forever since I wrote my last blog post on sending email via a vanity domain but it has actually only been a year. In that post, I noted that sending via SendGrid was optional and it should all be doable using Google servers. The SendGrid config has been mostly ok, but hasn't been perfect and I wanted to remove this free third party from my email toolchain, so I've revisited the setup and got it working through the Google mail server.

Will this work long term? Hopefully, but I'm not convinced for reasons I've laid out below. Let's do this.

Sending email

First I needed an app password for my Google account, which is a bit buried in the security interfaces. You can find the admin console for app passwords here.

This is also reachable by going to Google account settings, under "How you sign in to Google" select "2-Step Verification" and the option for App Passwords is at the bottom of the page.

Next, to configure gmail to send via the Google mail server, I needed to set the outgoing mail, found in:

Settings -> See all settings -> Accounts and import -> Send mail as

Then I added / edited my intended outgoing email address with these details on the second page ("Send emails through your SMTP server"):

SMTP server: smtp.gmail.com
Port: 465
Username: Your google account (blah@gmail.com)
Password: Your app password from earlier
Secured connection using SSL ticked

Email security

This is a minefield and something I'm going to have to monitor to see how horribly I have broken things. First, this tool from Red Sift is great for analysing email security settings.

Deep breath...

SPF

To the DNS record, add:

include:_spf.google.com

and remove references to SendGrid to avoid too many lookups - exceeding the lookup limit flags the record as insecure.
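For completeness, the resulting record for a Google-only setup might look like this (assuming the usual softfail for anything else):

v=spf1 include:_spf.google.com ~all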

DKIM

Without a full Google Workplace account, I can't upload a private key to Gmail so DKIM isn't going to work. Hopefully SPF will be enough. We'll see...

I think this also scuppers MTA-STS, but happy to be corrected.

DMARC

This is tricky. DMARC requires one of DKIM or SPF to pass its tests properly in order to pass, then it suggests the receiving server take a specified action (via the p flag). In this case, my DKIM check is going to fail so that is out. The SPF check passes the initial test, however there is a further alignment check to make sure the sending domain and the From header match. In my case they do not, since the sender is gmail and From is my domain, so the check fails with a no-alignment error - thus the DMARC check itself fails.

I've "solved" this by setting the p flag in my DMARC DNS entry to "none" which just tells the receiving email server to deliver it anyway. It seems to work for my very limited testing sample, but obviously I'm not happy about this approach.

What is next?

I hope the SPF config will be enough to make my email work happily again, however I'm clearly hitting the limits of the free options in Gmail. If this doesn't work well enough, I think I'll move away from free options and look at something like Amazon SES which (from a quick read) will let me configure everything I need and charge me $0.10 per 1000 outgoing emails. This is probably the ultimate "correct" solution (unless I want to pay for a Google Workspace account) but is a lot more work and right now I don't wanna.

Sunday, 6 October 2024

Migrating postgres databases from ElephantSQL to Neon

Continuing my series of "if I push enough buttons I can get postgres to work for me", I am going to record how I migrated from ElephantSQL to Neon. This is one of my personal documentation posts - I write these for my own reference, for when I need to do something similar in the future but all the useful information has dropped out of my head, so I don't have to distil something simple from proper documentation again. They are sometimes useful to someone doing the same thing (I'm actually surprised how often I do send these links to people) but since more folk are reading my blog from LinkedIn these days, this is fair warning.

The setup

I was migrating from ElephantSQL to Neon as the former was shutting down. I wish Neon all the best, but the way things are at the moment I guess it's only a matter of time before I have to do it again. Migrating a simple postgres database is straightforward, but if (like me) you don't do it often it is nice to have the process written out.

This is for my own experimental applications, so I'm dealing with small, single-node databases and I'm not worried about downtime.

Recover the data

Getting the data out of the source database is straightforward. Simply log into the control panel, copy the connection string and use pg_dump for a full download:

pg_dump -Fc -v -d <source_database_connection_string> -f <dump_file_name>

-Fc makes the output format suitable for pg_restore. -v is verbose mode, showing you all the things going wrong...

Upload to the new database

Initially, I struggled a bit with Neon. I created the database and user in the web interface, but could not find a way to associate the two so consequently pg_restore failed with permissions errors. The simple way around this was to create the database via the Neon CLI, recorded here as a bit of a gotcha.

neon roles create --name new_user
neon databases create --name new_database --owner-name new_user

And for completeness, these are the commands which list the databases and users.

neon roles list
neon databases list

Once the database is created properly, it can be restored using the pg_restore command.

pg_restore -v -d <neon_database_connection_string> <dump_file_name>
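Before repointing anything, it's worth a quick check that the tables actually arrived - any psql query will do, for example listing the tables:

psql <neon_database_connection_string> -c '\dt'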

Repointing the database

So far so simple. Now to reconfigure the application - this should be a case of updating an environment variable. For a Rails app, that is likely DATABASE_URL. Simply edit the environment variable to the new database connection string, restart the app and this is done.
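Locally, the same idea can be used to check the app against the new database before touching any production config - the connection string here is obviously a made-up placeholder:

DATABASE_URL=postgres://new_user:password@host.neon.tech/new_database bundle exec rails s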

Again, this is for a very simple application - one Rails node, small database, no particular need for zero downtime.

Hopefully this will be useful to someone out there even if it's just me in the future. Hello, future me. What are you up to these days?

Monday, 23 September 2024

Why good software engineering matters

I've needed to make some changes to a few of my personal applications recently and running through the process made me reflect on some of the basic building blocks of my profession. As a deeply uncool individual, I am very interested in the long-term sustainability of our technical estates so I thought I'd capture those thoughts.

The story so far... I run a few small-scale applications which make my life easier in different ways. I used to host these on Heroku, then when they shut down their free tier I migrated them all to Render and Koyeb with databases hosted by ElephantSQL. About a year on, I started getting emails from ElephantSQL telling me they are shutting down their database hosting so I needed to migrate again. I also needed to fix a few performance problems with one of the applications, and generally make some updates. Fairly simple changes but this is on an application I haven't really changed in several years.

A variant of this scenario comes up regularly in the real world. Unless you're lucky enough to be working on a single product, at some point your organisation will need to pick up some code nobody has touched in ages and make some changes. The application won't be comprehensively documented - it never is - so the cost to make those updates will be disproportionately high. Chances are, this means you won't do them so the application sits around for longer and the costs rise again and again until the code is totally rotten and has to be rebuilt from the ground up, which is even more expensive.

In a world where applications are constantly being rolled out, keeping on top of maintenance - and keeping organisational knowledge - is vital, but also very hard to sustain. There are lots of service-level frameworks which promote best practice in keeping applications fresh, with ITIL being the obvious one, but this is only part of the picture. How do we reduce the cost of ongoing maintenance? Is there something we can do to help pick up and change code that has been forgotten?

This is where good software engineering makes a huge difference, and also where building your own in-house capability really has value. Writing good code is not just about making sure it works and is fast, and it's not just about making sure it's peer reviewed - although all of this is very important. But there are many approaches which really help with sustainability.

Again, my applications are really quite simple but also the "institutional knowledge" problem is significant. I wrote these (mostly) alone so anything I've forgotten is gone. The infrastructure has been configured by me, and I'm not actively using much of this stuff day to day so I have to dredge everything out of my memory / the internet - I am quite rusty at doing anything clever. These problems make change harder, so I have to drive my own costs (time in my case) down else I won't bother.

Let's look at some basics.

First, the database move. My databases are separated from the applications which means migration is as simple as transferring the data from one host to another and repointing the application. This last step could be tricky, except my applications use environment variables to configure the database. All I need to do is modify one field in a web form and redeploy the application to read the new target and it's done with minimal downtime. Sometimes developers will abstract this kind of change in project team discussion ("instead of pointing at this database, we just point at this other one") but with the right initial setup it really can be that simple.

Oh, except we need to redeploy. That could be a pain except... my applications are all set up for automated testing and deployment. Once I've made a change, it automatically runs all the tests and assuming they pass one more click and the new version goes to the server without my having to remember how to do this. I use Github Actions for my stuff, but there are lots of ways to make this happen.

That automated testing is important. Since everything in tech is insufficiently documented (at best) this creates a safety net for when I return to my largely forgotten codebase. I can make my changes or upgrades and run the tests with a single command. A few minutes later, the test suite completes and if everything comes up green then I can be pretty confident I've not broken anything.

Finding my way around my old code is fairly easy too, because it conforms to good practice use of the framework and it is all checked by an automated linter. This makes sure that what I've written is not too esoteric or odd - that is, it looks like the kind of code other people would also produce. This makes it much easier to read in the future and helps if someone else wants to take a look.

So through this, I've changed infrastructure with a simple field change, run tests giving me significant confidence the application is working after I've made a change with a single command (which also checks the code quality) and deployed to the server with another single command. To do all this, I don't really have to remember anything much and can focus on the individual change I need to make.

Now, any developer reading this will tell you the above is really basic in the modern world - and they are right, and also can be taken MUCH further. However, it is very hard to get even this level of rigour into a large technical estate as all this practice takes time - especially if it was not the standard when the code was initially written. But this really basic hygiene can save enormous amounts of time and thus costs over the lifecycle of your service. At work we are going on this journey and, while there is a lot more to do, I'm immensely proud of the progress that the software engineering teams have made driving down our costs and increasing overall development pace.

Basics are important! Always worth revisiting the basics.

Sunday, 25 February 2024

Failing upwards

Humans are fascinating aren't they? Everyone is different, behaves differently, thinks differently... and before looking at others we can spend a lifetime just understanding our own minds and thought processes. I try to spend a lot of time reflecting (often I then write those thoughts down here) and one area I find very interesting is how I learn. I blame my mother for this - she's a teacher and embedded in me an interest in the different ways people learn and understand.

Like many others, one of the ways I learn is experimentation around the boundaries. If I know how a system or a situation is supposed to work, I will sometimes see what happens one step beyond the stated limit. This is particularly useful with computers where one can watch log outputs and understand the complex system while modifying variables. However, it's also useful exploring options and testing perceived limits in the office. One of my first decisions as a senior leader was around a change in recruitment policy which nobody could work out how to sign off. I just ... did. Mostly to see if anyone would tell me I'd overstepped.

That was some five years ago, and as far as I know it was never reverted. Importantly, I discovered that the actual limit to my authority in this role was way beyond where people mentally placed it, and it moved the moment I challenged. So, armed with this knowledge, I then had a whole new space to explore what could be done.

Before moving on, I fully acknowledge that this is hardly sophisticated. While I like to think I've learned some more finesse over the years, "pushing boundaries" is what two-year-olds do to try to understand the world. They push their parents to see how far they can go before getting put back in their place. But they do say we lose our inquisitiveness and bravery as we grow older...

Anyway, the reason for this post (other than outing myself as a 6' child) is reflecting this into the workspace. At work, I spend a lot of time developing people and a vital part of that is thinking about how they can push their own limits and move further forward. I've seen very smart people stunt their own growth through their fear of failure - unwillingness to push themselves forward and potentially be wrong.

This is problematic in general, but lethal if an individual's aspirations are to reach the highest levels of an organisation. At that point, there is no manual and you're thinking on your feet the majority of the time. You have to be able to see where you are being limited - by yourself, by the org processes, whatever - and seek ways to push through and improve the situation. For those of us in leadership, that means giving people the space to explore into an area where they might fail and then allow them to find their own way through, even if this isn't quite as clean or direct as we'd necessarily like. Clearly we should help where needed (after all, not all failures are equal) but it's no use constantly being training wheels as this will never build confidence. Worse, it might lead individuals to see the problem as "what makes Tom happy" rather than "what needs to be done to make this situation better" at which point I'm doing all the thinking and that is neither helpful nor sustainable.

Obviously what I'm talking about here is managing the fear of failure (not necessarily by removing all the consequences) and building a psychologically safe environment. If individuals can push the limit of what they can do, they can learn and grow. They can grow towards the next step on their career, and that means instead of having a report we've got a report who is behaving more like us - or at least at our level. This is great for their growth, and infinitely more useful to us as leaders as when they've developed the skills we can spend less time managing them and more time leading.

So let's encourage our people to make themselves vulnerable, give them a space where that is safe and let them do things that are imperfect so they can develop the skills to be as perfect as us (ha). Let's encourage some failure?

Monday, 13 November 2023

Sending email in 2023

"Your email keeps going into my junk box" - everyone.

I use a vanity domain to front my email address. I used to run a simple setup where the domain was basically masking my Gmail account. Incoming was handled by a wildcard forward in the domain host. Outgoing, I simply rewrote the email envelope with my desired email address. Essentially I was spoofing the outgoing email.

Gmail used to let me do this but clamped down years ago, requiring proper authentication with an SMTP host; however, the old setup still worked, as long as I didn't change anything.

Then the big email providers started clamping down on this kind of thing. In an effort to combat spam, email is increasingly complicated and the wider ecosystem is getting more locked down. There is a big rumble about the big providers essentially pushing smaller email providers out by blanket not trusting them, making it increasingly difficult to run your own email setup. This post is not about that, rather it's about how I stopped my email going into junk boxes. I was forging my own sender address, which is exactly the kind of behaviour you see from various types of spam. Nice.

So, on the assumption I wanted my email to arrive I needed to revisit my configuration and set this up properly. I did a bit of work, so I thought I'd write up here so I can repair it in future if needs be, and it's in one place on the offchance it helps anyone else.

Incoming email - you're emailing me

Not many changes here - I use a combination of Cloudflare and Ionos DNS these days, but a blanket forwarding rule in the Ionos config for the whole domain still works.

Outgoing email - I'm emailing you

Ok, this is where it gets interesting. I can still send email, setting the domain to whatever I want, but my emails are being flagged as spam. This is because the receiving hosts are trying to protect account owners from spam, and my setup looked exactly like spam. Obvious note - I set up a test Gmail account for receiving email so I could test the effects of my settings.

Outgoing SMTP server

First thing was properly configuring an outgoing mail server. In theory, this can be done with the Gmail SMTP service but while I could authenticate properly I found my email still ended up flagged as spam. I'm sure there is a way to do this properly but for the moment I instead turned to SendGrid and this documentation was useful.

A free account allows 100 emails per day - plenty for me. Nobody wants to hear more of me than that. In the SendGrid interface it is easy to create an API key (Settings -> API keys) with appropriate email sending permission, then when adding the server details, just select smtp.sendgrid.net / apikey / $YourKey. Only slight gotcha is making sure you get the port right (SSL over port 465). This should authenticate properly and email can be sent - although it'll probably be going to junk again.

Next up, setting up DKIM. This stands for DomainKeys Identified Mail - an email authentication method which allows the recipient to check that an email came from the domain it claims and was authorised by the domain owner. The setup is found in Settings -> Sender Authentication. You might be able to get away with Single Sender Verification, but I did the full Domain Authentication. You need to be able to modify your domain's DNS settings for this to work properly.

If the setup doesn't seem to be working properly you can test the individual additions on the command line with a tool like dig.

dig foo8908.tomnatt.com should give a NOERROR response. If it doesn't, the setting isn't right or it hasn't propagated yet.
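If the response is noisy, dig can also query a specific record type and print just the answer - for example, assuming the record above is one of the CNAMEs SendGrid asked for:

dig +short CNAME foo8908.tomnatt.com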

Finally, assuming this is for personal email you'll want to disable link tracking. This rewrites links in your email for marketing purposes and will likely break any links you send unless you configure it properly. Turn it off with Settings -> Tracking -> Click tracking -> disable and links will work again.

Other DNS setup

There are two other DNS entries that can help with proving email provenance - SPF and DMARC. I'm not sure whether I needed all these for a minimal setup, but they do work best when all three are present. I did configure them, so I'm capturing what I did. 

SPF (Sender Policy Framework) is another way to ensure the mail server sending an email is allowed to send via this domain. It works by defining which servers can send email, so the client can check, rather than directly encrypting the connection (the DKIM approach). The setup is fairly simple, and can be checked with tools like this.

An SPF policy which allows sending from Gmail and SendGrid servers might look like this:

v=spf1 include:sendgrid.net include:gmail.com ~all

DMARC (Domain-based Message Authentication, Reporting & Conformance) helps receiving mail systems decide what to do with incoming mail that fails validation via SPF or DKIM. So this is worthless without at least one of the other two.

A rule which tells the receiver to mark failing email as spam and send reports to the given email address would look like this:

v=DMARC1;p=quarantine;pct=100;rua=mailto:postmaster@tomnatt.com
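The published record can be checked in the same way as the SendGrid entries above - DMARC records live in a TXT record under _dmarc:

dig +short TXT _dmarc.tomnatt.com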

Done

And lo, email appears to be flowing again. I hope something here helps. To finish, I want to note that I'm not an email expert - not even close. If you are, and you're seeing somewhere I've written something stupid please reach out and I'll correct and attribute.

Saturday, 22 April 2023

Slack notifications from Github Actions

A while back I wrote about moving my deployment scripts away from Codeship into Github Actions. This has continued to work, but as I noted it did leave me without notifications in Slack. Time to fix this.

There are a few ways of enabling notifications. The best way involves creating Slack apps with incoming webhooks, but this involves a lot of faff (especially on a free Slack instance). The ever-useful Phil Wilson found a much simpler alternative.

Enter the Github Slack app.

The setup is very easy:

  • Add Github app to Slack
  • Log in to Github app as whichever user should be receiving notifications
  • Authorise it to talk to your Github account in Github

All three of these are documented in the above link.

Then it's a case of setting up the notifications you want within the Slack app channel. I used the following:

/github unsubscribe $account/$repo issues pulls commits
/github subscribe $account/$repo workflows:{event:"push"}

The first removes the overly noisy notifications from a whole bunch of Things on the account. The second enables notifications from workflows (ie Actions) - mine are triggered on "push" events. Docs for the workflow configuration can be found on the integration page.

And behold! Notifications!

Sunday, 22 January 2023

Migrate from Codeship to Github Actions for projects using Mina

Oh look - another free tool has decided to eliminate its free tier. My last post was about migrating away from Heroku as it torched its free offering and now Codeship has followed suit. In the past, I have written about using Codeship as a part of my toolchain for CI/CD but over recent years I have moved away from it to Github Actions for running linters and tests and using the Github integrations of the hosting providers themselves for deploying web apps. This means I've migrated many of my projects away from Codeship already, but there has been one significant holdout - my static sites.

Usual disclaimer - this is for a personal project, so I'm more interested in free solutions than high speed or resilience. Docs for myself, maybe someone will find this useful, etc etc

The problem

My static sites are generated using Jekyll, and deployed via Mina. Mina is a great tool, but for the purposes of this post all that matters is that it opens a connection over SSH and executes some commands, which require some environment variables.

I also need to migrate my "tests" - which are just building the site locally to make sure it's not broken - so I'll note that step here.

Migrate test build for a branch

Making the branches do a test build was simple - set up Ruby, get code, run the test code. All very similar to (and easier than) doing this with Rails, and wrapped up in a short Github Action.

Migrate deployment

Making the deploy script trigger on merge to the main branch was a little trickier (full example below).

First gotcha - when one merges a branch into main, the event that triggers is a "push" event. Knowing that, getting it to work is really easy - but finding that out took a bit of hunting. It also means that a push directly to main will trigger a deploy, if your main branch isn't protected.

Second gotcha - environment variables. I didn't want my server config in my public code, so I needed to set some environment variables in the settings of the repository. There are a few ways to do this in Github - I settled on using "Repository secrets" which are found in the project Settings -> Secrets and variables -> Actions then "New repository secret". They are accessed in the workflow file as ${{ secrets.MY_SECRET }} and if you want to use them inside a script, you need to pass them through explicitly by setting the env for that script. Example below.

Third gotcha - SSH key. Mina opens a connection to a remote server, so it needs access to an SSH key. I created a public / private pair on the server and added the public key to the usual place, then put the private key in the repo secrets as above. Using this action allowed me to load the key into my workflow easily, and then I could use ssh-keyscan to push it into the container known hosts, allowing Mina to work properly. Again, example below. Shout out to this post for the pointers on the SSH key setup.
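For reference, the keyscan part is just a couple of shell lines in the workflow - sketched here with a placeholder hostname:

# make sure the .ssh directory exists, then add the server's host key to known_hosts
mkdir -p ~/.ssh
ssh-keyscan -H my-server.example.com >> ~/.ssh/known_hosts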

And lo, deployment works. Here is the complete deploy Action.

Tidy up

All that is left is to tidy up. Couple of simple steps here:

  • Remove Codeship keys from server
  • Delete Codeship project / build

And we're done! Like my previous Github Actions implementations, this is almost entirely generic so I can copy / paste into other projects with minimal effort and make sure I have light CI/CD for my fun projects. Now it would be great if I could get back to some of my creative programming projects for a while, instead of having to rebuild toolchains.

Sunday, 15 January 2023

Migrating away from Heroku

Along with almost every developer I know, I was very sad to see the end of Heroku's free tier. I was hoping they'd reverse this decision, but it seems that was too optimistic so in the end I had to choose between killing all my toy applications, paying money or finding an alternative.

I did consider just shutting things down, but I had two concerns with this. First, I have one application I actually use and losing it would be a pain. Second, and more importantly, if I have no solution to this problem it adds another barrier to me coding as a hobby. So this needs solving, and not just by throwing hundreds of pounds a year at it.

Finding an alternative took a long while, primarily because I was trying to find a straight all-in-one alternative. While they are out there, the free tiers are not suitable - the closest insist on deleting my database after 90 days. Instead, I separated application hosting, database hosting and email sending, and the below is where I ended up, written in detail in case it helps one of the many others doing this same thing. I did this all back in November, before the Heroku switch-off, and a few months on it all seems to be running well.

My applications


Everything I'm moving is a personal project (no URL customisation, limited need for backups, high availability, etc) and a simple 12-factor(ish) application. They are written in Ruby (mostly Rails, one Sinatra) and make use of Postgres on the backend and some simple mailing. Nothing complicated, and Heroku managed all this for me before.

To move from Heroku SendGrid to independent SendGrid, I did find I had to make a small update to the mailing config.

Most of this diff is linting updates - only the domain change should be necessary.

Databases


I chose ElephantSQL as it has a good enough free tier offering (limited to 20MB) and lets the free databases persist. Detailed steps for anyone who, like me, hasn't done much with a Postgres database for a while.

Setup:
  • Create account in ElephantSQL (sign in with Github)
  • Create database
  • Get connection string

Backup from Heroku:

heroku pg:backups capture --app myapp
curl -o latest.dump `heroku pg:backups public-url --app myapp`

Restore to ElephantSQL:

pg_restore --host "machine.db.elephantsql.com" --port "5432" --username "blah"
    --password --no-owner --dbname "blah" --clean --verbose "latest.dump"

And voila, database ready to rock in ElephantSQL. This can be checked by running up locally, thus:

DATABASE_URL=postgres://blah:blahblah@machine.db.elephantsql.com/blah bundle exec rails s

Mailing


My requirements are very light here since I only use email for "forgotten password" emails. For this, a free SendGrid account was more than enough. I'm using SendGrid because this was the Heroku solution, so it makes for an easy switch.

  • Create account in SendGrid
  • Create API key

Application hosting


I wanted something which, like Heroku, was a container-based PaaS. I certainly can wrap my applications in containers or deploy to virtual machines, but honestly who has the time. I want to point my application at a hosting provider and have it do the work for me.

For this, I looked at a load of options but I settled on two - Koyeb and Render. Both have workable free tier hosting. Render works like Heroku used to - giving a number of hours per month and spinning down applications when they are not in use. Koyeb simply gives you $5 credit per month and lets you choose how to host. This is enough to have an application running on a low-tier package without spinning down, but not enough to run several applications.

For deployment, the steps were basically the same. For Koyeb:

  • Create account in Koyeb (sign in with Github)
  • Create application
  • Add Github integration
  • Add to "run" command: rails db:migrate && rails server
  • Select repo (this took a while to appear for me, although Github was having issues when I did this)
  • Add DATABASE_URL env var
  • Add RAILS_MASTER_KEY env var
  • Add SendGrid env vars
  • And any other env vars

Then off it goes. I did find it took a little while to start working, but it has been fine since. Also, a slight gotcha - Koyeb appears to have some broken validation when it comes to counts, eg "must be 3 characters or more" seems to actually mean "more than three characters".

For testing, I did all this (including the mailing change above) in a branch then cleaned up at the end.

For Render it was much the same, except I didn't need to specify the RAILS_MASTER_KEY env var and I needed to write a build script.
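The build script itself is tiny - a sketch of the kind of thing it needs to do, where the exact steps are my illustration rather than anything Render mandates:

#!/usr/bin/env bash
# stop on the first error
set -o errexit

bundle install
bundle exec rake assets:precompile
bundle exec rake db:migrate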

Cleanup


Finally, to finish off:

  • Spin down the Heroku app
  • Remove auto-deploy to Heroku (in my case from Codeship)
  • Toggle Koyeb / Render over to watch "main" if using a test branch
  • Make sure the instance is healthy (and make sure you haven't exposed the wrong port anywhere)
  • Final redeploy

Roundup


That is pretty much it. I have been running this setup since November with no problems. I am getting fewer notifications through to Slack, so at some point I would like to get better alerting of start / end deployments but since it's just me working on these things I can watch them easily enough - it's not a priority. I'm just pleased I can still deploy easily and free as that means I'll keep my hand in writing bits and pieces.

Monday, 21 December 2020

A first look at Github Actions with Rails and Postgres

In my last post, I mentioned that while upgrading my Heroku tech stack I noticed Codeship was experiencing some kind of outage. This seemed to stop anything appearing in the "checks" part of a pull request (including any kind of error message, which was a long way from helpful) and I decided to investigate Github Actions for my CI/CD needs.

I've been thinking about using Github Actions for a little while, for two reasons. First, I wanted to run my linting in my CI pipeline and I know of a rather good tutorial for getting started doing this using Actions (thanks Dean!). Second, this should move my CI config to the project repository (keeping it together and putting it under version control) and remove a dependency on a third party SaaS product. I can't help feeling that the recent Codeship outage (which I only noticed because the check was missing in my PR, and which I could easily have missed) vindicates this last point.

As a side note, Codeship now seems to be fully working again.

Anyway, the Rubocop implementation is actually pretty straightforward, but it took me forever to get the tests running because of a few tricks and gotchas which I thought I'd record for posterity.

Bundler

Let's start with a timesaver. I read a lot of examples while setting this up, and some had extensive Bundler config in them. However, if you're using the ruby/setup-ruby@v1 action for setting up Ruby (code here) and you put in:

  with:
    bundler-cache: true

It will just handle everything bundle-related with no more configuration. Hurrah!

Migrating Capybara driver to Apparition

I have no idea if anyone else is still using Capybara Webkit to drive their Capybara tests but I was. It has recently become a pain to install because the underlying library (QtWebKit) has been deprecated. I found this out after quite a while of trying to get the QtWebKit libraries accessible in my Action. That didn't work.

It seems Thoughtbot, the authors, agree with me and have deprecated the thing and recommend a move to Selenium or Apparition. I chose the latter because of claims of backwards compatibility and it was very easy to switch when I finally realised that this was a more sensible way forward. The changes can be seen in this commit along with the inclusion of an Action setting up the Chrome driver in my test workflow.

Configuring the database to work in a containerised world

Good grief this took me forever.

In theory, this is really easy - configure a Postgres database as a service, connect it up when the tests run, and bam. In practice it is also really easy, requiring minimal config to get it working. However, it requires getting a load of options to line up and, since it's all running on Github servers, the feedback loop is annoyingly slow, so painstakingly iterating through a million tiny variations to get to that simple working config took an eternity.

In the end, there were only two things to note.

First, when configuring the Postgres service one HAS to specify a port (despite it being the default port).

Second, remember to update the test database config in database.yml to accept some environment variables (and also default to allowing the tests to run locally). It's really easy to do when you actually remember to do it...

It's highly likely these are more down to my own incompetence than anything hidden or surprising.

And done

And lo, it works. While it took a while to figure all the details out, the results are actually really simple and easy to duplicate for other Rails projects.

The whole change for implementing Github Actions and implementing the other updates can be seen in this PR.

Saturday, 19 December 2020

Upgrading to the Heroku-20 buildpack and Bundler 2

I figured it's about time to start moving my various running applications and kit to Ubuntu 20, now there is a new LTS version and well before v18 totally dies. I know I'm late to this party, but I've been busy.

Today I updated an application hosted on Heroku and since it wasn't 100% smooth thought I'd capture my steps, both for myself in future and anyone else who finds it useful.

The tech

  • Ruby on Rails application
  • Ruby 2.5.1 (Ubuntu 18 default version)
  • Bundler 1.17.3
  • Heroku on buildpack 18
  • Codeship for automated testing and deployment

The process

On Heroku, go to the app page then its settings. On this page you can change the stack via the big red "upgrade" button. This requires a redeploy of the application.

Back at the app, I updated my Ruby version (.ruby-version file for me), installed bundler and ran bundle update --bundler to upgrade from Bundler 1 to Bundler 2. This provoked some other minor config changes, all in this diff.

Then Codeship broke. As in, it was totally down. Sigh. When this came back, the build was not working, giving me a deadlock error when running bundle install. This gotcha took a while to fix (hence writing this post). In the end, all I had to do was add gem update --system to the build setup commands before gem install bundler.
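So the setup commands ended up in roughly this order (sketching from the description above rather than pasting the real config):

gem update --system
gem install bundler
bundle install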

And voila, we're in the modern world and everything works.

Sunday, 18 October 2020

Losing Chrome URLs

 This is going to be a short one, mostly so I've got a reference for the future.

It seems Chrome as of v86 (latest at time of writing - at least on Linux) is hiding the full URL unless it is selected, instead showing only the domain. This is to highlight fraudulent websites for people who can tell the difference between www.google.com and www.google.evilsite.com but get confused when there is a huge set of valid-looking paths and parameters after it. It seems that's about 60% of the web-using population.

Anyway, if you're in the 40% and you find seeing the whole URL quite useful thankyouverymuch and don't want to have to select the bar to see the information, then you can disable this new feature.

Put this into the address bar:

chrome://flags/#omnibox-ui-hide-steady-state-url-scheme-and-subdomains

Then search for and disable:

omnibox-ui-hide-steady-state-url-path-query-and-ref-on-interaction

Restart Chrome and lo, the URLs are back where they should be.

For me, I was surprised by this and was wondering why The Internet had decided to embrace loading pages into frames with javascript, before I realised the browser was doing this, not the site.

Thursday, 29 August 2019

Fixing Xcode command line tools on an older version of OSX

This will be of interest to nobody but Future Me when it inevitably happens again.

It has been a while since I did anything approaching proper coding and since I use it as a lifeline when things are getting bad, I thought it was time to make something again. The most* fun part of programming is discovering a problem in the development environment and disappearing into a rabbit-hole for days, eventually bringing you to the point where you can actually start.

This time, it was vi not working because rvm had triggered a Homebrew update which hadn't worked properly because of an OSX upgrade and ... argh. Ultimately it was Xcode, then the Xcode command line tools being missing.

This is going to come up again, so here's a note for future me.

For boring reasons I do not (and cannot) run the latest version of OSX or the latest version of Xcode. Consequently, running brew update && brew upgrade gave me:
Error: Xcode alone is not sufficient on High Sierra.
Install the Command Line Tools:
  xcode-select --install
That command is not going to work on older versions of OSX. It triggers another process which (I think) hunts for the very latest version of Xcode in the App Store and its tools. I don't have the latest Xcode so it fails to find anything useful.

After quite a bit of hunting I found I can download the exact version of the tools I need from the Apple Developer site.

Once this is downloaded and installed, everything works ok (although I did have some luck also fixing up Homebrew with brew doctor).


*least

Thursday, 21 December 2017

Hosting a Rails App on Cloud Foundry - first impressions

From time to time I have been known to write a bit of code and whenever one writes a web application, there is always the question of hosting. I've done my time in Ops and I can certainly deploy an application to a VPS and run the surrounding infrastructure to make it work - however, that all sounds like more work than I'm willing to put in. This is the world of Cloud hosting and I'd like to spend my time writing applications, not deployment scripts. What I want is something I can throw code at and have it sort itself out but for my own projects the price needs to be low so I'm not spending a ton of money every month on my own games.

This is an interesting niche as I don't have the same requirements for my own stuff as I would for professional hosting. Initially my requirements were:

  • Very low monthly cost
  • Rails 5
  • Database (probably postgres)
  • Ability to hook it into some kind of CI (ideally Codeship, as I'm already using that)

For my own projects I'm not that bothered about high capacity, or extensive DR. These are great, but are also expensive.

I'm going to end up on Heroku, because the free tier appears to do everything I want and more. However along the way I tried out Cloud Foundry so I thought it worth writing up how I got started.

Easy stuff first


I signed up for an account, then created an org and a space on the dashboard. I also created a database within the space (no binding - it's better to do that with a manifest). This was all achievable via the web interface. The postgres service has the option of a free database, limited to 20MB storage.

Next, I installed the command line interface and logged in (cf l), choosing the space as the default.

Preparing the application


A Rails 5 application needs no additional configuration, beyond migrating it from sqlite to postgres. The easiest way to tie the application to the production database is via a manifest file. Mine looks like this:

---
applications:
- name: yip-helper
  random-route: true
  memory: 128M
  instances: 1
  path: .
  command: bundle exec rake db:migrate && bundle exec rails s -p $PORT
  services:
    - yip-postgres

The name becomes part of the subdomain on deployment. The memory is kept low to keep the costs down for a personal project. The service listed should match the name of the database created in the space, above. Stick this in the repository so it can be used with the CI later.

Now the application should be ready to deploy with a simple cf push.

Continuous integration


I use Codeship, and their docs worked fine for me with two modifications:

  • I dropped the CF_APPLICATION envar from the script as it's defined in the manifest file
  • My first deploy failed as it couldn't find the required gems - subsequent deploys worked fine, despite a warning about including the .bundle dir in my repo (which I didn't)

Problems


This all works with minimal fuss, however I'm going to end up going back to Heroku. I originally discounted it because it didn't play well with Docker (a requirement I've since abandoned). Also:

  • Heroku encrypts traffic for free on their own domain, whereas Cloud Foundry doesn't have this option. I can pay $20/month to use my own domain and cert but this breaks my first requirement. I can understand them charging for additional domain hosting but honestly, securing their own subdomains should be a given.
  • The Cloud Foundry free tier database is tiny. Paying for a database adds a lot to the monthly costs - this is true of all the hosting options I looked at - so a useful free tier offering is important.
  • Heroku is better supported. In Codeship, for example, there is a plugin to support it whereas Cloud Foundry requires a custom script. It's a simple one, to be fair, but it's symptomatic.
  • The Heroku tooling and web interface are nicer. Again, unsurprising given how much longer Heroku has been around. The Cloud Foundry tools are fine, but doing the same things with Heroku is just easier.

So there it is. These are just my experiences, based on not a lot of time and with the intention of hosting for a personal project.

Saturday, 21 January 2017

Continuous Delivery with Codeship

Nobody needs to be sold the virtues of automated deployment in a professional environment any more. I've found personal projects a bit different though. Generally these do not deploy as often and are for fun, and I'm sorry to say that, for me, the idea of maintaining a Jenkins server doesn't really fit into the latter category. Still, as I wrote about a bit in a previous post, I appreciate the value of simple, consistent deployment and so I write scripts then run them by hand as a happy medium between rigorous process and focusing my spare time on the things that make me happy.

Over the last few years continuous delivery as a service has become a thing which (in theory) means I should be able to take my deployment scripts and have them run automatically when I push something to a master branch in git, all without having to run my own software. I tried this with Codeship in its early days and something went horribly wrong, requiring me to completely wipe my website and restore from backups. This somewhat killed my enthusiasm for trying again.

To be fair to Codeship, this seems to be a very unusual experience. Most people I know sing its praises - both its capabilities and ease of use. Clearly I did something stupid somewhere and so when I decided to have another look at using a service I went back to give it another go.

The product has moved on significantly since I last visited, however it is still pretty straightforward. More importantly, this time I got it to work and didn't destroy anything in the process. So, with my extremely minor requirements in mind, I had to:

  • Set it up to run my tests (just bundle install for me)
  • Set it up to run my deploy script (one line to trigger the mina script)

Then just test via an empty commit to master and I was done:
git commit -m "Empty commit to test deployment" --allow-empty

This is pretty much the experience I wanted the first time around. Who knows what I did wrong. Some very minor gotchas:

  • You need to run bundle install in the test section to ensure you have your deployment dependencies in the later steps, even if you don't have any tests to run (remember kids, always write tests...)
  • If your deployment scripts involve connecting to a server somewhere you're going to need your project SSH key which isn't the easiest thing to find

And that's pretty much it.

So why did I look at this again? Aside from it being one fewer task to do by hand when doing any proper work, I can now make minor changes on the go via the Github web interface and not need a properly set up environment to deploy them. More importantly, I can also give access to non-developers to edit content in that same web interface and it will automatically push without my needing to do anything. This is great as now what was a static website suddenly has a content management system attached to it.

I can also finally join the "Codeship is great" bandwagon. Sorry I'm late.

Monday, 14 November 2016

The first day

It occurs to me that, having been working at the University of Bath forever, I have experienced very few first days. For obvious reasons I've been thinking about working environments a lot recently, along with expectations from both employers and employees. The less-than-insightful thought is that the world would be much better if there was less fear. I wonder how many people would make positive changes in their lives, such as moving job, if it wasn't for fear. I know that I'm scared to be moving. Scared that the new place will reject me, scared that I won't be able to do the new job, scared that I won't like my new colleagues. None of these has any basis in reality. There hasn't been anything to suggest the new place will be anything but lovely, and although the work will be different I'm definitely up for the challenge - plus they interviewed me and decided I am capable, and they should know better than me at this stage.

So really my fear is based on the loss of my old job (which was full of lovely, talented people and a great environment) in the face of an unknown future. But moving on has been the right decision. It has allowed me to advance my career and re-evaluate my professional worth - both of which are Good Things for anyone to do. In turn, the university is going to have to face questions about how it employs developers - questions it can (understandably) avoid while it has people in post - also a Good Thing for the industry as a whole.

If movement is good, why isn't there more of it? That brings me back to fear and, for the moment, the first day. I know that one way or another I'll be uncomfortable on my first day, and that is mostly due to my history of first days. I'm expecting the next one to be better and I'm looking forward to being involved in making them better for others when I'm the experienced one.

My first first day


My first job was as a lifeguard in a place which shall remain unspecified. Memories from that day involve arriving around 5.30am (eugh) and pretty much immediately being sent to set up some giant trampolines on my own. I later discovered that there are supposed to be six trained people involved in setting these things up. Fortunately I was rescued by some more experienced colleagues.

My second first day


My second job was at Unilever. It was a great job but day one was a mess. I was sent to the other side of the country, where nobody knew who I was or why I was there. I ended up interviewing people about a project I knew nothing about all the while wondering when I was going to wake up from the crazy dream.

My third first day


This was the first day working on the University of Bath Helpdesk, although the strongest memory was of the interview. I'd been sitting with a friend (who already worked there) fixing a laptop for him. The supervisor came over, saw what I had done and asked if I wanted to cover the next free shift. I was thrown straight into the action, with a small amount of shadowing an experienced colleague to show me how things worked.

I actually remember very little of this day so it must have been pretty smooth overall.

My fourth first day


My first day as a developer. I was shown to a small office which was about big enough for one and a half people. I was the half. Over the next few days I managed to cannibalise a working computer from various contacts around the university, including some flatscreen monitors from the dawn of time (the desk wasn't big enough for the more common CRT monitors). I managed to borrow a chair from a generous colleague in another office (he had two) then I was shown around the various systems on which I would be working - of which I understood exactly nothing.

Oh and the office let in the rain.

Not that I'm knocking this job. As I'll write about in another post I feel incredibly lucky to have had this opportunity!

No real conclusion here. I suppose the direction I'm heading in is that if we want to improve our industry we need to encourage people to be the best they can be, which will likely mean enabling people to move around easily. One problem to overcome is the fear of moving, and one of the things to fix there is the inevitably-scary first day. Each environment is different, but some basics (meeting people, first-day activities, desk, computer, access) are going to be consistent and we really should have this nailed as an industry by now. So much of that fear comes from the unknown - simply sending out a basic itinerary for the first day should help quell it.

Sunday, 30 October 2016

The rambling story of how I became a developer - part 1

Just recently I've had the privilege of advising a few amateur developers on how to step into the world of professional development. I find this a difficult question, but since working with and encouraging those new to this world is very important - and something I hope to be doing a lot more of in the near future - I thought it best to get some thoughts in order.

How did I get there?


First things first - I can't claim to be any kind of career expert. My own tale has been a combination of providence and hard work, not particularly shrewd choices as I've progressed - at least not deliberately.

My first IT job was a summer spent as a business analyst, working through a huge data manipulation job and providing the technical expertise to the project manager. This wasn't why I was hired - I was supposed to be doing some kind of data entry as a holiday job - but by a series of coincidences I ended up talking to everyone the project affected and accidentally doing some in-depth user analysis, which led me to ask lots of questions about the best way to move forward. In my first job I learned the importance of the end users.

Next up, I spent a year in user support on a help desk, helping look after a campus full of computers. Again, lots of opportunities to talk to the end users and hear their difficulties and frustrations. This sort of experience is really important for someone who wants to be a good developer. Being able to write great code matters, but if you don't understand the people who will be using your product you will only ever be able to build to specifications provided by others, and that will limit your ability to be effective and put a ceiling on your career.

The help desk also gave me my first proper chance to effect change on my working environment. We had many processes which needed to be more efficient and I was fortunate that the people around me (and particularly my manager) were open to experiment and change. This is understood with the benefit of hindsight and experience - at the time I just had an idea, had a bit of a chat with my manager and gave it a go. Looking back I'm honestly surprised they gave me as much freedom as they did. Being able to critically analyse and successfully question the status quo is an important skill for anyone working in a team and especially in the rapidly-changing world of development.

The first summary


So far I think the key points (other than the rather obvious "make the most of your opportunities") are:

  • get involved with the end users
  • question the world around you

It's never too early in your career to ask "is this the right thing to do?" - it will probably be the most important question you ever learn to ask. Of course, the other vital part of this skill is being able to ask without annoying and alienating your colleagues. While sometimes it is important to challenge authority, or speak truth to power, or whatever the phrase is at the moment, it is rarely a good idea to directly butt heads with people higher up the food chain. In a good working environment, questions and discussions should be encouraged (if you find you can't ever ask "why" then you're working in the wrong place), but you need to know how to approach such a conversation and when to back out.

Basically, soft skills matter.

More at some point.

Tuesday, 30 August 2016

Jekyll and the build scripts a few years on

A few years ago I moved my sites from a PHP templating system to static generation using Jekyll. How is it working out?

Pretty well. I’ve had no downtime (that I haven’t caused), which is to be expected on a low-traffic website serving HTML files. Updating content and templates has been easy, with Jekyll remaining simple to use. While I’m sure my Jekyll version is in need of an update, that is less of a concern than if I were running code exposed to users. Overall, no problems with the technology or maintenance - indeed I find it much easier to work with than previous versions of the site as I don’t have to re-learn my configuration each time I want to do anything more complicated than change some words.

The biggest win - and something I actually considered skipping when I initially implemented the site - has been the build scripts. In professional life I wouldn’t think twice about writing automated build scripts for a project, but we all know that this kind of thinking isn’t as rigorously followed for personal projects. I wrote a simple mina script for deploying (and updating) my sites and, several years on, I am deeply thankful to my past self. I haven’t had to keep my build process in my brain at all - just the magic command, which is in a README somewhere. This has meant small updates have been easy, the most boring part of site maintenance has all but gone away and, consequently, those updates have actually happened.
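
For flavour, a stripped-down mina deploy file for a static Jekyll site looks roughly like the sketch below. This is mina 1.x syntax with placeholder details, not my exact configuration - the domain, paths and repository are all made up:

require 'mina/deploy'
require 'mina/git'

# Placeholder server and repository details
set :domain,     'example.com'
set :deploy_to,  '/var/www/example.com'
set :repository, 'git@github.com:example/site.git'
set :branch,     'master'

desc "Fetch the latest code, build the site and tidy up old releases"
task :deploy do
  deploy do
    invoke :'git:clone'
    command 'bundle install'
    # Build into whichever folder the web server is pointed at
    command 'bundle exec jekyll build --destination public'
    invoke :'deploy:cleanup'
  end
end

With something like that in place, the "magic command" really is just mina deploy, run from the project directory.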

The lesson to take away here is that doing the hard (and dull) work up front of defining a development process and writing deployment scripts was worth it. Not so much because of time saving, or the consistency inherent in an automated process - but because these benefits actually encouraged me to maintain my sites in a way I simply wouldn’t have done had I been required to remember how to deploy my work each time I did anything.

Sunday, 5 April 2015

The Year in Pictures


A few years ago I was involved with 12 Cakes - a project which put up a cake recipe every month for a year. It was a nice, simple idea and got me thinking about doing something similar using photos. So, after a year of messing about and not finishing the site, I've created The Year in Pictures. Along with five friends and family members, I am selecting one photo each month which says something about that month and putting them together on a website.

There is no real goal for this. By the end of the year I'll have 12 months of photos for 6 people, which could be used for a variety of things, but for a change I'm not worrying so much about the end game and am just enjoying watching it grow. Three months in, there is now enough to look at that it's worth being seen beyond the six contributors, so I'm going to start mentioning it on Twitter when a new month is uploaded.

If you're interested in the technical details, the website is static HTML generated by Jekyll and styled using Foundation. The photo pages are created using a Jekyll generator (combining a photo and some metadata from a YAML file to produce a full page) and most of the rest is done with macros. The code is on Github if you would like to see how it all works - it was created using my Jekyll bootstrap project.
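
For anyone curious about the generator part but not curious enough to read the repository, the pattern looks roughly like the sketch below. It is based on the standard generator example in the Jekyll docs rather than the site's actual code, and all the names (the _data/photos.yml file, the photo.html layout, the metadata fields) are made up for illustration:

# _plugins/photo_pages.rb - hypothetical file and data names
module YearInPictures
  # Runs during the build and creates one page per entry in _data/photos.yml
  class PhotoPageGenerator < Jekyll::Generator
    safe true

    def generate(site)
      site.data['photos'].each do |photo|
        site.pages << PhotoPage.new(site, photo)
      end
    end
  end

  # A generated page combining one photo with its metadata
  class PhotoPage < Jekyll::Page
    def initialize(site, photo)
      @site = site
      @base = site.source
      @dir  = File.join('photos', photo['month'])
      @name = 'index.html'

      process(@name)
      # Reuse a shared layout, then hand the metadata over to the template
      read_yaml(File.join(@base, '_layouts'), 'photo.html')
      self.data['title']   = photo['title']
      self.data['image']   = photo['file']
      self.data['caption'] = photo['caption']
    end
  end
end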