Code-defined infrastructure

In my blog post about DevOps I argued that a good Software Engineer knows not only their code, but also how and where it is going to be run. In the past I have been guilty of having an idea of which service I might use to run a particular application (usually Microsoft Azure, as I get big discounts as a former MSP) without having a full picture of the exact environment the application would execute in.

Recently I have been treating infrastructure as a first-class citizen of any project I work on, using the process of code-defined infrastructure. Instead of manually provisioning servers, databases, message queues and load balancers (as you might when using the AWS Console or cPanel on a shared web host), I create a deployment script and some configuration files. I have actually banned myself from manually creating new instances of anything in the cloud, to avoid unwittingly allowing any single instance to become a Pet.
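
As a rough illustration, one of those per-environment configuration files might look something like the sketch below. The shape, field names and values are invented for the example; they're not our actual schema at Pepper.

```typescript
// config/staging.ts - an illustrative per-environment definition
export interface EnvironmentConfig {
  name: string;         // environment name, e.g. "staging" or a feature branch
  instanceType: string; // EC2 instance type to provision
  commit: string;       // git commit the environment is built from
  cname?: string;       // optional DNS name to point at this environment
}

export const staging: EnvironmentConfig = {
  name: "staging",
  instanceType: "t2.medium",
  commit: "HEAD", // resolved to a full SHA by the deploy script
  cname: "staging.example.com",
};
```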

Defining the infrastructure required by any given application in code has a few advantages:

  • If someone tears down an environment by mistake, we don’t have to try to remember all of the configuration to relaunch it. We have it in code.
  • We can easily spin up new environments for any given feature branch rather than being stuck with just “staging” and “production” environments.
  • Infrastructure changes show up in version control. You can see when and why you started shelling out for a more expensive EC2 instance.
  • At a glance you can see everything an application requires and the associated names. This makes it easier to be sure that when you’re deleting the unhelpfully named “app” environment it won’t affect production.
  • Anything you can imagine becomes scriptable, such as programmatically culling servers running old versions of an application (see the sketch after this list).
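
To sketch that last point: assuming each instance is tagged with the application name and the commit it was built from (the tag names and region below are my own illustration, not our actual scheme), a culling script boils down to a couple of aws-sdk calls:

```typescript
import * as AWS from "aws-sdk";

const ec2 = new AWS.EC2({ region: "eu-west-1" });

// Terminate every running instance of an application built from a commit
// other than the one currently live.
async function cullOldVersions(app: string, liveCommit: string): Promise<void> {
  const { Reservations = [] } = await ec2
    .describeInstances({
      Filters: [
        { Name: "tag:app", Values: [app] },
        { Name: "instance-state-name", Values: ["running"] },
      ],
    })
    .promise();

  const stale = Reservations
    .flatMap((reservation) => reservation.Instances ?? [])
    .filter((instance) =>
      instance.Tags?.some((tag) => tag.Key === "commit" && tag.Value !== liveCommit)
    )
    .map((instance) => instance.InstanceId as string);

  if (stale.length > 0) {
    await ec2.terminateInstances({ InstanceIds: stale }).promise();
  }
}
```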

Introducing code-defined infrastructure on one of our greenfield projects at PepperHQ has already paid dividends, and the team wants to back-port the infrastructure-management scripts and config files I developed to other services.

PepperHQ Deployer

I developed the system around the idea that a git commit is the true “version” of a given application and that all infrastructure associated with a given version should be totally immutable. To deploy a new version of our greenfield system you don’t upgrade the application on the server; you create a whole new environment running the new version and then delete the old one, or keep it around for hot-swapping back in the event of an issue.
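
In practice the deploy script simply asks git for that version; everything provisioned for the deploy is then tagged with the resulting SHA. A minimal sketch:

```typescript
import { execSync } from "child_process";

// The current commit is the canonical version of the application.
const commit = execSync("git rev-parse HEAD").toString().trim();
const message = execSync("git log -1 --pretty=%s").toString().trim();

console.log(`Deploying ${commit.slice(0, 7)}: ${message}`);
```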

Pepper whats-live

The infra-scripts also let you see what is live, identified in a way that is useful to developers: by commit id and message. Other features include turning any environment into “production” or “staging” by updating their CNAMEs through AWS Route 53. When you run yarn terminate to kill a particular environment, checks are run to ensure you’re not deleting “production” or “staging”. The scripts are written in TypeScript using the aws-sdk npm package.
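
Under the hood, promoting an environment is just a Route 53 record upsert, and the terminate guard is a simple name check. Here’s a rough sketch using the aws-sdk; the hosted zone id, domain and function names are placeholders rather than our real code:

```typescript
import * as AWS from "aws-sdk";

const route53 = new AWS.Route53();

// Point an alias such as "staging" at a new environment's hostname.
async function promote(alias: string, targetHost: string): Promise<void> {
  await route53
    .changeResourceRecordSets({
      HostedZoneId: "ZXXXXXXXXXXXXX", // placeholder hosted zone id
      ChangeBatch: {
        Changes: [
          {
            Action: "UPSERT",
            ResourceRecordSet: {
              Name: `${alias}.example.com`,
              Type: "CNAME",
              TTL: 60,
              ResourceRecords: [{ Value: targetHost }],
            },
          },
        ],
      },
    })
    .promise();
}

// Guard used by the terminate command: refuse to delete protected environments.
function assertSafeToTerminate(envName: string): void {
  if (["production", "staging"].includes(envName)) {
    throw new Error(`Refusing to terminate protected environment "${envName}"`);
  }
}
```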

Whilst this approach does require more up-front investment than manually creating environments as and when you need them, I recommend that any developer writing systems people will really rely on at least investigate it; the return on investment comes quickly. Some more out-of-the-box solutions you could look into are Chef and AWS CloudFormation, though I ruled these out for Pepper based on our internal requirements.

Danny

Flying Video

My brother posted a video to YouTube from when he came flying with me some time last year. You can watch it above.

Danny

DevOps?

Credit: https://medium.com/@neonrocket/devops-is-a-culture-not-a-role-be1bed149b0

The idea of DevOps (a portmanteau of “development” and “operations”) being a discrete role in Software Development doesn’t make a whole lot of sense to me. As I have had quite a few conversations about this over the past few weeks, I felt it might be useful to get my thoughts written down.

The original concept of DevOps as a culture of continuously automating, improving and monitoring all stages of the Software Engineering process is great, and I believe it will only become more important as businesses move toward using a whole host of microservices rather than singular monolithic systems.

However, in recent years the term has, in my opinion, been bastardised to describe a new type of job role which sits somewhere between a Software Engineering generalist and a SysAdmin. What do the day-to-day responsibilities of a “DevOps Engineer” consist of? It’s somewhat hard to say because, as is usually the case with technology job titles, there is no hard-and-fast definition (what’s a code wizard again, and how is that different from a code ninja?). However, in my experience, and from the job listings I have read online, they usually develop and manage deployment pipelines and cloud or local hardware infrastructure, as well as monitoring services and applications for errors and performance issues.

I have always felt that one of the marks of a good Software Engineer is their ability to understand the entire lifecycle of their application, from developer experience at the time of initial development through to deployment and ongoing monitoring & maintenance. Looking at any one stage of the lifecycle in isolation means that easy wins are missed: for example, writing code so that it is easier to maintain in the future, or selecting a stack which can be brought to life quickly to enable more fine-tuned scaling. In the worst cases I have personally witnessed, developers have created a fragile, complex mesh of services and infrastructure rather than simplify and improve things, knowing that it will never be them who is woken at 3am to fix it; it’ll always be the DevOps guy.

In short, the most obvious issue with having DevOps as a discrete role is that it encourages, and in some cases mandates, “chucking things over the fence”: solving problems by making them someone else’s. That’s no way to run an effective engineering team, and it means the DevOps engineers often get the short straw.

Software Engineers are in the rare and enviable position of being able to produce their own tools — most farmers can’t build their own tractors and most pilots can’t build their own planes — and are the people who know exactly what tooling and processes would enable them to work faster and smarter in their day-to-day roles. They should use this position to enhance their own productivity and build & maintain better services.

One downside of requiring all engineers to understand the operations aspect of their code is that knowledge of the available tools and best practices needs frequent refreshing as the industry moves forward at an ever-faster pace; but this is true of every aspect of our jobs as Software Engineers. The right solution is to make each engineer the master of their own destiny regarding deployment and maintenance, and to give them time to learn and develop their skills in this regard.

I originally started this blog post intending to write about something else, but I hope this will at least be of use to someone (even if that someone is just me, referring back to it at some point in the future).

Danny

Business Cards

Got some new business cards at work today. They’re pretty nice.

Danny

First Solo

MASF First Solo Certificate

Last Saturday, 16th December 2017, I flew an aircraft solo for the first time — that is to say with no-one else in the plane! It was simultaneously the most terrifying and exciting thing I’ve ever done.

After flying about 50 minutes’ worth of circuits at Cambridge with instructor Nick Camu, we set down and I ran through the after-landing checklist. Nick then asked if my medical was valid, which it was, and if I had completed the necessary examinations, which I had. He then told me I was going back out again on my own!

We taxied to an area where it would be safe for my instructor to jump out and he shook my hand and wished me good luck — I was on my own.

The first thing I had to do was check the ATIS in order to set my altimeter correctly and be aware of any change in the weather.

Despite having said the aircraft’s call sign — Golf Bravo Foxtrot Whiskey Bravo — dozens of times in radio calls to air traffic control over the previous 50 minutes, I managed to totally forget my phonetic alphabet when speaking to Cambridge Tower to let them know I had the updated weather information and to provide my intentions. Unfortunately, unlike in the US, the Very High Frequency aviation radio channels here are considered private and are not published online, so you can’t enjoy listening to me tongue-twisting myself.

Once I’d composed myself and had the required conversation with the tower, I taxied to the engine run-up area and ran through all the pre-flight checklists. The aircraft performed as expected and, having positioned myself at the alpha hold short line, I informed ATC I was ready for departure. At this point I could feel my heart beating like it intended to leap forth out of my chest; the adrenalin hit very hard. Cambridge Tower told me to line up on the runway and wait.

Once I had lined the Piper Warrior up on the white centreline of runway 23, ATC cleared me for take-off. I replied “Cleared for take-off, wish me luck. Golf-Bravo Foxtrot Whiskey Bravo”. The lady who often controls at Cambridge Tower simply replied “You don’t need it”. This settled me down a little.

Normally rotation, the act of pulling back on the column and taking the aircraft into the air, occurs at 65 knots. At around 60 knots I did question why I was doing this mad thing. However, come 65 knots I went back into flying mode and did as I had been taught over the previous few months by the excellent instructors at the Mid Anglia School of Flying.

The single circuit I did was actually fairly standard. The aircraft felt a little lighter and more eager to get off the ground, and it climbed faster at 80 knots than it normally would with a second body in the plane.

I had the classic “oh my god, I’m on my own” moment on the climb-out portion of the circuit, as I looked right over Cambridge City Centre and there was no one in the seat next to me.

On final I was mainly concerned with staying on the 3° glide slope indicated by the precision approach path indicator and just staying safe. I wasn’t as worried about “greasing” the landing as I normally would be. This obviously worked because the landing was really smooth, and I was complimented on it by pilots in the school clubhouse when I got back.

As I backtracked to vacate the runway, ATC told me “well done”. When you do something as far outside your normal day-to-day life as flying an aircraft, it’s nice to get that kind of confidence boost.

Once I’d got back to the General Aviation Parking and shut the aircraft down, Nick came over and shook my hand. I’d finally done it! After many hours’ work with Nick and the other instructors at MASF I’d flown a plane on my own. I extend my thanks to all of them for such a great experience.

The next phase of my flight training will be to conduct three hours of solo circuits, including not only full-stop landings but touch-and-goes and go-arounds, as I have been doing so far with instructors. After that I’ll move on to the navigation portion of the training, in which I’ll learn to get from one airport to another using strategies like dead reckoning.

I’ll keep the blog up to date with my progress.

Danny

You should be using NSP

A few months ago I integrated the Node Security Platform into the continuous integration system we use at pHQ. This week it picked up a vulnerability for the first time (don’t worry, it’s since been patched 😉), which meant I was alerted to the vulnerability and given a link to read about ways to mitigate the risk until a patch was available. Had we paid for a subscription to NSP, it would have submitted a pull request updating the affected package(s) as soon as a fix became available.
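
If you want the same safety net, wiring NSP into CI can be as small as a step that fails the build whenever the check fails. A sketch (not our exact setup) that shells out to the nsp CLI:

```typescript
// ci/security-check.ts - fail the build if nsp reports a known vulnerability
import { spawnSync } from "child_process";

const result = spawnSync("nsp", ["check"], { stdio: "inherit" });

// nsp exits non-zero when it finds a vulnerability; propagate that to CI.
process.exit(result.status === null ? 1 : result.status);
```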

In the case shown in the screenshot above, you can see that the pHQ platform didn’t directly rely on the vulnerable package, but had five dependencies which included it one way or another. If you’re not automatically checking for vulnerabilities, you may never find them, as you probably don’t know how many packages you indirectly depend upon!

If you’re not using Node, something like Snyk may support your language.

As Software Engineers our job may be seen as producing features for users, but we have a duty to ensure that what we develop is secure and won’t put people’s money or personal information at risk. A dependency vulnerability checker is one great tool to have in the box.

Danny

Promotion

Last week I was promoted to Lead Software Engineer at PepperHQ.

As part of the meeting we discussed what I want to achieve in the year ahead. There’s a lot and I’m looking forward to it.

I’m going to try to be a bit better at keeping the blog up-to-date with details of my day-to-day work going forward.

Danny