I was invited to talk at Tech Days Online at the Microsoft Reactor in London today on the topic of Microservices.
Rather than a presentation, the talk took the form of a conversation moderated by Christina Warren, Senior Cloud Dev Advocate for Microsoft.
Being live on Channel 9 (Microsoft's online TV channel) was somewhat nerve-racking, but I think it went quite well and I enjoyed the format.
Thank you to everyone involved in the production, I had a lot of fun!
In my blog post about DevOps I argued that a good Software Engineer knows not only their code, but how and where it is going to be run. In the past I have been guilty of having an idea of which service I might use to run a particular application (usually Microsoft Azure, as I get big discounts as a former MSP) without having a full picture of the exact environment said application will execute in.
Recently I have been treating infrastructure as a first-class citizen of any project I work on, using the process of code-defined infrastructure. Instead of manually provisioning servers, databases, message queues and load balancers (as you might do when using the AWS Console, or cPanel on a shared web host), I create a deployment script and some configuration files. I have actually banned myself from manually creating new instances of anything in the cloud, in order to avoid unwittingly allowing any single instance to become a Pet (in the "pets vs. cattle" sense).
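To make this concrete, here is a minimal sketch of the kind of configuration file I mean. The shape of the config and all of the names in it (`environment.ts`, `instanceType` and so on) are hypothetical, illustrating the idea rather than reproducing the actual files we use:

```ts
// environment.ts: a declarative description of everything one environment
// of the application needs, checked into version control alongside the
// application code. All names here are illustrative.
export interface EnvironmentConfig {
  appName: string;                      // used to name and tag every resource
  region: string;                       // AWS region to provision into
  instanceType: string;                 // EC2 instance size
  instanceCount: number;                // app servers behind the load balancer
  databaseEngine: 'postgres' | 'mysql'; // which database engine to launch
  queues: string[];                     // message queues to create
  hostedZoneId: string;                 // Route 53 zone holding our CNAMEs
}

export const config: EnvironmentConfig = {
  appName: 'orders-service',
  region: 'eu-west-1',
  instanceType: 't3.small',
  instanceCount: 2,
  databaseEngine: 'postgres',
  queues: ['orders-incoming', 'orders-dead-letter'],
  hostedZoneId: 'Z0000000EXAMPLE',
};
```

A deployment script can then read this file and provision everything it describes, which is what makes the advantages below possible.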
Defining the infrastructure required by any given application in code has a few advantages:
- If someone tears down an environment by mistake we don’t have to try and remember all of the configuration to relaunch it. We have it in code.
- We can easily spin up new environments for any given feature branch rather than being stuck with just “staging” and “production” environments.
- Infrastructure changes show up in version control. You can see when and why you started shelling out for a more expensive EC2 instance.
- At a glance you can see everything an application requires and the associated names. This makes it easier to be sure that when you’re deleting the unhelpfully named “app” environment it won’t affect production.
- Scripting anything you can imagine becomes much easier, such as programmatically culling servers running old versions of an application (see the sketch after this list).
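As a rough illustration of that last point, the sketch below finds and terminates EC2 instances whose version tag no longer matches the live commit. It uses the `aws-sdk` v2 npm package, but the tagging scheme (an `app` tag and a `commit` tag on every instance) is an assumption made for the example, not a description of our actual scripts:

```ts
import { EC2 } from 'aws-sdk';

const ec2 = new EC2({ region: 'eu-west-1' });

// Terminate every running instance of `appName` that is not running
// `liveCommit`. Assumes each instance was tagged with `app` and `commit`
// at launch time (a hypothetical convention for this example).
async function cullOldVersions(appName: string, liveCommit: string): Promise<void> {
  const { Reservations = [] } = await ec2
    .describeInstances({
      Filters: [
        { Name: 'tag:app', Values: [appName] },
        { Name: 'instance-state-name', Values: ['running'] },
      ],
    })
    .promise();

  const stale = Reservations
    .flatMap((r) => r.Instances ?? [])
    .filter((i) => i.Tags?.find((t) => t.Key === 'commit')?.Value !== liveCommit)
    .map((i) => i.InstanceId!);

  if (stale.length > 0) {
    await ec2.terminateInstances({ InstanceIds: stale }).promise();
    console.log(`Terminated ${stale.length} stale instance(s) of ${appName}`);
  }
}
```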
Introducing code-defined infrastructure on one of our greenfield projects at PepperHQ has already paid dividends, and the team is keen to back-port the infrastructure management scripts and config files I developed to other services.
I developed the system around the idea that a git commit is the true "version" of a given application and that all infrastructure associated with a given version should be totally immutable. To deploy a new version of our new greenfield system you don't upgrade the application on the server; you create a whole new environment running the new version and then delete the old one, or keep it around for hot-swapping back in the event of an issue.
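In sketch form, and reusing the hypothetical config above, a deploy looks something like this; `launchEnvironment` is a stand-in for whatever actually provisions the resources, not a real API:

```ts
import { execSync } from 'child_process';
import { config, EnvironmentConfig } from './environment'; // hypothetical config above

// Stand-in for the code that provisions everything the config describes
// and tags it with the deployed commit. Hypothetical.
async function launchEnvironment(
  name: string,
  cfg: EnvironmentConfig,
  version: { commitId: string; commitMessage: string },
): Promise<void> {
  console.log(`Provisioning ${name} (${cfg.instanceCount} x ${cfg.instanceType}) at ${version.commitId}`);
}

async function deploy(): Promise<void> {
  // The commit being deployed *is* the version; a running environment is
  // never upgraded in place.
  const commitId = execSync('git rev-parse --short HEAD').toString().trim();
  const commitMessage = execSync('git log -1 --pretty=%s').toString().trim();

  // e.g. "orders-service-3f9c2ab": one immutable environment per commit.
  await launchEnvironment(`${config.appName}-${commitId}`, config, { commitId, commitMessage });
}

void deploy();
```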
The infra-scripts also allow you to see what is live, identified in a way that is useful to developers: by commit ID and message. Other features include turning any environment into "production" or "staging" by updating their CNAMEs through AWS Route 53, and when using `yarn terminate` to kill a particular environment, checks are run to ensure you're not deleting "production" or "staging". The scripts are written in TypeScript using the `aws-sdk` npm package.
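A rough sketch of how both of those features can sit on top of the Route 53 client in `aws-sdk` v2 follows. The hosted zone ID, domain names and protected-alias list are all made up for the example, and pagination is ignored for brevity:

```ts
import { Route53 } from 'aws-sdk';

const route53 = new Route53();
const HOSTED_ZONE_ID = 'Z0000000EXAMPLE'; // hypothetical zone
const PROTECTED = ['production.example.com.', 'staging.example.com.']; // made-up names

// Promote an environment by pointing e.g. production.example.com at it.
async function promote(alias: string, environmentHost: string): Promise<void> {
  await route53
    .changeResourceRecordSets({
      HostedZoneId: HOSTED_ZONE_ID,
      ChangeBatch: {
        Changes: [{
          Action: 'UPSERT',
          ResourceRecordSet: {
            Name: alias,
            Type: 'CNAME',
            TTL: 60,
            ResourceRecords: [{ Value: environmentHost }],
          },
        }],
      },
    })
    .promise();
}

// The kind of check `yarn terminate` runs: refuse to delete an
// environment that production or staging currently points at.
async function assertSafeToTerminate(environmentHost: string): Promise<void> {
  const { ResourceRecordSets } = await route53
    .listResourceRecordSets({ HostedZoneId: HOSTED_ZONE_ID })
    .promise();

  const inUse = ResourceRecordSets.some(
    (rs) =>
      PROTECTED.includes(rs.Name) &&
      (rs.ResourceRecords ?? []).some((r) => r.Value === environmentHost),
  );

  if (inUse) {
    throw new Error(`${environmentHost} is live as production or staging; refusing to terminate`);
  }
}
```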
Whilst this approach does require more up-front investment than simply creating environments manually as and when you need them, I recommend that any developer writing systems people will really be relying on at least investigate it, as the return on investment comes quickly. Some more out-of-the-box solutions you could look into are Chef and AWS CloudFormation, though I ruled these out for Pepper based on our internal requirements.
The idea of DevOps (short for development and operations) being a discrete role in Software Development doesn't make a whole lot of sense to me. As I have had quite a few conversations about this in the past few weeks, I felt it might be useful to get my thoughts written down.
The original concept of DevOps as a culture of continuously automating, improving and monitoring all stages of the Software Engineering process is great, and I believe it will only become more important as businesses move toward using a whole host of microservices rather than singular monolithic systems.
However, in recent years the term has, in my opinion, been bastardised to describe a new type of job role which sits somewhere between a Software Engineering generalist and a SysAdmin. What do the day-to-day responsibilities of a "DevOps Engineer" consist of? It's somewhat hard to say because, as is usually the case with technology job titles, there is no hard-and-fast definition (what's a code wizard again, and how is that different from a code ninja?). However, in my experience and from what I have read in online job listings, they usually develop and manage deployment pipelines and cloud/local hardware infrastructure, as well as monitoring services and applications for errors and performance issues.
I have always felt that one of the marks of a good Software Engineer is their ability to understand the entire lifecycle of their application, from developer experience at the time of initial development through to deployment and ongoing monitoring and maintenance. Looking at any one stage of the lifecycle in isolation means that easy wins are missed: for example, writing code so that it is easier to maintain in the future, or selecting a stack which can be brought to life quickly to enable more fine-tuned scaling. In the worst cases I have personally witnessed, developers have created a fragile, complex mesh of services and infrastructure rather than simplify and improve things, knowing that it will never be them who is woken at 3am to fix it; it'll always be the DevOps guy.
In short, the most obvious issue with having DevOps as a discrete role is that it encourages, and in some cases mandates, "chucking things over the fence": in other words, solving problems by making them someone else's. That's no way to run an effective engineering team, and it means the DevOps engineers often get the short straw.
Software Engineers are in the rare and enviable position of being able to produce their own tools — most farmers can’t build their own tractors and most pilots can’t build their own planes — and are the people who know exactly what tooling and processes would enable them to work faster and smarter in their day-to-day roles. They should use this position to enhance their own productivity and build & maintain better services.
One downside of requiring all engineers to understand the operations side of their code is that their knowledge of available tools and best practices needs frequent updating as the industry moves forward at an ever-faster pace; but this is true of every aspect of our jobs as Software Engineers. The correct solution is to make each engineer the master of their own destiny regarding deployment and maintenance, and to give them the time to learn and develop their skills in this regard.
I originally started this blog post intending to write about something else, so I hope this will at least be of use to someone (even if it is just me referring back to it at some point in the future).
Got some new business cards at work today. They’re pretty nice.