After leaving DriveTribe I took on my first role as CTO, at a start-up called Defty. As employee number one I hired the Engineering and UX/UI team, decided our technical strategy and set up our quality assurance systems, in addition to developing the platform itself. In this blog post I will show what we built before Defty was unfortunately shut down for commercial reasons.
A great site in 20 minutes for £20
Defty's target audience was small and medium-sized businesses, particularly those you can imagine on your local high street, who needed a web presence but perhaps didn't have the skills in house to get online, or simply didn't feel they would get enough value from an expensive solution.
Existing solutions made building a site relatively simple but had a few downsides that put small business owners off: they didn't help with writing content, only worked from desktop computers and supported a myriad of options, which sometimes resulted in confusing UIs, particularly for those owners who didn't consider themselves technically savvy.
We wanted to make it possible to build a great single-page website in 20 minutes or less, regardless of any existing technical knowledge, from any platform, including mobile phones. This was important because in most developing economies access to mobile phones is much higher than access to desktop machines, and even in developed nations it would allow small business owners to work on their site between dealing with customers.
Finally, we wanted our solution to be economical: most fish and chip shops aren't going to see a return on investment from a £100 website, as only locals are likely to use it, but they have a much better chance of returns from a £20 site.
The first step of building a website on Defty was internally referred to as "initial questions", or more often simply "IQ". In order to generate a site tailored to each user's specific needs we asked a number of questions and stored the answers. The cool thing about IQ was that the questions a user was asked depended on the answers they had already provided, so in a way it acted as a decision tree with side effects along the way.
Based on the answers provided we generated a site that we hoped would satisfy most of the user's needs and only require a few copy changes to be made manually. Had we launched, we would have tracked which changes a user made to their generated site to try and improve IQ for future users.
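To give a flavour of the decision-tree idea, here's a minimal sketch of how such a question flow could be modelled. All of the names, questions and types here are hypothetical, not Defty's actual code:

```typescript
type Answers = Record<string, string>;

interface Question {
  id: string;
  text: string;
  // Pick the next question based on everything answered so far,
  // or return null when there's enough information to generate a site.
  next: (answers: Answers) => Question | null;
}

const openingHours: Question = {
  id: "openingHours",
  text: "What are your opening hours?",
  next: () => null,
};

const hasPremises: Question = {
  id: "hasPremises",
  text: "Does your business have a physical location?",
  // Only ask about opening hours if there is a shopfront to visit.
  next: (answers) => (answers.hasPremises === "yes" ? openingHours : null),
};

const businessType: Question = {
  id: "businessType",
  text: "What kind of business do you run?",
  next: () => hasPremises,
};

// Walk the tree, collecting answers from whatever input source is supplied.
function runIQ(answer: (q: Question) => string): Answers {
  const answers: Answers = {};
  let current: Question | null = businessType;
  while (current) {
    answers[current.id] = answer(current);
    current = current.next(answers);
  }
  return answers;
}
```

The nice property of modelling it this way is that each question only needs to know its immediate successors, so new branches can be added without touching the rest of the tree.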
Once we had enough information to generate a site we placed the user into our website builder. The builder allowed users to change any aspect of the site we had generated for them, including colours, fonts, images and content.
Like everything we built, this worked on any screen size, from mobile all the way up to ultrawide monitors. Users could hit preview to see what their site would look like on a mobile phone, tablet or desktop, regardless of what device they themselves were on.
All the sites we generated for users were entirely static, hosted in S3 and made available to the world via Amazon CloudFront. This meant there were no expensive services or databases to keep live and maintain for each individual website, which would have allowed us to host sites for mere pennies per year.
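To make the render-once model concrete, here's an illustrative sketch: each site is rendered to plain HTML a single time, then pushed to a bucket. The config shape and rendering are invented for this example and the upload step is shown only as a comment:

```typescript
interface SiteConfig {
  businessName: string;
  tagline: string;
  primaryColour: string;
}

// Render a site's config to a single static HTML document.
function renderSite({ businessName, tagline, primaryColour }: SiteConfig): string {
  return [
    "<!doctype html>",
    `<html><head><title>${businessName}</title>`,
    `<style>body { color: ${primaryColour}; }</style></head>`,
    `<body><h1>${businessName}</h1><p>${tagline}</p></body></html>`,
  ].join("\n");
}

// In production the result would then be uploaded once, e.g. with the
// aws-sdk S3 client: s3.putObject({ Bucket, Key: "index.html", Body: html })
// and served via CloudFront; no per-site servers or databases needed.
```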
There's no use in having a good site if potential visitors cannot find it. To this end we also made it easy to purchase domains.
Suffice it to say, the intricacies of dealing with the Internet Corporation for Assigned Names and Numbers (ICANN), the Extensible Provisioning Protocol (EPP) and all the various registries are outside the scope of this blog post, but it was a fascinating challenge and a chance to peek behind the curtain of how this pillar of the internet works.
As well as the two major product features of site building and domain purchase and management, we of course built all the surrounding infrastructure too: authentication, organisations, checkout and billing, an activity feed, password reset flows, continuous integration, an entire admin dashboard and all the rest.
I would like to thank the wonderful team I had at Defty for all their hard work, and I wish them the best of luck in the future.
During my time at DRIVETRIBE I primarily worked on a feature called MyGarage, which allows people to upload a complete history of their vehicle ownership, as well as their dream vehicle and any vehicles they have for sale.
Each vehicle upload consists of an image, make and model. Optionally a user can add the model year, year of purchase and a text description. Uploaded vehicles can be liked and shared by other users, and they can comment to ask questions or provide opinions. The makes and models can be from a list of known brands or a user can input a custom value if they have something particularly rare.
The feature has two main benefits. Firstly, users enjoy it and spend a lot more time on the website if they can interact with their own garage and comment on and share other people's. Secondly, DT gets access to a wealth of data about its users, such as how often they get a new vehicle and what they would really like to buy in the future.
I worked on the front-end portion of the feature on DT's responsive website, such that it worked on both mobile and desktop browsers. The front-end was written in React and utilised the very cool Styled-Components library, which I enjoyed using.
One of the most exciting parts of working on this project was that it was announced to the world by James May of Top Gear and The Grand Tour fame. A competition was also run where a user could win (a model of) their dream ride, and the winner was announced by Richard Hammond.
I hope a lot of people get a lot of enjoyment from this neat feature.
In my blog post about DevOps I argued that a good Software Engineer knows not only about their code, but about how and where it is going to be run. In the past I have been guilty of having an idea of which service I might use to run a particular application (usually Microsoft Azure, as I get big discounts as a former MSP) without having a full picture of the exact environment said application will execute in.
Recently I have been treating infrastructure as a first-class citizen of any project I work on, using the process of code-defined infrastructure. Instead of manually provisioning servers, databases, message queues and load balancers, as you might do when using the AWS Console or cPanel on a shared web host, I create a deployment script and some configuration files. I have actually banned myself from manually creating new instances of anything in the cloud, in order to avoid unwittingly allowing any single instance to become a Pet.
Defining the infrastructure required by any given application in code has a few advantages:
- If someone tears down an environment by mistake we don’t have to try and remember all of the configuration to relaunch it. We have it in code.
- We can easily spin up new environments for any given feature branch rather than being stuck with just “staging” and “production” environments.
- Infrastructure changes show up in version control. You can see when and why you started shelling out for a more expensive EC2 instance.
- At a glance you can see everything an application requires and the associated names. This makes it easier to be sure that when you’re deleting the unhelpfully named “app” environment it won’t affect production.
- Scripting anything you can imagine is a lot easier, such as programmatically culling servers with old versions of an application.
Introducing code-defined infrastructure on one of our greenfield projects at PepperHQ has already paid dividends, and the team has a desire to back-port the infrastructure management scripts and config files I developed to other services.
I developed the system with the idea that a git commit is the true "version" of a given application and that all infrastructure associated with a given version should be totally immutable. To deploy a new version of our greenfield system you don't upgrade the application on the server; you create a whole new environment with the new version on it and then delete the old one, or keep it around to hot-swap back in the event of an issue.
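The commit-as-version idea can be sketched like this. The naming scheme and the decision to keep exactly one previous environment around are illustrative assumptions, not the real Pepper scripts:

```typescript
interface Environment {
  name: string;
  commit: string;  // full git SHA identifying the deployed version
  message: string; // commit message, surfaced in tooling for developers
}

// Short SHAs keep environment names readable while staying unique enough.
function environmentName(app: string, commit: string): string {
  return `${app}-${commit.slice(0, 7)}`;
}

// Deploying never mutates an existing environment: it creates a fresh one
// and culls everything except the immediately previous one, which is kept
// so we can hot-swap back if the new version misbehaves.
function deploy(
  app: string,
  commit: string,
  message: string,
  live: Environment[]
): Environment[] {
  const fresh: Environment = { name: environmentName(app, commit), commit, message };
  return [fresh, ...live].slice(0, 2);
}
```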
The infra-scripts also allow you to see what is live, identified in a way useful to developers: by commit id and message. Other features include turning any environment into "production" or "staging" by updating their CNAMEs through AWS Route 53. When using yarn terminate to kill a particular environment, checks are run to ensure you're not deleting "production" or "staging". The scripts are developed in TypeScript using the aws-sdk npm package.
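A sketch of what that safety check might look like, assuming hypothetical names (the real scripts would look up live CNAMEs through Route 53 rather than take them as a parameter):

```typescript
const PROTECTED = new Set(["production", "staging"]);

// Throws unless it is safe to tear the environment down.
function assertSafeToTerminate(
  envName: string,
  liveCnames: Map<string, string> // e.g. "production" -> "app-abc1234"
): void {
  // Refuse if the environment is itself one of the protected names...
  if (PROTECTED.has(envName)) {
    throw new Error(`Refusing to terminate protected environment "${envName}"`);
  }
  // ...or if a protected CNAME currently points at it.
  for (const [cname, target] of liveCnames) {
    if (PROTECTED.has(cname) && target === envName) {
      throw new Error(`"${envName}" is currently serving "${cname}"`);
    }
  }
}
```

Guarding on both the name and the CNAME target matters: an environment with an innocuous-looking name can still be the one production traffic is pointed at.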
Whilst this approach does require more up-front investment than just manually creating environments as and when you need them, I recommend that any developer writing systems people will really be relying on at least investigates it, as there is a very quick return on investment. Some more out-of-the-box solutions you could look into are Chef and AWS CloudFormation, though I ruled these out for Pepper based on our internal requirements.
Got some new business cards at work today. They’re pretty nice.
A few months ago I integrated the Node Security Platform into the continuous integration system we use at pHQ. This week it picked up a vulnerability for the first time (don't worry, it's since been patched 😉), which meant that I was alerted to the vulnerability and provided with a link to read about ways to mitigate the risk until a patch was available. Had we paid for a subscription to NSP, it would have submitted a pull request to update the package(s) with the fix as soon as it was available.
In the case shown in the screenshot above you can see that the pHQ platform didn’t directly rely upon the vulnerable package, but had 5 dependencies which included it one way or another. If you’re not automatically checking for vulnerabilities then you may not find them as you probably don’t know how many packages you indirectly depend upon!
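Finding every route by which you indirectly depend on a vulnerable package is essentially a graph walk. Here's a small sketch, with an invented dependency graph rather than real package data:

```typescript
// Each package maps to its direct dependencies.
type DependencyGraph = Record<string, string[]>;

// Collect every distinct dependency path from `root` to `target`.
function pathsTo(graph: DependencyGraph, root: string, target: string): string[][] {
  const results: string[][] = [];
  const walk = (pkg: string, trail: string[]) => {
    if (trail.includes(pkg)) return; // guard against dependency cycles
    const next = [...trail, pkg];
    if (pkg === target) {
      results.push(next);
      return;
    }
    for (const dep of graph[pkg] ?? []) walk(dep, next);
  };
  walk(root, []);
  return results;
}
```

Tools like NSP and Snyk do exactly this kind of traversal over your lockfile, which is why they surface vulnerable packages you have never heard of, let alone installed deliberately.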
If you're not using Node, something like Snyk may support your language.
As Software Engineers our job may be seen as producing features for users, but we have a duty to ensure that what we develop is secure and won't put people's money or personal information at risk. A dependency vulnerability checker is one great tool to have in the box.
Last week I was promoted to Lead Software Engineer at PepperHQ.
As part of the promotion meeting we discussed what I want to achieve in the year ahead. There's a lot and I'm looking forward to it.
I’m going to try to be a bit better at keeping the blog up-to-date with details of my day-to-day work going forward.
In early September I decided I wanted to find a new role in which I could make more of an impact than at my previous jobs. After speaking to the very enthusiastic CTO of PepperHQ, Andrew Hawkins, about a role as Senior Software Engineer on the Pepper Platform, I decided it would be the perfect place for me to make a mark.
Pepper builds a series of iPhone and Android applications for restaurants, retail and hospitality (primarily coffee shops at the moment) which allow users to pay for products and receive rewards for being a loyal customer.
In my mind the coolest use case of the Pepper Apps is CheckIn/Pay by balance. Imagine you work in Canary Wharf and visit the same Coffee Shop every morning to get your caffeine hit. Without Pepper you would have to go in, order your drink, wait for it to be made and then pay for it using cash or a credit card and, if you wanted to earn loyalty rewards, you would have to carry a flimsy bit of paper with you and get it stamped every morning — assuming you don’t lose it before you’ve collected enough stamps for a drink.
With Pepper you can automatically be “Checked in” to a location as soon as you are within a given distance of the store, perhaps just as you come out of the tube station. You can then make your order from your phone and have it ready for you as you get to the counter. Here’s the cool bit, you can just pick up your coffee and walk off. Checking in to the location earlier made your profile picture show up on the till in the store so the Baristas know that it’s your coffee. The payment is taken from your in app wallet (which can, optionally, be auto-topped up from your credit card, meaning you never have to think about it again). Your loyalty is also managed in-app.
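The automatic check-in boils down to a proximity test against the store's location. As a purely illustrative sketch (the radius, coordinates and logic here are my invention, not Pepper's actual implementation):

```typescript
interface Point {
  lat: number;
  lon: number;
}

// Great-circle distance between two coordinates via the haversine formula.
function distanceMetres(a: Point, b: Point): number {
  const R = 6371000; // mean Earth radius in metres
  const rad = (d: number) => (d * Math.PI) / 180;
  const dLat = rad(b.lat - a.lat);
  const dLon = rad(b.lon - a.lon);
  const h =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(rad(a.lat)) * Math.cos(rad(b.lat)) * Math.sin(dLon / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(h));
}

// Check the user in once they are within the store's geofence.
function shouldCheckIn(user: Point, store: Point, radiusMetres = 150): boolean {
  return distanceMetres(user, store) <= radiusMetres;
}
```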
Pepper is really one of those applications that makes the most sense when you see it in action and realise just how much time it would save someone who buys two or three coffees a week.
My role at the company is to be in charge of the Pepper Platform: all of the backend services, primarily Node.js, that manage the interaction of the applications and point-of-sale systems.
I've been at the company for 3 months now and am really enjoying my time here. It's pretty neat to build a product people can see the value in, and one that is used by companies that are household names.
So far in my time at Pepper I've added "Pseudo Currency" as a type of loyalty scheme, improved the development process by introducing continuous integration, linting and a pull-request merge model using protected branches, and started work on a series of improvements to the loyalty reward process.
I plan to keep the blog up-to-date with any developments at the new job.
A few weeks ago I was lucky enough to be part of the team that tried, and hopefully succeeded, to entice more developers to come and work with us at Trainline at the Silicon Milkroundabout tech recruitment event.
I really enjoy talking to people about technology and it was especially nice to talk to people who were genuinely interested in what we do at Trainline. Some, who were also recent graduates, were especially interested in how I got my job and what I do in my day-to-day work; I recommended that a lot of these people try events like HackTrain.
Like all cool tech companies we had merch to give out to people. My particular favourite was a Trainline ticket holder, which replaces my standard National Rail one and looks like it will be much easier to find due to its nice bright colours!
As well as nabbing some of the merch, I also managed to get my hands on a Trainline staff polo, as beautifully modelled below:
I'm now entering the sixth week of my time at Trainline. It's been quite the experience: it's my first full-time job, and it's a time of significant change at Trainline in terms of technology, branding and even an expansion into the European rail market.
It's been great to be in a team of people who know so much about Software Engineering and the systems in use at Trainline, and who are so willing to pass that knowledge on. I've also been blessed with some really cool projects to work on at the start of my Trainline career. The first thing I was tasked with was replacing the standalone Best Fare Finder application with deep links to an improved Best Fare Finder experience that was already built into the main ticket purchasing flow.
The Best Fare Finder allows you to find the cheapest tickets available between two stations if you're willing to be flexible on dates and times of travel.
The standalone BFF, pictured above, wasn't responsive, didn't look like the newly rebranded Trainline homepage and purchase funnel, and required additional maintenance. Removing it and using the best fare functionality baked into the main purchase flow improved the experience for users and reduced the costs, both financial and in technical debt, to the company. The inline Best Fare Finder which replaced it can be seen below:
The pictures of popular locations you can see in the screenshot of the homepage, or Roundels as I've started calling them (inspired by the name of the famous Tube logo, which is a similar shape), had to be updated as part of this work. This was quite exciting, as it meant that within a week of starting my first job I had pushed a change, albeit a relatively minor one, to the home page of a website with millions of visitors a month. The deep linking I developed for this inline BFF is now also used in Trainline emails to customers.
As part of doing this work I discovered that some parts of the Best Fare Finder backend were a bit difficult to work with, and that access to useful information could be greatly simplified, reducing the number of places in which static data was manually updated by humans. I proposed that a RESTful API be developed, with some easy-to-understand endpoints delivering JSON responses.
My development manager and product manager were supportive of the idea, so I started working on a set of requirements and use-cases for the product, which I simply called the BFF API (Best Fare Finder Application Programming Interface). The resulting system is written in Node.js 4, making great use of the new ES6 features such as block scoping, destructuring and arrow functions, along with Hapi.js and an assortment of the libraries available from npm.
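To illustrate the style, here's a sketch in the spirit of the BFF API. The route shape, fares and handler are entirely invented for this example; the handler is a plain function so it can be exercised without a running server, with the Hapi wiring shown only as a comment:

```typescript
interface Fare {
  origin: string;
  destination: string;
  price: number;
}

// Stand-in for whatever data source the real service would query.
const fares: Fare[] = [
  { origin: "London", destination: "Manchester", price: 25.5 },
  { origin: "London", destination: "Leeds", price: 31.0 },
];

// Arrow function + destructuring: two of the ES6 features mentioned above.
const cheapestFare = ({ origin, destination }: { origin: string; destination: string }) => {
  const matches = fares.filter(
    (f) => f.origin === origin && f.destination === destination
  );
  if (matches.length === 0) return null;
  const [cheapest] = [...matches].sort((a, b) => a.price - b.price);
  return cheapest;
};

// With Hapi this would be wired up roughly as:
// server.route({ method: "GET", path: "/fares/cheapest", handler: ... });
```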
It is unlikely that this system will ever be user-facing itself, but it may power some of the things a user can see from the backend.
I've learned rather a lot in the last few weeks, including the business and technological processes involved in developing enterprise-quality software, how to navigate through and modify and maintain legacy systems, and some more about the Node.js ecosystem. Expect a blog post in the next few weeks about how I've changed the way I develop Node.js code in order to aid correctness and testability.
Whilst some parts of the transition from academia to Real Life™ have been difficult to get used to (what is waking up at 7am and commuting?!), overall I'm enjoying the new experience and feel like I'm learning as much as or more than I was at university, and being paid to do so: the best of both worlds!
I will keep the blog up-to-date with my progress at Trainline.
One of the questions I had to answer for both myself and interviewers was "What do you want to achieve in your first job?". My answer was always a quote I read on a blog by a programmer hero of mine, Jeff Atwood. He said you should, as a junior software engineer, "endeavor to be the dumbest guy in the room", which simply means placing yourself around intelligent, experienced programmers and learning! This is what I wanted to do, and I've been fortunate enough to be hired by a company with an environment that will allow me to do just that: thetrainline.com.
thetrainline.com sells train tickets in the UK and Europe, both from its own website and by providing the software for Train Operating Companies such as Virgin Trains and First. They operate out of an office in Farringdon, one tube stop from London King's Cross station.
I start in late September and honestly cannot wait to learn and earn with them. My official job title will be “Agile Developer”.
A big thank you is in order to Teck Loy Low, who I met on The HackTrain and who subsequently put me in contact with thetrainline. If this isn't a good advert for getting yourself involved with hackathons and the like, I don't know what is.