Tag Archive | Node.js

PepperHQ

The Pepper app for Vital Ingredient

In early September I decided I wanted to find a new role in which I could make more of an impact than at my previous jobs. After speaking to the very enthusiastic CTO of PepperHQ, Andrew Hawkins, about a role as Senior Software Engineer of the Pepper Platform, I decided it would be the perfect place for me to make a mark.

Pepper builds a series of iPhone and Android applications for restaurants, retail and hospitality — primarily coffee shops at the moment — which allow users to pay for products and receive rewards for being a loyal customer.

In my mind the coolest use case of the Pepper apps is check-in/pay by balance. Imagine you work in Canary Wharf and visit the same coffee shop every morning to get your caffeine hit. Without Pepper you would have to go in, order your drink, wait for it to be made, and then pay for it using cash or a credit card. If you wanted to earn loyalty rewards, you would have to carry a flimsy bit of paper with you and get it stamped every morning — assuming you don’t lose it before you’ve collected enough stamps for a drink.

With Pepper you can automatically be “checked in” to a location as soon as you are within a given distance of the store, perhaps just as you come out of the tube station. You can then place your order from your phone and have it ready for you as you get to the counter. Here’s the cool bit: you can just pick up your coffee and walk off. Checking in to the location earlier made your profile picture show up on the till in the store, so the baristas know that it’s your coffee. The payment is taken from your in-app wallet (which can, optionally, be auto-topped up from your credit card, meaning you never have to think about it again). Your loyalty is also managed in-app.

Some Pepper Customers

Pepper is really one of those applications that makes the most sense when you see it in action and realise just how much time it would save someone who buys two or three coffees a week.

My role at the company is to be in charge of the Pepper Platform — all of the backend services, primarily Node.js, that manage the interaction between the applications and point-of-sale systems.

I’ve been at the company for three months now and am really enjoying my time here. It’s pretty neat to build a product people can see the value in, and one that is used by companies that are household names.

So far in my time at Pepper I’ve added “Pseudo Currency” as a type of loyalty scheme, improved the development process by introducing Continuous Integration, linting and a pull request merge model using protected branches, and started work on a series of improvements to the loyalty reward process.

I plan to keep the blog up-to-date with any developments at the new job.

Danny

Writing better quality Node.js applications

In February last year I started writing my first Node.js app, csblogs.com, alongside Rob Crocombe. The application has run without too many issues since then, serving around 1,100 unique visitors per month. However, because it started out as a prototype we didn’t follow many of the best practices we should have, and it’s starting to show now that we want to extend the application.

Since writing a more complicated application at Trainline — which provides an API to clients and itself consumes many Windows Services, RESTful APIs and Redis caches — I’ve realised how important it is to use good software engineering techniques and tools from the very beginning of development.

Whilst most of the concepts in this post are language-independent, the example tools I explain are all geared towards Node.

The Basics

These first few things are obvious, and are things you should be doing in all your projects.

Source Control

Source control everything, even prototypes. The minuscule amount of disk space a git repository requires, and the few seconds spent every so often writing a commit message, are nothing compared to the amount of time you will save by being able to revert a change or check when changes occurred.

I’ve taken to using GitHub’s variant of the git flow pattern, in which branches are deployed to production and only merged into master once they have been tested “in the wild”. This means that whatever code is in master is always certified as working in production, and can be relied upon to be rolled back to in the event a new branch doesn’t work as intended. I like the WordPress Calypso branch naming scheme, which makes it easy to understand what is being developed in each branch.

Branches use the following naming conventions:

  • add/{something} — When you are adding a completely new feature
  • update/{something} — When you are iterating on an existing feature
  • fix/{something} — When you are fixing something broken in a feature
  • try/{something} — When you are trying out an idea and want feedback

If you don’t like that naming convention or it doesn’t suit your needs, that’s fine. Choose a naming convention and stick with it. As with so many stylistic choices in software engineering, it isn’t the style that is important but the uniformity and consistency it brings.

Documentation

There are few things as annoying when developing software as opening a repository and finding no information on how to build the project, how to run tests against it, or what data and functions the code it contains exposes to its consumers. When developing your code you should try to keep your documentation up to date with at least this information:

  • How to build
  • How to run tests
  • How to deploy
  • Data and functions exposed

Things like how to run your build, test and deployment scripts shouldn’t change so often as to be a pain to keep up to date. The data and functions exposed, however, may change reasonably often, especially if you are iterating whilst developing an API, so to make that task easier I suggest you use something like Swagger.io.
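For illustration, a minimal Swagger 2.0 document for a single endpoint might look something like this (the endpoint and fields here are hypothetical, not from a real project):

{
  "swagger": "2.0",
  "info": { "title": "Train API", "version": "1.0.0" },
  "paths": {
    "/trains/{brCode}": {
      "get": {
        "summary": "Fetch a train by its British Rail code",
        "parameters": [
          { "name": "brCode", "in": "path", "required": true, "type": "string" }
        ],
        "responses": {
          "200": { "description": "The matching train" },
          "404": { "description": "No train exists with that code" }
        }
      }
    }
  }
}

Because the document is machine-readable, tools can then render it as interactive documentation that stays in step with your API as it changes.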

Testing

Testing code is one of the things that, five years into learning about developing software, I’m still learning about and still eager to learn more about. Good quality testing can be the difference between a bug rearing its head in production and costing you thousands of pounds, and it being caught, thought about, and fixed earlier in the development cycle. Automated testing also means you can be confident that any changes you make won’t cause regression bugs.

When writing a new class, I first sketch out the interface — e.g. the public constructors, functions and data that the class will expose.

class Train {
  constructor(name, britishRailCode) {

  }

  getTopSpeed() {

  }

  determineLocation() {

  }
}

Then I spend some time thinking about all the potential edge cases, as well as the ‘happy path’ through each of the functions. For example, what should happen when an invalid BR code is provided to the constructor? What happens if GPS coordinates cannot be determined due to faulty hardware — or, in the case of HTML5 geolocation, lack of user permission — in the determineLocation() call? What data should I get back from each of these functions? Is there a timeout after which the function should return an error if it hasn’t completed?

Once I have an idea of the expected behaviour of the class I start writing tests, in a Behaviour Driven Development way, using the Chai assertion library and the Mocha test framework. There are many advantages to using BDD; one of my favourites is that you don’t write names for your unit tests, you state the expected behaviour in a full sentence — this makes it much easier to grasp the intention of the test, and means that test code can, in some senses, document the application code. Another great attribute of BDD is that it allows you to think in terms of how you want your code to behave, rather than implementation details.
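As a sketch of what that looks like for the Train class above (the expected behaviours here are just ones I might settle on while thinking through the edge cases; the require path is hypothetical):

const expect = require('chai').expect;
const Train = require('../src/train'); // hypothetical path to the class above

describe('Train', () => {
  it('throws when constructed with an invalid BR code', () => {
    expect(() => new Train('Flying Scotsman', 'not-a-br-code')).to.throw(Error);
  });

  it('reports a positive top speed', () => {
    const train = new Train('Flying Scotsman', '60103');
    expect(train.getTopSpeed()).to.be.above(0);
  });
});

Note how each it() reads as a full sentence describing behaviour, so a failing test tells you exactly which expectation was broken.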

Test Coverage

Test coverage is a highly discredited method of determining the quality of a set of unit tests. Covering every line of code doesn’t mean you have thought of every conceivable edge case, and therefore doesn’t guarantee your code is free of defects — indeed no testing can, as testing only shows the existence of bugs, not their absence. However, at a minimum you should be covering every line of code — and every branch. (Yes, you can have a branch of code that doesn’t include any lines — I leave working out how as an exercise for the reader.)

Linting

JavaScript allows you to make decisions on many areas of syntax. To use semi-colons, or not to use semi-colons — that is just one of the questions. When writing code in a team it’s easier if all of the code is formatted in the same way, so you don’t waste time reformatting it to the way you prefer code to be written. One way of doing this is to have everyone memorise your project’s coding conventions and hope they stick to them; a much better way is to use a linter which warns the programmer if they break any of the project’s rules — this ensures that everybody writes in the same style.

An additional benefit of linting is that, depending on which linter you use, you get static analysis of your code too. This means the linter can point out, for example, any variables you have defined but never used.

I personally prefer to use ESLint as it allows different rules to be configured through the use of plugins. This means that you can have a set of rules for React JSX code and a different, more suitable, set of rules for your Mocha tests. For the bulk of my application code I use the official Airbnb style guide ESLint plugin — I like Airbnb’s focus on using modern JavaScript constructs and having code be as explicit as possible. They also provide lint rules for React code.
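As an example, a minimal .eslintrc.json for that kind of setup might look like the following, assuming eslint-config-airbnb and eslint-plugin-mocha are installed; the rule choices are purely illustrative:

{
  "extends": "airbnb",
  "plugins": ["mocha"],
  "env": {
    "node": true,
    "mocha": true
  },
  "rules": {
    "mocha/no-exclusive-tests": "error"
  }
}

Setting "mocha": true in env stops ESLint complaining that describe and it are undefined globals in your test files.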

Commit Hooks

Linting and testing are all well and good, but you need the members of your team to buy in to using them. And, even on a one-man team, I often forget to run tests and linting before I commit code, resulting in broken builds and ugly code in the master branch of my repository.

Pre-commit hooks to the rescue!

Commit hooks can be used to ensure that your unit tests and linter are always run before a developer can commit their code. This means they can’t forget to lint or unit test, and actually saves time in the long run. I use the pre-commit package to provide this functionality. In combination with good unit tests and linting, pre-commit hooks help ensure that the code in your repository is always working and readable. (Note: a developer can decide to skip hooks if they’re in a rush to develop a hot fix, but this should be avoided under normal working conditions.)
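With the pre-commit package, this is a matter of listing the npm scripts to run before each commit in your package.json; the script bodies below are illustrative:

{
  "scripts": {
    "lint": "eslint .",
    "test": "mocha"
  },
  "pre-commit": ["lint", "test"]
}

If either script exits with a non-zero status, the commit is aborted, so broken or unlinted code never reaches the repository.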

Configuration

Configuration is an important part of any application. It can be as simple as wanting to change which database you connect to on your local machine versus which one you connect to in production. However, you don’t want this information to get into the public domain!

I used to use JSON files for configuration. However, these can easily be accidentally committed to git, making the secrets they contain public knowledge. Recently I’ve opted to use environment variables, for all the reasons outlined by Twelve Factor. The dotenv module for Node.js makes environment variables easy to change in development. In the CSBlogs applications I’ve been writing I provide a sample.env file which lists all the environment variables developers should set to get the app working in their local environment.
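In practice this means loading dotenv as early as possible and reading from process.env thereafter. A minimal sketch, with a hypothetical variable name:

require('dotenv').config(); // loads variables from a local .env file, if one exists

// In production there is no .env file; the same variables are set on the host instead.
const databaseUrl = process.env.DATABASE_URL; // hypothetical variable name

The .env file itself goes in .gitignore, so secrets stay on each machine rather than in source control.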

So, there it is. A quick run-down of some very basic steps to give yourself a nice place to work in the JavaScript world. Now get developing!

Danny

On-boarding at Trainline

Trainline Homepage

I’m now entering the sixth week of my time at Trainline. It’s been quite the experience: it’s my first full-time job, and it’s a time of significant change at Trainline in terms of technology, branding and even expansion into the European rail market.

It’s been great to be in a team of people who know so much about software engineering and the systems in use at Trainline, and who are so willing to pass that knowledge on. I’ve also been blessed with some really cool projects to work on at the start of my Trainline career. The first thing I was tasked with was replacing the standalone Best Fare Finder application with deep links to an improved Best Fare Finder experience already built into the main ticket-purchasing flow.

The Best Fare Finder allows you to find the cheapest tickets available between two stations if you’re willing to be flexible on dates and times of travel.

Trainline Standalone BFF

The standalone BFF, pictured above, wasn’t responsive, didn’t look like the newly rebranded Trainline homepage and purchase funnel, and required additional maintenance. Removing it and using the best-fare functionality baked into the main purchase flow improved the experience for users and reduced the costs to the company, both financial and in technical debt. The inline Best Fare Finder which replaces it can be seen below:

Trainline Inline BFF

The pictures of popular locations you can see in the screenshot of the homepage, or roundels as I’ve started calling them (inspired by the name of the famous Tube logo, which is a similar shape), had to be updated as part of this work. This was quite exciting, as it meant that within a week of starting my first job I had pushed a change, albeit a relatively minor one, to the home page of a website with millions of visitors a month. The deep linking I developed for the inline BFF is now also used in Trainline emails to customers.

As part of doing this work I discovered that some parts of the Best Fare Finder backend were a bit difficult to work with, and that access to useful information could be greatly simplified, reducing the number of places in which static data was manually updated by humans. I proposed that a RESTful API be developed, with some easy-to-understand endpoints delivering JSON responses.

My development manager and product manager were supportive of the idea, so I started working on a set of requirements and use cases for the product, which I simply called the BFF API (Best Fare Finder Application Programming Interface). The resulting system is written in Node.js 4 — making great use of the new features of ES6 such as block scoping, destructuring and arrow functions — with Hapi.js and an assortment of the libraries available on npm.
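To give a flavour of that style, here is a hypothetical route sketch in the same stack; the endpoint and response shape are invented for illustration and are not part of the real BFF API:

'use strict';

const Hapi = require('hapi');

const server = new Hapi.Server();
server.connection({ port: 3000 });

// Hypothetical endpoint for illustration only.
server.route({
  method: 'GET',
  path: '/cheapest-fare/{origin}/{destination}',
  handler: (request, reply) => {
    const params = request.params;
    reply({
      origin: params.origin,
      destination: params.destination,
      cheapestFare: null, // a real service would look this up
    });
  },
});

server.start(() => console.log('BFF API listening on port 3000'));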

It is unlikely that this system will be made user-facing itself, but it may contribute, from the backend, to some of the things a user can see.

I’ve learned rather a lot in the last few weeks, including the business and technological processes involved in developing enterprise-quality software, how to navigate through and modify or maintain legacy systems, and some more about the Node.js ecosystem. Expect a blog post in the next few weeks about how I’ve changed the way I develop Node.js code to aid correctness and testability.

Whilst some parts of the transition from academia to Real Life™ have been difficult to get used to (what is waking up at 7am and commuting?!), overall I’m enjoying the new experience and feel like I’m learning as much as, or more than, I was at university, and being paid to do so — the best of both worlds!

I will keep the blog up-to-date with my progress at Trainline.

Danny

Computer Science Blogs Beta

CSBlogs.com Homepage - Desktop

Rob and I have both been doing a lot of work on CS Blogs since the last time I blogged about it. It’s now in a usable state, and the public is welcome to sign up and use the service, as long as they are aware there may be some bugs, and that public interfaces may change at any time.

The service has been split into four main areas, which are discussed below:

csblogs.com – The CS Blogs Web App

CSBlogs.com provides the HTML5 website interface to Computer Science Blogs. The website is HTML5 and CSS3 compliant, supports all screen sizes through responsive web design, and supports both high- and low-DPI devices through its use of scalable vector graphics for iconography.

Through the web app a user can read all blog posts on the homepage, select a blogger from a list and view their profile — including links to their social media, GitHub and CV — or sign up for the service themselves.

One of the major flaws with the hullcompsciblogs system was that to sign up a user had to email the administrator and be added to a database manually. Updating a profile happened in the same way. CSBlogs.com aims to entirely remove that pain point by providing a secure, easy way to get involved. Users are prompted to sign in with a service — either GitHub, WordPress or StackExchange — and then register. This use of OAuth services means that we never know a user’s password (meaning we can’t lose it) and that we can auto-fill some of their information upon sign-in, such as email address and name, saving them precious time.

As with every part of the site, a user can sign up, register, manage and update their profile entirely from a mobile device.

api.csblogs.com – The CS Blogs Application Programming Interface

Everything that can be viewed and edited on the web application can be viewed and edited from any application which can interact with a RESTful JSON API. The web application itself is actually built on top of the same API functions.

We think making our data and functions available for use outside of our system will allow people to come up with some interesting applications for a multitude of platforms that we couldn’t support on our own. Alex Pringle has already started writing an Android App.

docs.csblogs.com – The CS Blogs Documentation Website

docs.csblogs.com is the source of information for all users, from application developers consuming the API to potential web app and feed aggregator developers. Alongside pages of documentation on functions and developer workflows there are live API docs and support forums.

In the screenshot below you can see a docs.csblogs.com page which shows a developer the expected outcome of an API call and allows them to test it live on the documentation page, in a similar way to the Facebook Graph API Explorer.

CS Blogs API Documentation

Thanks to readme.io for providing our documentation website for free, as we’re an open-source project they are interested in!

The CS Blogs Feed Aggregator

The feed aggregator is a Node.js application which, every five minutes, requests the RSS/ATOM feed of each blogger and adds any new posts to the CS Blogs database.

The job is triggered using a Microsoft Azure WebJob; however, it is written so that it could also be triggered by a standard UNIX cron job.

Whilst much of the actual RSS/ATOM parsing is provided by libraries, it has been interesting to see the inconsistencies between different platforms’ handling of syndication feeds. Some give you links to images used in blog posts, some don’t; some give you “Read more here” links, some don’t. A reasonable amount of code was written to ensure that all blog posts appear the same to end users, no matter their original source.
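A sketch of the kind of normalisation involved; the field names and helper below are hypothetical rather than the actual CS Blogs code:

// Drop trailing "Read more" anchors that some platforms append to summaries.
// (A naive example implementation, for illustration.)
function stripReadMoreLinks(html) {
  return html.replace(/<a[^>]*>\s*Read more[^<]*<\/a>\s*$/i, '');
}

// Normalise a parsed feed item into one consistent shape, whether or not the
// source platform supplied an image or a separate summary field.
function normalisePost(item) {
  return {
    title: (item.title || '').trim(),
    link: item.link,
    publishedAt: new Date(item.pubDate || item.updated),
    summary: stripReadMoreLinks(item.summary || item.description || ''),
    imageUrl: item.image ? item.image.url : null,
  };
}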

Try it!

I welcome anyone who wants to try the service now at http://csblogs.com. We would also love any help, whether that be submitting bugs via GitHub issues or writing code over at our public repository.

Danny