Rob and I have both been doing a lot of work on CS Blogs since the last time I blogged about it. It's now in a usable state, and the public is welcome to sign up and use the service, as long as they are aware there may be some bugs, and that public interfaces may change at any time.
The service has been split into four main areas, each discussed below:
csblogs.com – The CS Blogs Web App
CSBlogs.com provides the website interface to Computer Science Blogs. The site itself is HTML5 and CSS 3 compliant, supports all screen sizes through responsive web design, and supports both high- and low-DPI devices through its use of scalable vector graphics for iconography.
Through the web app a user can read all blog posts on the homepage, select a blogger from a list and view their profile — including links to their social media, GitHub and CV — or sign up for the service themselves.
One of the major flaws with the hullcompsciblogs system was that to sign up a user had to email the administrator and be added to a database manually. Updating a profile happened in the same way. CSBlogs.com aims to entirely remove that pain point by providing a secure, easy way to get involved. Users are prompted to sign in with a service — either GitHub, WordPress or StackExchange — and then register. This use of OAuth services means that we never know a user's password (meaning we can't lose it) and that we can auto-fill some of their information upon sign-in, such as their email address and name, saving them precious time.
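That auto-fill step can be sketched as a small pure function. The profile shape below (`displayName`, `emails`) mirrors what Passport-style OAuth strategies commonly return; the field names are illustrative assumptions, not the actual CS Blogs schema.

```javascript
// Sketch only: maps an OAuth profile onto a partially-completed
// registration form. Field names are assumptions, not the real schema.
function prefillRegistration(oauthProfile) {
  return {
    name: oauthProfile.displayName || '',
    email: (oauthProfile.emails && oauthProfile.emails[0])
      ? oauthProfile.emails[0].value
      : '',
    // The user still supplies anything OAuth can't tell us,
    // e.g. the URL of their blog's syndication feed.
    feedUrl: ''
  };
}

const form = prefillRegistration({
  displayName: 'Ada Lovelace',
  emails: [{ value: 'ada@example.com' }]
});
// form.name === 'Ada Lovelace', form.email === 'ada@example.com'
```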
As with every part of the site, a user can sign up, register, manage and update their profile entirely from a mobile device.
api.csblogs.com – The CS Blogs Application Programming Interface
Everything that can be viewed and edited on the web application can be viewed and edited from any application which can interact with a RESTful JSON API. The web application itself is actually built on top of the same API functions.
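As a rough sketch of that layering, with hypothetical names (`getBloggers` stands in for the real data layer), both the JSON API and the HTML pages can be thin wrappers over one shared function, so the two can never drift apart:

```javascript
// Sketch only: names and data are illustrative, not the real CS Blogs code.
// In the real service this function would query the database.
function getBloggers() {
  return [{ name: 'Rob', blogUrl: 'https://example.com/rob' }];
}

// api.csblogs.com-style handler: expose the data as JSON
function apiHandler() {
  return { status: 200, body: JSON.stringify(getBloggers()) };
}

// csblogs.com-style handler: render the same data as HTML
function pageHandler() {
  const items = getBloggers()
    .map(b => `<li><a href="${b.blogUrl}">${b.name}</a></li>`)
    .join('');
  return { status: 200, body: `<ul>${items}</ul>` };
}
```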
We think making our data and functions available for use outside of our system will allow people to come up with some interesting applications for a multitude of platforms that we couldn’t support on our own. Alex Pringle has already started writing an Android App.
docs.csblogs.com – The CS Blogs Documentation Website
docs.csblogs.com is the source of information for all users, from application developers consuming the API to potential web app and feed aggregator developers. Alongside pages of documentation on functions and developer workflows there are live API docs and support forums.
Below you can see a screenshot of a docs.csblogs.com page, which shows a developer the expected outcome of an API call and lets them test it live on the documentation page, in a similar way to the Facebook Graph API Explorer.
Thanks to readme.io for providing our documentation website for free, as we're an open-source project they are interested in!
The CS Blogs Feed Aggregator
The feed aggregator is a Node.js application which, every five minutes, requests the RSS/Atom feed of each blogger and adds any new posts to the CSBlogs database.
The job is triggered using a Microsoft Azure WebJob; however, it is written so that it could also be triggered by a standard UNIX cron job.
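For example, on a UNIX host the equivalent schedule would be a single crontab entry (the node and script paths here are assumed for illustration):

```shell
# Run the feed aggregator every five minutes
*/5 * * * * /usr/bin/node /home/csblogs/feed-aggregator/index.js
```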
Whilst much of the actual RSS/Atom parsing is handled by libraries, it has been interesting to see the inconsistencies between different platforms' handling of syndication feeds. Some give you links to the images used in blog posts, some don't; some give you "Read more here" links, some don't. A reasonable amount of code was written to ensure that all blog posts appear the same to end users, no matter their original source.
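A minimal sketch of that kind of normalisation, with illustrative heuristics rather than the service's real rules (the actual feed parsing is done by RSS/Atom libraries):

```javascript
// Sketch only: illustrative clean-up of a post's HTML taken from a feed.
function normalisePost(rawHtml) {
  const html = rawHtml
    // Drop platform-injected "Read more"-style links.
    .replace(/<a[^>]*>\s*(Read more( here)?|Continue reading)[^<]*<\/a>/gi, '')
    // Collapse the whitespace left behind.
    .replace(/\s+/g, ' ')
    .trim();

  // Pull out the first image, if the feed included one, so every post
  // can show a thumbnail regardless of its source platform.
  const img = rawHtml.match(/<img[^>]*src="([^"]+)"/i);
  return { html, imageUrl: img ? img[1] : null };
}
```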
I welcome anyone who wants to try the service now at http://csblogs.com. We would also love any help, whether that's submitting bugs via GitHub issues or writing code over at our public repository.
As some of you may have noticed over the past few weeks, dannybrown.net has been updated — not only has the style changed, but so has much of the content. Previously my old domain, dantonybrown.com, showcased a portfolio, contact information and a link to this blog, but did so over a number of pages. Then, when I bought dannybrown.net, I got rid of the old site and simply redirected the domain to this blog. This new website showcases everything you need to know about me at a glance, in one simple-to-navigate page.
When I was in the planning stage of the project I decided very early on that I wanted the site to work just as well on mobile devices as on desktops. This is partly because an ever-increasing number of people use their smartphone as their primary web browser, and partly because I envisage many of the site's users visiting it on their mobile device after I give them one of my business cards, presumably whilst on the move or sat in a conference.
Whilst planning I also contacted my friend Harry Galuszka to get some professional graphic design done for my logo and some of the images I use. I’d like to thank him for his help 🙂
I used Notepad++ as the source editor for this project, due to its fantastic syntax highlighting, and a custom-built Linux virtual machine on Windows Azure as the host, to which I FTP'd my files to test them in action.
As part of a campaign to get developers writing code that is well optimised for Internet Explorer, Microsoft have started giving away 3-month subscriptions to a fantastic service called BrowserStack. BrowserStack allows you to test your code in virtual machines hosted online, giving you access to devices and operating systems you might not normally have. Below is a screenshot of me testing my website on an iPhone 5, a device to which I would normally have no access:
Using this service I was able to test my website in, and fix layout issues across, the following browsers:
- Internet Explorer 6/7/8/9/10/11
- Firefox 3.5 onwards
- Safari on Mac
- Safari on iOS
- Opera Mini
- Opera Mobile
- The Android Browser
- Chrome for Android
- Google Chrome
On the following operating systems:
- Windows XP / Vista / 7 / 8 / 8.1
- Mac OSX
- Android (tablets and mobile versions)
- iOS (iPads and iPhones)
This ensured I had tested my website on a wide variety of hardware. It's a shame to see that Windows Phone and Windows RT are not currently supported, but this was not too much of an issue as I own devices which run those operating systems myself.
The website is now installed on a shared Website instance on Windows Azure. In future I will be restyling this blog to fit in more with my website; however, I think I will keep it hosted on WordPress.com, as I like the fact that they automate backups and the installation of new features and bug fixes to the WordPress platform, saving me a few jobs.
On the site itself I think I will be adding more information about my projects, and my education — e.g. module grades and links to download or view software made as part of coursework — soon.
Thanks for reading,
Obligatory xkcd Comic
Standards are one of the things which make writing software a bit of a pain, but they also allow the pleasure of writing a system which integrates with, uses and enhances other systems. Indeed, the web browser you are reading this blog post in works using a multitude of standards, from the Internet Protocol and Transmission Control Protocol used to deliver the information to you, to the HTML and CSS used to render this page on your screen.
For those not entirely familiar with the term, a technical standard is — according to Wikipedia — a "formal document that establishes uniform engineering or technical criteria, methods, processes and practices". In layman's terms, it's a document that tells you how something works and how to implement it so that your implementation is compatible with other people's. An example is RFC 2616, the IETF standard for the HTTP/1.1 protocol.
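For instance, RFC 2616 pins down, character by character, what a valid HTTP/1.1 request looks like; a minimal GET request is just a request line, a Host header and a blank line:

```
GET /index.html HTTP/1.1
Host: www.example.com

```

Any server or client that follows the document can interoperate with any other, which is the whole point.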
Standards, in theory and most of the time in practice, are fantastic. Why invent a new way of guaranteeing the delivery of a packet over the Internet Protocol if there's already a standard, written by a group of intelligent people who have tried to find and fix every edge case, that is compatible with everyone else's systems? There is no point.
The only time real issues do occur is when there is an edge case that hasn’t been thought of by the author of the RFC, or the standard is implemented incorrectly — either on purpose or by accident.
How strictly should we adhere to a standard?
The motivation for this blog post came from doing my coursework for the "Simulation and 3D Graphics" module, which uses an API called OpenGL. I had written a program using OpenGL 3.0 on the computers in the Fenner Computer Laboratory at the University. However, when I went to the ITMB Computer Lab in the next building, my code produced a completely different result. You can see the stark difference below.
The image on the left-hand side is what I had been seeing as I engineered my code in the Fenner laboratory: a column — textured with some "Epic Faces" — with a spotlight-lit sphere inside it. I'll explain the purpose of all this in another blog post. On the right-hand side you can see what appeared using exactly the same code in the ITMB Lab: precisely nothing.
After many excruciating hours of searching — with some help from one of my housemates — I discovered the bug was caused by me calling glEnableVertexAttribArray() on a uniform member of a shader, something which you're not meant, or indeed allowed, to do in OpenGL. I had made a programming error, as every software engineer does many times a day. What is especially bad here, however, is that I wasn't informed of my mistake, either by an error being raised or by the thing simply not working.
Looking at the above image you may think that the Fenner computer, using its Intel graphics card, was "right" because it displayed the 3D objects I wanted it to. However, I would disagree. The AMD graphics card in the ITMB computer actually produced, according to the standard, the correct result: nothing. Had I been working in ITMB when I wrote the code I would have noticed that adding the glEnableVertexAttribArray() line broke everything, and would have immediately removed it. Working in Fenner gave me a false sense of security that my code was in fact OK.
So, as far as I can tell this leaves us with a few options.
- We say that, in order to claim to implement the OpenGL standard, graphics card manufacturers must agree on, and work to, a standard level of strictness for their implementations. I would suggest that all manufacturers stick directly to the specification and throw exceptions, return GL error codes, or display nothing on screen when there is an issue
- We write into the specification exactly what should happen in every possible edge case (this is obviously impossible)
- We don’t agree on a level of strictness and we waste countless hours of programming time, as we do at the moment
I sincerely hope that eventually number 1 happens, but I’m not sure how likely it is.
Should we ever add proprietary elements to a standard?
Another issue with standards is that sometimes they don't fulfil everyone's use cases; in other words, they don't allow everybody to do everything they would like. For example, the body of standards used to make up web pages doesn't allow websites to run executable code, such as C++ binaries, natively. This led to things such as the much-despised ActiveX control and the slightly less despised Java applet.
The issues associated with this are many:
- Users have to install an additional piece of software before they can enjoy certain functionality
- Users then have to ensure this software is up-to-date and manage it properly to avoid security failures
- The fact that these downloads exist allows other people to fake them, enabling easier drive-by-download attacks
I would argue that if we are ever using, or claiming to use, a standard, we should endeavour never to add to it ourselves. Instead, the correct course of action is to write a Request for Comments (RFC) and implement the feature in your browser or program, but have it turned off by default. This way developers can turn it on to test it and play around with it, but they won't become reliant on a proprietary feature to implement their solution. Everyone wins. Thankfully most of the browser makers are going this way, so HTML5 and its associated specifications shouldn't become too fragmented.
Worried about the future
As mobile browsing grows in popularity at a tremendous rate, more and more websites are aiming to be mobile friendly, either with completely separate sites such as http://m.bbc.co.uk/sport or with responsive designs such as the brilliant one used at https://www.gov.uk/. Whilst I can do nothing but encourage the development of mobile sites, especially if they have feature parity with the desktop sites and a great touch-centric UI, I am worried that history is repeating itself.
Sites famously used to have landing pages which said "Works best in Internet Explorer" or "Works best in Netscape", because those websites would often use proprietary features from one browser or the other, for example gradient backgrounds. Unfortunately, because the two major mobile operating systems, Android and iOS, both use WebKit-based rendering engines to render HTML and CSS content, mobile website developers are not only utilising proprietary features, but actually relying on them.
Though these sites don't show annoying images saying "This site works best in WebKit", the result is equally annoying — a substandard browsing experience for anyone whose browser doesn't use a WebKit rendering engine, such as Opera Mobile and Internet Explorer.
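One defensive habit, sketched below: rather than hard-coding a -webkit- name, probe for whichever variant the browser actually supports, preferring the standard one. The property list is illustrative, and the "style objects" here are stand-ins for a real element's style.

```javascript
// Sketch only: returns the first property name the given style object
// supports, so the standard name wins over prefixed ones where available.
function findSupportedProperty(style, candidates) {
  for (const name of candidates) {
    if (name in style) return name;
  }
  return null;
}

const candidates = ['transform', 'webkitTransform', 'msTransform', 'MozTransform'];

// Simulated style objects from two hypothetical browsers:
findSupportedProperty({ webkitTransform: '' }, candidates); // 'webkitTransform'
findSupportedProperty({ transform: '' }, candidates);       // 'transform'
```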
Developers, learn from the past, and if you can at your organisation have a standard for implementing true standards!