Today we’re conducting an interview with one of our senior engineers, George Cairns. George has been working with AL for around four years now, and we’re catching up with him to talk about life as an engineer, how he’s found his time at the company, and a little about his most recent client project. At AL we have a culture of continuous learning, and our methodology reflects it: always build, measure, learn.

George was most recently engaged on a project for a large global bank, and has now finished that engagement to spend more time at AL HQ leading the Artificial Intelligence product development team.

This is the first in a series of forthcoming posts by George where we’ll focus on what he does best – building new solutions to complex problems to help our clients do better business through automation.

A little more about George:

So, how did you get into engineering?

Well, I skipped university because I was tired of being skint. After A-levels, I began work at 18 at Halifax plc, just after the building society demutualised. My first job there was changing backup tapes in the datacentre and rebuilding Windows NT 3.51 servers when they went bang, which was often. I had hoped to have switched to a glittering music career by now.

How about outside of work, what do you get up to?

I’m pretty good at Counter-Strike and mixing Manhattans, but I’m a terrible musician.

How many projects have you been on before?

I feel like I’ve been doing the same project over and over since 1998 but at a higher layer of abstraction each time.

What would you say are the three key things you’ve learned from past experience that helped you with this recent project?

Ignore your job title and work on what needs doing.

Being able to collaborate is more important than bleeding-edge technical knowledge…

…however, you need the bleeding-edge technical knowledge anyway. The era of the architect who cannot code is over.

About the Project

Why were AL engaged in this project?

I was brought on board to help with Enterprise Architecture for a banking client. The system they had built delivered private cloud services, but in an ‘enterprise orchestration’ style: automated processes with lots of infrastructure dependencies that couldn’t easily be tested and that became brittle under development. In addition, the services underpinning the private cloud offering were themselves delivered mostly manually.

What improvements did the client hope the project would result in?

The key goal for the client was to modernise their systems and implement something better. Essentially, the project aimed to increase their IT productivity and improve cost efficiency.

What did the stakeholders believe was the core issue, and how did you manage that?

The stakeholders had a perceived need for architectural change. When I came in, I decided that rather than simply explain the concepts behind DevOps-style approaches, I would show them a real-world example.

So, how was the initial system running and what were your first changes?

Their cloud server delivery service used HP Operations Orchestration (OO) to tie together all of the different infrastructure services involved in delivering a running instance. Their offshore development team had built automated workflows to connect out to AD, IPAM, networking, the hypervisor, monitoring and so on, and those workflows were numerous and interdependent. I put their OO content into a proper version control system and used TeamCity, Docker and our own open-source client library to deliver an immutable OS image of the fully configured workflow server.

Ok, and what impact did these changes have?

Essentially, a developer could change OO code and check it in; their content packs would be automatically built and unit tested, which in turn triggered the build and deployment of a versioned OO Docker image.

This demonstrated how easily dependency issues in their orchestration workflow code could be surfaced: developers simply checked in their flows, and our tools triggered an automated build of the content. If the content failed to build because of a dependency issue, the developer was notified. In addition, the immutable image showed the benefits inherent in shipping artefacts rather than build scripts.
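To make that flow concrete, here’s a minimal sketch of the kind of Dockerfile that turns the pipeline’s output into an immutable workflow-server image. It’s illustrative only, not the client’s actual build: the base image, file paths, port and content-pack layout are hypothetical stand-ins, and the real pipeline also wired in TeamCity and our open-source client library for versioning and deployment.

```dockerfile
# Illustrative sketch only: base image, paths, port and content-pack layout
# are hypothetical stand-ins, not the client's actual build.
FROM rhel7-base:latest

# Bake the workflow engine and its freshly built, unit-tested content packs
# into the image, so every deployment ships the same tested artefact.
COPY dist/oo-central/ /opt/oo/
COPY dist/content-packs/ /opt/oo/content-packs/

# Version stamp supplied by the CI build, so a running container can be
# traced back to the commit that produced it.
ARG BUILD_VERSION=dev
LABEL oo.content.version=${BUILD_VERSION}

EXPOSE 8443
CMD ["/opt/oo/bin/central", "start"]
```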

What effect did this have on the project team?

This proof of concept influenced the architects in the team to start thinking seriously about incorporating modern software development techniques into the way they delivered their wider platform. It took about three weeks to deliver, in collaboration with the datacentre management tech leads.

Following the proof of concept, what were the next steps?

So, with our approach proven, I was asked to show how to scale it up. DevOps and immutable infrastructure are made a lot easier with containers, so there was interest in delivering a container hosting platform. After kicking the tyres of a few different systems we settled on OpenShift, Red Hat’s enterprise Kubernetes distribution. Our task was to deliver the platform in a repeatable, fully automated manner, but in the absence of any prior deployment tooling we had to build it all from scratch. This meant we were able to select our own tools, which is a great situation to be in for someone who is an engineer at heart…

Which tools did you select to work with?

We chose to host the OpenShift cluster on the customer’s existing vCenter platform. We built our Red Hat VM templates using HashiCorp’s Packer, and used Ansible to deploy and configure the machines. The whole OpenShift cluster could be destroyed and redeployed in one step, which was great for testing.
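As a rough illustration of what “destroyed and redeployed in one step” can look like, here’s a minimal Ansible sketch using the vmware_guest module. The inventory group, template name, datacentre and vCenter variables are hypothetical placeholders rather than the client’s configuration, and the real tooling also drove the OpenShift installation itself.

```yaml
# redeploy.yml -- illustrative sketch only; the inventory group, template
# name, datacenter and vCenter variables are hypothetical placeholders.
- name: Tear down and rebuild cluster VMs from the Packer-built template
  hosts: localhost
  gather_facts: false
  vars:
    cluster_nodes: "{{ groups['openshift_nodes'] }}"  # hypothetical group
  tasks:
    - name: Destroy any existing node VMs
      vmware_guest:
        hostname: "{{ vcenter_host }}"
        username: "{{ vcenter_user }}"
        password: "{{ vcenter_pass }}"
        name: "{{ item }}"
        state: absent
        force: true
      loop: "{{ cluster_nodes }}"

    - name: Clone fresh node VMs from the template and power them on
      vmware_guest:
        hostname: "{{ vcenter_host }}"
        username: "{{ vcenter_user }}"
        password: "{{ vcenter_pass }}"
        name: "{{ item }}"
        template: rhel7-packer-template   # hypothetical template name
        datacenter: dc1                   # hypothetical datacentre name
        state: poweredon
        wait_for_ip_address: true
      loop: "{{ cluster_nodes }}"
```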

The wider architecture team were interested in the VM template build system we’d created, so I tidied it up, combined it with some code from a previous engagement, and factored it out into a separate Node.js module called ‘repacker’. The tool allows OS build engineers to easily pull in Packer image templates from anywhere (including open-source repos), combine them with existing config management tools, and deliver full-stack application images to their hypervisor or cloud platform with minimal code.

What were the main challenges you had to overcome, and how would they affect future projects?

The challenging thing about this project was trying to shift the momentum of such a large team. Even though the techniques were new to the team and specifically targeted at the problems they obviously had, it took a long time to influence decision-makers outside the immediate team. Getting a firewall opened from our dev environment to their internal Git server took over six months.

In future, I would focus more on getting working tools into the hands of the struggling developers, using real code and real infrastructure. Building POCs with example data just to get approval from architectural gatekeepers adds an unnecessary step: a solution that demonstrably works has a more immediate impact and can influence decision-makers faster.

What were the key client outcomes as a result of this project?

The point of all this work is to ensure every hour a customer’s development team puts in goes towards moving the needle. In many large infrastructure development teams, there is an insane amount of busy-work relating to integration and managing environments. Infrastructure delivery projects aren’t that different to any other software project, and so the adoption of proven, professional development techniques is the way to win.

Thanks George! Keep an eye out for future posts from us, as we follow George’s progress in Artificial Intelligence product development.
