Kubernetes has officially gone mainstream – that’s the verdict of the Cloud Native Computing Foundation’s 2021 survey. You certainly don’t go long without hearing about it these days, and most of our customers sit somewhere on the scale between dabbling in Kubernetes and running it at full production scale. Distributions that are (relatively) easy to install and operate, like Rancher and OpenShift, compete in the hybrid and on-prem space, and even Docker Desktop now ships with an integrated (development-scale) Kubernetes distribution, so starting to develop applications for Kubernetes has never been easier.
Organisations across industries are desperate to bring older applications into the cloud and maximise the gains in developer productivity, operational efficiency, scalability, reliability, and cost. Kubernetes may be the go-to for containerised apps, but it has strong competition from pure serverless or “container-as-a-service” offerings – on AWS, that’s primarily Lambda and ECS. While those services flaunt their simplicity and cost-efficiency, Kubernetes often wins the battle – despite its considerably higher complexity – because of its flexibility and cross-platform portability. It lets tech leaders avoid the (perceived) bogeyman of vendor lock-in: pure serverless solutions tend to be tightly coupled to a particular provider, but if you can deploy applications using Kubernetes-native CI/CD in AWS, you can do it in much the same way in Azure, Google Cloud, or on-prem, making Kubernetes the obvious choice for multi-cloud architectures. Many organisations will never actually exploit this portability, but it remains a frequently cited advantage over cloud vendor-specific serverless services.
This flexibility comes at a price – technologists face the painful problem of sifting through the (ridiculously) vast array of distributions, controllers, operators, configurations, and other native add-ons that make the Kubernetes ecosystem so powerful but can bamboozle the uninitiated (as well as the initiated). This is a symptom of the “trough of disillusionment” – the Gartner term for the dip in a technology’s hype cycle – where high-performing adopters race ahead while the majority are held back by a growing skills gap. It could be argued that Kubernetes is already well on its way up the “slope of enlightenment” by now – there’s plenty of help out there – but the challenge of plotting the path from zero to production-ready remains overwhelming for many.
Containerisation allows us to greatly streamline and standardise the software development lifecycle. The Kubernetes resource model allows us to scale this and build complex, highly automated platforms, defined in reams of YAML. Every single type of resource that can live within Kubernetes is defined and deployed in the same way (kubectl apply), meaning that pipelines for any resources or workloads can be largely standardised. GitOps takes this a step further – taking YAML from Git as the single source of truth and throwing it at Kubernetes to keep everything in sync and up to date. You then rely on the trusty control plane to make sure everything is deployed and kept running smoothly.
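To illustrate that uniform model, here is a minimal sketch of a Deployment manifest – the app name and image are placeholders, not a real workload:

```yaml
# deployment.yaml – illustrative only; names and image are hypothetical
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
        - name: example-app
          image: nginx:1.25   # placeholder image
          ports:
            - containerPort: 80
```

Running `kubectl apply -f deployment.yaml` deploys it – and the same command works identically for a Service, an Ingress, or a custom resource, which is exactly what makes pipelines so standardisable. In a GitOps setup, a controller such as Argo CD or Flux continuously performs the equivalent of this apply against the manifests held in Git.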
Along with slick deployments, reliability is another important benefit of Kubernetes, though it takes a fair bit of extra consideration and work to realise. Core Kubernetes functionality ensures that the declaratively defined resources given to it are deployed, and it attempts to return them to that state should something go awry. This gives us resilience at the pod level, but if your pods get overutilised and your application grinds to a halt, Kubernetes will just shrug – unless, that is, you’ve installed and configured autoscaling controllers to increase the number of pods and/or nodes. How scaling and its associated metrics are configured, and how controllers interact with the underlying cloud resources, can require considerable thought.
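As a sketch of what pod-level autoscaling configuration looks like, here is a HorizontalPodAutoscaler targeting a hypothetical Deployment – the names and thresholds are assumptions, not recommendations:

```yaml
# hpa.yaml – illustrative only; target name and thresholds are placeholders
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: example-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: example-app     # hypothetical Deployment to scale
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out when average CPU exceeds 70%
```

Even this simple example carries hidden prerequisites: CPU-based scaling needs the metrics-server add-on installed, and scaling pods is pointless without node capacity to schedule them on – which is where node-scaling controllers like the Cluster Autoscaler (or Karpenter on AWS) and their interaction with cloud resources come in.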
Achieving production-grade observability can also be a minefield. With quite a few different types of logs and metrics being created by Kubernetes components and applications, deciding how to capture and transmit these to a suitable endpoint – and then setting up all the required processes and permissions – is far from straightforward.
Managing security is even more of a challenge. With Kubernetes namespaces, policies, permissions and networking replacing (or adding to) the better-understood traditional security model, it can be hard to know if a workload is secure. The solution? Another tool (or five), of course! And that’s before we start talking about the dark art of service meshes.
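As one small, concrete example of the policy side of that model, a common baseline is a “default deny” NetworkPolicy per namespace – the namespace name here is a placeholder:

```yaml
# network-policy.yaml – a common default-deny baseline; namespace is a placeholder
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: example-namespace
spec:
  podSelector: {}        # empty selector: applies to every pod in the namespace
  policyTypes:
    - Ingress            # no ingress rules listed, so all inbound traffic is denied
```

Traffic then has to be explicitly allowed with further policies – and, as a taste of the complexity involved, this only takes effect if the cluster’s CNI plugin actually enforces NetworkPolicy (Calico and Cilium do; some defaults don’t).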
According to Canonical’s 2022 Kubernetes and cloud native operations report, 48% of respondents (from across industries and job roles who attended KubeCon 2021) cited “lack of in-house skills/limited manpower” as their top Kubernetes challenge. While this is down from 54.5% in 2021, it remains a significant barrier to confident and scalable adoption. Finding the right strategy and technology partners to help navigate the Kubernetes ecosystem is therefore still of vital importance to organisations needing to push on with their modernisation roadmap.
At Automation Logic, we’ve worked with clients across central government, banking and retail to get their modern applications up and running in Kubernetes.
Our Kubernetes Platform Accelerator encapsulates these years of experience and industry best practice, kick-starting Kubernetes adoption and closing the skills gap by providing automated solutions to common challenges. For more information on how we can help, click here or email info@automationlogic.com.
We work with our clients to de-risk and accelerate the realisation of their business goals. Our approach is based on tailoring our services to fit your needs, leveraging our portfolio of strategy, execution, innovation and service delivery offerings to help you reach your objectives.
We’re always on the lookout for exceptional talent and people who share our values. Even as we continue to grow, we maintain a family environment with respect and teamwork core to our culture.