In the fast-moving and often confusing world of technology and IT, we’re used to words having multiple meanings and interpretations. Every organisation has varying terminology, and many words we use daily have very different meanings in different contexts. Three prime examples of this are “workload”, “application” and “service”, which come up a lot when we discuss cloud migration.

Let’s look at some of the available definitions of these.

Workload

The Cambridge Dictionary gives us this definition: “The amount of work to be done, especially by a particular person or machine in a period of time”. This generic definition implies a rate of work over time rather than a single, complete task – not quite what we usually mean by a computing workload.

In the traditional computing sense, a workload is better described as “any program or application that runs on any computer”. This might be a finite process, such as a batch processing job, or a long-running one, such as an operating system.

In the cloud era, however, this definition has evolved to take on a somewhat different meaning. According to AWS, a workload is “a collection of resources and code that delivers business value, such as a customer-facing application or a backend process”. The scope of this is seemingly much broader than the previous definition, but it makes sense in a cloud environment where every resource – compute, networking, storage – is virtualised and abstracted away from the consumer. A database might be a workload in itself, but when tightly coupled with a frontend interface to provide a useful function, we could call the combination a single workload.

Application

Referring again to the Cambridge Dictionary, an application (in a computing sense) is “a computer program that is designed for a particular purpose”. This sounds quite similar to our computing definition of a workload; the difference seems to be the term “purpose”. After all, we often prepend the word “application” (or “app”) with a purpose – such as “email”, “banking”, “dating”, or “backup”.

This definition extends quite well across different situations – it applies to an app on your phone, just as much as it does to a backend business application such as an inventory system or a CMS. Of course, what we perceive as an application today is likely to be just one of the components, e.g. the frontend for one or more other applications, each of which is still “designed for a particular purpose”. The consumer of an application can be a user or another application/workload – with a modern app, it is often irrelevant whether this consumption is via a GUI or an API.

A good way to distinguish apps from workloads may be this: a developer can write some code to serve a particular purpose, store it in GitHub or publish it to an app store, and say they’ve written an application. But only when that app is run somewhere – a virtual machine, someone’s laptop or phone, or a Kubernetes cluster – does it become a workload.
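To illustrate that distinction (a hypothetical sketch – the image name and labels are invented), the container image below is the “application”, while a Kubernetes Deployment describing how and where it runs, along with its supporting resources, is the “workload”:

```yaml
# Hypothetical example: "myorg/email-app" is the application (code built
# for a purpose); this Deployment - running instances plus their supporting
# configuration - is the workload.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: email-app
spec:
  replicas: 2                 # two running instances of the application
  selector:
    matchLabels:
      app: email-app
  template:
    metadata:
      labels:
        app: email-app
    spec:
      containers:
        - name: email-app
          image: myorg/email-app:1.0   # the "application" itself
          ports:
            - containerPort: 8080
```

Until a manifest like this (or the equivalent on a VM, laptop or phone) runs the code somewhere, there is an application but no workload.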

Service

This is where it gets interesting. If you’re a Linux sysadmin, you’re likely to think of a service as “an application (or set of applications) that runs in the background waiting to be used, or carrying out essential tasks”. We know how to create “services” using systemd, which allow us to keep a program running.
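For instance, a minimal systemd unit (a sketch – the unit name and binary path here are hypothetical) keeps a program running in the background as a “service”:

```ini
# /etc/systemd/system/backup.service - hypothetical unit name and path
[Unit]
Description=Example background backup service
After=network.target

[Service]
ExecStart=/usr/local/bin/backup-daemon
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Once enabled with `systemctl enable --now backup.service`, the program sits in the background waiting to be used – a service in the sysadmin’s sense of the word.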

However, at the other end of the spectrum – when talking about users and public-facing websites – a service is (according to the Government Digital Service): “something that helps [the user] to do something – like learn to drive, buy a house, or become a childminder” (drawing on some government-specific examples). For these types of services with (human) users, there are special considerations around user experience and UI design, due to the unpredictability and variety of these users.

The word “service” has other established uses in cloud computing – for example, in terms such as “infrastructure as a service” and “software as a service”. In this context, the user is given an interface to access those capabilities (infrastructure, software) without needing to create, run or configure any of the underlying systems themselves.

OASIS has a definition of “service” which does a reasonable job of encompassing all of these meanings: “a mechanism to enable access to one or more capabilities”. However, this suggests that the “access mechanism” in itself constitutes the service, with no thought to the underlying applications/workloads doing the heavy lifting. In the systemd sense, and in a Kubernetes sense, this is arguably correct. However, when talking about something like a government service, we tend to think of that as encompassing the user journey, the app(s) they use in the process, and the underlying workloads which fulfil it.
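The Kubernetes reading is a good illustration of “access mechanism as the service”: a Service object is nothing more than a stable way to reach the workload behind it. A minimal sketch (names are hypothetical):

```yaml
# Hypothetical example: the Service is purely the access mechanism;
# the pods it selects are the workload doing the heavy lifting.
apiVersion: v1
kind: Service
metadata:
  name: email-app
spec:
  selector:
    app: email-app      # routes traffic to the workload's pods
  ports:
    - port: 80          # the port consumers access
      targetPort: 8080  # the port the application actually listens on
```

Nothing in this object does any work itself – it simply “enables access to one or more capabilities”, exactly as the OASIS definition has it.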

Now, let’s throw a spanner in the works and talk about microservices.

Microservices – or microservice architecture – is an architectural style that structures an application as a collection of services that are:

  • Highly maintainable and testable,
  • Loosely coupled,
  • Independently deployable,
  • Organised around business capabilities, and
  • Owned by a small team.

An application, when developed with a microservice architecture, is composed of several underlying services. Each of these serves a distinct purpose – a piece of functionality it provides to the overall system via an API. In this sense, we could call each of these microservices ‘applications’. Of course, we could also refer to a microservice as a workload, when referring to the package of its constituent parts required to instantiate it (e.g. some containers, a database, and some configuration).
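To make that concrete, here is a minimal sketch in Python (standard library only – the “inventory” service, its endpoint and its data are invented for illustration) of a microservice exposing one piece of functionality via an API, and another program consuming it:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class InventoryHandler(BaseHTTPRequestHandler):
    """A toy 'inventory' microservice: one capability, exposed via an HTTP API."""

    def do_GET(self):
        if self.path == "/stock/widget":
            body = json.dumps({"item": "widget", "stock": 42}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # silence request logging for the demo
        pass

# Bind to port 0 so the OS picks a free port; run the service in the background.
server = HTTPServer(("127.0.0.1", 0), InventoryHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Another application (or microservice) consumes the capability via the API.
url = f"http://127.0.0.1:{server.server_port}/stock/widget"
with urllib.request.urlopen(url) as resp:
    data = json.loads(resp.read())

server.shutdown()
print(data)
```

The consumer neither knows nor cares how the inventory service is implemented or where it runs – it is loosely coupled to it through the API alone, which is precisely what makes each microservice independently deployable.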

Is a microservice a ‘service’ to people in the world of user experience, service design and service blueprints though? Absolutely not.

Workload Migration 

Workloads are what we at AL prefer to discuss when carrying out cloud migrations – after all, if you aren’t running your applications somewhere, they aren’t providing much value. Generally, a workload will be the minimum viable “unit” for migration, as it includes the application’s supporting resources such as storage and networking – workloads are ultimately what underpin the business value of a system. Several related workloads may be put into a single “migration group” to ensure simultaneous migration, based on factors such as connectivity, latency, security or shared deployment patterns.

How we plan and execute workload migrations, taking into account the business needs and the characteristics of the workloads – as well as the source and destination environments – is a large topic for another article. In the meantime, you can learn more about our Workload Migration offering here.

Concluding thoughts

In a lot of ways, workloads, applications and services are just different sides of the same (geometrically improbable) coin. The subtle difference between a workload and an application is that the former is an instantiation of the latter. A service is the way in which the capability of that application/workload is consumed by a user or another application/workload to get something done, with wider considerations necessary for those facing human users – especially the general public.

There seems to be a kind of fractal pattern when applying these terms to our system components. At the top level, you may see a service, through which a user interacts with an application, which has an underlying workload that, as end users, we care little about. However, if we zoom in, that application may consist of several underlying applications, each providing its own ‘service’, and workloads which may in fact be decoupled (depending on whether it’s a monolithic or microservices architecture). Zooming in closer, to the code level, may reveal functions, classes and modules, which could be seen as individual workloads to the developer working at that level.

Essentially – it’s all relative!

To wrap up, here are my best attempts at giving us workable definitions:

A workload is a tightly coupled group of resources which run and support an application or capability.

An application is a piece of software which fulfils a specific purpose.

A service is the means of facilitating interactions with a workload in order to utilise its capabilities and fulfil a job to be done.

Ultimately, what matters is that what you create provides business value. It’s very difficult to use terminology in a completely consistent way, but that’s OK as long as users get value, and teams are able to collaborate with shared meaning to generate that value.

Written by Luke Wiltshire, Senior Consulting Engineer. 
