Application deployment and configuration have become easier and easier over the past few years. These days there's a whole host of configuration management and automated deployment tools to choose from, and if you're a fan of the Python way of doing things I'd very much recommend giving SaltStack a try. In this article I'm going to demonstrate a couple of the coolest things you can do with Salt, the sort of things that give you that eureka moment.
Let's dive into an example. Say you have a web application you want to deploy on a cloud provider: a database server, two web servers and a load balancer. With Salt Cloud you can write a cloud map, something like this:
ubuntu:
  - web01:
      grains:
        role: webserver
  - web02:
      grains:
        role: webserver
  - db01:
      grains:
        role: dbserver
  - lb01:
      grains:
        role: loadbalancer
This assumes you've already set up a cloud profile called ubuntu. It gives the machines a hostname and tags them with a role. All you need on top of that is a top file matching these roles to Salt states; then it's just a matter of running Salt Cloud to create the virtual machines and calling state.highstate from Salt to deploy the various roles.
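For reference, a top file matching roles to states might look something like the sketch below. The grain matching is standard Salt top file syntax; the state names (webserver, dbserver, loadbalancer) are placeholders for whatever states you've actually written.

base:
  'role:webserver':
    - match: grain
    - webserver
  'role:dbserver':
    - match: grain
    - dbserver
  'role:loadbalancer':
    - match: grain
    - loadbalancer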
However, there are a few things we need to get a deployment like this working. One of the most awesome things about Salt is that it's a very powerful remote execution engine, and it really comes into its own when you need to configure something based on data that lives on another minion. A great example of where this is useful is in populating IP addresses between hosts. Take our example: the application on the web servers will clearly need to know where to find the database server. However, because we're deploying into the cloud, we don't know in advance what address the database server will get. Luckily Salt's peer system comes to the rescue. First we have to explicitly enable peer communication (it's not on by default because it could potentially be risky). If you put the following in your master config, you're saying 'I give permission for all minions to query each other's IP addresses':
peer:
  .*:
    - network.ip_addrs
Now we can use this information in a state file or a template. Say you have a config file in the form of a Jinja template which Salt deploys to anything with the webserver role, and you want it to contain a line like:
DB_SERVER=10.x.x.x
Then, if you put the following in the template, Salt will fill it in with the first IP address of the first minion it finds that has the dbserver role:
DB_SERVER={{ salt['publish.publish']('G@role:dbserver', 'network.ip_addrs', expr_form='compound').itervalues().next()['ret'][0] }}
This is a fairly basic example. A slightly more complicated one is the load balancer: its config file will need the addresses of all the web servers. Luckily you can do loops in Jinja, so you just iterate over the return of publish.publish, this time targeting 'G@role:webserver' of course. I'll leave this one for you to try yourself.
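If you want a hint, a minimal sketch of such a loop might look like the following, assuming an haproxy-style backend block and the same return shape as the DB_SERVER example above:

{# one backend line per minion with the webserver role #}
{% for minion, data in salt['publish.publish']('G@role:webserver', 'network.ip_addrs', expr_form='compound').items() %}
    server {{ minion }} {{ data['ret'][0] }}:80 check
{% endfor %}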
That's pretty cool, right?! What does it mean? It means we can run Salt Cloud to deploy our map, then run state.highstate against the web servers, at which point they will query the master and ask what the database server's IP address is. The master will then ask the database server what its address is and return it to the web servers. In other words, you're configuring your web servers with live data from the environment.
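You can also try the peer call by hand to see what comes back. Assuming a minion named web01 as in the map above, running something like this from the master should return the database server's IP addresses:

salt 'web01' publish.publish 'G@role:dbserver' network.ip_addrs expr_form='compound'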
With this sort of setup you can go from nothing to a full stack with just the following commands:
salt-cloud -P -m /etc/salt/cloud.map
salt -G 'role:dbserver' state.highstate
salt -G 'role:webserver' state.highstate
salt -G 'role:loadbalancer' state.highstate
We have to call the highstates in order because we want the database server to be installed and started before we start the web servers. To make this a little neater I've contributed some new code to Salt which allows you to put dependencies in your cloud map. This, combined with Salt Cloud's start action, allows us to achieve the same result with just one command. If you want to give this a go, grab the latest version of Salt. Then:
In /etc/salt/cloud put:
start_action: state.highstate
Now change your cloud map to:
ubuntu:
  - web01:
      grains:
        role: webserver
      requires:
        - db01
  - web02:
      grains:
        role: webserver
      requires:
        - db01
  - db01:
      grains:
        role: dbserver
  - lb01:
      grains:
        role: loadbalancer
      requires:
        - web01
        - web02
Finally run:
salt-cloud -P -m /etc/salt/cloud.map
You should find that this time, Salt Cloud deploys all the machines into the cloud and installs their minions in parallel, then runs state.highstate first for the database server, then for the web servers and finally for the load balancer. So we can now deploy and configure a whole web application with one command. It gets deployed in the right order and the configuration can be based on live data from the other minions. Have fun!