In my last post I introduced an LDAP module for the Pillar data lookup system that comes with Salt.

This module has a powerful data inheritance model which allows common data to be shared between nodes and overridden where something more specific is necessary.

There are scenarios, however, in which it is useful to have a different way of expressing and organising configuration data.  In this post I introduce LDAP Pillar Lists: a storage and retrieval model where data matching certain criteria is aggregated (instead of overwritten) and presented as a list.

Using LDAP Pillar Lists, common data can be shared among many hosts, and host-specific data can be added to these common values (instead of being replaced, as they are with LDAP Pillar Key/Values).

A good example of where we might want to use LDAP Pillar Lists is when managing and deploying ssh public keys.  I want to be able to define common ssh keys high up my LDAP tree so that someone who needs access to a group of machines only needs to add their key to LDAP once and all servers defined beneath will have that key deployed to them.  In addition, where I need to grant an individual access to a particular machine, I can add the individual’s key to the machine’s LDAP record and the keys which get installed on that machine will be the individual’s key AND any common keys.

How does this look in practice? Let’s work through an example where we distribute SSH public keys using Salt and Pillar LDAP Lists.

Installation

Check out the original post for how to install and configure Pillar LDAP.

Configuration

Configuration is as before but with the addition of the 'lists' keyword in the configuration file.  This key identifies the LDAP attributes that you wish to be treated as LDAP Pillar Lists.  In the example Pillar LDAP config file below I've used 'saltList', which means I need a matching entry for this in my LDAP schema (see Schema, below):

ldap: &defaults
  server: localhost
  port: 389
  tls: False
  dn: o=acme,c=gb
  binddn: uid=admin,o=acme,c=gb
  bindpw: sssssh
  attrs: [saltKeyValue]
  lists: [saltList]          # <-- Pillar LDAP Lists
  scope: 1

hosts:
  <<: *defaults
  filter: ou=hosts
  dn: o=customer,o=acme,c=gb

{{ fqdn }}:
  <<: *defaults
  filter: cn={{ fqdn }}
  dn: ou=hosts,o=customer,o=acme,c=gb

search_order:
  - hosts
  - {{ fqdn }}

Schema

In the initial release of Pillar LDAP, I mentioned the 'saltState' LDAP attribute, which can be seen as a special case of 'saltList'.  With 'saltList' being a superset, there's really no need for 'saltState' any more, so the suggested LDAP schema becomes:

attributetype ( .2.1.1.10.10
    NAME 'saltList'
    DESC 'Salt List'
    EQUALITY caseIgnoreMatch
    SUBSTR caseIgnoreSubstringsMatch
    SYNTAX 1.3.6.1.4.1.1466.115.121.1.15 )

attributetype ( .2.1.1.10.11
    NAME 'saltKeyValue'
    DESC 'Salt data expressed as a key=value pair'
    EQUALITY caseIgnoreMatch
    SUBSTR caseIgnoreSubstringsMatch
    SYNTAX 1.3.6.1.4.1.1466.115.121.1.15 )

objectclass ( .2.1.2.12
    NAME 'saltData'
    DESC 'Salt Data objectclass'
    SUP top AUXILIARY
    MAY ( saltList $ saltKeyValue ) )

LDAP

With our Schema in place, we can add our ssh public key data to our LDAP directory. Our example data looks something like the following:

o=customer,o=acme,c=gb
|     {'saltList': 'ssh.key=ssh-dss AAAAB3...== admin'}
|
|__ ou=hosts,o=customer,o=acme,c=gb
      |
      |__ cn=mba1,ou=hosts,o=customer,o=acme,c=gb
              {'saltList': 'ssh.key=ssh-dss BBBBA3...== kris'}

NB SSH key data has been truncated for readability.
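For reference, data like this could be loaded with ldapmodify, assuming the entries already exist and only need the auxiliary saltData objectclass and a saltList value added (the filename saltlist.ldif is just an example):

dn: o=customer,o=acme,c=gb
changetype: modify
add: objectClass
objectClass: saltData
-
add: saltList
saltList: ssh.key=ssh-dss AAAAB3...== admin

dn: cn=mba1,ou=hosts,o=customer,o=acme,c=gb
changetype: modify
add: objectClass
objectClass: saltData
-
add: saltList
saltList: ssh.key=ssh-dss BBBBA3...== kris

Apply it with something like: ldapmodify -x -D uid=admin,o=acme,c=gb -W -f saltlist.ldif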

We can test our data retrieval with the pillar.data command:

# salt 'mba1' pillar.data
{'mba1': {'ssh.key': ['ssh-dss AAAAB3...== admin',
                      'ssh-dss BBBBA3...== kris']}}

So Pillar has retrieved data from the series of LDAP searches defined in the Pillar config file and aggregated the attributes identified as Pillar LDAP Lists.  The result is a dictionary where the key is the name of the particular list and the value is a list of the aggregate values.
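Conceptually, the aggregation works something like the following (an illustrative Python sketch, not the module's actual code; the function and argument names are mine):

def aggregate_lists(search_results, list_attrs):
    # search_results: LDAP entries returned by the searches, in search_order.
    # list_attrs: the attributes named under 'lists', e.g. ['saltList'].
    pillar = {}
    for entry in search_results:
        for attr in list_attrs:
            for item in entry.get(attr, []):
                # Each value is a 'key=value' string, e.g.
                # 'ssh.key=ssh-dss AAAAB3...== admin'.
                key, _, value = item.partition('=')
                # Append rather than overwrite: this is what makes a
                # list aggregate where a key/value would be replaced.
                pillar.setdefault(key, []).append(value)
    return pillar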

Salt State

Now let's write a Salt state for our authorized_keys file and have it make use of the Pillar data.  Our SSH keys state SLS file might look something like:

/root/.ssh/authorized_keys:
  file:
    - managed
    - template: jinja
    - source: salt://ssh/authorized_keys.tmpl

With the contents of authorized_keys.tmpl looking like (note that the loop places each key on its own line, as the authorized_keys format requires):

# SSH authorised key file
{% for key in pillar['ssh.key'] %}{{ key }}
{% endfor %}
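If a host had no ssh.key values at all, pillar['ssh.key'] would be undefined and the template would fail to render.  A slightly more defensive variant (a small sketch, using the pillar dictionary's get method with a default) would be:

# SSH authorised key file
{% for key in pillar.get('ssh.key', []) %}{{ key }}
{% endfor %}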

Finally, we run 'highstate' to see the results:

# salt 'mba1' state.highstate

mba1:
----------
State: - file
Name: /root/.ssh/authorized_keys
Function: managed
Result: True
Comment: File /root/.ssh/authorized_keys updated
Changes: diff: New file

Let’s check its contents:

# cat /root/.ssh/authorized_keys
ssh-dss AAAAB3...== admin
ssh-dss BBBBA3...== kris

Yay!  I have my common and my node-specific values.

I can use the same kind of pattern for user accounts or virtual hosts, but the most powerful example is the following entry in a top.sls file:

base:
  '*':
    {% for state in pillar['states'] %}- {{ state }}
    {% endfor %}

So much configuration with so little code!

Now I can define the roles (as Pillar LDAP Lists with keyword 'states') and the configuration of my servers in an external source and leave my states completely clean and free of server- or environment-specific data.
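For example, giving a host's LDAP entry the value saltList: states=ssh (a hypothetical role assignment) would put 'ssh' into pillar['states'], and the top file above would render as:

base:
  '*':
    - ssh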
