In my last post I introduced an LDAP module for the Pillar data lookup system that comes with Salt.
This module has a powerful data inheritance model which allows common data to be shared between nodes and overridden where something more specific is necessary.
There are scenarios, however, when it is useful to have a different way of expressing and organising configuration data. In this post I introduce LDAP Pillar Lists: a storage and retrieval model where data matching certain criteria is aggregated (instead of overwritten) and presented as a list.
Using LDAP Pillar Lists, common data can be shared among many hosts, and host-specific data can be added to these common values (instead of being replaced, as they are with LDAP Pillar Key/Values).
A good example of where we might want to use LDAP Pillar Lists is when managing and deploying ssh public keys. I want to be able to define common ssh keys high up my LDAP tree so that someone who needs access to a group of machines only needs to add their key to LDAP once and all servers defined beneath will have that key deployed to them. In addition, where I need to grant an individual access to a particular machine, I can add the individual’s key to the machine’s LDAP record and the keys which get installed on that machine will be the individual’s key AND any common keys.
How does this look in practice? Let’s work through an example where we distribute SSH public keys using Salt and Pillar LDAP Lists.
Check out the original post for how to install and configure Pillar LDAP.
Configuration is as before but with the addition of the 'lists' keyword in the configuration file. This key identifies the LDAP attributes that you wish to be treated as LDAP Pillar Lists. In the example Pillar LDAP config file below I've used 'saltList', which means I need a matching entry for this in my LDAP schema (see next):
ldap: &defaults
  server: localhost
  port: 389
  tls: False
  dn: o=acme,c=gb
  binddn: uid=admin,o=acme,c=gb
  bindpw: sssssh
  attrs: [saltKeyValue]
  lists: [saltList]    # <<<<<< Pillar LDAP Lists
  scope: 1

hosts:
  <<: *defaults
  filter: ou=hosts
  dn: o=customer,o=acme,c=gb

{{ fqdn }}:
  <<: *defaults
  filter: cn={{ fqdn }}
  dn: ou=hosts,o=customer,o=acme,c=gb

search_order:
  - hosts
  - {{ fqdn }}
In the initial release of Pillar LDAP I mentioned the 'saltState' LDAP attribute, which can be seen as a special case of 'saltList'. With 'saltList' there's really no need for 'saltState' any more, as 'saltList' is a superset, so the suggested LDAP schema becomes:
attributetype ( .2.1.1.10.10 NAME 'saltList'
    DESC 'Salt List'
    EQUALITY caseIgnoreMatch
    SUBSTR caseIgnoreSubstringsMatch
    SYNTAX 1.3.6.1.4.1.1466.115.121.1.15 )

attributetype ( .2.1.1.10.11 NAME 'saltKeyValue'
    DESC 'Salt data expressed as a key=value pair'
    EQUALITY caseIgnoreMatch
    SUBSTR caseIgnoreSubstringsMatch
    SYNTAX 1.3.6.1.4.1.1466.115.121.1.15 )

objectclass ( .2.1.2.12 NAME 'saltData'
    DESC 'Salt Data objectclass'
    SUP top AUXILIARY
    MAY ( saltList $ saltKeyValue ) )
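How you load the schema depends on your directory server. Assuming OpenLDAP with a slapd.conf-style configuration (the schema file path here is purely illustrative), it would be something like the following include directive, followed by a restart of slapd:

# in slapd.conf -- path to wherever you saved the schema above
include         /etc/ldap/schema/salt.schema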
With our Schema in place, we can add our ssh public key data to our LDAP directory. Our example data looks something like the following:
o=customer,o=acme,c=gb
    {'saltList': 'ssh.key=ssh-dss AAAAB3.....== admin'}
|__ ou=hosts,o=customer,o=acme,c=gb
    |__ cn=mba1,ou=hosts,o=customer,o=acme,c=gb
        {'saltList': 'ssh.key=ssh-dss BBBBA3.....== kris'}
NB SSH key data has been truncated for readability.
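For reference, the host-specific key above could be added with an LDIF fragment along these lines (a sketch: it assumes the entry doesn't already carry the saltData objectclass, the filename is illustrative, and the key material is abbreviated as before):

dn: cn=mba1,ou=hosts,o=customer,o=acme,c=gb
changetype: modify
add: objectClass
objectClass: saltData
-
add: saltList
saltList: ssh.key=ssh-dss BBBBA3.....== kris

# ldapmodify -x -D "uid=admin,o=acme,c=gb" -W -f mba1-ssh-key.ldif

The common key would be attached in the same way, just against the o=customer,o=acme,c=gb entry instead.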
We can test our data retrieval with the pillar.data command:
# salt 'mba1' pillar.data
{'mba1': {'ssh.key': ['ssh-dss AAAAB3...== admin', 'ssh-dss BBBBA3...== kris']}}
So Pillar has retrieved data from the series of LDAP searches defined in the Pillar config file and aggregated the attributes identified as Pillar LDAP Lists. The result is a dictionary where the key is the name of the particular list and the value is a list of the aggregate values.
Now let’s write our salt state for our authorized_keys file and have it make use of the pillar data:
Our ssh keys state sls file might look something like:

/root/.ssh/authorized_keys:
  file:
    - managed
    - template: jinja
    - source: salt://ssh/authorized_keys.tmpl

With the contents of authorized_keys.tmpl looking like:

# Ssh authorised key file
{% for key in pillar['ssh.key'] %}{{ key }}
{% endfor %}

Finally, we run 'highstate' to see the results:
# sudo salt 'mba1' state.highstate
mba1:
----------
    State: - file
    Name:      /root/.ssh/authorized_keys
    Function:  managed
        Result:    True
        Comment:   File /root/.ssh/authorized_keys updated
        Changes:   diff: New file
Let’s check its contents:
# cat /root/.ssh/authorized_keys
ssh-dss AAAAB3...== admin
ssh-dss BBBBA3...== kris
Yay! I have my common and my node-specific values.
I can use the same kind of pattern for user accounts or virtual hosts, but the most powerful example that presents itself would be the following entry in a top.sls file:

base:
  '*':
{% for state in pillar['states'] %}
    - {{ state }}
{% endfor %}

So much configuration with so little code!
Now I can define the roles (as Pillar LDAP lists with keyword 'states') and the configuration of my servers in an external source, and leave my states completely clean and free of server- or environment-specific data.
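For example, the roles for a host could be stored with the same key=value convention used for the ssh keys above (a sketch; the state names apache and ntp are purely illustrative):

dn: cn=mba1,ou=hosts,o=customer,o=acme,c=gb
changetype: modify
add: saltList
saltList: states=apache
saltList: states=ntp

A common baseline of states could equally be attached further up the tree, at o=customer,o=acme,c=gb, and the two lists would be aggregated for the minion exactly as the ssh keys were.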