Pillar LDAP is a plugin module for the salt pillar system which allows external data (in this case data stored in an LDAP directory) to be incorporated into salt state files.
This post will show you how to install and configure Pillar LDAP and use it with your salt states.
Download the ‘pillar_ldap.py’ module from https://github.com/KrisSaxton/salt-ldap and drop it into the ‘pillar’ directory under the root of the salt python module.
An easy way to find your salt package root is to run:

$ python
Python 2.7.1 (r271:86832, Jun 16 2011, 16:59:05)
[GCC 4.2.1 (Based on Apple Inc. build 5658) (LLVM build 2335.15.00)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import salt
>>> print salt.__file__
/Library/Python/2.7/site-packages/salt/__init__.pyc
>>>
So here I can see that my salt root is ‘/Library/Python/2.7/site-packages/salt’ and so I drop my pillar ldap module into ‘/Library/Python/2.7/site-packages/salt/pillar’
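If you'd rather not open an interactive session, the same check works as a one-liner (this assumes the same Python 2 interpreter that salt is installed into):

$ python -c "import salt; print salt.__file__"
/Library/Python/2.7/site-packages/salt/__init__.pyc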
Let your salt master know to load the Pillar LDAP module, and where to find the module’s configuration file, by adding an ‘ext_pillar’ entry like the following to your salt master config file:
ext_pillar:
  - pillar_ldap: /etc/salt/pillar/plugins/pillar_ldap.yaml
The config file referenced (‘pillar_ldap.yaml’) needs to be populated with a series of LDAP sources and an order in which to search them:
ldap: &defaults
  server: localhost
  port: 389
  tls: False
  dn: o=acme,c=gb
  binddn: uid=admin,o=acme,c=gb
  bindpw: sssssh
  attrs: [saltKeyValue, saltState]
  scope: 1

hosts:
  <<: *defaults
  filter: ou=hosts
  dn: o=customer,o=acme,c=gb

{{ fqdn }}:
  <<: *defaults
  filter: cn={{ fqdn }}
  dn: ou=hosts,o=customer,o=acme,c=gb

search_order:
  - hosts
  - {{ fqdn }}
Essentially whatever is referenced in the ‘search_order’ list will be searched from first to last so for each entry in the ‘search_order’ you need an entry which defines all the LDAP details required to make the search.
Here I’ve used a YAML trick to cut down on the amount of typing I have to do; a generic ‘ldap’ entry is set up and then referenced by subsequent entries (using ‘<<: *defaults’). This gives us a way of inheriting settings from the default ‘ldap’ entry, overriding values as needed.
Also note that this config file is itself a template, allowing you to use grains anywhere you like. In this example I have an LDAP source which will substitute the ‘fqdn’ of the calling minion into the LDAP search term.
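If you want to convince yourself what the merge key and the grain substitution actually expand to, here is a quick standalone sketch. It assumes the jinja2 and PyYAML libraries (which salt itself depends on) and a made-up fqdn of ‘mba1.customer.local’:

# Standalone sketch: render the grain placeholder with jinja2, then load the
# YAML to see what '<<: *defaults' expands to.
import jinja2
import yaml

raw = """
ldap: &defaults
  server: localhost
  port: 389
  attrs: [saltKeyValue, saltState]

{{ fqdn }}:
  <<: *defaults
  filter: cn={{ fqdn }}
  dn: ou=hosts,o=customer,o=acme,c=gb
"""

rendered = jinja2.Template(raw).render(fqdn='mba1.customer.local')
config = yaml.safe_load(rendered)
print(config['mba1.customer.local']['server'])  # localhost (inherited from defaults)
print(config['mba1.customer.local']['filter'])  # cn=mba1.customer.local (grain substituted)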
So how does all this work in practice? The idea is to place LDAP attributes at various points in your LDAP directory which the Pillar LDAP module retrieves through a series of searches and then merges them into the pillar data dictionary so that they are available for referencing within your state files.
The clever bit (or not so clever, depending on whether or not you’ve seen it before; this behaviour is present in both HP Server Automation and R.I.Pienaar’s hiera) is that where repeated instances of the same data are found during the searches, the instance found latest in the search order overrides any earlier instances.
This system of precedence gives you a powerful hierarchical data model to configure more generic values early on in your search order and have them overridden as required, with the most specific (i.e. the last) search generally expected to be keyed on the node itself.
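The merge logic is nothing exotic; conceptually it amounts to updating a single dictionary with each search’s results in search_order, so later (more specific) sources win. A rough illustration in plain Python (not the module’s actual code):

# Rough illustration of last-wins precedence across ordered sources
# (not the pillar_ldap module's actual code).
def merge_sources(results_in_search_order):
    pillar = {}
    for result in results_in_search_order:
        pillar.update(result)  # later sources overwrite earlier ones
    return pillar

# A generic source first, then a node-specific one:
print(merge_sources([
    {'nameserver': '10.0.0.1', 'domain': 'customer.local'},
    {'nameserver': '192.168.0.1'},
]))
# {'nameserver': '192.168.0.1', 'domain': 'customer.local'}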
For those unfamiliar with this way of laying out your configuration data, a worked example is shown at the end of this post.
You can test your pillar config by running the pillar.data function:
salt '*' pillar.data
Further debugging can be done by directly running the ldap salt module on which the pillar module depends:
salt 'ldaphost' ldap.search filter=cn=myhost
See salt ldap module documentation for more info: https://github.com/KrisSaxton/salt-ldap
Pillar LDAP will search for and return any arbitrary LDAP attributes, and will return all attributes if the ‘attrs’ key is missing from the pillar config file. However if you are willing to take the time to modify your LDAP schema you can store and retrieve dedicated salt attributes which are more natural to work with and allow for more concise salt state files.
Hopefully salt will have its own LDAP OID at some point, but in the meantime you can add something like the example shown below to your own schema, provided your LDAP directory is private (modify the OID to suit).
attributetype ( .2.1.1.10.10 NAME 'saltState'
    DESC 'Salt State'
    EQUALITY caseIgnoreMatch
    SUBSTR caseIgnoreSubstringsMatch
    SYNTAX 1.3.6.1.4.1.1466.115.121.1.15 )

attributetype ( .2.1.1.10.11 NAME 'saltKeyValue'
    DESC 'Salt data expressed as a key=value pair'
    EQUALITY caseIgnoreMatch
    SUBSTR caseIgnoreSubstringsMatch
    SYNTAX 1.3.6.1.4.1.1466.115.121.1.15 )

objectclass ( .2.1.2.12 NAME 'saltData'
    DESC 'Salt Data objectclass'
    SUP top AUXILIARY
    MAY ( saltState $ saltKeyValue ) )
Add this to your LDAP directory server schema (e.g. with openldap on most Linux distros, add your schema file to /etc/openldap/schema/extra and restart slapd). Setting up and managing LDAP is beyond the scope of this post.
In this worked example, we will use everything detailed above to manage the content of our /etc/resolv.conf file on a host called ‘mba1’ using LDAP attributes.
1. Let’s add some appropriate LDAP content (excuse the crappy ASCII art, I hope you can make sense of it):
o=customer,o=acme,c=gb
|     {'saltKeyValue': 'nameserver=10.0.0.1',
|      'saltKeyValue': 'domain=customer.local'}
|
|__ ou=hosts,o=customer,o=acme,c=gb
    |     {'saltKeyValue': 'nameserver=172.16.0.1',
    |      'saltKeyValue': 'domain=hosts.customer.local'}
    |
    |__ cn=mba1,ou=hosts,o=customer,o=acme,c=gb
              {'saltKeyValue': 'nameserver=192.168.0.1'}
These LDAP nodes have also had the ‘saltData’ auxiliary class added to them so they can hold ‘saltKeyValue’ attributes.
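For reference, attaching that auxiliary class and an attribute to an existing entry might look something like the sketch below using the python-ldap library (the server, bind credentials, DN and values are just the ones from this example; adjust to suit your directory):

# Sketch: attach the saltData auxiliary class and a saltKeyValue attribute
# to an existing entry using python-ldap (values taken from this example).
import ldap

conn = ldap.initialize('ldap://localhost:389')
conn.simple_bind_s('uid=admin,o=acme,c=gb', 'sssssh')

mods = [
    (ldap.MOD_ADD, 'objectClass', [b'saltData']),
    (ldap.MOD_ADD, 'saltKeyValue', [b'nameserver=192.168.0.1']),
]
conn.modify_s('cn=mba1,ou=hosts,o=customer,o=acme,c=gb', mods)
conn.unbind_s()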
2. Check your attributes are accessible with the salt ldap.search function (NB: for this you will need to either set the LDAP connection options in the minion configuration or pass all the LDAP search options from your pillar_ldap.yaml to the ldap.search module on the command line)
salt 'mba1' ldap.search filter=cn=mba1
{'mba1': {'count': 1, 'results': [['cn=mba1,ou=hosts,o=customer,o=acme,c=gb', {'saltKeyValue': ['nameserver=192.168.0.1']}]], 'time': {'human': '1.7ms', 'raw': '0.00173'}}}
3. Set up our pillar_ldap.yaml config file:

ldap: &defaults
  server: localhost
  port: 389
  tls: False
  dn: o=acme,c=gb
  binddn: uid=admin,o=acme,c=gb
  bindpw: sssssh
  attrs: [saltKeyValue]
  scope: 1

customer:
  <<: *defaults
  filter: ou=customer

search_order:
  - customer
  - hosts
  - {{ fqdn }}

(The ‘hosts’ and ‘{{ fqdn }}’ entries are the same as in the earlier example, so they are not repeated here.)
Restart your salt master.
4. Test with pillar data:
salt 'mba1' pillar.data
{'mba1': {'domain': 'hosts.customer.local', 'nameserver': '192.168.0.1'}}
As you can see, the saltKeyValue attributes are returned as top-level dictionary keys, so they will be accessible within your state files as {{ pillar['domain'] }} and {{ pillar['nameserver'] }}.
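In other words, each ‘saltKeyValue’ string is split at the ‘=’ sign and promoted to a top-level pillar key. Roughly speaking, the behaviour looks like this (an illustration, not the module’s actual code):

# Behavioural sketch: turn key=value attribute strings into pillar keys
# (an illustration, not the pillar_ldap module's actual code).
def keyvalues_to_pillar(salt_key_values):
    pillar = {}
    for pair in salt_key_values:
        key, _, value = pair.partition('=')  # split on the first '='
        pillar[key] = value
    return pillar

print(keyvalues_to_pillar(['nameserver=192.168.0.1', 'domain=hosts.customer.local']))
# {'nameserver': '192.168.0.1', 'domain': 'hosts.customer.local'}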
5. Write our salt state file for resolv.conf:
So my resolver state sls file looks something like:
/etc/resolv.conf:
  file:
    - managed
    - template: jinja
    - source: salt://resolver/resolv.conf.tmpl
With the contents of resolv.conf.tmpl looking like:
#
# resolv.conf
#
nameserver {{ pillar['nameserver'] }}
domain {{ pillar['domain'] }}
6. Run salt highstate (NB run with test=true if you want to see what salt will do without actually doing it)
sudo salt 'mba1' state.highstate
mba1:
----------
    State: - file
    Name: /tmp/resolv.conf
    Function: managed
    Result: True
    Comment: File /tmp/resolv.conf updated
    Changes: diff: New file
Our /etc/resolv.conf now contains:

#
# resolv.conf
#
nameserver 192.168.0.1
domain hosts.customer.local
So as we can see salt uses the ‘nameserver’ value attached to the cn=mba1 entry (as it is the last in the Pillar LDAP search order) and the ‘domain’ value from the ou=hosts entry (as this occurs after the o=customer entry and there is no more specific value on cn=mba1).
7. Delete the nameserver attribute from cn=mba1
To complete the demo, let’s see what happens when we delete the ‘nameserver’ attribute from the cn=mba1 entry:
sudo salt 'mba1' state.highstate test=true
mba1:
----------
    State: - file
    Name: /tmp/resolv.conf
    Function: managed
    Result: None
    Comment: The following values are set to be changed:
    diff:
---
+++
@@ -7,5 +7,5 @@
 #
 # This file is automatically generated.
 #
-nameserver 192.168.0.1
+nameserver 172.16.0.1
 domain hosts.customer.local
So we can see that in the absence of a more specific value attached to cn=mba1, pillar now, in effect, walks up the LDAP search branch and chooses the next value it finds; in this case, the ‘nameserver’ value attached to ou=hosts.
That’s all for now; I’m off for a Twix.