Category Archives: Configuration Management

Make Ansible Nice and Tidy

This article demonstrates simple user administration that follows a few basic rules:

  • data must be separated from code, and must not be scattered across too many places,
  • critical information must be encrypted,
  • data should be grouped by some environment fact, for example: dev, staging and prod,
  • once the system is written down, Ansible skill is no longer required to maintain it.

In the example below, we loop over user entries, passing each user's details and authorized/public key to the modules. Sometimes it is good to keep simple, explicit control over what happens inside the user module/role.

# ansible-galaxy init roles/users --offline
- Role users was created successfully

# cat roles/users/tasks/main.yml
---
# tasks file for users
- name: ensure user exists
  user:
    name: "{{ user.username }}"
    comment: "{{ user.name }}"
    state: "{{ user.state if user_state is defined else 'present' }}"
    password: "{{ user.password }}"
  loop: "{{ users }}"
  loop_control:
    loop_var: user
- name: add authorized key
  authorized_key:
    user: "{{ user.username }}"
    key: "{{ user.publickey }}"
    state: present
  loop: "{{ users }}"
  loop_control:
    loop_var: user
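
Since each loop item carries the password hash, the run output prints it in full (as seen in the play output later, where this option is not applied). Optionally, loop_control's label can keep the hashes out of the logs:

  loop_control:
    loop_var: user
    label: "{{ user.username }}"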

We can use Ansible itself to generate SHA-512 password hashes:

# ansible -m debug -a msg="{{ 'test123'  | password_hash('sha512') }}" localhost
localhost | SUCCESS => {
    "msg": "$6$UnrDMoPPnIDtNT43$nQvXEqvApVTY09clkvrXg/M4B59qpS2yOM18E9luYXiHQUPmis18bpMKiDxNjd7Wl.QWJM3mFm1TxMnhi74M6/"
}
# ansible -m debug -a msg="{{ 'test123'  | password_hash('sha512') }}" localhost
localhost | SUCCESS => {
    "msg": "$6$Xfc.7wW3XdY8.urH$e40tqEmNHUGFFLdchoXui4.kYIidQm6YxztOQxiviWcKLGtIwCmVNLWGQ/YtM5PnPW5J3dHtm4AClB7OHRv6c/"
}

Each run produces a different hash because a random salt is applied; both hashes are valid for the same password. For the key, we can use any existing user public key:

# cat ~/.ssh/id_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDCJLdGtM14KlKlDr5WpapcbQvE4ONEzDclL8MIrdtocrfX+WJye4sx5v9tEOKAvR6ELuCQiETH3fXXAdTVdl1+lBH4c2bEDR2HfPFkLyXNhTCDD4TkuLUUdsaM44JQWq0O91Enc9zJ4kmcihJ1pGagg3LHfK8tvUzlNSCZgTnEFHNZ7Ir1e16B34TBo67FJC2KYhYQdcH4Osgbcp+1ovenG2He4as/uQogMEqAdx3bZpK/jbiRseHKVdEpSaLPOu6YMmRruoujmHvqHXNbioN8STnvSNQDa88LNRjBJLIl2GyuwR6fxlnWPeXwPRC5aTnlwBs2+o3ON9PU3kpRcxBwDH+FXzazaqEh+Y4oOiytsruJP65NaXTeWWO8f3r55+C5xy7ZsWK2YHet8InXnFAbemFQCwjAWWZ7/+d/qpNdrTzXdJFp4IuwYp+hSSxfC2eqykLHEpAX+D+tL3T75oO1ZGVdlfsplFz5CbY1jNadN7QWJOwMNxAJqiDevRt2+NE= root@VM-101932768

The Ansible-recommended structure uses groups and multiple inventories, as laid out below. Per that recommendation, if any user is common across environments, we use a symbolic link to the shared file (see the example after the tree).

# tree environments/
environments/
├── dev
│   ├── group_vars
│   │   ├── all
│   │   │   └── users.yml
│   │   ├── db
│   │   └── web
│   └── hosts
├── prod
└── stage
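
Per the symlink recommendation, sharing the dev users file with stage could look like this (assuming stage mirrors the same group_vars layout):

# mkdir -p environments/stage/group_vars/all
# ln -s ../../../dev/group_vars/all/users.yml environments/stage/group_vars/all/users.yml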

We can group user variables per environment and keep the file encrypted with ansible-vault:

# ansible-vault create environments/dev/group_vars/all/users.yml
# ansible-vault edit environments/dev/group_vars/all/users.yml

---
users:
  - name: Aimee
    username: aimee
    password: $6$UnrDMoPPnIDtNT43$nQvXEqvApVTY09clkvrXg/M4B59qpS2yOM18E9luYXiHQUPmis18bpMKiDxNjd7Wl.QWJM3mFm1TxMnhi74M6/
    publickey: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDCJLdGtM14KlKlDr5WpapcbQvE4ONEzDclL8MIrdtocrfX+WJye4sx5v9tEOKAvR6ELuCQiETH3fXXAdTVdl1+lBH4c2bEDR2HfPFkLyXNhTCDD4TkuLUUdsaM44JQWq0O91Enc9zJ4kmcihJ1pGagg3LHfK8tvUzlNSCZgTnEFHNZ7Ir1e16B34TBo67FJC2KYhYQdcH4Osgbcp+1ovenG2He4as/uQogMEqAdx3bZpK/jbiRseHKVdEpSaLPOu6YMmRruoujmHvqHXNbioN8STnvSNQDa88LNRjBJLIl2GyuwR6fxlnWPeXwPRC5aTnlwBs2+o3ON9PU3kpRcxBwDH+FXzazaqEh+Y4oOiytsruJP65NaXTeWWO8f3r55+C5xy7ZsWK2YHet8InXnFAbemFQCwjAWWZ7/+d/qpNdrTzXdJFp4IuwYp+hSSxfC2eqykLHEpAX+D+tL3T75oO1ZGVdlfsplFz5CbY1jNadN7QWJOwMNxAJqiDevRt2+NE= root@VM-101932768
  - name: Beta
    username: beta
    password: $6$Xfc.7wW3XdY8.urH$e40tqEmNHUGFFLdchoXui4.kYIidQm6YxztOQxiviWcKLGtIwCmVNLWGQ/YtM5PnPW5J3dHtm4AClB7OHRv6c/
    publickey: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDCJLdGtM14KlKlDr5WpapcbQvE4ONEzDclL8MIrdtocrfX+WJye4sx5v9tEOKAvR6ELuCQiETH3fXXAdTVdl1+lBH4c2bEDR2HfPFkLyXNhTCDD4TkuLUUdsaM44JQWq0O91Enc9zJ4kmcihJ1pGagg3LHfK8tvUzlNSCZgTnEFHNZ7Ir1e16B34TBo67FJC2KYhYQdcH4Osgbcp+1ovenG2He4as/uQogMEqAdx3bZpK/jbiRseHKVdEpSaLPOu6YMmRruoujmHvqHXNbioN8STnvSNQDa88LNRjBJLIl2GyuwR6fxlnWPeXwPRC5aTnlwBs2+o3ON9PU3kpRcxBwDH+FXzazaqEh+Y4oOiytsruJP65NaXTeWWO8f3r55+C5xy7ZsWK2YHet8InXnFAbemFQCwjAWWZ7/+d/qpNdrTzXdJFp4IuwYp+hSSxfC2eqykLHEpAX+D+tL3T75oO1ZGVdlfsplFz5CbY1jNadN7QWJOwMNxAJqiDevRt2+NE= root@VM-101932768

Once everything is set up, all of our user variables stay as simple as the file above, and our main playbook stays nice and tidy, as shown below. The advantage is that when a variable changes (a user, in this example), no Ansible skill is needed to re-code anything; basic YAML editing is enough, which reduces human error significantly. It also avoids leaving data scattered everywhere.

# cat mainuser.yml
---
- name: run on all host
  hosts: "*"
  roles:
    - users

We can make the dev environment the default by defining it in ansible.cfg, to avoid running against stage/prod unnecessarily.

[defaults]
inventory = ./environments/dev
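
With dev as the default, a stage or prod run must be requested explicitly with -i, for example:

# ansible-playbook -i environments/prod mainuser.yml --ask-vault-pass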

And finally, we run it:

# ansible all --list-hosts
  hosts (1):
    192.168.2.101

# ansible-playbook mainuser.yml --ask-vault-pass
Vault password:

PLAY [run on all host] *************************************************************************************************************************************************************

TASK [Gathering Facts] *************************************************************************************************************************************************************
ok: [192.168.2.101]

TASK [users : ensure user exists] ***************************************************************************************************************************************************
changed: [192.168.2.101] => (item={'name': 'Aimee', 'username': 'aimee', 'password': '$6$UnrDMoPPnIDtNT43$nQvXEqvApVTY09clkvrXg/M4B59qpS2yOM18E9luYXiHQUPmis18bpMKiDxNjd7Wl.QWJM3mFm1TxMnhi74M6/', 'publickey': 'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDCJLdGtM14KlKlDr5WpapcbQvE4ONEzDclL8MIrdtocrfX+WJye4sx5v9tEOKAvR6ELuCQiETH3fXXAdTVdl1+lBH4c2bEDR2HfPFkLyXNhTCDD4TkuLUUdsaM44JQWq0O91Enc9zJ4kmcihJ1pGagg3LHfK8tvUzlNSCZgTnEFHNZ7Ir1e16B34TBo67FJC2KYhYQdcH4Osgbcp+1ovenG2He4as/uQogMEqAdx3bZpK/jbiRseHKVdEpSaLPOu6YMmRruoujmHvqHXNbioN8STnvSNQDa88LNRjBJLIl2GyuwR6fxlnWPeXwPRC5aTnlwBs2+o3ON9PU3kpRcxBwDH+FXzazaqEh+Y4oOiytsruJP65NaXTeWWO8f3r55+C5xy7ZsWK2YHet8InXnFAbemFQCwjAWWZ7/+d/qpNdrTzXdJFp4IuwYp+hSSxfC2eqykLHEpAX+D+tL3T75oO1ZGVdlfsplFz5CbY1jNadN7QWJOwMNxAJqiDevRt2+NE= root@VM-101932768'})
changed: [192.168.2.101] => (item={'name': 'Beta', 'username': 'beta', 'password': '$6$Xfc.7wW3XdY8.urH$e40tqEmNHUGFFLdchoXui4.kYIidQm6YxztOQxiviWcKLGtIwCmVNLWGQ/YtM5PnPW5J3dHtm4AClB7OHRv6c/', 'publickey': 'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDCJLdGtM14KlKlDr5WpapcbQvE4ONEzDclL8MIrdtocrfX+WJye4sx5v9tEOKAvR6ELuCQiETH3fXXAdTVdl1+lBH4c2bEDR2HfPFkLyXNhTCDD4TkuLUUdsaM44JQWq0O91Enc9zJ4kmcihJ1pGagg3LHfK8tvUzlNSCZgTnEFHNZ7Ir1e16B34TBo67FJC2KYhYQdcH4Osgbcp+1ovenG2He4as/uQogMEqAdx3bZpK/jbiRseHKVdEpSaLPOu6YMmRruoujmHvqHXNbioN8STnvSNQDa88LNRjBJLIl2GyuwR6fxlnWPeXwPRC5aTnlwBs2+o3ON9PU3kpRcxBwDH+FXzazaqEh+Y4oOiytsruJP65NaXTeWWO8f3r55+C5xy7ZsWK2YHet8InXnFAbemFQCwjAWWZ7/+d/qpNdrTzXdJFp4IuwYp+hSSxfC2eqykLHEpAX+D+tL3T75oO1ZGVdlfsplFz5CbY1jNadN7QWJOwMNxAJqiDevRt2+NE= root@VM-101932768'})

TASK [users : add authorized key] **************************************************************************************************************************************************
changed: [192.168.2.101] => (item={'name': 'Aimee', 'username': 'aimee', 'password': '$6$UnrDMoPPnIDtNT43$nQvXEqvApVTY09clkvrXg/M4B59qpS2yOM18E9luYXiHQUPmis18bpMKiDxNjd7Wl.QWJM3mFm1TxMnhi74M6/', 'publickey': 'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDCJLdGtM14KlKlDr5WpapcbQvE4ONEzDclL8MIrdtocrfX+WJye4sx5v9tEOKAvR6ELuCQiETH3fXXAdTVdl1+lBH4c2bEDR2HfPFkLyXNhTCDD4TkuLUUdsaM44JQWq0O91Enc9zJ4kmcihJ1pGagg3LHfK8tvUzlNSCZgTnEFHNZ7Ir1e16B34TBo67FJC2KYhYQdcH4Osgbcp+1ovenG2He4as/uQogMEqAdx3bZpK/jbiRseHKVdEpSaLPOu6YMmRruoujmHvqHXNbioN8STnvSNQDa88LNRjBJLIl2GyuwR6fxlnWPeXwPRC5aTnlwBs2+o3ON9PU3kpRcxBwDH+FXzazaqEh+Y4oOiytsruJP65NaXTeWWO8f3r55+C5xy7ZsWK2YHet8InXnFAbemFQCwjAWWZ7/+d/qpNdrTzXdJFp4IuwYp+hSSxfC2eqykLHEpAX+D+tL3T75oO1ZGVdlfsplFz5CbY1jNadN7QWJOwMNxAJqiDevRt2+NE= root@VM-101932768'})
changed: [192.168.2.101] => (item={'name': 'Beta', 'username': 'beta', 'password': '$6$Xfc.7wW3XdY8.urH$e40tqEmNHUGFFLdchoXui4.kYIidQm6YxztOQxiviWcKLGtIwCmVNLWGQ/YtM5PnPW5J3dHtm4AClB7OHRv6c/', 'publickey': 'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDCJLdGtM14KlKlDr5WpapcbQvE4ONEzDclL8MIrdtocrfX+WJye4sx5v9tEOKAvR6ELuCQiETH3fXXAdTVdl1+lBH4c2bEDR2HfPFkLyXNhTCDD4TkuLUUdsaM44JQWq0O91Enc9zJ4kmcihJ1pGagg3LHfK8tvUzlNSCZgTnEFHNZ7Ir1e16B34TBo67FJC2KYhYQdcH4Osgbcp+1ovenG2He4as/uQogMEqAdx3bZpK/jbiRseHKVdEpSaLPOu6YMmRruoujmHvqHXNbioN8STnvSNQDa88LNRjBJLIl2GyuwR6fxlnWPeXwPRC5aTnlwBs2+o3ON9PU3kpRcxBwDH+FXzazaqEh+Y4oOiytsruJP65NaXTeWWO8f3r55+C5xy7ZsWK2YHet8InXnFAbemFQCwjAWWZ7/+d/qpNdrTzXdJFp4IuwYp+hSSxfC2eqykLHEpAX+D+tL3T75oO1ZGVdlfsplFz5CbY1jNadN7QWJOwMNxAJqiDevRt2+NE= root@VM-101932768'})

PLAY RECAP *************************************************************************************************************************************************************************
192.168.2.101              : ok=3    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

Ansible Hiera

Those familiar with Puppet will know Hiera's flexibility. Below is an example of emulating Hiera in native Ansible (without any third-party product or plugin).

As an example, we have one host in an external inventory, running RedHat.

# cat demo
[web]
192.168.2.101

Below is the playbook. Once it connects to the inventory host, fact gathering returns ansible_facts['distribution'] = 'RedHat', similar to facter in Puppet. Using this O/S type, we can then define all O/S-specific role variables in the "vars/os_RedHat" variable file.

# cat main.yml
---
- name: talk to all hosts just so we can learn about them
  hosts: all
  vars_files:
    - "vars/os_{{ ansible_facts['distribution'] }}"
  roles:
    - sshd
  tasks:
    - name: tell us the variable in main.yml
      debug:
        var: sshd_ssh_packages

Below are the variables defined in the vars/os_RedHat file. By defining O/S-specific variables in the data hierarchy, the role itself stays independent of the O/S or any other hierarchy level.

# cat vars/os_RedHat
# Information for the sshd for RedHat
sshd_package: "sshd"
sshd_ssh_packages:
  - openssh-server
  - openssh

The issue with an Ansible hierarchy is that its precedence order cannot be easily modified the way it can in Puppet. The precedence order of role and host variables, from highest, is:

  • role vars
  • hosts vars
  • role default

Since role vars have the higher precedence, the nicest place to define a variable is in the role defaults, not in role vars. The role default can then be overridden by host variables.

# cat roles/sshd/vars/main.yml
---

# cat roles/sshd/defaults/main.yml
---
sshd_package: "nothing"
sshd_ssh_packages: "nothing"

# cat roles/sshd/tasks/main.yml
---
- name: tell us the variable inside sshd role
  debug:
    var: sshd_ssh_packages

Let’s run the playbook.

# ansible-playbook -i demo main.yml

PLAY [talk to all hosts just so we can learn about them] **********

TASK [Gathering Facts] **********
ok: [192.168.2.101]

TASK [sshd : tell us the variable inside sshd role] **********
ok: [192.168.2.101] => {
    "sshd_ssh_packages": [
        "openssh-server",
        "openssh"
    ]
}

TASK [tell us the variable in main.yml] **********
ok: [192.168.2.101] => {
    "sshd_ssh_packages": [
        "openssh-server",
        "openssh"
    ]
}

PLAY RECAP **********
192.168.2.101              : ok=3    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

If we comment out the host variable file, the role default takes over. This means that when an undefined O/S is encountered, either the host vars or the role default ('nothing' here) can be used to warn about the condition or fail the task before it is processed.

# cat main.yml
---
- name: talk to all hosts just so we can learn about them
  hosts: all
#  vars_files:
#    - "vars/os_{{ ansible_facts['distribution'] }}"
  tasks:
    - name: tell us the variable in main.yml
      debug:
        var: sshd_ssh_packages
  roles:
    - sshd

# ansible-playbook -i demo main.yml

PLAY [talk to all hosts just so we can learn about them] 

TASK [Gathering Facts] 
ok: [192.168.2.101]

TASK [sshd : tell us the variable inside sshd role] 
ok: [192.168.2.101] => {
    "sshd_ssh_packages": "nothing"
}

TASK [tell us the variable in main.yml] 
ok: [192.168.2.101] => {
    "sshd_ssh_packages": "nothing"
}

PLAY RECAP 
192.168.2.101              : ok=3    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

With the ability to separate data and code completely, we can build a data hierarchy similar to Puppet Hiera. Unfortunately, in Ansible we have to follow the fixed precedence order and work from there.
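
One native mitigation for the unknown-O/S case: a vars_files entry may itself be a list, and Ansible loads the first file it finds, so a default file can act as a fallback (vars/os_default is a hypothetical name here):

  vars_files:
    - [ "vars/os_{{ ansible_facts['distribution'] }}", "vars/os_default" ]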

Puppet 3.6

  • Attached version: 3.6.
  • Upgrade all masters first.
  • PuppetDB's /etc/puppetdb/conf.d/config.ini should have the correct logging-config.
  • Upgrade all client packages to 3.6.
  • All upgrades should complete within 15 minutes.

    Once all boxes are upgraded, note that Puppet 3.6 comes with a new package attribute, allow_virtual. The resulting warning can be suppressed from the main manifest (see the sketch after the release-notes link below).

    http://docs.puppetlabs.com/puppet/3.6/reference/release_notes.html#changes-to-rpm-behavior-with-virtual-packages
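
    A minimal sketch of that suppression in site.pp, per the release notes (guarded so that older agents simply skip it):

    if versioncmp($::puppetversion, '3.6.1') >= 0 {
      Package {
        allow_virtual => true,
      }
    }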

    Puppet cloud provisioning

  • VMWare
  • AWS
  • Google Cloud
  • It would be helpful, for example, for:
    • automated deployment and destruction of VMs
    • letting users/customers control the deployment, so no admin person is needed

    Since it is straightforward, the referenced documentation is enough to explain the detail.


    Building Our Own RPMS

    # yum install rpm-build redhat-rpm-config pinentry-gtk.x86_64
    # adduser build
    # sudo su - build
    $ mkdir -p -m 700 ~/.gnupg
    $ gpg-agent --daemon --use-standard-socket --pinentry-program /usr/bin/pinentry-curses

    Do export the environment variables this command prints. If you don't have a key yet:

    $ gpg --gen-key

    On CentOS 6, if you find:

    gpg-agent[1783]: command get_passphrase failed: Operation cancelled
    gpg: cancelled by user

    log in directly to the box as build (do not use sudo) and generate the key again. If asked for random keyboard or mouse input, generate randomness from another shell:

    $ sudo rngd -r /dev/urandom

    Then create the build tree:

    $ mkdir -p ~/rpmbuild/{BUILD,RPMS,SOURCES,SPECS,SRPMS}

    Create a file called .rpmmacros that contains something like below:

    $ cat .rpmmacros
    %_topdir %(echo $HOME)/rpmbuild
    %_signature gpg
    %_gpg_path %(echo $HOME)/.gnupg
    %_gpg_name F77xxxxx
    %_gpgbin /usr/bin/gpg
    
    You can get the F77xxxx from the gpg list key:
    $ gpg --list-keys
    /home/build/.gnupg/pubring.gpg
    ------------------------------
    pub   20172K/F77xxxx 2011-11-05
    uid                  Build System 
    
    When ready, put your rpm source inside SRPMS and install it.
    rpm -Uvh package.src.rpm
    
    Enter SPECS folder and rebuild:
    rpmbuild -bb --sign yourpackage.spec
    
    Then look for the built RPM inside the RPMS folder. To export the public key:
    $ gpg --armor --export
    
    To check your public GPG key:
    www.pgpdump.net
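
    To let clients verify signed packages, export the key to a file, import it into rpm, and check a package (the file names here are illustrative):

    $ gpg --armor --export F77xxxxx > RPM-GPG-KEY-build
    # rpm --import RPM-GPG-KEY-build
    # rpm --checksig ~/rpmbuild/RPMS/x86_64/yourpackage-1.0-1.x86_64.rpm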
    

    Puppet template of a nested array

    module::hosts:
      - ipaddress: 1.1.1.1
        names: one.com
      - ipaddress: 2.2.2.2
        names:
          - two.zero.com
          - two.one.com
          - two.two.com

    ERB template:

    <% @hosts.each do |host| -%>
    <%= host['ipaddress'] %> <% Array(host['names']).each do |val| -%><%= val+' ' %><% end %>
    <% end -%>
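
    With the hiera data above, the rendered result would be along these lines (the Array() wrapper handles the scalar names entry):

    1.1.1.1 one.com
    2.2.2.2 two.zero.com two.one.com two.two.com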
    

    MCollective ActiveMQ TLS CA-Verified using Puppet CA

    PuppetMaster 

    Locate the ssldir and find the existing keys:

    # puppet agent --configprint ssldir
    /var/lib/puppet/ssl

    In this case:

    CA: /var/lib/puppet/ssl/certs/ca.pem
    PUBLIC: /var/lib/puppet/ssl/certs/puppetm.demo.pem
    PRIVATE: /var/lib/puppet/ssl/private_keys/puppetm.demo.pem

    To test that the key pair matches and the certificate verifies against the CA:

    openssl x509 -noout -modulus -in /var/lib/puppet/ssl/certs/puppetm.demo.pem | openssl md5
    (stdin)= d41d8cd98f00b204e9800998ecf8427e
    openssl rsa -noout -modulus -in /var/lib/puppet/ssl/private_keys/puppetm.demo.pem | openssl md5
    (stdin)= d41d8cd98f00b204e9800998ecf8427e
    openssl verify -CAfile /var/lib/puppet/ssl/certs/ca.pem /var/lib/puppet/ssl/certs/puppetm.demo.pem
    /var/lib/puppet/ssl/certs/puppetm.demo: OK
    openssl x509 -in /var/lib/puppet/ssl/certs/puppetm.demo.pem -text -noout
    Validity
    Not Before: Nov 6 01:17:58 2013 GMT
    Not After : Nov 6 01:17:58 2018 GMT

    Since ActiveMQ is a Java-based product, it requires truststores and keystores, the Java way of dealing with keys.
    The documentation mentions that "The truststore is only required for CA-verified TLS. If you are using anonymous TLS, you may skip it." Since we are using CA-verified TLS, we are going to use a truststore.

    ActiveMQ requires a truststore; import the Puppet CA into it:

    keytool -import -alias "MyCA" -file /var/lib/puppet/ssl/certs/ca.pem -keystore truststore.jks
    enter a password and confirm with yes

    You can test:

    keytool -list -keystore truststore.jks

    Results:

    Enter keystore password:
    Keystore type: JKS
    Keystore provider: SUN
    Your keystore contains 1 entry
    myca, Nov 25, 2013, trustedCertEntry,
    Certificate fingerprint (MD5): FC:A5:4E:D3:4D:16:0F:DD:10:12:97:6A:13:8C:EF:98

    Create keystore:

    cat /var/lib/puppet/ssl/private_keys/puppetm.demo.pem /var/lib/puppet/ssl/certs/puppetm.demo.pem > temp.pem
    
    openssl pkcs12 -export -in temp.pem -out activemq.p12 -name puppetm.demo
    enter password
    
    keytool -importkeystore -destkeystore keystore.jks -srckeystore activemq.p12 -srcstoretype PKCS12 -alias puppetm.demo
    Enter destination keystore password:
    Re-enter new password:
    Enter source keystore password:

    Test:

    keytool -list -keystore keystore.jks
    Enter keystore password:
    Keystore type: JKS
    Keystore provider: SUN
    Your keystore contains 1 entry
    puppetm.demo, Nov 25, 2013, PrivateKeyEntry,
    Certificate fingerprint (MD5): 4F:D9:22:D0:AC:B6:71:09:9B:0F:35:83:DB:E9:DF:DA
    
    openssl x509 -in /var/lib/puppet/ssl/certs/puppetm.demo.pem -fingerprint -md5
    MD5 Fingerprint=4F:D9:22:D0:AC:B6:71:09:9B:0F:35:83:DB:E9:DF:DA

    On the middleware/ActiveMQ server, add the stomp+ssl transport connector, plus the keystore and truststore:

    <transportConnectors>
    <transportConnector name="openwire" uri="tcp://0.0.0.0:61616"/>
    <transportConnector name="stomp+nio" uri="stomp+nio://0.0.0.0:61613"/>
    <transportConnector name="stomp+ssl" uri="stomp+ssl://0.0.0.0:61614?needClientAuth=true"/>
    </transportConnectors>
    
    <sslContext>
    <sslContext
    keyStore="/etc/activemq/keystore.jks" keyStorePassword="passwordhere"
    trustStore="/etc/activemq/truststore.jks" trustStorePassword="passwordhere"
    />
    </sslContext>

    Connect each MCollective server to the middleware; use /var/log/mcollective.log to debug it and make sure it stays connected correctly.
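
    For reference, a sketch of the matching server side (/etc/mcollective/server.cfg), assuming the node's keys were generated by its Puppet agent; activemq.demo and node1.demo are placeholders:

    connector = activemq
    plugin.activemq.pool.size = 1
    plugin.activemq.pool.1.host = activemq.demo
    plugin.activemq.pool.1.port = 61614
    plugin.activemq.pool.1.user = mcollective
    plugin.activemq.pool.1.password = passwordhere
    plugin.activemq.pool.1.ssl = true
    plugin.activemq.pool.1.ssl.ca = /var/lib/puppet/ssl/certs/ca.pem
    plugin.activemq.pool.1.ssl.key = /var/lib/puppet/ssl/private_keys/node1.demo.pem
    plugin.activemq.pool.1.ssl.cert = /var/lib/puppet/ssl/certs/node1.demo.pem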

    INSTALL MCOLLECTIVE CLIENT:

    Again, run this on the puppetmaster server so that the client certificate is signed by the PuppetMaster CA. If the client lives on another server, you can then copy these keys over:

    # puppet cert generate userclient
    Notice: userclient has a waiting certificate request
    Notice: Signed certificate request for userclient
    Notice: Removing file Puppet::SSL::CertificateRequest userclient at '/var/lib/puppet/ssl/ca/requests/userclient.pem'
    Notice: Removing file Puppet::SSL::CertificateRequest userclient at '/var/lib/puppet/ssl/certificate_requests/userclient.pem'
    
    # Important: Puppet 3.0 puts the CA file in certs/ca.pem instead of ca/ca_crt.pem,
    # and the user public key is in certs/[user].pem instead of public_keys.pem.
    
    # Using the ssl plugin, the public (non-ssl) cert to use is:
    # /var/lib/puppet/ssl/certs/userclient.pem
    
    mkdir /etc/mcollective/ssl/activemqssl
    cd /etc/mcollective/ssl/activemqssl
    cp /var/lib/puppet/ssl/certs/ca.pem ca.pem
    cp /var/lib/puppet/ssl/certs/userclient.pem userclient-cert.pem
    cp /var/lib/puppet/ssl/public_keys/userclient.pem userclient-public.pem
    cp /var/lib/puppet/ssl/private_keys/userclient.pem userclient-private.pem
    cp /var/lib/puppet/ssl/ca/signed/userclient.pem userclient-signed.pem

    /etc/mcollective/client.cfg

    connector = activemq
    plugin.activemq.pool.size = 1
    plugin.activemq.pool.1.host = localhost
    plugin.activemq.pool.1.user = mcollective
    plugin.activemq.pool.1.password = passwordhere
    
    plugin.activemq.pool.1.ssl = true
    #if ssl=false (default) -> port=61613, else if ssl=true -> port=61614
    #plugin.activemq.pool.1.port = 61613
    plugin.activemq.pool.1.port = 61614
    
    plugin.activemq.pool.1.ssl.ca = /etc/mcollective/ssl/activemqssl/ca.pem
    plugin.activemq.pool.1.ssl.key = /etc/mcollective/ssl/activemqssl/userclient-private.pem
    plugin.activemq.pool.1.ssl.cert = /etc/mcollective/ssl/activemqssl/userclient-cert.pem
    

    You can then test them:

    mco inventory someclient.demo

    Puppeting sudoers

     Method #1 – RedHat style, wheel group

      # The #includedir directive does not work on RHEL prior to 5.5, therefore wheel is still used
      # wheel works on RedHat only, not on other UNIX flavours
      if ($operatingsystem == 'RedHat') {
        augeas { 'sudowheel':
          context => '/files/etc/sudoers', # target file is /etc/sudoers
          changes => [
            # allow wheel users to use sudo
            'set spec[user = "%wheel"]/user %wheel',
            'set spec[user = "%wheel"]/host_group/host ALL',
            'set spec[user = "%wheel"]/host_group/command ALL',
            'set spec[user = "%wheel"]/host_group/command/runas_user ALL',
            'set spec[user = "%wheel"]/host_group/command/tag NOPASSWD',
            ]
         }
      }
    ON EACH USER:
                user { $user:
                  ensure => present,
                  groups => "wheel",
                  password => $userhash[$user]['password'],
                  managehome => true,
                  comment => $userhash[$user]['fullname'],
                  password_min_age => $password_min_age,
                  password_max_age => $password_max_age,
                }
    

     Method #2 – UNIX ways – creating a file inside /etc/sudoers.d

      # on RHEL5, this sudoers.d folder does not exist by default, therefore we require to create the folder
      file { "/etc/sudoers.d":
        ensure  => directory,
        owner   => root,
        group   => root,
        mode    => '0750',
      }
      #using /etc/sudoers.d/svc-system-config-user, UNIX way
      $sudoerusers = hiera_array('users::sudoerusers')
      file { 'svc-system-config-user':
        path    => '/etc/sudoers.d/svc-system-config-user',
        ensure  => file,
        mode    => '0440',
        owner   => 'root',
        group   => 'root',
        content => template('users/svc-system-config-user.erb'),
      }
    The template, svc-system-config-user.erb:
    # NOTE: This is puppet managed
    <% @sudoerusers.each do |val| -%>
    <%= val %> ALL=(ALL) NOPASSWD: ALL
    <% end -%>
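
    For example, with users::sudoerusers resolving to alice and bob (hypothetical values), the rendered file would read:

    # NOTE: This is puppet managed
    alice ALL=(ALL) NOPASSWD: ALL
    bob ALL=(ALL) NOPASSWD: ALL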
    

     Method #3 – Just edit the /etc/sudoers

    ON EACH USER:
              $sudochange1 = "set spec[user = '${user}']/user ${user}"
              $sudochange2 = "set spec[user = '${user}']/host_group/host ALL"
              $sudochange3 = "set spec[user = '${user}']/host_group/command ALL"
              $sudochange4 = "set spec[user = '${user}']/host_group/command/runas_user ALL"
              $sudochange5 = "set spec[user = '${user}']/host_group/command/tag NOPASSWD"
    
              augeas { "sudo${user}":
                context => '/files/etc/sudoers', # target file is /etc/sudoers
                changes => [ $sudochange1,
                    $sudochange2,
                    $sudochange3,
                    $sudochange4,
                    $sudochange5,
                ]
              }
    


    Puppet Hiera

    Hiera brings several benefits:

  • classes can share variables with each other,
  • easy to maintain,
  • enhanced flexibility.

    If you are a beginner who wants the Puppet Enterprise (PE) GUI, unfortunately PE has not utilized hiera fully. I would think a combination of open-source components like these is sufficient:

    • hiera which is built on facts as much as possible
    • puppet-dashboard for monitoring
    • mcollective with SSL

    Once you know how to use it, it is very easy to keep using hiera. Modules do not need massive modification, only the addition of hiera lookup calls, e.g. hiera() to fetch hiera variables.

    Migrating hiera from a previous version only takes a bit of modification in how modules are applied.
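
    For illustration, a minimal fact-driven setup might look like this (the hierarchy and the ntp::servers key are hypothetical):

    # /etc/puppet/hiera.yaml
    ---
    :backends:
      - yaml
    :yaml:
      :datadir: /etc/puppet/hieradata
    :hierarchy:
      - "%{::clientcert}"
      - "%{::osfamily}"
      - common

    # In a manifest, the only change is the lookup call:
    class ntp {
      $servers = hiera('ntp::servers')
    }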