Monthly Archives: November 2013

MCollective ActiveMQ TLS CA-Verified using Puppet CA

PuppetMaster 

Locate the SSL directory and find the existing keys:

# puppet agent --configprint ssldir
/var/lib/puppet/ssl

In this case:

CA: /var/lib/puppet/ssl/certs/ca.pem
PUBLIC: /var/lib/puppet/ssl/certs/puppetm.demo.pem
PRIVATE: /var/lib/puppet/ssl/private_keys/puppetm.demo.pem
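Puppet can also print these paths directly, if you prefer not to guess (setting names per Puppet 3.x):

# puppet agent --configprint hostcert,hostprivkey,localcacert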

If you'd like to test them:

openssl x509 -noout -modulus -in /var/lib/puppet/ssl/certs/puppetm.demo.pem | openssl md5
(stdin)= d41d8cd98f00b204e9800998ecf8427e
openssl rsa -noout -modulus -in /var/lib/puppet/ssl/private_keys/puppetm.demo.pem | openssl md5
(stdin)= d41d8cd98f00b204e9800998ecf8427e

The two modulus hashes must match; that confirms the private key belongs to the certificate.

openssl verify -CAfile /var/lib/puppet/ssl/certs/ca.pem /var/lib/puppet/ssl/certs/puppetm.demo.pem
/var/lib/puppet/ssl/certs/puppetm.demo.pem: OK
openssl x509 -in /var/lib/puppet/ssl/certs/puppetm.demo.pem -text -noout
Validity
Not Before: Nov 6 01:17:58 2013 GMT
Not After : Nov 6 01:17:58 2018 GMT

Since ActiveMQ is a Java-based product, it requires truststores and keystores, the Java way of dealing with keys.
The documentation notes that “The truststore is only required for CA-verified TLS. If you are using anonymous TLS, you may skip it.” Since we are using CA-verified TLS, we will create a truststore.

ActiveMQ requires a truststore:

keytool -import -alias "MyCA" -file /var/lib/puppet/ssl/certs/ca.pem -keystore truststore.jks
enter a password and confirm with yes

You can test:

keytool -list -keystore truststore.jks

Results:

Enter keystore password:
Keystore type: JKS
Keystore provider: SUN
Your keystore contains 1 entry
myca, Nov 25, 2013, trustedCertEntry,
Certificate fingerprint (MD5): FC:A5:4E:D3:4D:16:0F:DD:10:12:97:6A:13:8C:EF:98
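To confirm the imported entry really is the Puppet CA, compare its fingerprint with the CA certificate (the same cross-check is done for the keystore below):

openssl x509 -in /var/lib/puppet/ssl/certs/ca.pem -fingerprint -md5 -noout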

Create the keystore:

cat /var/lib/puppet/ssl/private_keys/puppetm.demo.pem /var/lib/puppet/ssl/certs/puppetm.demo.pem > temp.pem

openssl pkcs12 -export -in temp.pem -out activemq.p12 -name puppetm.demo
enter an export password

keytool -importkeystore -destkeystore keystore.jks -srckeystore activemq.p12 -srcstoretype PKCS12 -alias puppetm.demo
Enter destination keystore password:
Re-enter new password:
Enter source keystore password:

Test:

keytool -list -keystore keystore.jks
Enter keystore password:
Keystore type: JKS
Keystore provider: SUN
Your keystore contains 1 entry
puppetm.demo, Nov 25, 2013, PrivateKeyEntry,
Certificate fingerprint (MD5): 4F:D9:22:D0:AC:B6:71:09:9B:0F:35:83:DB:E9:DF:DA

openssl x509 -in /var/lib/puppet/ssl/certs/puppetm.demo.pem -fingerprint -md5
MD5 Fingerprint=4F:D9:22:D0:AC:B6:71:09:9B:0F:35:83:DB:E9:DF:DA

On the middleware/ActiveMQ server, add a stomp+ssl transport connector and point the broker at the keystore and truststore:

<transportConnectors>
  <transportConnector name="openwire" uri="tcp://0.0.0.0:61616"/>
  <transportConnector name="stomp+nio" uri="stomp+nio://0.0.0.0:61613"/>
  <transportConnector name="stomp+ssl" uri="stomp+ssl://0.0.0.0:61614?needClientAuth=true"/>
</transportConnectors>

<sslContext>
  <sslContext
    keyStore="/etc/activemq/keystore.jks" keyStorePassword="passwordhere"
    trustStore="/etc/activemq/truststore.jks" trustStorePassword="passwordhere"
  />
</sslContext>
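After restarting ActiveMQ, a quick way to check that the TLS listener is up and presents the right certificate (a sketch, assuming the broker runs locally):

openssl s_client -connect localhost:61614 \
  -CAfile /var/lib/puppet/ssl/certs/ca.pem \
  -cert /var/lib/puppet/ssl/certs/puppetm.demo.pem \
  -key /var/lib/puppet/ssl/private_keys/puppetm.demo.pem

Look for “Verify return code: 0 (ok)” in the output.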

Connect each MCollective server to the middleware; use /var/log/mcollective.log to debug and make sure each node stays connected correctly. A sketch of the server-side settings follows.
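Each node's /etc/mcollective/server.cfg needs the matching activemq SSL settings; a minimal sketch, assuming the broker runs on puppetm.demo and the node's Puppet certname is puppetclient.demo (both hypothetical here), with the node's keys copied the same way as the client keys below:

connector = activemq
plugin.activemq.pool.size = 1
plugin.activemq.pool.1.host = puppetm.demo
plugin.activemq.pool.1.port = 61614
plugin.activemq.pool.1.user = mcollective
plugin.activemq.pool.1.password = passwordhere
plugin.activemq.pool.1.ssl = true
plugin.activemq.pool.1.ssl.ca = /etc/mcollective/ssl/activemqssl/ca.pem
plugin.activemq.pool.1.ssl.cert = /etc/mcollective/ssl/activemqssl/puppetclient.demo-cert.pem
plugin.activemq.pool.1.ssl.key = /etc/mcollective/ssl/activemqssl/puppetclient.demo-private.pem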

INSTALL MCOLLECTIVE CLIENT:

Again, run this on the PuppetMaster server so the certificate is signed by the PuppetMaster CA. If the client lives on another server, you can copy the resulting keys over:

# puppet cert generate userclient
Notice: userclient has a waiting certificate request
Notice: Signed certificate request for userclient
Notice: Removing file Puppet::SSL::CertificateRequest userclient at '/var/lib/puppet/ssl/ca/requests/userclient.pem'
Notice: Removing file Puppet::SSL::CertificateRequest userclient at '/var/lib/puppet/ssl/certificate_requests/userclient.pem'

# Important: Puppet 3.0 puts the CA file in certs/ca.pem instead of ca/ca_crt.pem;
# likewise, the user's signed public certificate is in certs/[user].pem instead of public_keys/[user].pem.
# When using the ssl plugin, the public certificate to use is /var/lib/puppet/ssl/certs/userclient.pem.

mkdir /etc/mcollective/ssl/activemqssl
cd /etc/mcollective/ssl/activemqssl
cp /var/lib/puppet/ssl/certs/ca.pem ca.pem
cp /var/lib/puppet/ssl/certs/userclient.pem userclient-cert.pem
cp /var/lib/puppet/ssl/public_keys/userclient.pem userclient-public.pem
cp /var/lib/puppet/ssl/private_keys/userclient.pem userclient-private.pem
cp /var/lib/puppet/ssl/ca/signed/userclient.pem userclient-signed.pem

/etc/mcollective/client.cfg

connector = activemq
plugin.activemq.pool.size = 1
plugin.activemq.pool.1.host = localhost
plugin.activemq.pool.1.user = mcollective
plugin.activemq.pool.1.password = passwordhere

plugin.activemq.pool.1.ssl = true
#if ssl = false (the default), use port 61613; if ssl = true, use port 61614
#plugin.activemq.pool.1.port = 61613
plugin.activemq.pool.1.port = 61614

plugin.activemq.pool.1.ssl.ca = /etc/mcollective/ssl/activemqssl/ca.pem
plugin.activemq.pool.1.ssl.key = /etc/mcollective/ssl/activemqssl/userclient-private.pem
plugin.activemq.pool.1.ssl.cert = /etc/mcollective/ssl/activemqssl/userclient-cert.pem

You can then test it: mco inventory someclient.demo

Puppeting sudoers

 Method #1 – RedHat style, wheel group

  #includedir does not work on RHEL prior to 5.5, therefore wheel is still used
  #wheel only works on RedHat, not on other UNIX variants
  if ($operatingsystem == 'RedHat') {
    augeas { 'sudowheel':
      context => '/files/etc/sudoers', # target file is /etc/sudoers
      changes => [
        # allow wheel users to use sudo
        'set spec[user = "%wheel"]/user %wheel',
        'set spec[user = "%wheel"]/host_group/host ALL',
        'set spec[user = "%wheel"]/host_group/command ALL',
        'set spec[user = "%wheel"]/host_group/command/runas_user ALL',
        'set spec[user = "%wheel"]/host_group/command/tag NOPASSWD',
        ]
     }
  }
ON EACH USER:
            user { $user:
              ensure => present,
              groups => "wheel",
              password => $userhash[$user]['password'],
              managehome => true,
              comment => $userhash[$user]['fullname'],
              password_min_age => $password_min_age,
              password_max_age => $password_max_age,
            }
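The user resource above assumes a $userhash data structure; a minimal sketch of the matching Hiera YAML (the user alice and all values are hypothetical):

users::userhash:
  alice:
    password: '$6$salt$hashedpassword'
    fullname: 'Alice Example'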

 Method #2 – UNIX way – creating a file inside /etc/sudoers.d

  # on RHEL5 the sudoers.d folder does not exist by default, so we need to create it
  file { "/etc/sudoers.d":
    ensure  => directory,
    owner   => root,
    group   => root,
    mode    => 0750,
  }
  #using /etc/sudoers.d/svc-system-config-user, UNIX way
  $sudoerusers = hiera_array('users::sudoerusers')
  file { 'svc-system-config-user':
    path    => '/etc/sudoers.d/svc-system-config-user',
    ensure  => file,
    mode    => 0440,
    owner   => 'root',
    group   => 'root',
    content => template('users/svc-system-config-user.erb'),
  }
The template, svc-system-config-user.erb:
# NOTE: This is puppet managed
<% @sudoerusers.each do |val| -%>
<%= val %> ALL=(ALL) NOPASSWD: ALL
<% end -%>
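For example, if Hiera returns sudoerusers: ['alice', 'bob'] (hypothetical names), the rendered file is:

# NOTE: This is puppet managed
alice ALL=(ALL) NOPASSWD: ALL
bob ALL=(ALL) NOPASSWD: ALL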

 Method #3 – Just edit the /etc/sudoers

ON EACH USER:
          $sudochange1 = "set spec[user = '${user}']/user ${user}"
          $sudochange2 = "set spec[user = '${user}']/host_group/host ALL"
          $sudochange3 = "set spec[user = '${user}']/host_group/command ALL"
          $sudochange4 = "set spec[user = '${user}']/host_group/command/runas_user ALL"
          $sudochange5 = "set spec[user = '${user}']/host_group/command/tag NOPASSWD"

          augeas { "sudo${user}":
            context => '/files/etc/sudoers', # target file is /etc/sudoers
            changes => [ $sudochange1,
                $sudochange2,
                $sudochange3,
                $sudochange4,
                $sudochange5,
            ]
          }
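For a user alice (hypothetical), the augeas changes above produce roughly this /etc/sudoers entry:

alice ALL = (ALL) NOPASSWD: ALL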


Puppet Hiera

  Benefits of Hiera:

  • classes can share variables with each other,
  • it is easy to maintain,
  • it enhances flexibility.
  • If you are a beginner who wants the Puppet Enterprise (PE) GUI, unfortunately PE has not yet fully utilized Hiera. I would think this combination of open-source tools is sufficient:

    • Hiera, built on facts as much as possible
    • puppet-dashboard for monitoring
    • MCollective with SSL

    Once you know how to use it, it's very easy to keep using Hiera. It doesn't require modifying modules massively; mostly it's just adding Hiera calls, e.g. hiera() to fetch Hiera variables, as in the sketch below.
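    A minimal sketch of such a lookup (the ntp class and the ntp::servers key are hypothetical examples):

    class ntp {
      # hiera() looks up 'ntp::servers' in the Hiera hierarchy;
      # the second argument is the default used when no level defines the key
      $servers = hiera('ntp::servers', ['0.pool.ntp.org'])
      notify { "ntp servers: ${servers}": }
    }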

    Migrating to Hiera from a previous version only requires a small modification to how modules are applied.

    Marionette Collective SSL Setup

    Create the server keys:

    # openssl genrsa -out server-private.pem 1024
    # openssl rsa -in server-private.pem -out server-public.pem -outform PEM -pubout

    MCollective server.cfg:

      securityprovider = ssl
      plugin.ssl_server_private = /etc/mcollective/ssl/server-private.pem
      plugin.ssl_server_public = /etc/mcollective/ssl/server-public.pem
      plugin.ssl_client_cert_dir = /etc/mcollective/ssl/clients/
      plugin.ssl.enforce_ttl = 0
    
    Create client certs:
    openssl genrsa -out username-private.pem 1024
    openssl rsa -in username-private.pem -out username-public.pem -outform PEM -pubout
    
    Save them:
    /home/username/.mc/username-private.pem
    /home/username/.mc/username-public.pem
    
    Distribute this to all mcollective servers/nodes:
    /etc/mcollective/ssl/clients/username-public.pem
    
    MCollective client.cfg
     securityprovider = ssl
     plugin.ssl_server_public = /etc/mcollective/ssl/server-public.pem
     plugin.ssl_client_private = /home/username/.mc/username-private.pem
     plugin.ssl_client_public = /home/username/.mc/username-public.pem
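    Once username-public.pem is distributed to the nodes, a quick sanity check from the client (any simple mco command will do):

    $ mco ping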
    

    Marionette Collective Plugins

    http://projects.puppetlabs.com/projects/mcollective-plugins/wiki

    Repo: https://yum.puppetlabs.com/el/6/products/x86_64/

    Each plugin has three packages:

    • agent, to be installed on each MCollective server/node
    • client, to be installed on the MCollective client only
    • common, to be installed on both the nodes and the client (see the install example below).
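    For instance, for the service plugin used below, the install would look something like this (package names follow the Puppet Labs repo convention; check the repo for exact names):

    # on every node
    yum install mcollective-service-agent mcollective-service-common
    # on the mcollective client machine
    yum install mcollective-service-client mcollective-service-common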

    After you install the client package on the MCollective client, you can run the plugin.

    Examples:
    To put all puppet agents to sleep:

    mco rpc puppet disable message="sleep for now"

    If you later need to wake them up again:

    mco rpc puppet enable

    To wake up one server:

    mco rpc puppet enable -I puppetclient.demo

    Run puppet once on all nodes, 3 nodes per 60-second batch:

    # mco puppet runonce --batch 3  --batch-sleep 60
    / [ ======================> ] 9 / 125

    Or to run 5 concurrent puppet runs:

    # mco puppet runall 5

    To run once on a specific host:

    mco puppet runonce -I puppetclient.demo

    Note:

    • Only an enabled server can run puppet. If puppet is disabled, you need to enable it first.
    • If puppet is invoked for a reload or a single run, it won't generate a system log entry.

    Puppet summary:

    # mco puppet summary

    Summary statistics for 125 nodes:
    Total resources: ▂▇▄▂▂▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁ min: 150.0 max: 155.0
    Out Of Sync resources: ▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁ min: 0.0 max: 0.0
    Failed resources: ▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁ min: 0.0 max: 0.0
    Changed resources: ▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁ min: 0.0 max: 0.0
    Config Retrieval time (seconds): ▇▅▁▁▂▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁ min: 1.1 max: 6.7
    Total run-time (seconds): ▄▂▃▇▃▁▂▁▁▁▁▁▁▁▁▁▁▁▁▁ min: 9.4 max: 19.2
    Time since last run (seconds): ▂▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▃▅▇▅ min: 1.6k max: 6.0k
    
    

    mco service mcollective restart -I anode.com

    mco service sshd status --np -v -I anode.com

    mco service httpd status --np

    Marionette Collective using ActiveMQ

    ActiveMQ 5.7 supports Java 1.7. As for the middleware: RabbitMQ is not the default middleware of MCollective, so we will use ActiveMQ (a Java app) as the middleware.

    Components:

    • java-1.7.0-openjdk
    • ActiveMQ 5.8.0
    • tanukiwrapper 3.5.9 – ActiveMQ dependency
    • mcollective 2.3.2
    • mcollective-common 2.3.2 – mcollective dependency
    • rubygem-stomp 1.2.2 – mcollective-common dependency
    • mcollective-client 2.3.2
    • mcollective-common 2.3.2 – mcollective-client dependency
    • rubygem-stomp 1.2.2 – mcollective-client dependency

    How does it work? Very easy. The MCollective server needs to be installed on each node/puppet client. Those nodes connect to a middleware, in this case ActiveMQ, through a protocol such as the ActiveMQ (STOMP) protocol. An MCollective client then connects to the middleware to talk to each MCollective server.

    Installing ActiveMQ:

    
    # yum install java
    # yum install activemq
    # vi /etc/activemq/activemq.xml and change the password:
                  <authenticationUser username="mcollective" password="passwordhere"/>
    chkconfig activemq on
    service activemq restart
    
    Now let's configure each node. Each node needs the MCollective server package. On puppetclient:
    yum install rubygem-stomp
    yum install mcollective
    Install mcollective-common-2.3.2-
    Install mcollective-2.3.2-
    
    Change /etc/mcollective/server.cfg
    connector = activemq
    plugin.activemq.pool.size = 1
    plugin.activemq.pool.1.host = middleware.com
    plugin.activemq.pool.1.port = 61613
    plugin.activemq.pool.1.user = mcollectiveuser
    plugin.activemq.pool.1.password = passwordhere
    
    Don't forget to allow OUTBOUND port 61613 (ferm syntax here; an iptables equivalent is sketched below):
    cat mcollective
    # Allow mcollective to middleware
    chain OUTPUT {
    proto ( tcp udp ) dport 61613 ACCEPT;
    }
    service mcollective restart
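    If you manage the firewall with plain iptables rather than ferm, the equivalent outbound rules would be along these lines (a sketch; iptables needs one rule per protocol):

    iptables -A OUTPUT -p tcp --dport 61613 -j ACCEPT
    iptables -A OUTPUT -p udp --dport 61613 -j ACCEPT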
    
    Set up the client, somewhere:
    # wget http://downloads.puppetlabs.com/mcollective/mcollective-client-2.3.2-1.el6.noarch.rpm
    # wget http://downloads.puppetlabs.com/mcollective/mcollective-common-2.3.2-1.el6.noarch.rpm
    yum -y install rubygem-stomp
    rpm -Uvh mcollective-common-2.3.2-1.el6.noarch.rpm
    rpm -Uvh mcollective-client-2.3.2-1.el6.noarch.rpm
    
    vi /etc/mcollective/client.cfg
    connector = activemq
    plugin.activemq.pool.size = 1
    plugin.activemq.pool.1.host = middlewareserver.com
    plugin.activemq.pool.1.port = 61613
    plugin.activemq.pool.1.user = mcollectiveuser
    plugin.activemq.pool.1.password = passwordhere
    
    Now try mco ping
    $ sudo mco ping
    puppetclient.demo                        time=46.72 ms
    ---- ping statistics ----
    1 replies max: 46.72 min: 46.72 avg: 46.72
    