Category Archives: Configuration Management

Marionette Collective SSL Setup

# openssl genrsa -out server-private.pem 1024
# openssl rsa -in server-private.pem -out server-public.pem -outform PEM -pubout

MCollective server.cfg:

  securityprovider = ssl
  plugin.ssl_server_private = /etc/mcollective/ssl/server-private.pem
  plugin.ssl_server_public = /etc/mcollective/ssl/server-public.pem
  plugin.ssl_client_cert_dir = /etc/mcollective/ssl/clients/
  plugin.ssl.enforce_ttl = 0
Create the client key pair:
openssl genrsa -out username-private.pem 1024
openssl rsa -in username-private.pem -out username-public.pem -outform PEM -pubout
Save them:
/home/username/.mc/username-private.pem
/home/username/.mc/username-public.pem
Distribute this to all mcollective servers/nodes:
/etc/mcollective/ssl/clients/username-public.pem
MCollective client.cfg
 securityprovider = ssl
 plugin.ssl_server_public = /etc/mcollective/ssl/server-public.pem
 plugin.ssl_client_private = /home/username/.mc/username-private.pem
 plugin.ssl_client_public = /home/username/.mc/username-public.pem
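Before pointing client.cfg at the key files, it can be worth sanity-checking that the private and public halves actually match. A minimal sketch, assuming openssl is available; the temp dir and the username placeholder are illustrative, not part of the original setup:

```shell
# Sketch: generate a key pair the same way as above, then verify the public
# half really derives from the private half before distributing it.
tmp=$(mktemp -d)
openssl genrsa -out "$tmp/username-private.pem" 1024 2>/dev/null
openssl rsa -in "$tmp/username-private.pem" -out "$tmp/username-public.pem" \
  -outform PEM -pubout 2>/dev/null
# Re-derive the public key from the private key and compare to the saved copy
derived=$(openssl rsa -in "$tmp/username-private.pem" -pubout 2>/dev/null)
result="key pair MISMATCH"
if [ "$derived" = "$(cat "$tmp/username-public.pem")" ]; then
  result="key pair OK"
fi
echo "$result"
rm -rf "$tmp"
```

The same comparison works for the server pair; a mismatch usually means one of the files was regenerated without redistributing the other.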

Marionette Collective Plugins

http://projects.puppetlabs.com/projects/mcollective-plugins/wiki

Repo: https://yum.puppetlabs.com/el/6/products/x86_64/

Each of them has:

  • agent, to be installed on each mcollective server/node
  • client, to be installed on the mcollective client only
  • common, to be installed on both the node and the client.

Once the client piece is installed on the mcollective client machine, you can run the plugin.

Examples:
To put puppet to sleep on all nodes:

mco rpc puppet disable message="sleep for now"

To wake them all up again later:

mco rpc puppet enable

To wake up one server:

mco rpc puppet enable -I puppetclient.demo

Run puppet once on all nodes, 3 nodes every 60 seconds:

# mco puppet runonce --batch 3  --batch-sleep 60
/ [ ======================> ] 9 / 125
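To get a feel for how long such a batched run takes, here is a quick back-of-the-envelope in shell. The 125 nodes come from the output above; the formula is my own sketch and only counts the sleeps between batches, not the runs themselves:

```shell
# Estimate the batch-sleep overhead of: mco puppet runonce --batch 3 --batch-sleep 60
nodes=125 batch=3 sleep=60
batches=$(( (nodes + batch - 1) / batch ))   # ceiling division: 42 batches
overhead=$(( (batches - 1) * sleep ))        # sleeping happens between batches
echo "$batches batches, about $(( overhead / 60 )) minutes of batch-sleep alone"
```

So even before any puppet work, this invocation spends roughly 41 minutes sleeping; pick batch size and sleep accordingly.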

Or to keep 5 puppet runs going concurrently:

# mco puppet runall 5

To run once on a specific host:

mco puppet runonce -I puppetclient.demo

Note:

  • Only an enabled node can run puppet. If puppet is disabled, you need to enable it first.
  • When puppet is triggered for a reload or a run-once, it won’t generate a syslog entry.

Puppet summary:

# mco puppet summary

Summary statistics for 125 nodes:
Total resources: ▂▇▄▂▂▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁ min: 150.0 max: 155.0
Out Of Sync resources: ▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁ min: 0.0 max: 0.0
Failed resources: ▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁ min: 0.0 max: 0.0
Changed resources: ▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁ min: 0.0 max: 0.0
Config Retrieval time (seconds): ▇▅▁▁▂▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁ min: 1.1 max: 6.7
Total run-time (seconds): ▄▂▃▇▃▁▂▁▁▁▁▁▁▁▁▁▁▁▁▁ min: 9.4 max: 19.2
Time since last run (seconds): ▂▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▃▅▇▅ min: 1.6k max: 6.0k

mco service mcollective restart -I anode.com

mco service sshd status --np -v -I anode.com

mco service httpd status --np

Marionette Collective using ActiveMQ

ActiveMQ 5.7 and later supports Java 1.7. RabbitMQ is not the default middleware for MCollective, so we will use ActiveMQ (a Java app) as the middleware.

Components:

  • java-1.7.0-openjdk
  • ActiveMQ 5.8.0
  • tanukiwrapper 3.5.9 – ActiveMQ dependency
  • mcollective 2.3.2
  • mcollective-common 2.3.2 – mcollective dependency
  • rubygem-stomp 1.2.2 – mcollective-common dependency
  • mcollective-client 2.3.2
  • mcollective-common 2.3.2 – mcollective-client dependency
  • rubygem-stomp 1.2.2 – mcollective-client dependency

How does it work? Very easy. The mcollective server needs to be installed on each node/puppetclient. Those nodes connect to a middleware, in this case ActiveMQ, over a protocol such as STOMP. An mcollective client then connects to the middleware to talk to each mcollective server.

Installing ActiveMQ:


# yum install java
# yum install activemq
# vi /etc/activemq/activemq.xml and change password
              authenticationUser username="mcollective" password="passwordhere" 
chkconfig activemq on
service activemq restart
Now let’s configure each node; each node needs the mcollective server installed. On puppetclient:
yum install rubygem-stomp
yum install mcollective
Install mcollective-common-2.3.2-
Install mcollective-2.3.2-

Change /etc/mcollective/server.cfg
connector = activemq
plugin.activemq.pool.size = 1
plugin.activemq.pool.1.host = middleware.com
plugin.activemq.pool.1.port = 61613
plugin.activemq.pool.1.user = mcollectiveuser
plugin.activemq.pool.1.password = passwordhere
Don’t forget to allow OUTBOUND traffic on port 61613 (this example uses a ferm rule):
cat mcollective
# Allow mcollective to middleware
chain OUTPUT {
proto ( tcp udp ) dport 61613 ACCEPT;
}
service mcollective restart
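After opening the firewall, you can check from a node whether the middleware’s STOMP port is actually reachable. A small bash sketch; middleware.com stands in for your middleware host, and /dev/tcp is a bash-ism, so this needs bash rather than plain sh:

```shell
# Returns success if a TCP connection to host:port opens within 3 seconds
port_open() {
  timeout 3 bash -c "cat < /dev/null > /dev/tcp/$1/$2" 2>/dev/null
}

if port_open middleware.com 61613; then
  echo "STOMP port reachable"
else
  echo "cannot reach middleware on 61613"
fi
```

If this fails, check the firewall rule above and that ActiveMQ is listening on 61613 on the middleware host.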
Set up the client, somewhere:
# wget http://downloads.puppetlabs.com/mcollective/mcollective-client-2.3.2-1.el6.noarch.rpm
yum -y install rubygem-stomp
rpm -Uvh mcollective-common-2.3.2-1.el6.noarch.rpm
rpm -Uvh mcollective-client-2.3.2-1.el6.noarch.rpm

vi /etc/mcollective/client.cfg
connector = activemq
plugin.activemq.pool.size = 1
plugin.activemq.pool.1.host = middlewareserver.com
plugin.activemq.pool.1.port = 61613
plugin.activemq.pool.1.user = mcollectiveuser
plugin.activemq.pool.1.password = passwordhere
Now try mco ping
$ sudo mco ping
puppetclient.demo                        time=46.72 ms
---- ping statistics ----
1 replies max: 46.72 min: 46.72 avg: 46.72

Develop and debug a customized augeas lens in 5 minutes

# puppet agent --test --debug
Debug: Augeas[mcollective](provider=augeas): Opening augeas with root /, lens path , flags 32
Debug: Augeas[mcollective](provider=augeas): Augeas version 0.9.0 is installed
Debug: Augeas[mcollective](provider=augeas): Unable to optimize files loaded by context path, no glob matches
Debug: Augeas[mcollective](provider=augeas): Will attempt to save and only run if files changed
Debug: Augeas[mcollective](provider=augeas): sending command 'set' with params ["/files/etc/mcollective/server.cfg/plugin.stomp.host", "hosthosthost"]
Debug: Augeas[mcollective](provider=augeas): sending command 'set' with params ["/files/etc/mcollective/server.cfg/plugin.stomp.port", "61613"]
Debug: Augeas[mcollective](provider=augeas): sending command 'set' with params ["/files/etc/mcollective/server.cfg/plugin.stomp.user", "mcollectivemcollective"]
Debug: Augeas[mcollective](provider=augeas): sending command 'set' with params ["/files/etc/mcollective/server.cfg/plugin.stomp.password", "passwordhere"]
Debug: Augeas[mcollective](provider=augeas): Skipping because no files were changed
Debug: Augeas[mcollective](provider=augeas): Closed the augeas connection

Building your own augeas lens from scratch can be very frustrating because of the language. Our main goal is simply to be able to edit an application's config files, so we will start from a working augeas lens rather than from nothing. In this example, I will build an mcollective lens covering both server and client. We will put our lens in /usr/share/augeas/lenses instead of /usr/share/augeas/lenses/dist, to avoid overwriting the default lenses. Since mcollective config files are similar in format to sysctl.conf, we just need to copy that lens:

cp /usr/share/augeas/lenses/dist/sysctl.aug /usr/share/augeas/lenses/mcollective.aug
Modify the file to suit the name and location of your config file. Our mcollective.aug would then look like this:
module MCollective =
  autoload xfm

  let filter = incl "/etc/mcollective/server.cfg"
             . incl "/etc/mcollective/client.cfg"

  let eol = Util.eol
  let indent = Util.indent
  let key_re = /[A-Za-z0-9_.-]+/
  let eq = del /[ \t]*=[ \t]*/ " = "
  let value_re = /[^ \t\n](.*[^ \t\n])?/

  let comment = [ indent . label "#comment" . del /[#;][ \t]*/ "# "
        . store /([^ \t\n].*[^ \t\n]|[^ \t\n])/ . eol ]

  let empty = Util.empty

  let kv = [ indent . key key_re . eq . store value_re . eol ]

  let lns = (empty | comment | kv) *

  let xfm = transform lns filter
Since augtool doesn’t provide enough error logging, we will use augparse to check the lens we built:

# augparse mcollective.aug

Fix any errors. Note: augtool and augparse are shipped in the augeas and augeas-devel packages. Done! You can now distribute this lens over puppet to customize any puppet client application’s config file. However, if you’d rather use an existing lens without having to distribute an aug file, you can do it like this:
augeas { "mcollective":
  context => "/files/etc/mcollective/server.cfg",
  # similar to sysctl.lns
  incl    => "/etc/mcollective/server.cfg",
  lens    => "sysctl.lns",
  changes => [
    ... bla bla bla
  ],
  require => [ Package['mcollective'], ],
  notify  => Service['mcollective'],
}