Monthly Archives: October 2013

Develop and debug customized augeas lens in 5 minutes

The end result we are after is a puppet agent run that manages /etc/mcollective/server.cfg through Augeas:

# puppet agent --test --debug
Debug: Augeas[mcollective](provider=augeas): Opening augeas with root /, lens path , flags 32
Debug: Augeas[mcollective](provider=augeas): Augeas version 0.9.0 is installed
Debug: Augeas[mcollective](provider=augeas): Unable to optimize files loaded by context path, no glob matches
Debug: Augeas[mcollective](provider=augeas): Will attempt to save and only run if files changed
Debug: Augeas[mcollective](provider=augeas): sending command 'set' with params ["/files/etc/mcollective/server.cfg/plugin.stomp.host", "hosthosthost"]
Debug: Augeas[mcollective](provider=augeas): sending command 'set' with params ["/files/etc/mcollective/server.cfg/plugin.stomp.port", "61613"]
Debug: Augeas[mcollective](provider=augeas): sending command 'set' with params ["/files/etc/mcollective/server.cfg/plugin.stomp.user", "mcollectivemcollective"]
Debug: Augeas[mcollective](provider=augeas): sending command 'set' with params ["/files/etc/mcollective/server.cfg/plugin.stomp.password", "passwordhere"]
Debug: Augeas[mcollective](provider=augeas): Skipping because no files were changed
Debug: Augeas[mcollective](provider=augeas): Closed the augeas connection

Building your own augeas lens from scratch can be very frustrating because of the language. Our main goal is simply to be able to edit an application's config files, so instead of starting from scratch we will adapt an existing, working lens. In this example, I will build an mcollective lens for both the server and the client. We are going to put our lens in /usr/share/augeas/lenses instead of /usr/share/augeas/lenses/dist to avoid overwriting the default lenses. Since the mcollective config files are similar in format to sysctl.conf, we just need to copy its lens:

cp /usr/share/augeas/lenses/dist/sysctl.aug /usr/share/augeas/lenses/mcollective.aug
Modify the file to suit the name and location of your config file. Our mcollective.aug would then look like this:
module MCollective =
  autoload xfm

  let filter = incl "/etc/mcollective/server.cfg"
             . incl "/etc/mcollective/client.cfg"

  let eol = Util.eol
  let indent = Util.indent
  let key_re = /[A-Za-z0-9_.-]+/
  let eq = del /[ \t]*=[ \t]*/ " = "
  let value_re = /[^ \t\n](.*[^ \t\n])?/

  let comment = [ indent . label "#comment" . del /[#;][ \t]*/ "# "
        . store /([^ \t\n].*[^ \t\n]|[^ \t\n])/ . eol ]

  let empty = Util.empty

  let kv = [ indent . key key_re . eq . store value_re . eol ]

  let lns = (empty | comment | kv) *

  let xfm = transform lns filter
Since augtool doesn't provide enough error logging, we will use augparse to check the built lens:
# augparse mcollective.aug
Fix any errors. Note: augtool and augparse are shipped in the augeas and augeas-devel packages. Done! You can now distribute this lens over Puppet to customize any Puppet client application's config file. However, if you'd rather use an already available lens without having to distribute the .aug file, you can do it like this:
augeas { "mcollective":
  context => "/files/etc/mcollective/server.cfg",
  # similar to sysctl.lns
  incl    => "/etc/mcollective/server.cfg",
  lens    => "sysctl.lns",
  changes => [
    ... bla bla bla
  ],
  require => [ Package['mcollective'], ],
  notify  => Service['mcollective'],
}
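
Either way, you can check the result interactively with augtool before rolling it out. A quick sketch (the -I option adds an extra lens directory to the search path):

# augtool -I /usr/share/augeas/lenses
augtool> print /files/etc/mcollective/server.cfg
augtool> quit

If the lens parses the file, the tree under /files/etc/mcollective/server.cfg is populated; parse failures are recorded under /augeas//error.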

Marionette Collective using RabbitMQ and Stomp Connector

Install Erlang:
# yum -y install erlang

Install RabbitMQ 3.2.0 (http://www.rabbitmq.com/):
# rpm -Uvh http://www.rabbitmq.com/releases/rabbitmq-server/v3.2.0/rabbitmq-server-3.2.0-1.noarch.rpm

Install the AMQP client library (http://www.rabbitmq.com/erlang-client.html):
# cd /usr/lib/rabbitmq/lib/rabbitmq_server-3.2.0/plugins/
Check whether amqp_client-3.2.0.ez and rabbitmq_stomp-3.2.0.ez are already there; they ship with version 3.2.0 (ref: https://www.rabbitmq.com/stomp.html).

Enable the STOMP plugin:
# rabbitmq-plugins enable rabbitmq_stomp
The following plugins have been enabled:
  amqp_client
  rabbitmq_stomp
Plugin configuration has changed. Restart RabbitMQ for changes to take effect.

RabbitMQ config (reference: /usr/share/doc/rabbitmq-server-3.2.0/rabbitmq.config.example):
[root@puppetmaster ferm.rules.d]# cat /etc/rabbitmq/rabbitmq.config
[
  {rabbit_stomp, [{tcp_listeners, [61613]}]}
].

Open port 61613 in the firewall (ferm rules):
[root@puppetmaster ferm.rules.d]# cat rabbitmq
# Allow rabbitmq
chain INPUT {
    proto ( tcp udp ) dport 61613 ACCEPT;
}
chain OUTPUT {
    proto ( tcp udp ) dport 61613 ACCEPT;
}

Restart RabbitMQ:
# service rabbitmq-server restart
Restarting rabbitmq-server: RabbitMQ is not running
SUCCESS
rabbitmq-server.
If there's an error like this:

configuration file not found
check the permissions on your rabbitmq.config:
ls -la  /etc/rabbitmq/
total 16
drwxr-xr-x   2 root root 4096 Nov  1 12:17 .
drwxr-xr-x. 76 root root 4096 Nov  1 12:15 ..
-rw-r-----   1 root root   18 Nov  1 12:15 enabled_plugins
-rw-r-----   1 root root   49 Nov  1 12:16 rabbitmq.config
Add read access:
chmod o+r /etc/rabbitmq/*
You can monitor errors in /var/log/rabbitmq/rabbit@puppetmaster.log. You should also see the port open:
# netstat -nat | grep 61613
tcp 0 0 :::61613 :::* LISTEN
Create a user called mcollective with password passwordhere:
# rabbitmqctl add_user mcollective passwordhere
Creating user "mcollective" ...
...done.
Grant permissions to the mcollective user and remove the guest user:
# rabbitmqctl set_permissions -p / mcollective "^amq.gen-.*" ".*" ".*"
Setting permissions for user "mcollective" in vhost "/" ...
...done.
# rabbitmqctl delete_user guest
Deleting user "guest" ...
...done.
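As an optional sanity check (not part of the original steps), list the users and their permissions to confirm the changes:
# rabbitmqctl list_users
# rabbitmqctl list_permissions -p /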
Now let's configure each client. Each client needs the mcollective server (agent) daemon. On puppetclient:
Find the packages here: http://downloads.puppetlabs.com/mcollective/
wget http://downloads.puppetlabs.com/mcollective/mcollective-common-2.0.0-1.el6.noarch.rpm
wget http://downloads.puppetlabs.com/mcollective/mcollective-2.0.0-1.el6.noarch.rpm
yum -y install rubygem-stomp
rpm -Uvh mcollective-common-2.0.0-1.el6.noarch.rpm
rpm -Uvh mcollective-2.0.0-1.el6.noarch.rpm

vi /etc/mcollective/server.cfg
connector = stomp
plugin.stomp.host = 10.70.0.139
plugin.stomp.port = 61613
plugin.stomp.user = mcollective
plugin.stomp.password = passwordhere

Don't forget to allow the OUTBOUND port 61613 on the client (ferm rules):
cat rabbitmq
# Allow mcollective to rabbitmq
chain OUTPUT {
    proto ( tcp udp ) dport 61613 ACCEPT;
}

service mcollective restart
Monitor the STOMP server log in /var/log/rabbitmq/rabbit@puppetmaster.log:
=INFO REPORT==== 30-Oct-2013::12:02:29 ===
accepting STOMP connection <0.222.0> (10.70.0.140:60076 -> 10.70.0.139:61613)
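On the mcollective node itself you can also confirm the daemon is running and talking STOMP; a quick check, assuming the default logfile location from server.cfg:
# service mcollective status
# grep -i stomp /var/log/mcollective.log | tail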
Set up the client somewhere:
# wget http://downloads.puppetlabs.com/mcollective/mcollective-common-2.0.0-1.el6.noarch.rpm
# wget http://downloads.puppetlabs.com/mcollective/mcollective-client-2.0.0-1.el6.noarch.rpm
yum -y install rubygem-stomp
rpm -Uvh mcollective-common-2.0.0-1.el6.noarch.rpm
rpm -Uvh mcollective-client-2.0.0-1.el6.noarch.rpm

vi /etc/mcollective/client.cfg
connector = stomp
plugin.stomp.host = localhost
plugin.stomp.port = 61613
plugin.stomp.user = mcollective
plugin.stomp.password = passwordhere
Now try mco ping:
$ sudo mco ping
puppetclient.demo                        time=46.72 ms
---- ping statistics ----
1 replies max: 46.72 min: 46.72 avg: 46.72
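Once mco ping works, a couple of other built-in commands are useful for poking at the collective (same client.cfg as above):
$ mco find                          # list the identities of all nodes that respond
$ mco inventory puppetclient.demo   # facts, agents and classes on a single node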

Debian 7

  • at the GRUB menu, press e
  • add init=/bin/bash at the end of the linux /boot/vmlinuz … ro … line
  • Ctrl-X to boot with the modified entry
  • To add user to sudo:
    • login as a user
    • su root
    • adduser username sudo
  • Change IP address: http://www.elfnet.org/2011/10/22/debian-change-ip-address/ (see the sketch below)
  • Add ssh: apt-get install ssh
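
The linked post covers the IP change in detail; as a rough sketch, a static configuration in /etc/network/interfaces looks like this (interface name and addresses are just examples):
# /etc/network/interfaces -- example static configuration
auto eth0
iface eth0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    gateway 192.168.1.1
Apply it with ifdown eth0 && ifup eth0 (or service networking restart).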

    Puppet-dashboard 1.2 on RHEL 6.4

    Reference: http://docs.puppetlabs.com/dashboard/manual/1.2/bootstrapping.html
    yum install puppet-dashboard
    Create a database and a database account, something like this:

    CREATE DATABASE dashboard;
    CREATE USER 'dashboard'@'%' IDENTIFIED BY 'passwordhere';
    GRANT ALL PRIVILEGES ON dashboard.* TO 'dashboard'@'%';
    Allow 32 MB packets, both in my.cnf (requires a restart) and by updating the running variable:
    # Allowing 32MB allows an occasional 17MB row with plenty of spare room
    max_allowed_packet = 32M
    and
    mysql> set global max_allowed_packet = 33554432;
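    To confirm the running value actually changed (a quick verification, not part of the original steps):
    mysql> SHOW GLOBAL VARIABLES LIKE 'max_allowed_packet';
    -- should now report 33554432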
    Set database account properties here:
    # vi /usr/share/puppet-dashboard/config/database.yml
    production:
      database: dashboard
      username: dashboard
      password: passwordhere
      encoding: utf8
      adapter: mysql
    Set up the tables:
    rake RAILS_ENV=production db:migrate
    Create puppet-dashboard log file:
    touch /usr/share/puppet-dashboard/log/production.log
    chmod 0666 /usr/share/puppet-dashboard/log/production.log
    Allow INBOUND port 3000 on the host running puppet-dashboard and OUTBOUND port 3000 from the puppetmaster. On each agent, make sure reporting is turned on:
    # vi /etc/puppet/puppet.conf (on each agent)
    [agent]
    report = true
    On puppetmaster, add [master]:
    # vi /etc/puppet/puppet.conf
    [master]
    reports = store, http
    reporturl = http://127.0.0.1:3000/reports/upload
    Restart puppetmaster:
    service puppetmaster restart
    Start puppet-dashboard:
    cd /usr/share/puppet-dashboard
    sudo -u puppet-dashboard ./script/server -e production
    or
    sudo -u puppet-dashboard ./script/server -e production -d
    Monitor /var/log/messages and /usr/share/puppet-dashboard/log/production.log. After the clients send reports, you need delayed workers to process them. To run 4 delayed workers (best practice is to match the number of CPUs):
    cd /usr/share/puppet-dashboard
    $ sudo -u puppet-dashboard env RAILS_ENV=production script/delayed_job -p dashboard -n 4 -m start
    However, it's not advised to run this manually; if the worker count doesn't match your CPUs, the delayed workers may stop working from time to time. Instead, just use the puppet-dashboard-workers service:
    $ service puppet-dashboard-workers restart
    
    Enjoy your dashboard:
    http://x.x.x.x:3000
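    A quick way to confirm the dashboard is answering before opening a browser (plain curl on the dashboard host; not part of the original steps):
    curl -s -o /dev/null -w "%{http_code}\n" http://127.0.0.1:3000/
    # expect 200 (or a redirect code) once the server is up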
    If the delayed workers are not processing reports or too many tasks are pending, simply kill delayed_job.x_monitor and rerun puppet.

    Cassandra Basic Failover Test

    [root@cassandra1 /]# nodetool status
    Datacenter: 70
    ==============
    Status=Up/Down
    |/ State=Normal/Leaving/Joining/Moving
    -- Address     Load     Tokens Owns (effective) Host ID                              Rack
    UN 10.70.0.136 66.55 KB 256    50.2%            eab68af1-4c3a-448a-b64a-89432abbe13f 0
    DN 10.70.0.137 96.67 KB 1      0.3%             ca11bb3d-a171-49c9-a39b-d1b802c5a9d8 0
    DN 10.70.0.138 80.98 KB 256    49.5%            ab54aa7d-ad0a-4bf7-99e4-2d4465c64706 0
    The data written earlier is still readable:
    [default@unknown] use demodb1;
    Authenticated to keyspace: demodb1
    [default@demodb1] GET users[utf8('bobbyjo')][utf8('full_name')];
    => (name=full_name, value=Robert Jones, timestamp=1382458230649000)
    Elapsed time: 56 msec(s).
    Now let's turn down nodes 1 and 2 (.136 and .137), connect to node 3, and see that you can't get the data that lives on node 1:

    # cassandra-cli -host 10.70.0.138 -port 9160
    [default@unknown] use demodb1;
    Authenticated to keyspace: demodb1
    [default@demodb1] GET users[utf8('bobbyjo')][utf8('full_name')];
    null
    UnavailableException()
    at org.apache.cassandra.thrift.Cassandra$get_result.read(Cassandra.java:6608)
    at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78)
    at org.apache.cassandra.thrift.Cassandra$Client.recv_get(Cassandra.java:556)
    at org.apache.cassandra.thrift.Cassandra$Client.get(Cassandra.java:541)
    at org.apache.cassandra.cli.CliClient.executeGet(CliClient.java:729)
    at org.apache.cassandra.cli.CliClient.executeCLIStatement(CliClient.java:216)
    at org.apache.cassandra.cli.CliMain.processStatementInteractive(CliMain.java:213)
    at org.apache.cassandra.cli.CliMain.main(CliMain.java:339)
    [default@demodb1] GET users[utf8('bobbyjo')][utf8('full_name')];
    null
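    The UnavailableException is expected here: demodb1 was created with replication_factor:1 (see the setup post below), so each row lives on exactly one node and cannot be served while that node is down. As a sketch (not part of the original test), you could raise the replication factor so rows survive a single node failure, then repair:
    [default@unknown] UPDATE KEYSPACE demodb1
    ...     with strategy_options = {replication_factor:2};
    Then, from the shell on each node: nodetool repair demodb1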
    Let’s add data on node 3 (.138):
    SET users['capache']['full_name']='Cassandra Apache';
    SET users['capache']['email']='capache@gmail.com';
    SET users['capache']['state']='HS';
    SET users['capache']['gender']='F';
    SET users['capache']['birth_year']='1970';
    GET users[utf8('capache')][utf8('full_name')];
    => (name=full_name, value=Cassandra Apache, timestamp=1382527239265000)
    Elapsed time: 58 msec(s).
    Turn node 1 (.136) back on:
    # cassandra-cli
    Connected to: "My Cassandra Cluster" on 127.0.0.1/9160
    Welcome to Cassandra CLI version 1.2.10
    [default@unknown] use demodb1;
    Authenticated to keyspace: demodb1
    [default@demodb1] GET users[utf8('bobbyjo')][utf8('full_name')];
    => (name=full_name, value=Robert Jones, timestamp=1382458230649000)
    Elapsed time: 94 msec(s).
    [default@demodb1] GET users[utf8('capache')][utf8('full_name')];
    => (name=full_name, value=Cassandra Apache, timestamp=1382527239265000)
    Elapsed time: 8.11 msec(s).

    Cassandra 1.2.10 on RHEL 6.4

  • Install Java
  • Set up the DataStax repo
  • yum install dsc12
  • Example: 3 nodes: 10.70.0.136 (seed), 10.70.0.137 and 10.70.0.138
    Stop Cassandra:
    $ sudo service cassandra stop
    Clear the data (not necessary for first run):
    $ sudo rm -rf /var/lib/cassandra/*
    vi /etc/cassandra/conf/cassandra.yaml
    cluster_name: 'My Cassandra Cluster'
    num_tokens: 256
    seed_provider:
      - class_name: org.apache.cassandra.locator.SimpleSeedProvider
        parameters:
          - seeds: "10.70.0.136"
    listen_address: 10.70.0.136
    rpc_address: 0.0.0.0
    endpoint_snitch: RackInferringSnitch
    listen_address is the IP address of the node; change it to 10.70.0.137 and 10.70.0.138 for nodes 2 and 3. On each server, start Cassandra one by one:
    service cassandra start
    Check status:
    [root@cassandra1 /]# nodetool status
    Datacenter: 70
    ==============
    Status=Up/Down
    |/ State=Normal/Leaving/Joining/Moving
    -- Address     Load      Tokens Owns  Host ID                              Rack
    UN 10.70.0.137 120.53 KB 1      0.3%  ca11bb3d-a171-49c9-a39b-d1b802c5a9d8 0
    UN 10.70.0.138 48.45 KB  256    49.5% ab54aa7d-ad0a-4bf7-99e4-2d4465c64706 0
    UN 10.70.0.136 33.99 KB  256    50.2% eab68af1-4c3a-448a-b64a-89432abbe13f 0
    To use the database, reference: http://www.datastax.com/docs/0.8/dml/using_cli
    $ cassandra-cli -host localhost -port 9160
    or just:
    # cassandra-cli
    Connected to: "My Cassandra Cluster" on 127.0.0.1/9160
    Welcome to Cassandra CLI version 1.2.10
    
    Type 'help;' or '?' for help.
    Type 'quit;' or 'exit;' to quit.
    
    [default@unknown] CREATE KEYSPACE demodb1
    ... with placement_strategy = 'org.apache.cassandra.locator.SimpleStrategy'
    ... and strategy_options = [{replication_factor:1}];
    WARNING: [{}] strategy_options syntax is deprecated, please use {}
    49b9ea34-770f-3f98-ad2c-bf6b40814ab8
    
    [default@unknown] use demodb1;
    
    CREATE COLUMN FAMILY users
    WITH comparator = UTF8Type
    AND key_validation_class = UTF8Type
    AND column_metadata = [
      {column_name: full_name, validation_class: UTF8Type},
      {column_name: email, validation_class: UTF8Type},
      {column_name: state, validation_class: UTF8Type},
      {column_name: gender, validation_class: UTF8Type},
      {column_name: birth_year, validation_class: LongType}
    ];
    
    SET users['bobbyjo']['full_name']='Robert Jones';
    SET users['bobbyjo']['email']='bobjones@gmail.com';
    SET users['bobbyjo']['state']='TX';
    SET users['bobbyjo']['gender']='M';
    SET users['bobbyjo']['birth_year']='1975';
    
    [default@demodb1] GET users[utf8('bobbyjo')][utf8('full_name')];
    => (name=full_name, value=Robert Jones, timestamp=1382458230649000)
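    As a side note (not in the original walkthrough), nodetool can tell you which node owns a given row key, which is handy for the failover test above:
    # nodetool getendpoints demodb1 users bobbyjo
    This prints the IP address(es) of the replica(s) holding that key.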
    The easiest way to use Cassandra is cqlsh. Example:
    cqlsh> use demodb1;
    cqlsh:demodb1>  select * from users;
    
     key     | birth_year | email              | full_name        | gender | state
    ---------+------------+--------------------+------------------+--------+-------
     capache |       1970 |  capache@gmail.com | Cassandra Apache |      F |    HS
     bobbyjo |       1975 | bobjones@gmail.com |     Robert Jones |      M |    TX

    ASP + IIS – enable parent paths on Plesk

    HTTP 500 - Internal Server Error. If it's your own server, follow this: http://support.microsoft.com/kb/332117. However, if the site is hosted on a control panel such as Plesk (as of Oct 2013):

    • go to Virtual Directories
    • open directory properties
    • tick “Allow to use parent paths”
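    If you administer the IIS server yourself (IIS 7 or later), the same setting can also be flipped from an elevated command prompt; a sketch of the appcmd route, equivalent to the KB article above:
    %windir%\system32\inetsrv\appcmd set config /section:asp /enableParentPaths:true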

    Solaris vs RedHat

    Change the hostname on Solaris:
    svccfg -s system/identity:node listprop config
    svccfg -s system/identity:node setprop config/nodename="new-host-name"
    svccfg -s system/identity:node setprop config/loopback="new-host-name"
    svccfg -s system/identity:node refresh
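    For comparison, on RHEL 6 the same change is roughly (a sketch, assuming the traditional sysconfig layout):
    # /etc/sysconfig/network
    HOSTNAME=new-host-name
    # apply immediately:
    hostname new-host-name
    # and update the matching entry in /etc/hosts if present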

    
    Home folders are located inside /export/home instead of /home.
    Use pkg instead of yum.