Installing Puppet on Solaris

There are a number of sites with information about installing Puppet on Solaris. Each contains slightly different instructions that get you most of the way there, and with a little finesse it's not hard to follow them and get things working. This post includes yet another set of instructions for installing Puppet and getting things running. Hopefully, with these instructions and the others as reference, your installation goes smoothly.

For those who are unfamiliar with Puppet, it is a tool for automating system administration, built and supported by Reductive Labs. They describe Puppet as ‘a declarative language for expressing system configuration, a client and server for distributing it, and a library for realizing the configuration’. Rather than a system administrator having to follow procedures, run scripts and configure things by hand, Puppet lets you define a configuration once; it then applies that configuration to the specified servers and maintains it over time. Puppet can be downloaded for many of the most popular operating systems, and there is a download page with links to some installation instructions.

Installation on Solaris

1) To make installation more automated, install the Solaris package pkg-get. This tool simplifies getting the latest version of packages from a known site. A copy can be found at Blastwave.

Download http://www.blastwave.org/pkg_get.pkg to /tmp.
Make sure the installation is done with root privilege (su to root).
Run the following command from the /tmp directory:

# pkgadd -d pkg_get.pkg

The package can also be added using the following command:

# pkgadd -d http://www.opencsw.org/pkg_get.pkg
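
To confirm the package registered, you can query the Solaris package database. CSWpkgget is the package name Blastwave/OpenCSW typically uses; adjust if pkginfo shows otherwise:

# pkginfo -l CSWpkgget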

2) Verify that the pkg-get configuration file points at a download site for your region (in this example, the U.S.). Change the default download site in the configuration file /opt/csw/etc/pkg-get.conf to:

url=http://www.ibiblio.org/pub/packages/solaris/opencsw/stable

or

url=http://www.ibiblio.org/pub/packages/solaris/opencsw/current

3) Add the new directories to your PATH; pkg-get, wget and gpg are installed in /opt/csw/bin. (Add this to your shell profile if you want it to persist across sessions.)

# export PATH=/opt/csw/bin:/opt/csw/sbin:/usr/local/bin:$PATH

4) Install the complete wget package. wget is a GNU tool for downloading files from the web, and it is very useful for automating installs and software updates. pkg-get uses it to fetch packages.

# pkg-get -i wget

Note:

If you haven't installed the entire Solaris OS, pkg-get may fail to install wget with the error:

"no working version of wget found, in PATH"

This is probably due to the missing SUNWwgetr and SUNWwgetu packages. Install them by inserting a Solaris installation DVD and mounting it to /media/xxxx.

Then, from the directory on the disc containing the packages, install them:

# pkgadd -d . SUNWwgetr
# pkgadd -d . SUNWwgetu

5) Configure pkg-get to support automation.

# cp -p /var/pkg-get/admin-fullauto /var/pkg-get/admin

6) Install gnupg and an md5 utility so that Blastwave packages can be verified for security.

# pkg-get -i gnupg textutils

You may also need to set LD_LIBRARY_PATH to /usr/sfw/lib so that needed libraries are found.
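
For example:

# export LD_LIBRARY_PATH=/usr/sfw/lib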

7) Copy the Blastwave PGP public key to the local host.

# wget --output-document=pgp.key http://www.blastwave.org/mirrors.html

8) Import the PGP key.

# gpg --import pgp.key
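
You can confirm the key was imported:

# gpg --list-keys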

9) Verify that the following two lines in /opt/csw/etc/pkg-get.conf are COMMENTED OUT.

#use_gpg=false
#use_md5=false

10) Puppet is built with Ruby. Install the Ruby software (CSWruby) from Blastwave.

# pkg-get -i ruby

11) Install the Ruby Gems software (CSWrubygems) from Blastwave.

# pkg-get -i rubygems
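
Both should now be available from /opt/csw/bin; a quick sanity check (version output will vary):

# ruby -v
# gem -v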

12) Update RubyGems to the latest version and install the gems used by Puppet:

# gem update --system

# gem install facter

# gem install puppet --version '0.24.7'

or the current version. The gem update command can also be used to update the software later:

# gem update puppet
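
To verify the installs, both tools should now run from the command line (facter with no arguments prints the facts it has gathered about the host):

# facter
# puppetd --version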

13) Create the puppet user and group:

Info to add in /etc/passwd: puppet:x:35001:35001:puppet user:/home/puppet:/bin/sh
Info to add in /etc/shadow: puppet:LK:::::::
Info to add in /etc/group: puppet::35001:
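
Alternatively, the standard Solaris commands create the same entries (same UID and GID of 35001 as above):

# groupadd -g 35001 puppet
# useradd -u 35001 -g puppet -d /home/puppet -s /bin/sh -c "puppet user" puppet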

14) Create the following core directories and set the permissions:

# mkdir -p /sysprov/dist/apps /sysprov/runtime/puppet/prod/puppet/master
# chown -R puppet:puppet /sysprov/dist /sysprov/runtime

15) Add Puppet configuration definitions in /etc/puppet/puppet.conf. The initial content, using your own puppetmaster hostname, should be:

[puppetd]
server = myserver.mycompany.com
report = true
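
On the master, the same file can also carry a [puppetmasterd] section. A minimal, illustrative example that keeps manual certificate signing (matching the validation steps below):

[puppetmasterd]
autosign = false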

16) Repeat this process on each server that will run Puppet. At least two should be set up: one as the Puppet master server, the other as a Puppet client server that will be managed.

Validating the Installation and Configuring Secure Connections

To verify that the Puppet installation is working as expected, pick a single client to use as a testbed. With Puppet installed on that machine, run a single client against the central server to verify that everything is working correctly.

Start the Puppet master daemon on the server named in the puppet.conf files:

# puppetmasterd --debug

Start the first client in verbose mode, with the --waitforcert flag enabled. The default server name for puppetd is puppet; use the --server flag to name the server actually running puppetmasterd. Later, the server hostname can be set in the configuration file, as in step 15.

# puppetd --server myserver.mycompany.com --waitforcert 60 --test

Adding the --test flag causes puppetd to stay in the foreground, print extra output, run only once and then exit, and exit immediately if the remote configuration fails to compile (by default, puppetd will use a cached configuration if there is a problem with the remote manifests).
Running the client should produce a message like:

info: Requesting certificate
warning: peer certificate won't be verified in this SSL session
notice: Did not receive certificate

This message will repeat every 60 seconds with the above command. This is normal, since your server is not initially set up to auto-sign certificates as a security precaution. On your server running puppetmasterd, list the waiting certificates:

# puppetca --list

You should see the name of the test client. Now go ahead and sign its certificate, using the client hostname exactly as listed:

# puppetca --sign myclient.mycompany.com

The test client should receive its certificate from the server, receive its configuration, apply it locally, and exit normally.
By default, puppetd runs with a waitforcert of five minutes; set the value to the desired number of seconds or to 0 to disable it entirely.
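
Once the certificate is signed and the test run succeeds, the client can be started in its normal daemonized mode; with the server defined in puppet.conf (step 15), no flags are needed:

# puppetd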

Having gotten this far, you now have Puppet installed with a base initial configuration and secure connections defined between a Puppet master server and one Puppet client server. At this point you can start defining manifests for desired server configurations.

There are various sample recipes and manifests to start working with. Viewing and editing some of these is a good place to start learning how to create configuration definitions. If there is interest, I can share samples as well if I have one that may be useful for your needs.
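
To give a feel for the language, here is a minimal, illustrative manifest; the file path, class name and resource are assumptions to adapt for your site. Placed in /etc/puppet/manifests/site.pp on the master, it would manage /etc/motd on every client:

# /etc/puppet/manifests/site.pp -- illustrative example only
class baseline {
    # Keep a consistent message-of-the-day file on all managed hosts
    file { "/etc/motd":
        owner   => "root",
        group   => "sys",
        mode    => 644,
        content => "This host is managed by Puppet\n",
    }
}

# Applied to any node without a more specific definition
node default {
    include baseline
}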

US Government Cites Cloud Computing as Part of Change

Typically ‘Change in Government’ is an oxymoron. It has been a mantra in the U.S. for the past several months. I'm all for changing things for the benefit of everyone. I'm positive about things becoming better in the next 4-8 years. I would like to be part of the positive changes. But claiming that Cloud Computing will be a positive change in how people work with the government is just plain marketing hype and jumping on the technology bandwagon.

This is just another example of a perceived (marketed) new technology that will solve our problems. Don't hold your breath; things don't change overnight. Cloud Computing is not a panacea for computing problems. It is an architectural perspective that can be leveraged for specific computing needs. There are a number of good application implementations using this architectural approach. However, it's not appropriate to force everything into this box.

In the case of the government and many corporations, existing applications can't be easily ported over to Cloud Computing. Application software needs to be re-engineered or built from scratch to take best advantage of the cloud model. There are small pockets of innovation in government and large industry, but they are typically sequestered off in a corner, have a hard time making headway, and get quickly surpassed by more agile small companies. This has been the scenario with almost all new technologies and ideas.

Will the government's use of Cloud Computing introduce change? Hmm, maybe. Certainly not, though, if there is no substance behind the marketing hype. And certainly not if they try implementing this from within the organization rather than using innovative startups. For anyone else considering Cloud Computing, make sure you understand what it is and why you want to use it. Then make sure you and others will benefit from your use of it. Otherwise you're just spreading the hype and jumping on the bandwagon.

Moving Away from a Statically Defined Distributed Environment

Most distributed computing environments consist of a set of known devices in well-known configurations. The devices are usually cataloged in a database by a person. If the state of the environment changes because a server is removed, replaced or taken offline, the database must be updated. I refer to this kind of environment as statically defined, and it is inflexible.

This is a feasible way of accounting for a small number of machines in a network. However, as data centers grow to hundreds or thousands of servers, and as all kinds of computing devices begin to come and go unpredictably, keeping track of changes must become dynamic.

Assumptions in software architecture are typically made with the expectation that there will always be access to a desired server at a known location. To handle a worst-case scenario, it's typical to configure clusters of servers in case one fails, so that the software architect can ignore failures. This only marginally protects against failures. More realistic designs must account for the fact that almost anything can fail at any given moment. The architect's philosophy should be that all things will fail and all failures should be handled.

Consider an architecture for a system that is hard to shut down, rather than one that merely handles a few failure scenarios. One such architecture is that of the peer-to-peer (p2p) file-sharing systems running across the internet. From the perspective of any client, the system is always running and available. As long as the client has access to the internet, accessing shared files is almost always possible.

Core to p2p architecture is a network overlay that uses distributed hash table (DHT) algorithms to manage mappings across hosts that dynamically join and leave the internet. Add to this:

  • a mechanism to determine the attributes of a host, such as hardware, OS, storage capacity, etc.,
  • software deployment and installation capabilities at each host,
  • an algorithm to match each service to the host best suited to execute it,
  • monitoring capabilities to ensure services are executing to defined SLAs,

Then you have an architecture that dynamically scales and maintains itself. Assimilator is one of the few systems capable of doing this today.
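
To make the DHT idea concrete, here is a minimal sketch of consistent hashing, the core mapping technique such overlays use. It is illustrative only — real DHTs such as Chord or Kademlia add routing tables, replication and failure detection — and the host and service names are made up:

import bisect
import hashlib

def ring_position(name):
    # Hash a host or service name onto a fixed circular keyspace
    return int(hashlib.sha1(name.encode()).hexdigest(), 16)

class HashRing:
    def __init__(self, hosts=()):
        self._ring = sorted((ring_position(h), h) for h in hosts)

    def add(self, host):
        # A joining host takes over only the keys between its
        # predecessor and itself; all other mappings are untouched
        bisect.insort(self._ring, (ring_position(host), host))

    def remove(self, host):
        self._ring.remove((ring_position(host), host))

    def lookup(self, key):
        # A key is owned by the first host clockwise from its position
        idx = bisect.bisect(self._ring, (ring_position(key),))
        return self._ring[idx % len(self._ring)][1]

ring = HashRing(["hostA", "hostB", "hostC"])
print(ring.lookup("service-42"))  # consistently maps to one host
ring.remove("hostB")              # a host leaves; only its keys move
print(ring.lookup("service-42"))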

Some Cloud Computing vendors claim massive scaling capabilities. This of course assumes the vendor has many thousands of servers and that clients have statically defined their usage of servers in advance. True massive scaling will come with resources that are allocated automatically and managed dynamically, without human intervention.

Perian codecs for QuickTime on Mac OS X

While searching for some codecs QuickTime needed to play avi files ripped from DVDs on my MacBook Pro, I ran across Perian, ‘the swiss-army knife for QuickTime’. Perian includes video and audio codecs for many common formats.

It's simple to use! Download the disk image, open it, double-click the Perian.prefPane icon, and all the components are automatically installed in the correct directories. All the video and audio play properly for me.

Hats off to the Perian Project Team for a great job!