Getting Rails up and running on Amazon’s EC2

This is just for documentation’s sake, and only applies if you’re running Ubuntu 12.x. If you are running Ubuntu 12.x on an EC2 instance, don’t use Ruby on Rails; it’s a pain to deploy. Amazon’s AMI is a homebrewed version of Linux that comes with Ruby ready, which is a better solution. However, if you want to install rvm on your Ubuntu machine, follow the documentation below:

  1. sudo apt-get update
  2. sudo apt-get install build-essential git-core curl libmysqlclient15-dev nodejs libcurl4-openssl-dev
  3. sudo bash -s stable < <(curl -s https://raw.github.com/wayneeseguin/rvm/master/binscripts/rvm-installer)
  4. umask g+w
  5. source /etc/profile.d/rvm.sh
  6. rvm requirements
  7. sudo apt-get install build-essential openssl libreadline6 libreadline6-dev curl git-core zlib1g zlib1g-dev libssl-dev libyaml-dev libsqlite3-0 libsqlite3-dev sqlite3 libxml2-dev libxslt-dev autoconf libc6-dev ncurses-dev automake libtool bison subversion
  8. sudo chown -R [user]:[user] /usr/local/rvm
  9. rvm install 1.9.3
  10. rvm --default use 1.9.3
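
At this point you can sanity-check the install (a minimal check; exact output varies):

$ source /etc/profile.d/rvm.sh
$ rvm list    # 1.9.3 should be listed and marked as default
$ ruby -v     # should report ruby 1.9.3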

If you are using rails:

  • gem install rails

If you are using passenger (with nginx):

  1. gem install passenger
  2. rvmsudo passenger-install-nginx-module
  3. copy over the /etc/init.d script from the wiki and fit it to your install (a rough skeleton is sketched below)
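
The wiki’s script isn’t reproduced here, but as a hypothetical skeleton (assuming passenger-install-nginx-module used its default prefix of /opt/nginx):

#!/bin/sh
# /etc/init.d/nginx -- hypothetical skeleton; adapt paths to your install
NGINX=/opt/nginx/sbin/nginx
PIDFILE=/opt/nginx/logs/nginx.pid

case "$1" in
  start)   $NGINX ;;
  stop)    kill "$(cat "$PIDFILE")" ;;
  restart) kill "$(cat "$PIDFILE")"; sleep 1; $NGINX ;;
  *)       echo "Usage: $0 {start|stop|restart}"; exit 1 ;;
esac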

Now you think you’re set, right?! Well, if you’re developing apps on that instance, yes, you’re OK. I ran into an error with Capistrano where it refused to use rvm’s Ruby to precompile the assets for production; I found that I had to install ruby1.9.1 to get it to work.

source: http://www.the-tech-tutorial.com/?p=1868

Kerberos authentication under Lion

Firstly, thanks to Roy Long and Scott Gallagher for their presentation at the 2012 PSU Mac Admin Conference.

Also, thanks to Rusty Myers for the CLC Package.

Lion no longer uses MIT Kerberos and has made the switch to Heimdal. This deprecates the krb5authnoverify method and hands Kerberos authentication over to pam.d. To enable Kerberos authentication, a /Library/Preferences/edu.mit.Kerberos or /etc/krb5.conf file is needed. It should follow this format:
[libdefaults]
    default_realm = CC.COLUMBIA.EDU

[realms]
    CC.COLUMBIA.EDU = {
        kdc = kerberos.cc.columbia.edu:88
        kdc = krb2.cc.columbia.edu:88
        admin_server = kerberos.cc.columbia.edu:749
        default_domain = cc.columbia.edu
    }

[domain_realm]
    .cc.columbia.edu = CC.COLUMBIA.EDU
    cc.columbia.edu = CC.COLUMBIA.EDU
    .columbia.edu = CC.COLUMBIA.EDU
    columbia.edu = CC.COLUMBIA.EDU

[logging]
    kdc = FILE:/var/log/krb5kdc/kdc.log
    admin_server = FILE:/var/log/krb5kdc/kadmin.log
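
Before moving on to pam.d, you can verify the file by requesting a ticket by hand (testuser is a placeholder principal):

$ kinit testuser@CC.COLUMBIA.EDU
$ klist   # should show a ticket-granting ticket for CC.COLUMBIA.EDU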

Next we need to make changes to /etc/pam.d/authorization; the edited file should look like this:
# authorization: auth account
auth sufficient pam_krb5.so use_first_pass default_principal use_kcminit
auth optional pam_ntlm.so use_first_pass
auth required pam_opendirectory.so use_first_pass
account required pam_opendirectory.so

This handles authentication at the login screen, but we still need to update /etc/pam.d/screensaver to handle Kerberos authentication:

# screensaver: auth account
auth sufficient pam_krb5.so use_first_pass default_principal use_kcminit
auth required pam_opendirectory.so use_first_pass
account required pam_opendirectory.so
account sufficient pam_self.so
account required pam_group.so no_warn group=admin,wheel fail_safe
account required pam_group.so no_warn deny group=admin,wheel ruser fail_safe

And that’s that. This can also be packaged and deployed to clients via Munki. PSU has a CLC package (though it’ll have to be reworked for Columbia’s environment). The basic steps are:

  1. Set up one client with LDAP
  2. Package the LDAP plist from /Library/Preferences/OpenDirectory/Configurations/LDAPv3
  3. Package the MIT Kerberos file
  4. Package /etc/pam.d/authorization, /etc/pam.d/screensaver
  5. Edit the CLC’s existing preflight to backup copies of the files you’re editing, e.g. MIT Kerberos file, /etc/pam.d/authorization, /etc/pam.d/screensaver, the LDAP plist
  6. Edit the CLC’s existing postflight to add the LDAP server to the machine’s search path and then remove the edit to /etc/authorization.
  7. Profit.
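
A rough sketch of steps 3 and 4 with pkgbuild (available on 10.7; the identifier, version, and staging path are made up):

$ mkdir -p /tmp/kerbauth-root/etc/pam.d /tmp/kerbauth-root/Library/Preferences
$ cp /etc/pam.d/authorization /etc/pam.d/screensaver /tmp/kerbauth-root/etc/pam.d/
$ cp /Library/Preferences/edu.mit.Kerberos /tmp/kerbauth-root/Library/Preferences/
$ pkgbuild --root /tmp/kerbauth-root --identifier edu.columbia.cul.kerbauth --version 1.0 KerberosAuth-1.0.pkg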

Resharing NFS Mounts on OS X Server 10.6

Assume there are two OS X Servers: one with lots of free space, and another that acts as an OD/AFP server. To set up the first server (the one with lots of space) as an NFS share, do the following:

  1. In Server Admin -> Share Points: Add a new share point. Under “Protocol Options” go to NFS and check “Export this item and its contents to client list”.
  2. Add the second OS X Server’s IP to the client list. Set the mapping Root to Root.
  3. In Terminal, enter the command ‘rpcinfo -p’ and make note of the ports.
  4. In Server Admin -> Firewall, make sure all ports from the step above are open.

On the existing AFP server:

  1. In Server Admin -> Share Points: Right-click a volume, select “Mount NFS Share…”, and enter the URL of the NFS share. Now this share will show up in the list.
  2. Click on the share, then on “Protocol Options”, select AFP and check “Share this item using AFP”.
  3. Grant users privileges to the share.

This is useful when users connect to an existing AFP server that is running out of space.
References:
[1] 10.6 Server: How to get NFS disk serving working properly
[2] Resharing NFS Mounts as AFP Share Points

Munki and postinstall scripts

One of the great features of Munki is that it will run postinstall scripts. This is great for creating quick-and-dirty custom pkgs, as well as for customizing vendor-supplied pkgs for any environment. Three such cases in our environment are (1) Adobe Acrobat 9, (2) MS Office 2011, and (3) Firefox 4.0.1.

Adobe Acrobat 9

Acrobat is the enterprise’s standard application for creating/editing PDFs, so it’s important to keep Acrobat up to date and working on client machines. The problem occurs in two places. The first is updates. Before the release of CS5, Acrobat had no real solution for packaging and deploying updates, and this has plagued Acrobat 9. The latest Acrobat update requires you to have all previous updates installed (the reason for this is explained a little later), and it requires you to select which Acrobat installation to upgrade. This can be very troublesome; however, Munki easily solves that problem with update_for and requires keys. The most annoying feature Acrobat has is the self-heal feature, which requires the user to authenticate on first run after a patch install to “fix” the installation. There is documentation on the Munki wiki on how to bypass this:

        <key>postinstall_script</key>
        <string>#!/bin/bash
sed -i .bak 's/\&lt;string\&gt;AcroEFGPro90SelfHeal\.xml\&lt;\/string\&gt;//g' /Applications/Adobe\ Acrobat\ 9\ Pro/Adobe\ Acrobat\ Pro.app/Contents/MacOS/SHInit.xml
sed -i .bak 's/\&lt;string\&gt;AcroEFGDist90SelfHeal\.xml\&lt;\/string\&gt;//g' /Applications/Adobe\ Acrobat\ 9\ Pro/Acrobat\ Distiller.app/Contents/MacOS/SHInit.xml
        </string>

This suppresses the self-heal and the user is happy. However, with this fix, a CORE file in Acrobat’s setup has been altered, and the patching process will break in the future. Going back to a little earlier: the reason Acrobat requires that all previous updates be installed is that it checksums the files it expects; if a patch has been skipped, the checksum on a later patch will fail. The same goes for the SHInit.xml file. The update process doesn’t always checksum the file, but for certain patches it does. Therefore, a preinstall_script is needed to fix the installation before the update starts:

        <key>preinstall_script</key>
        <string>#!/bin/bash
          if [ -f "/Applications/Adobe Acrobat 9 Pro/Adobe Acrobat Pro.app/Contents/MacOS/SHInit.xml.bak" ]
          then
          mv "/Applications/Adobe Acrobat 9 Pro/Adobe Acrobat Pro.app/Contents/MacOS/SHInit.xml.bak" "/Applications/Adobe Acrobat 9 Pro/Adobe Acrobat Pro.app/Contents/MacOS/SHInit.xml"
          fi
        </string>

Now with little effort on the sysadmin’s part, Acrobat has been fixed.
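
For reference, the update chaining mentioned above is expressed in each update’s pkginfo via the update_for and requires keys; a fragment might look like this (the item names are hypothetical):

        <key>update_for</key>
        <array>
            <string>AdobeAcrobatPro9</string>
        </array>
        <key>requires</key>
        <array>
            <string>AdobeAcrobatPro9Update945</string>
        </array>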

MS Office 2011

Munki installs Office very nicely, and in our environment we have a pkg that customizes and registers Office. However, it does so by writing files to the User Template directory; this way Office is registered for every new local user on the machine, but existing users would still need to register and activate Office. To handle existing users, we use the following postinstall_script to copy the files from the User Template directory into their home directories:

        <key>postinstall_script</key>
        <string>#!/bin/sh
# Copy the registration plists from the User Template into every existing
# user's Preferences folder, then fix ownership.
dir="/System/Library/User Template/English.lproj/Library"

for folder in /Users/*
do
if [ "$folder" != "/Users/Shared" ]
then
/bin/cp "$dir/Preferences/com.microsoft.autoupdate2.plist" "$folder/Library/Preferences/"
/bin/cp "$dir/Preferences/com.microsoft.error_reporting.plist" "$folder/Library/Preferences/"
/bin/cp "$dir/Preferences/com.microsoft.Excel.plist" "$folder/Library/Preferences/"
/bin/cp "$dir/Preferences/com.microsoft.language_register.plist" "$folder/Library/Preferences/"
/bin/cp "$dir/Preferences/com.microsoft.office.plist" "$folder/Library/Preferences/"
/bin/cp "$dir/Preferences/com.microsoft.outlook.database_daemon.plist" "$folder/Library/Preferences/"
/bin/cp "$dir/Preferences/com.microsoft.outlook.office_reminders.plist" "$folder/Library/Preferences/"
/bin/cp "$dir/Preferences/com.microsoft.Powerpoint.plist" "$folder/Library/Preferences/"
/bin/cp "$dir/Preferences/com.microsoft.Word.plist" "$folder/Library/Preferences/"
chown -R "$(basename "$folder")" "$folder/Library/Preferences/"
fi
done

exit 0</string>

This allows for quick and messy pkgs (such as the one above) to be cleaned up with a postinstall_script.

Firefox 4.0.1

The case of Firefox is very similar to that of Acrobat, because vendor-supplied software is being customized for the environment. Firefox is customized by writing our settings to three files within the application bundle:

        <key>postinstall_script</key>
        <string>#!/bin/bash
localsettings="/Applications/Firefox.app/Contents/MacOS/defaults/pref/local-settings.js"
touch -f $localsettings
echo "// MyOrganization additions
pref('general.config.obscure_value', 0);
pref('general.config.filename', 'firefox_CUL.cfg');" &gt; $localsettings

firefox_cul="/Applications/Firefox.app/Contents/MacOS/firefox_CUL.cfg"
touch -f $firefox_cul
echo "// 
// This file sets some default prefs for use at Columbia Libraries
// and locks down some other prefs.
// application updates
//
lockPref('app.update.enabled', false);
lockPref('app.update.autoUpdateEnabled', false);
lockPref('extensions.update.autoUpdate', false);
lockPref('extensions.update.enabled', false);
lockPref('browser.search.update', false);
lockPref('browser.startup.homepage_override.mstone','ignore');
// Password Manager
pref('signon.rememberSignons', false);
// Default browser check
pref('browser.shell.checkDefaultBrowser', false);
// Home page
pref('browser.startup.homepage','http://library.columbia.edu');
pref('browser.startup.homepage_reset','http://library.columbia.edu');" &gt; $firefox_cul

override="/Applications/Firefox.app/Contents/MacOS/override.ini"
touch -f $override
echo "[XRE]
EnableProfileMigrator=false" &gt; $override

exit 0</string>

The main difference between the Acrobat and Firefox cases is that here files are being created and then written to. There are other use cases, such as RealPlayer, where the postinstall_script creates plists to register the product and hide annoying prompts from the user; however, those are better handled with MCX rather than Munki.

Getting Munki up and running

Munki is a great tool for installing/updating/uninstalling software packages. Munki provides a central repository of software and allows for easy software-build management. With Munki, you can keep all your software in the repo, write manifests for each of your builds, and push the manifests out. Another advantage of Munki (which we may utilize later in our environment) is that it allows non-admin users to update the software on their machines through the updates Munki provides.
Previously we were using DeployStudio to push out tiered builds; the downside of that method is that it’s not simple to mass-update/uninstall software. The methods to make these changes would include either (a) writing a script that uninstalls packages, or (b) rebuilding the machine with the new software build. With Munki, all of that is eliminated, and it comes down to maintaining a software manifest.
Currently we’ve installed Munki on https://adams.cul.columbia.edu. The Munki wiki pages used for this are referenced in the steps below.

Step 1: Create a CA and certificate for our Munki Server to use

When setting up Adams, I more or less followed the instructions on the wiki page “Using Munki with SSL client certificates”. A quick outline of the steps would be:
1. Download the materials.
2. Edit config/openssl.conf (optional: either do this step to set up default values, or enter the values as you run the following steps).
3. Run bin/createCA.sh – creates the CA cert; use Common Name MUNKI_CA (identifies which CA you’re creating).
4. Run bin/newServer.sh – creates the server certificate; use the domain of your server as the Common Name (e.g. Common Name = adams.cul.columbia.edu).
5. Run bin/addClient.sh – creates a client certificate; the Common Name has to be the argument you used to start the script.
6. Store the contents of the parent folder (the one that contains bin, demoCA, servers, clients) in /private/etc/munki.
7. In Server Admin (assuming your munki server is running on OS X Server 10.6+), add the certificates you just created under Certificates.
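
Concretely, steps 3–5 boil down to something like the following (a sketch; munki_client is the client name that Step 5 below assumes):

$ cd /private/etc/munki
$ ./bin/createCA.sh                 # Common Name: MUNKI_CA
$ ./bin/newServer.sh                # Common Name: adams.cul.columbia.edu
$ ./bin/addClient.sh munki_client   # Common Name must match the argument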

Step 2: Set up your repo

1. Add a site under “Web” in Server Admin; under the Security tab, use the certificate you just imported.
2. Within the directory of your site, create the following directories:
a. munki/
b. munki/repo
c. munki/repo/pkgs
d. munki/repo/pkgsinfo
e. munki/repo/catalogs
f. munki/repo/manifests
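
From Terminal that’s one command (assuming /path/to/site is your site’s document root):

$ mkdir -p /path/to/site/munki/repo/{pkgs,pkgsinfo,catalogs,manifests}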

Step 3: Configure Munki on the server

When setting up Munki, you could follow the steps in “Installing on a standalone machine”, but I found a very useful alternative in the “Creating Disk Images” page. The nugget is /usr/local/munki/munkiimport, which will import .pkg files and turn .mpkg files into a disk image for Munki. That’s not even the best part; the best part is that it creates the catalog files and pkginfos, and puts the disk image/pkg in the repository. The only setup needed is running /usr/local/munki/munkiimport --configure to set the basic values, i.e. the repo path and Munki’s URL.
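
A typical import session looks something like this (the installer path is an example):

$ /usr/local/munki/munkiimport --configure     # one-time setup: repo path, repo URL
$ /usr/local/munki/munkiimport ~/Downloads/Firefox-4.0.1.dmg
$ /usr/local/munki/makecatalogs                # rebuild catalogs after importing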

Step 4: Secure the server with some basic authentication

Firstly, in Server Admin, go to Web -> Sites -> YOUR SITE -> Options and enable all overrides. Then, in Terminal, cd to your repo directory and create an .htpasswd file with the following command:

$ htpasswd -c .htpasswd munki
New password:
Re-type new password:
Adding password for user munki

Then you need to create the .htaccess file in your repo’s directory with the following content:

AuthType Basic
AuthName "Munki Repository"
AuthUserFile /path/to/your/munki/repo_root/.htpasswd
Require valid-user

Step 5: Configure Munki on the client

First, move munki_client.pem from clients/munki_client/ and cacert.pem from demoCA/ to the certs directory in the /Library/Managed Installs/ folder (or wherever you’ve set the ManagedInstallDir) on the client; you may have to create this directory.
The wiki page on Munki’s configuration plist values lists all the possible keys for the plist that we’ll use on the clients. Make sure the following are set properly:

  • ClientIdentifier (the manifest that the client is going to use)
  • ManagedInstallDir (local directory that Munki will use to store pkgs)
  • LogFile
  • LoggingLevel
  • UseClientCertificate (set to TRUE)
  • SoftwareRepoCAPath (set to the certs directory)
  • SoftwareRepoCACertificate (set to cacert.pem’s path)
  • SuppressUserNotification (if set to true, will make the process silent)

There’s more on that page; customize to suit your own environment.
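
As a sketch, the client side can be set from Terminal like so (the repo URL and manifest name are examples; AdditionalHttpHeaders covers the basic auth from Step 4, its value being the base64 of “user:password” — here the hypothetical pair munki:password):

$ defaults write /Library/Preferences/ManagedInstalls SoftwareRepoURL "https://adams.cul.columbia.edu/munki/repo"
$ defaults write /Library/Preferences/ManagedInstalls ClientIdentifier "test_build"
$ defaults write /Library/Preferences/ManagedInstalls UseClientCertificate -bool TRUE
$ defaults write /Library/Preferences/ManagedInstalls SoftwareRepoCAPath "/Library/Managed Installs/certs"
$ defaults write /Library/Preferences/ManagedInstalls SoftwareRepoCACertificate "/Library/Managed Installs/certs/cacert.pem"
$ printf 'munki:password' | openssl base64   # -> bXVua2k6cGFzc3dvcmQ=
$ defaults write /Library/Preferences/ManagedInstalls AdditionalHttpHeaders -array "Authorization: Basic bXVua2k6cGFzc3dvcmQ="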

In our deployment we plan to:

1. Create a custom Munki client pkg, which will contain the certificates and ManagedInstalls.plist (as well as create custom directories if we move in that direction). We will deploy this package via DeployStudio, and then run a script that will call Munki to install software.
2. Create sub directories within our repo based on software vendor and manifest.

Installing and using DeployStudio

Installing DeployStudio

First of all, download the latest DeployStudio from their homepage. The instructions on their site are pretty clear for installation. First, run the installer and install DeployStudio. Then run the DeployStudio Assistant to set up your DeployStudio Server (start the DeployStudio server when it prompts you).
Set the server address to your server, don’t change the port, and set the administrative account.
The next screen will let you choose whether your server will be a master or a replica.
After that, DS Assistant will ask you to choose where you want your repository; choose a network sharepoint (set up an AFP sharepoint beforehand for this). Set the URL and authentication; don’t worry about the ‘advanced parameters’. If you have a mail server, you can have DS send you mail upon completing a workflow (LTR). While it isn’t critical, it is recommended that you have an SSL certificate with which you can secure network traffic (i.e. the next step); in this step you also choose which interface you want DS to communicate over.
If you have an OD up and running and would like to designate multiple administrators for DeployStudio, you can drag/drop their groups into the appropriate places in the next step. Hit “Continue” and then hit “Continue” again; the options you’ve skipped relate to multicasting (which is still buggy across subnets). DeployStudio will now tell you setup is done, and your server is ready to use!

Using DeployStudio

Using DeployStudio is easier than setting it up. There is one thing you want to consider before jumping in, though: are you going to deploy a monolithic image or a tiered build? There are advantages and disadvantages to both:
Monolithic Build:

Advantages: easy to build; safe to deploy.
Disadvantages: bulky; takes forever to deploy; hard to customize.

Tiered Build:

Advantages: light; low network load; easy to pinpoint mistakes in the workflow.
Disadvantages: can be difficult to build custom packages; if a package isn’t configured properly, it’ll fail.

While DS is great at deploying images in general, it’s my opinion that it’s best suited for the tiered build. Firstly, if there are procedures that DeployStudio cannot handle in a workflow (e.g. setting up the LDAP connection to culdap), do them on the base image. Update the OS software on the base image, then netboot to the DS server via the bless command (sudo bless --netboot --server bsdp://your.DS.server.dns) and reboot. The machine will reboot to DS, at which point you can make an image of the machine via one of the default workflows (“Create a Master From a Volume”). Once the image is made, you can deploy it to any machine that boots to the DS server with another of the default workflows (“Restore a master on a volume”).
With the base image ready, prepare software .pkgs/.mpkgs for any software you wish to install and place them in the DS server’s package repository. Next, open DS Admin and create a workflow. The ideal workflow should include the following steps (these are all drag-and-drops):
1. Partition Drive
2. Restore Image (restore your base)
3. Firmware lock
4. Install pkg (one of these for each pkg you want to install)
note: check on “postpone installation” so that it will install on the first boot.
5. Software Update (also postponed until reboot).

The main difference between the tiered-build workflow and the monolithic workflow is step 4, in which the packages are installed. Also, before creating the base image, make sure to turn off AirPort (otherwise Software Update gets stuck). That more or less covers installing and running a basic DeployStudio setup.