Getting Rails up and running on Amazon’s EC2

This is just for documentation’s sake, and applies only if you’re running Ubuntu 12.x. If you are running Ubuntu 12.x on an EC2 instance, reconsider using Ruby on Rails there: it’s a pain to deploy. Amazon’s AMI is a homebrewed version of Linux that comes with Ruby ready to go, which is the better solution. If you still want to install rvm on your Ubuntu machine, follow the steps below:

  1. sudo apt-get update
  2. sudo apt-get install build-essential git-core curl libmysqlclient15-dev nodejs libcurl4-openssl-dev
  3. sudo bash -s stable < <(curl -s
  4. umask g+w
  5. source /etc/profile.d/
  6. rvm requirements
  7. sudo apt-get install build-essential openssl libreadline6 libreadline6-dev curl git-core zlib1g zlib1g-dev libssl-dev libyaml-dev libsqlite3-0 libsqlite3-dev sqlite3 libxml2-dev libxslt-dev autoconf libc6-dev ncurses-dev automake libtool bison subversion
  8. sudo chown -R [user]:[user] /usr/local/rvm
  9. rvm install 1.9.3
  10. rvm --default use 1.9.3

If you are using rails:

  • gem install rails

If you are using passenger (with nginx):

  1. gem install passenger
  2. rvmsudo passenger-install-nginx-module
  3. copy over the /etc/init.d script from the wiki and fit it to your install

Now you think you’re set, right? Well, if you’re developing apps on that instance, you’re OK. However, I ran into an error where capistrano refused to use rvm’s ruby to precompile the assets for production; I found that I had to install the ruby1.9.1 package to get it to work.
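As an alternative to falling back on the system ruby package, the rvm-capistrano gem advertises a way to point capistrano at rvm’s ruby during deploys. I haven’t verified this in the setup above, so treat the fragment below as a sketch; the settings come from rvm-capistrano’s documented usage, not from this install:

```ruby
# deploy.rb fragment (Capistrano v2) using the rvm-capistrano gem.
# Untested in this environment; the ruby string matches the rvm install above.
require 'rvm/capistrano'
set :rvm_ruby_string, '1.9.3'
set :rvm_type, :system   # rvm was installed system-wide under /usr/local/rvm
```

With this in place, cap tasks (including asset precompilation) should run under rvm’s 1.9.3 rather than whatever ruby is first in the system PATH.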


Kerberos authentication under Lion

Firstly, thanks to Roy Long and Scott Gallagher for their presentation at the 2012 PSU Mac Admin Conference.

Also, thanks to Rusty Myers for the CLC Package.

Lion no longer uses MIT Kerberos and has made the switch to Heimdal. This deprecates the krb5authnoverify method and hands Kerberos authentication over to pam.d. To enable Kerberos authentication, a /Library/Preferences/ or /etc/krb5.conf file is needed. It should follow this format:
[libdefaults]
        default_realm = CC.COLUMBIA.EDU

[realms]
        CC.COLUMBIA.EDU = {
                kdc =
                kdc =
                admin_server =
                default_domain =
        }

[logging]
        kdc = FILE:/var/log/krb5kdc/kdc.log
        admin_server = FILE:/var/log/krb5kdc/kadmin.log

Next we need to make changes to /etc/pam.d/authorization:
# authorization: auth account
auth sufficient use_first_pass default_principal use_kcminit
auth optional use_first_pass
auth required use_first_pass
account required

This handles authentication at the login screen, but we still need to update /etc/pam.d/screensaver to handle Kerberos authentication:

# screensaver: auth account
auth sufficient use_first_pass default_principal use_kcminit
auth required use_first_pass
account required
account sufficient
account required no_warn group=admin,wheel fail_safe
account required no_warn deny group=admin,wheel ruser fail_safe

And that’s that. This can also be packaged and deployed to clients via Munki. PSU has a CLC package (though it’ll have to be reworked for Columbia’s environment). The basic steps are:

  1. Set up one client with LDAP
  2. Package the LDAP plist from /Library/Preferences/OpenDirectory/Configurations/LDAPv3
  3. Package the MIT Kerberos file
  4. Package /etc/pam.d/authorization, /etc/pam.d/screensaver
  5. Edit the CLC’s existing preflight to backup copies of the files you’re editing, e.g. MIT Kerberos file, /etc/pam.d/authorization, /etc/pam.d/screensaver, the LDAP plist
  6. Edit the CLC’s existing postflight to add the LDAP server to the machine’s search path and then remove the edit to /etc/authorization.
  7. Profit.
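The preflight edit in step 5 can be sketched as a short backup script. Everything below is a stand-in: the real CLC preflight has its own file list and backup location, and here temp directories substitute for /etc so the sketch can run anywhere:

```shell
#!/bin/sh
# Sketch of a preflight that backs up the files the package will overwrite.
# SRC_DIR stands in for /etc; BACKUP_DIR stands in for a real backup location.
SRC_DIR=$(mktemp -d)
BACKUP_DIR=$(mktemp -d)/clc-backup
mkdir -p "$BACKUP_DIR"
# Seed one file so the loop has something to back up.
echo 'auth       sufficient' > "$SRC_DIR/authorization"
for f in "$SRC_DIR/krb5.conf" "$SRC_DIR/authorization"; do
    # Only back up files that actually exist on this machine.
    if [ -f "$f" ]; then
        cp "$f" "$BACKUP_DIR/$(basename "$f").orig"
    fi
done
```

A matching postflight could copy the .orig files back if the install ever needed to be rolled back.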

Resharing NFS Mounts on OS X Server 10.6

Assume there are two OS X Servers: one with lots of free space, and another that acts as an OD/AFP server. To set up the first server (the one with lots of space) as an NFS share, do the following:

  1. In Server Admin -> Share Points: Add a new share point. Under “Protocol Options” go to NFS and check on “Export this item and its contents to client list”
  2. Add the second OS X Server’s IP to the client list. Set the mapping Root to Root.
  3. In terminal enter the command ‘rpcinfo -p’ and make note of the ports.
  4. In Server Admin -> Firewall, make sure all ports from the step above are open
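Behind Server Admin’s checkboxes, the export ends up in /etc/exports; an entry matching the steps above would look roughly like the sketch below (the volume path is a placeholder, and the client list can follow the options):

```
/Volumes/BigDisk -maproot=root
```

Running showmount -e on the NFS server afterwards is a quick way to confirm the export actually took.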

On the Existing AFP server:

  1. In Server Admin -> Share Points: Right click a volume, select “Mount NFS Share…”, and enter the URL of the NFS share. The share will now show up in the list.
  2. Click on the share, go to “Protocol Options”, select AFP, and check on “Share this item using AFP”
  3. Grant users privileges to the share

This is useful when there is an existing AFP Server that users connect to (which is running out of space).

Munki and postinstall scripts

One of the great features of Munki is that it will run postinstall scripts. This is great for creating quick-and-dirty custom pkgs, as well as for customizing vendor-supplied pkgs for any environment. Three such cases in our environment are (1) Adobe Acrobat 9, (2) MS Office 2011, and (3) Firefox 4.0.1.

Adobe Acrobat 9

Acrobat is the enterprise’s standard application for creating and editing PDFs, so it’s important to keep Acrobat up to date and working on client machines. The problems occur in two places. The first is updates: before the release of CS5, Acrobat had no real solution for packaging and deploying updates, and this has plagued Acrobat 9. The latest Acrobat update requires all previous updates to be installed (the reason for this is explained a little later), and it requires you to select which Acrobat installation to upgrade. This can be very troublesome, but Munki easily solves that problem with the update_for and requires keys. The most annoying feature Acrobat has is self-heal, which requires the user to authenticate on first run after a patch install to “fix” the installation. There is documentation on the Munki wiki on how to bypass this:

sed -i .bak 's/<string>AcroEFGPro90SelfHeal\.xml<\/string>//g' /Applications/Adobe\ Acrobat\ 9\ Pro/Adobe\ Acrobat\
sed -i .bak 's/<string>AcroEFGDist90SelfHeal\.xml<\/string>//g' /Applications/Adobe\ Acrobat\ 9\ Pro/Acrobat\
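The expression can be sanity-checked against a scratch file before touching the real Acrobat install; the file contents below are made up for the test:

```shell
# Run the self-heal sed pattern against a throwaway copy, not the real file.
TMPXML=$(mktemp)
printf '<array><string>AcroEFGPro90SelfHeal.xml</string></array>\n' > "$TMPXML"
# Write to a new file rather than editing in place (-i differs between GNU and BSD sed).
sed 's/<string>AcroEFGPro90SelfHeal\.xml<\/string>//g' "$TMPXML" > "$TMPXML.out"
```

If the pattern is right, the SelfHeal string element disappears and the surrounding XML is left intact.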

With those two commands the self-heal is suppressed and the user is happy. However, with this fix a CORE file in Acrobat’s setup has been altered, and the patching process will break in the future. Going back to a little earlier: the reason Acrobat requires that all previous updates be installed is that it checksums the files it expects; if a patch has been skipped, the checksum on a later patch will fail. The same applies to the SHInit.xml file. The update process doesn’t always checksum the file, but for certain patches it does. Therefore, a preinstall_script is needed to restore the installation before the update starts:

          if [ -f "/Applications/Adobe Acrobat 9 Pro/Adobe Acrobat" ]; then
              mv "/Applications/Adobe Acrobat 9 Pro/Adobe Acrobat" "/Applications/Adobe Acrobat 9 Pro/Adobe Acrobat"
          fi

Now with little effort on the sysadmin’s part, Acrobat has been fixed.

MS Office 2011

Munki installs Office very nicely, and in our environment we have a pkg that customizes and registers Office. It does so by writing files to the User Template directory, so Office is registered for every new local user on the machine. However, existing users would still need to register and activate Office. To handle them, we use the following postinstall_script to copy the files from the User Template directory into existing users’ home directories:


for folder in /Users/*; do
    if [ "$folder" != "/Users/Shared" ]; then
        /bin/cp $dir/"Preferences/" $folder/Library/Preferences/
        /bin/cp $dir/"Preferences/" $folder/Library/Preferences/
        /bin/cp $dir/"Preferences/" $folder/Library/Preferences/
        /bin/cp $dir/"Preferences/" $folder/Library/Preferences/
        /bin/cp $dir/"Preferences/" $folder/Library/Preferences/
        /bin/cp $dir/"Preferences/" $folder/Library/Preferences/
        /bin/cp $dir/"Preferences/" $folder/Library/Preferences/
        /bin/cp $dir/"Preferences/" $folder/Library/Preferences/
        /bin/cp $dir/"Preferences/" $folder/Library/Preferences/
        chown -R ${folder/\/Users\//} "$folder/Library/Preferences/"
    fi
done

exit 0

This allows for quick and messy pkgs (such as the one above) to be cleaned up with a postinstall_script.
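The copy-to-existing-users pattern in the script above can be rehearsed locally as a self-contained sketch; USERS_ROOT and TEMPLATE below are made-up stand-ins for /Users and the User Template source directory:

```shell
# Rehearse the loop with throwaway directories instead of the real /Users.
USERS_ROOT=$(mktemp -d)   # stands in for /Users
TEMPLATE=$(mktemp -d)     # stands in for the User Template source
mkdir -p "$USERS_ROOT/alice/Library/Preferences" "$USERS_ROOT/Shared"
echo 'registered' > "$TEMPLATE/com.example.office.plist"
for folder in "$USERS_ROOT"/*; do
    # Skip the Shared folder, exactly as the real script does.
    if [ "$folder" != "$USERS_ROOT/Shared" ]; then
        cp "$TEMPLATE/com.example.office.plist" "$folder/Library/Preferences/"
        chmod -R 775 "$folder/Library/Preferences"   # users must keep read access
    fi
done
```

Quoting $folder matters here: home directories with spaces would otherwise break the cp and chmod calls.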

Firefox 4.0.1

The Firefox case is very similar to Acrobat’s, in that vendor-supplied software is being customized for the environment. Firefox is customized by writing three files within the application that contain our settings:

touch -f $localsettings
echo "// MyOrganization additions
pref('general.config.obscure_value', 0);
pref('general.config.filename', 'firefox_CUL.cfg');" > $localsettings

touch -f $firefox_cul
echo "// 
// This file sets some default prefs for use at Columbia Libraries
// and locks down some other prefs.
// application updates
lockPref('app.update.enabled', false);
lockPref('app.update.autoUpdateEnabled', false);
lockPref('extensions.update.autoUpdate', false);
lockPref('extensions.update.enabled', false);
lockPref('', false);
// Password Manager
pref('signon.rememberSignons', false);
// Default browser check
pref('', false);
// Home page
pref('browser.startup.homepage_reset','');" > $firefox_cul

touch -f $override
echo "[XRE]
EnableProfileMigrator=false" > $override

exit 0
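Since the variable definitions were lost above, here is a self-contained version of the same pattern with hypothetical paths; local-settings.js, firefox_CUL.cfg, and override.ini are the conventional Firefox autoconfig file names, assumed here rather than taken from the original script (in a real deploy they live inside the application bundle):

```shell
# Recreate the three autoconfig files under a throwaway directory.
FFDIR=$(mktemp -d)
localsettings="$FFDIR/local-settings.js"
firefox_cul="$FFDIR/firefox_CUL.cfg"
override="$FFDIR/override.ini"

echo "// MyOrganization additions
pref('general.config.obscure_value', 0);
pref('general.config.filename', 'firefox_CUL.cfg');" > "$localsettings"

echo "// lock down application updates
lockPref('app.update.enabled', false);" > "$firefox_cul"

echo "[XRE]
EnableProfileMigrator=false" > "$override"
```

The local-settings file tells Firefox where to find the cfg, the cfg carries the locked prefs, and the override file suppresses the profile-migration prompt.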

The main difference between the Acrobat and Firefox cases is that here files are being created and then written to. There are other use cases, such as RealPlayer, where the postinstall_script creates plists to register the product and hide annoying prompts from the user; however, those are better handled with MCX than with Munki.

Getting Munki up and running

Munki is a great tool for installing, updating, and uninstalling software packages. It provides a central repository of software and allows for easy software-build management. With Munki, you can keep all your software in the repo, write manifests for each of your builds, and push the manifests out. Another advantage of Munki (which may later be utilized in our environment) is that it allows non-admin users to update the software on their machines through the updates Munki provides.
Previously we were using DeployStudio to push out tiered builds; the downside of that method is that it’s not simple to mass-update or uninstall software. Making such changes would mean either (a) writing a script that uninstalls packages, or (b) rebuilding the machine with the new software build. With Munki, all of that is eliminated, and it comes down to maintaining a software manifest.
Currently we’ve installed Munki on one of our servers; the wiki pages used for this are referenced in the steps below.

Step 1: Create a CA and certificate for our Munki Server to use

When setting up Adams, I more or less followed the instructions above (Using Munki with SSL client certificates). A quick outline of the steps would be:
1. Download materials
2. Edit config/openssl.conf (optional, either do this step to setup default values or enter in values as you run the following steps)
3. Run bin/ (this creates the CA cert). Use Common Name MUNKI_CA (it identifies which CA you’re creating).
4. Run bin/ (this creates the server certificate). Use the domain of your server as the Common Name (e.g. Common Name =
5. Run bin/ (this creates the client certificate). The Common Name has to be the argument you used to start the script.
6. Store the contents of the parent folder (the one that contains bin, demoCA, servers, clients) in /private/etc/munki.
7. In Server Admin (assuming your munki server is running on OS X Server 10.6+), under add the certificates you just created.

Step 2: Set up your repo

1. Add a site under “Web” with Server Admin; under the security tab, use the certificate you have just imported.
2. Within the directory of your site, create the following directories:
a. munki/
b. munki/repo
c. munki/repo/pkgs
d. munki/repo/pkgsinfo
e. munki/repo/catalogs
f. munki/repo/manifests
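The layout above can be created in one shot; MUNKI_SITE is a placeholder for your site’s document root (a temp directory here so the sketch runs anywhere):

```shell
# Build the Munki repo skeleton under the site's document root.
MUNKI_SITE=$(mktemp -d)   # placeholder for the real web root
mkdir -p "$MUNKI_SITE/munki/repo/pkgs" \
         "$MUNKI_SITE/munki/repo/pkgsinfo" \
         "$MUNKI_SITE/munki/repo/catalogs" \
         "$MUNKI_SITE/munki/repo/manifests"
```

mkdir -p creates the intermediate munki/ and munki/repo/ directories along the way, so the six steps collapse into one command.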

Step 3: Configure Munki on the server

When setting up Munki, you could follow the steps in “Installing on a standalone machine”, but I found a very useful alternative in the “Creating Disk Images” page. The nugget is /usr/local/munki/munkiimport, which imports .pkg files and turns .mpkg files into disk images for Munki. That’s not even the best part: it also creates the catalog files and pkginfos, and puts the disk image/pkg in the repository. The only setup needed is running /usr/local/munki/munkiimport --configure to set the basic values, i.e. the repo path and munki’s URL.

Step 4: Secure the server with some basic authentication

Firstly, in Server Admin, go to Web -> Sites -> YOUR SITE -> Options and enable all overrides. Then in terminal cd to your repo directory and create a .htpasswd file with the following command:

$ htpasswd -c .htpasswd munki
New password:
Re-type new password:
Adding password for user munki
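If you’d rather generate the entry non-interactively (say, from a setup script), openssl can produce the same Apache-style MD5 hash; the ‘munki’/‘s3cret’ credentials below are examples, not real values:

```shell
# Non-interactive .htpasswd generation with an Apache MD5 (apr1) hash.
HTPASS=$(mktemp)
printf 'munki:%s\n' "$(openssl passwd -apr1 s3cret)" > "$HTPASS"
```

The resulting line is interchangeable with what htpasswd writes, so it can be dropped straight into the repo’s .htpasswd file.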

Then you need to create the .htaccess file in your repo’s directory with the following content:

AuthType Basic
AuthName "Munki Repository"
AuthUserFile /path/to/your/munki/repo_root/.htpasswd
Require valid-user

Step 5: Configure Munki on the client

First, move munki_client.pem from clients/munki_client/ and cacert.pem from demoCA/ into the certs directory under /Library/Managed Installs/ (or wherever you’ve set the ManagedInstallDir) on the client; you may have to create this directory.
Munki’s configuration plist values page lists all the possible values for the plist that we’ll use on the clients. Make sure the following are set properly:

  • ClientIdentifier (the manifest that the client is going to use)
  • ManagedInstallDir (local directory that Munki will use to store pkgs)
  • LogFile
  • LoggingLevel
  • UseClientCertificate (set to TRUE)
  • SoftwareRepoCAPath (set to the certs directory)
  • SoftwareRepoCACertificate (set to cacert.pem’s path)
  • SuppressUserNotification (if set to true, will make the process silent)

There’s more on that page, customize your own.
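Put together, a minimal ManagedInstalls.plist covering those keys might look like the sketch below; the identifier and paths are placeholders, not our actual values:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "">
<plist version="1.0">
<dict>
	<key>ClientIdentifier</key>
	<string>example_manifest</string>
	<key>ManagedInstallDir</key>
	<string>/Library/Managed Installs</string>
	<key>UseClientCertificate</key>
	<true/>
	<key>SoftwareRepoCAPath</key>
	<string>/Library/Managed Installs/certs</string>
	<key>SoftwareRepoCACertificate</key>
	<string>/Library/Managed Installs/certs/cacert.pem</string>
	<key>SuppressUserNotification</key>
	<true/>
</dict>
</plist>
```

On the client this lives at /Library/Preferences/ManagedInstalls.plist, which is also why it packages so cleanly alongside the certificates.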

In our deployment we plan to:

1. Create a custom Munki client pkg, which will contain the certificates and ManagedInstalls.plist (as well as create custom directories if we move in that direction). We will deploy this package via DeployStudio, and then run a script that calls Munki to install software.
2. Create sub directories within our repo based on software vendor and manifest.

Installing and using DeployStudio

Installing DeployStudio

First of all, download the latest DeployStudio from their homepage. The instructions on their site are pretty clear for installation. First, run the installer and install DeployStudio. Then run the DeployStudio Assistant to set up your DeployStudio Server (start the DeployStudio server when it prompts you).
Set the server address to your server, don’t change the port, and set the administrative account.
The next screen lets you choose whether your server will be a master or a replica.
After that, DS Assistant will ask you to choose where you want your repository; choose a network sharepoint (have an AFP sharepoint set up beforehand for this). Set the URL and authentication; don’t worry about the ‘advanced parameters’. If you have a mail server, you can have DS send you mail upon completing a workflow. While it isn’t critical, it is recommended that you have an SSL certificate with which to secure network traffic (i.e. the next step); in this step you also choose which interface you want DS to communicate over.
If you have an OD up and running and would like to designate multiple administrators for DeployStudio, you can drag and drop their groups into the appropriate places in the next step. Hit “Continue” and then hit “Continue” again; the options you have skipped relate to multicasting (which is still buggy across subnets). DeployStudio will now tell you setup is done, and your server is ready to use!

Using DeployStudio

Using DeployStudio is easier than setting it up. There is one thing to consider before jumping in, though: are you going to deploy a monolithic image or a tiered build? There are advantages and disadvantages to both:
Monolithic Build:

  Advantages: easy to build; safe to deploy
  Disadvantages: takes forever to deploy; hard to customize

Tiered Build:

  Advantages: low network load; easy to pinpoint mistakes in the workflow
  Disadvantages: can be difficult to build custom packages; if a package isn’t configured properly, it’ll fail

While DS is great at deploying images in general, in my opinion it’s best suited to the tiered build. First, if there are procedures that DeployStudio cannot handle in a workflow (e.g. setting up the LDAP connection to culdap), do them on the base image. Update the OS software on the base image, then netboot to the DS server via the bless command (sudo bless --netboot --server bsdp://your.DS.server.dns) and reboot. The machine will reboot into DS, at which point you can make an image of the machine via one of the default workflows (“Create a Master From a Volume”). Once the image is made, you can deploy it to any machine that boots to the DS server with another of the default workflows (“Restore a master on a volume”).
With the base image ready, prepare software .pkgs/mpkgs for any software you wish to install and place them in the DS server’s package repository. Next open DS Admin and create a workflow. The ideal workflow should include the following steps (these are all drag and drops):
1. Partition Drive
2. Restore Image (restore your base)
3. Firmware lock
4. Install pkg (one of these for each pkg you want to install)
note: you want to check on “postpone installation” so that it will install on the first launch.
5. Software Update (also postponed until reboot).

The main difference between the tiered build workflow and the monolithic workflow is the package-install step. Also, before creating the base image, make sure to turn off the Airport (otherwise Software Update gets stuck). That more or less covers installing and running a basic DeployStudio setup.

Further slipstreaming of MS Office

By following the instructions from this post (Deploying MS Office 2011), we have the MS Office installer pkg and a prep package that we created in the last post. Now there are two improvements we want to make.

  • The prep package adds plists/files to the User Template; the next step is to add the same files for existing users.
  • Merge the installer pkg and the prep package.

1. Adding ~/Library files for existing users

The prep package is in .mpkg format and is composed of a group of .pkg’s and a distribution file. In the PackageMaker project for the package, choose the last component and add a postflight script to it (we’ll look at the contents of this script later). Build the project and then open it in Finder. You’ll find that each of the components is its own package. Find the package that contains the postflight file; in its Contents/Resources/ directory, add all the ~/Library/ files you need to copy:
1. ~/Library/Preferences/
2. ~/Library/Preferences/
3. ~/Library/Preferences/
4. ~/Library/Preferences/
5. ~/Library/Preferences/
6. ~/Library/Preferences/
7. ~/Library/Preferences/
8. ~/Library/Preferences/
9. ~/Library/Preferences/Microsoft/
10. ~/Library/Application Support/Microsoft
Put ~/Library/Preferences/Microsoft in a folder named Library in the Contents/Resources/. Now for the postflight script. The basic gist is that the script has to copy the files from the Resources directory to /Users/$user/Library/ for each user. The script looks something like the one below; edit as needed:

#!/bin/sh

# Moves ~/Library/ files from pkg to user’s directory
# Created by LITO on 3/14/11.
# Copyright 2011 Columbia. All rights reserved.

for folder in /Users/*; do
    /bin/cp $1/Contents/Resources/ "$folder/Library/Preferences/"
    /bin/cp $1/Contents/Resources/ "$folder/Library/Preferences/"
    /bin/cp $1/Contents/Resources/ "$folder/Library/Preferences/"
    /bin/cp $1/Contents/Resources/ "$folder/Library/Preferences/"
    /bin/cp $1/Contents/Resources/ "$folder/Library/Preferences/"
    /bin/cp $1/Contents/Resources/ "$folder/Library/Preferences/"
    /bin/cp $1/Contents/Resources/ "$folder/Library/Preferences/"
    /bin/cp $1/Contents/Resources/ "$folder/Library/Preferences/"
    /bin/cp $1/Contents/Resources/ "$folder/Library/Preferences/"

    if [ -d "$folder/Library/Preferences/Microsoft/Office 2011" ]; then
        /bin/cp "$1/Contents/Resources/Library/Microsoft/Office 2011/"* "$folder/Library/Preferences/Microsoft/Office 2011/"
        /bin/chmod -R 775 "$folder/Library/Preferences/"
        /bin/chown -R $USER "$folder/Library/Preferences/"
    else
        /bin/mkdir -p "$folder/Library/Preferences/Microsoft/Office 2011"
        /bin/cp "$1/Contents/Resources/Library/Microsoft/Office 2011/"* "$folder/Library/Preferences/Microsoft/Office 2011/"
        /bin/chmod -R 775 "$folder/Library/Preferences/"
        /bin/chown -R $USER "$folder/Library/Preferences/"
    fi

    if [ -d "$folder/Library/Application Support/Microsoft/Office/User Templates" ]; then
        /bin/cp "$1/Contents/Resources/Microsoft/Office/User Templates/"* "$folder/Library/Application Support/Microsoft/Office/User Templates/"
        /bin/chmod -R 775 "$folder/Library/Application Support/"*
        /bin/chown -R $USER "$folder/Library/Application Support/"*
    else
        /bin/mkdir -p "$folder/Library/Application Support/Microsoft/Office/User Templates"
        /bin/cp "$1/Contents/Resources/Microsoft/Office/User Templates/"* "$folder/Library/Application Support/Microsoft/Office/User Templates/"
        /bin/chmod -R 775 "$folder/Library/Application Support/"*
        /bin/chown -R $USER "$folder/Library/Application Support/"*
    fi
done

exit 0

note: It’s important to chmod the folders to 775, otherwise the user will not have read rights and this will all be for naught.

2. Merging installer and prep packages

I used OfficeForMac’s help page on distribution.dist to execute my deployment. The first step is to open up the Office 2011 package and the prep package, then move the component packages from the prep package into the Office pkg (the directory in the package is /Contents/Packages). I would make a directory in Contents/Packages called user_lib and stash the prep packages there. Once the packages are in place, you want to edit Contents/distribution.dist.
There are three types of edits, and it’s best to start from the bottom up. At the bottom you’ll find package references that tell the installer where to find each package, its size, a user-defined id (which is used in the next step), its version, and other actions. An example would be:

<pkg-ref id="" version="14.0.2" installKBytes="123904" auth="Admin" onConclusion="None">file:./Contents/Packages/user_files/microsoft-1.pkg</pkg-ref>

Set the file path to the location of your package. Then set installKBytes if you want; it’s not necessary. Set the pkg-ref id to something short but descriptive, as it’ll be used in the next step. A package reference needs to be created for every package.
Next we have to add a choice to install our prep packages. Above the pkg-refs are the choices; add a choice like the one below:

<choice id="user-lib" title="userlib-title" description="userlib-description" tooltip="userlib-tooltip" start_selected="true" start_enabled="true" start_visible="true">
<pkg-ref id=""></pkg-ref>
<pkg-ref id=""></pkg-ref>
<pkg-ref id=""></pkg-ref>
<pkg-ref id=""></pkg-ref>
<pkg-ref id=""></pkg-ref>
<pkg-ref id=""></pkg-ref>
<pkg-ref id=""></pkg-ref>
<pkg-ref id=""></pkg-ref>
<pkg-ref id=""></pkg-ref>
<pkg-ref id=""></pkg-ref>
<pkg-ref id=""></pkg-ref>
</choice>

The first thing to do here is set the choice id, title, and description. It’s also advisable to set start_visible to false. The choice id will be referenced above, so it should be something short but descriptive. Within the <choice> and </choice> tags we add the packages that will be installed for the choice, identified by their pkg-ref ids.
After that is done, we want to add a line for our new choice; this is done in the choices-outline section by adding a line like: <line choice="user-lib"></line>
Now our install package will install Office and also install the ~/Library files for all users, plus the user templates. This package is easy to push out through ARD or synctool via the installer command, making it totally silent.
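Putting the three edits together, the affected parts of distribution.dist end up looking roughly like the sketch below; the id com.example.userlib, the titles, and the package path are placeholders rather than the real values:

```xml
<!-- 1. choices-outline: one line per top-level choice -->
<choices-outline>
    <line choice="user-lib"></line>
</choices-outline>

<!-- 2. the choice groups the prep packages by pkg-ref id -->
<choice id="user-lib" title="userlib-title" start_selected="true" start_enabled="true" start_visible="false">
    <pkg-ref id="com.example.userlib"></pkg-ref>
</choice>

<!-- 3. each pkg-ref points at a package inside Contents/Packages -->
<pkg-ref id="com.example.userlib" version="14.0.2" auth="Admin" onConclusion="None">file:./Contents/Packages/user_lib/microsoft-1.pkg</pkg-ref>
```

The choices-outline line, the choice, and the pkg-ref are tied together purely by matching id strings, which is why keeping the ids short and descriptive pays off.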