ZFS + PCA, goodbye UFS

ZFS has been around for a while now. I have used it for some data partitions, but when Sun added the ability to use it as the root filesystem, I was a little hesitant to start using it there. Part of it was because I know that if a root disk on UFS crashes, I can recover it pretty well. ZFS was different, and I was never really comfortable using it for root, until last night. I have been looking for a way to keep a lot of Solaris machines up to date with the Recommended and Security patches, and doing it with UFS seemed to take forever. Part of the problem with keeping them updated on UFS was the sheer downtime required to install the cluster in single-user mode. Multiply that by X number of machines and it is a never-ending chore to update them.

This weekend I started looking at the PCA tool, since I have seen a lot of people mention good things about it. So off to my test machine, where I installed a new VM with Solaris 10 10/09 (update 8). After the install finished using a ZFS root, I decided to set up a PCA proxy server on another machine. The PCA proxy server is the one machine with access to the Internet, and it downloads the patches from SunSolve. It was extremely easy to set up (in fact, I have it running in a zone on my main server):

  1. Created a new plain zone (it can run anywhere, but I wanted to keep it separate).
  2. Configure the apache2 instance on the machine by copying /etc/apache2/httpd.conf-example to /etc/apache2/httpd.conf.
  3. Edit the httpd.conf and change the line that says “Timeout 300” to read “Timeout 1800”. You need to make it at least 1800, if not more, depending on the speed of your Internet connection; at 22Mb/s, 1800 was OK for me.
  4. Create a directory /var/apache2/htdocs/patches, owned by webservd:webservd with permissions 755.
  5. Download a copy of pca, save it in /var/apache2/cgi-bin as pca-proxy.cgi, and make it owned by webservd:webservd with permissions 755.
  6. Create a file in /etc called pca-proxy.conf. In it place the following:
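A minimal /etc/pca-proxy.conf would look something like the sketch below. The cache directory matches the one created in step 4; the SunSolve account values are placeholders, and you should check the option names against the pca documentation for your version:

```
# Where pca-proxy.cgi caches downloaded patches and the patchdiag.xref file
patchdir=/var/apache2/htdocs/patches
xrefdir=/var/apache2/htdocs/patches
# SunSolve account used to fetch patches (placeholders, use your own)
user=mysunsolveuser
passwd=mysunsolvepass
```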
  7. In order to make the proxy run a little faster on the first use, I decided to download and “cache” the latest Recommended and Security patch cluster. (You don’t need to do this; if a patch is missing, the PCA proxy will download it on demand. But considering my machine needed 156 patches, this was faster.) Once the cluster was downloaded, I placed it in a temp place and unzipped it. The proxy serves each patch as its own zip file (so that the pca client can download it), so each unpacked patch has to be re-zipped. To do this, I went into tmp/10_x86_Recommended/patches and ran the following:
    for i in `cat patch_order`; do
        zip -qr $i $i    # zip appends .zip since the patch ID has no extension
    done
  8. Once the zipping is done, move all the patch zip files into the /var/apache2/htdocs/patches directory.
  9. Start the apache2 service: “svcadm enable apache2”.
  10. Now it is time to configure the client. Copy the pca script to the client machine and put it somewhere; I used /root.
  11. Next create a config file, /etc/pca.conf, with the following:
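Assuming the proxy zone answers at a host called patchproxy (a placeholder; use your zone's actual hostname), the client config would look something like this:

```
patchurl=http://patchproxy/cgi-bin/pca-proxy.cgi
xrefurl=http://patchproxy/cgi-bin/pca-proxy.cgi
syslog=local7
safe=1
```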

    The first two lines tell pca where to find the patches and the patchdiag.xref file. The syslog line tells it to log all activity to the local7 syslog facility. The last line, “safe=1”, enables safe patch installation: pca checks all files for local modifications before installing a patch, and a patch will not be installed if files with local modifications would be overwritten.

  12. Now that the config file is created, make sure that syslog is set to handle local7 info; I have local7.info going to /var/adm/local7.log. PCA will log patch installations and failures to that log, e.g.:
    Apr 11 17:10:50 zfstest2 pca: [ID 702911 local7.notice] Installed patch 124631-36 (SunOS 5.10_x86: System Administration Applications, Network, and C)
    Apr 11 19:07:04 zfstest2 pca: [ID 702911 local7.notice] Failed to install patch 118246-21 (comm_dssetup 6.4-5.05_x86: core patch) rc=15
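For reference, the matching /etc/syslog.conf entry is a single line. Note that Solaris syslogd requires a tab between the selector and the file name, not spaces:

```
local7.info	/var/adm/local7.log
```

After adding the line, create the log file with “touch /var/adm/local7.log” and restart syslog with “svcadm restart system-log” so it picks up the change.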

Now comes the part that makes ZFS worth using… We are going to create a new “boot environment” (BE) and then patch that environment.

  1. First we need to create a new BE:
    lucreate -n p20100411

    The name p20100411 can be anything; I used today’s date since I patched the machine today. That makes it easy to remember when the machine was last patched.

  2. Now we need to mount it:
    lumount p20100411 /.alt.root 
  3. Now we can start patching:
     pca -i -R /.alt.root
  4. Because I cached most of the patches locally on my PCA proxy, it should not take too long to download, unzip, and install the patches into the alternate root.
  5. Once the patching is done, it will give you a summary line telling you how many patches were downloaded and installed:
    Download Summary: 156 total, 156 successful, 0 skipped, 0 failed
    Install Summary : 156 total, 156 successful, 0 skipped, 0 failed
  6. Now we need to unmount the alt root and activate it to boot:
    luumount p20100411
    luactivate p20100411
  7. Now just reboot the machine. You MUST use init or shutdown; if you don’t, the machine won’t boot into the new boot environment. I use:
    shutdown -g0 -i6 -y
  8. Depending on how long it takes for your machine to boot, when it comes back up it should be running from the new boot environment:
    bash-3.00# df -h
    Filesystem             size   used  avail capacity  Mounted on
    rpool/ROOT/p20100411    49G   6.6G    38G    15%    /
  9. Now you can run the newly patched system for however long it takes to verify that the patches didn’t break anything. Once you are sure everything is OK, you can delete the old install; in my case:
    ludelete s10x_u8wos_08a

    This should let you recover a little bit of space; in my case it was about 1.5 GB.
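The whole cycle above condenses to a few lines, which makes it easy to wrap in a script for a fleet of machines. This is just a sketch of the steps already shown; keep the verification period and the ludelete step manual:

```
#!/bin/sh
BE=p`date +%Y%m%d`        # name the BE after today's date, e.g. p20100411
lucreate -n $BE           # clone the running boot environment
lumount $BE /.alt.root    # mount the clone
pca -i -R /.alt.root      # patch the clone via the PCA proxy
luumount $BE
luactivate $BE            # make it the default for the next boot
shutdown -g0 -i6 -y       # MUST be init/shutdown, not reboot
```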

The only thing left is to set up a script to run “pca -l” about once a month, to see which patches need to be installed and to log that. PCA has a lot of other functions beyond what I went over here; in a couple of words, it seems to be kick ass. On top of that, it is free! The ability to create new BEs will definitely help anyone with the right amount of disk space keep their systems up to date.
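The monthly check itself is a one-line crontab entry. The /root/pca path matches where I put the script in step 10; the log file name is a placeholder:

```
# root's crontab: list missing patches at 03:00 on the 1st of every month
0 3 1 * * /root/pca -l >> /var/adm/pca-missing.log 2>&1
```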

One tip: make sure you watch the output of the luactivate command. This is what is displayed:


The target boot environment has been activated. It will be used when you
reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You
MUST USE either the init or the shutdown command when you reboot. If you
do not use either init or shutdown, the system will not boot using the
target BE.


In case of a failure while booting to the target BE, the following process
needs to be followed to fallback to the currently working boot environment:

1. Boot from Solaris failsafe or boot in single user mode from the Solaris
Install CD or Network.

2. Mount the Parent boot environment root slice to some directory (like
/mnt). You can use the following command to mount:

     mount -Fzfs /dev/dsk/c1t0d0s0 /mnt

3. Run the <luactivate> utility without any arguments from the Parent boot
environment root slice, as shown below:

     /mnt/sbin/luactivate

4. luactivate, activates the previous working boot environment and
indicates the result.

5. Exit Single User mode and reboot the machine.

