I began setting up my new Sun server and SAN at home the other day (picked up a Sun V20Z and a Sun T3 SAN disk array very cheap)… Because I am going to be doing some IPv6 testing as well, I installed a Sun Gigaswift (aka Sun’s ce, Cassini Ethernet) card in the machine along with the fibre channel card. I put the VMware vSphere 4i CD in and went on with the install, but didn’t really pay attention that it did not see the CE card, just the two Broadcom cards. So I went ahead anyway, thinking I would fix it later. But it seems that there are no drivers on the interweb for the Sun CE card for VMware? If anyone knows of a place to get them, let me know. Otherwise I will have to find a new card to use in its place.
Inside out Peanut Butter Cups
almost 7 years, finally done
Started redoing one of the bedrooms in my house over 6 years ago, and I finally finished it this weekend. Needless to say, I absolutely hate wallpaper; it is the root of all evil. If you are going to put wallpaper up, hopefully have a professional do it, so that when people want to remove it later, it doesn’t rip the backing off the wallboard. Here are some pictures of before, during and the final look.
VMWare Fusion tip
For some reason last night my Windows XP image in VMware Fusion locked up during an update to the Microsoft security stuff. I tried doing “Virtual Machine -> Shutdown”, which looked like it was going to work. However, Windows just sat forever on the “Shutting Down Windows” screen. Well, if you hold down the Option key when you click on the “Virtual Machine” menu item, the word “Force” shows up in front of all the options. Clicking “Force Shutdown” is the equivalent of pressing the power button; without that, VMware tries to do a “nice” shutdown. So I forced a “reset” and everything came up fine… Hope this helps someone else who is “hung”.
ZFS + PCA, goodbye UFS
ZFS has been around for a while now. I have used it for some data partitions, but when Sun added the ability to use it as the root filesystem, I was a little hesitant to start using it there. Part of it was because I know that if a root disk on UFS crashes, I can get into it pretty easily. ZFS was different, and I was never really comfortable using it for root, until last night. I have been looking for a way to keep a lot of Solaris machines up to date with the Recommended and Security patches, and doing it with UFS seemed to take forever. Part of the problem with keeping them updated on UFS was the sheer downtime required to install the cluster in single user mode. Multiply that by X number of machines and it is a never ending chore to update them.
This weekend I started looking at the PCA tool, since I have seen a lot of people say good things about it. So off to my test machine, where I installed a new VM with Solaris 10 10/09 (update 8). After the install was finished, using a ZFS root, I decided to set up a PCA proxy server on another machine. The PCA proxy server is the one machine with access to the Internet, and it downloads the patches from SunSolve on behalf of the clients. It was extremely easy to set up (in fact I have it running in a zone on my main server):
- Created a new plain zone (it can run on anything, but I wanted to keep it separate).
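If you need to create the zone from scratch, it is roughly the usual zonecfg/zoneadm routine; the zone name, zonepath, interface and address below are just placeholders for whatever fits your setup:

zonecfg -z pcaproxy "create; set zonepath=/zones/pcaproxy; add net; set physical=e1000g0; set address=192.168.1.50; end; verify; commit"
zoneadm -z pcaproxy install
zoneadm -z pcaproxy boot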
- Configure the apache2 instance on the machine by copying /etc/apache2/httpd.conf-example to /etc/apache2/httpd.conf.
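In the zone, that is just:

cp /etc/apache2/httpd.conf-example /etc/apache2/httpd.conf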
- Edit the httpd.conf and change the line that says “Timeout 300” to be “Timeout 1800”. You need to make it at least 1800, if not more depending on the speed of your Internet connection. At 22Mb/s 1800 was ok for me.
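If you would rather not open an editor, a one-liner along these lines does the same edit in place (Solaris 10 ships /usr/bin/perl, so this should work out of the box):

perl -pi -e 's/^Timeout 300$/Timeout 1800/' /etc/apache2/httpd.conf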
- Create a directory /var/apache2/htdocs/patches, make it owned by webservd:webservd, and set the permissions to 755.
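In other words:

mkdir /var/apache2/htdocs/patches
chown webservd:webservd /var/apache2/htdocs/patches
chmod 755 /var/apache2/htdocs/patches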
- Download and save a copy of pca in /var/apache2/cgi-bin and call it pca-proxy.cgi. Make it owned by webservd:webservd and set the permissions to 755.
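Assuming you already have a downloaded copy of the pca script sitting in your current directory, that works out to:

cp pca /var/apache2/cgi-bin/pca-proxy.cgi
chown webservd:webservd /var/apache2/cgi-bin/pca-proxy.cgi
chmod 755 /var/apache2/cgi-bin/pca-proxy.cgi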
- Create a file in /etc called pca-proxy.conf. In it place the following:
xrefdir=/var/apache2/htdocs/patches
patchdir=/var/apache2/htdocs/patches
user=sunsolveusername
passwd=sunsolvepassword
- In order to make the proxy run a little faster on the first use, I decided to download and “cache” the latest Recommended and Security patch cluster. (You don’t need to do this, since the pca proxy server will download any missing patches on demand, but considering my machine needed 156 patches, this was faster…) Once the Recommended and Security cluster was downloaded, I placed it in a temp location and unzipped it. Once the cluster is unzipped, I needed to make a zip file of each patch (so that the pca client can download the zip file). To do this, I went into tmp/10_x86_Recommended/patches and ran the following:
for i in `cat patch_order`
do
  zip -r $i $i
done
- Once the zipping is done, move all the patch zip files into the /var/apache2/htdocs/patches directory.
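Still sitting in the cluster's patches directory from the previous step, that is roughly:

mv *.zip /var/apache2/htdocs/patches/
chown webservd:webservd /var/apache2/htdocs/patches/*.zip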
- Start up the apache2 service “svcadm enable apache2”
- Now it is time to configure the client. Copy the pca script to the client machine and put it somewhere; I used /root.
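Something like this from the machine holding your copy of the pca script (the “client” hostname here is just a placeholder):

scp pca root@client:/root/pca
ssh root@client chmod 755 /root/pca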
- Next create a config file, /etc/pca.conf, and place the following in it:
patchurl=http://pca-host/cgi-bin/pca-proxy.cgi
xrefurl=http://pca-host/cgi-bin/pca-proxy.cgi
syslog=local7
safe=1
The first two lines tell pca where to find the patches and the patchdiag.xref file. The syslog line tells it to log all activity to the local7 syslog facility. The last line, “safe=1”, enables safe patch installation: pca checks all files for local modifications before installing a patch, and a patch will not be installed if files with local modifications would be overwritten.
- Now that the config file is created, make sure that syslog is set to handle local7 info; I have mine set so that local7.info goes to /var/adm/local7.log (see the syslog.conf snippet after the log sample below). PCA will log the patch installation activity to that log, i.e.:
Apr 11 17:10:50 zfstest2 pca: [ID 702911 local7.notice] Installed patch 124631-36 (SunOS 5.10_x86: System Administration Applications, Network, and C)
Apr 11 19:07:04 zfstest2 pca: [ID 702911 local7.notice] Failed to install patch 118246-21 (comm_dssetup 6.4-5.05_x86: core patch) rc=15
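For reference, the matching /etc/syslog.conf entry on the client looks something like the line below (the whitespace between the two fields must be a tab); then create the log file and restart syslogd:

local7.info	/var/adm/local7.log

touch /var/adm/local7.log
svcadm restart system-log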
Now comes the part that makes ZFS worth using… We are going to create a new “boot environment” and then patch that environment.
- First we need to create a new BE:
lucreate -n p20100411
The p20100411 can be anything; I used today’s date since I patched the machine today. That makes it easy to remember when the machine was last patched.
- Now we need to mount it
lumount p20100411 /.alt.root
- Now we can start patching:
pca -i -R /.alt.root
- Because I cached most of the patches locally on my pca proxy, it should not take too long for it to download, unzip and install the patches in the alt root.
- Once the patching is done, it will give you a summary line telling you how many patches were downloaded and installed:
Download Summary: 156 total, 156 successful, 0 skipped, 0 failed
Install Summary : 156 total, 156 successful, 0 skipped, 0 failed
- Now we need to unmount the alt root and activate it to boot:
luumount p20100411
luactivate p20100411
- Now just reboot the machine. You MUST use init or shutdown; if you don’t, it won’t boot into the new boot environment. I use:
shutdown -g0 -i6 -y
- Depending on how long it takes for your machine to boot, when it comes back up it should be on the new ZFS file system:
bash-3.00# df -h
Filesystem            size   used  avail capacity  Mounted on
rpool/ROOT/p20100411   49G   6.6G    38G     15%   /
- Now you can run that new patched system for however long it takes to verify your patches didn’t break anything. Once you are sure everything is ok, you can delete the old install, in my case:
ludelete s10x_u8wos_08a
This should let you recover a little bit of space. In my case it was about 1.5 gig.
The only thing left is to set up a bunch of scripts to run “pca -l” about once a month to see what patches need to be installed and to log that. PCA has a lot of other functions beyond what I went over here; in a couple of words, it seems to be kick ass. On top of that it is free! The ability to create new BEs will definitely help anyone with the right amount of disk space keep their system up to date.
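As a rough sketch, a root crontab entry along these lines would do that monthly check (the pca path and log file are just the ones I would pick, adjust to taste):

0 3 1 * * /root/pca -l >> /var/adm/pca-missing.log 2>&1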
One tip: make sure you watch the output of the luactivate command. This is what is displayed:
**********************************************************************

The target boot environment has been activated. It will be used when you
reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You
MUST USE either the init or the shutdown command when you reboot. If you
do not use either init or shutdown, the system will not boot using the
target BE.

**********************************************************************

In case of a failure while booting to the target BE, the following process
needs to be followed to fallback to the currently working boot environment:

1. Boot from Solaris failsafe or boot in single user mode from the Solaris
   Install CD or Network.

2. Mount the Parent boot environment root slice to some directory (like
   /mnt). You can use the following command to mount:

     mount -Fzfs /dev/dsk/c1t0d0s0 /mnt

3. Run <luactivate> utility with out any arguments from the Parent boot
   environment root slice, as shown below:

     /mnt/sbin/luactivate

4. luactivate, activates the previous working boot environment and
   indicates the result.

5. Exit Single User mode and reboot the machine.

**********************************************************************