Installing Solaris on a large hard drive

Because of the storms a couple of weeks ago, the hard drives in my one Solaris box failed. I bought a couple of new 500GB hard drives to replace the 80GB that was in it. So I downloaded the newest update of Solaris 10 (update 7) and did the install. Everything worked perfectly, then it rebooted. All I got was the "grub>" prompt; nothing would let me boot it. I booted back off of the CD-ROM and updated the boot archive, and everything seemed fine, but it still would not boot off of the hard drive. So I tried to do it by hand at the grub> prompt:

grub> kernel /platform/i86pc/multiboot
Error 18: selected cylinder exceeds maximum supported by bios

Great, now what? It turns out that the boot information has to live within the first 1024 cylinders of the disk, the old BIOS limit. Well, this big 500GB hard drive has over 60,000 cylinders. Guess what: the Solaris installer put swap on cylinders 3-263 and then put root starting at cylinder 58884 and up, so obviously GRUB is going to have a problem.
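You can check where the installer actually put each slice with prtvtoc against slice 2, the whole-disk slice (the device name here matches the dd command used further down):

prtvtoc /dev/rdsk/c0d0s2    # shows the first and last sector of every slice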

So how do you fix this? There are two possible ways that I can think of, and possibly a third.

The first would be for Sun to change the installer so it does not automatically place root at the end of the disk.

The second is, when you are getting ready to do the install, to lay out the slices ahead of time using format from the command line.

The third is what I am trying: since the rest of the disk has not been used yet, I am going to move the partitions around with dd. The first thing I did was delete the swap partition on slice 1. The second was to create a new temp partition on slice 3 that is exactly the same size as slice 0. Then, using dd, I did the following:

dd if=/dev/dsk/c0d0s0 of=/dev/dsk/c0d0s3 bs=4096k

Hopefully this won't take too long, as the partition is only 15GB in size. (You could do the same thing with tar, with both partitions mounted.) Once the dd finishes, I will delete slice 0 and recreate it starting at cylinder 3 with the same size it had before, then dd the data back. Once that is done, I will delete the temp slice and recreate the swap partition. Hopefully this works.
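Spelled out, the whole plan looks something like this. This is a sketch, not a transcript: the format steps are menu-driven, and the installgrub step is my addition, since after moving the slice you will almost certainly want the GRUB boot blocks rewritten on it:

dd if=/dev/dsk/c0d0s0 of=/dev/dsk/c0d0s3 bs=4096k   # copy root to the temp slice 3
format      # in the partition menu: delete slice 0, recreate it starting at cylinder 3
dd if=/dev/dsk/c0d0s3 of=/dev/dsk/c0d0s0 bs=4096k   # copy root back to its new location
format      # delete the temp slice 3, recreate swap on slice 1
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0d0s0   # boot blocks onto the new slice 0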

Fixing Dynamic DNS

My Solaris machine that ran DHCP/DNS and routing for my home network died tonight, after running non-stop for over three and a half years. So I had to re-set up my DHCP and DNS on another machine; luckily I had backed up the stuff that was on the old machine a month or so ago, but some info had changed. In particular it was the Dynamic DNS that I had set up and linked with the DHCP server (I use ISC's DHCP and DNS). I got the backup restored on another server and everything running, but a couple of hosts would not work. Come to find out, the backup I had was actually several months old (no problem, the machine did not change that much), but what did change was my IP address to the world (it changed some time in March or April, after having been the same for over three years).

Well, I had forgotten how to update the Dynamic DNS records, so I had to go hunting. This is what I did:

1. You can update the info dynamically using nsupdate (if you have it configured to allow that, which I did). So I did the following:


#nsupdate
server 10.0.0.69
key dhcpupdate u23ove098uy2ok3n12339==
zone homenetwork.net
update delete homenetwork.net
send
update add homenetwork.net 18000 IN A 10.0.0.1
send
^D
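(As an aside, "configured to do so" means named.conf knows the TSIG key and the zone allows updates signed with it. Roughly like the following, using the key from above; the zone file name is just an example:)

key dhcpupdate {
    algorithm hmac-md5;
    secret "u23ove098uy2ok3n12339==";
};

zone "homenetwork.net" {
    type master;
    file "homenetwork.net.db";
    allow-update { key dhcpupdate; };
};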

So that part worked, but I noticed that at some point I had screwed up one of the NS records (it had the IP prepended to the domain). So, again, to delete the bad record and add a new NS record:


#nsupdate
server 10.0.0.69
key dhcpupdate u23ove098uy2ok3n12339==
update delete homenetwork.net. NS 10.0.0.69.homenetwork.net.
send
update add homenetwork.net. 86400 IN NS ns.homenetwork.net.
send
^D

So that is all fine and well, but I am used to editing the zone files by hand, and I didn't realize until tonight that I could actually still do that. Anyone who has used DDNS with ISC's BIND will notice that the zones directory contains files with a .jnl extension for the zones that have dynamic DNS enabled. Those files are binary, so viewing them looks very weird. I always thought for some reason that those journal files were "it", and that the plain files I used to edit were no longer used. But they are: the old files are updated about every 15 minutes with the info from the .jnl files. And if you want to edit the files like you always have, but still use DDNS, you can. All you need to do is "freeze" updates, edit the files, and then "thaw" the zones. Freezing a zone flushes the info in the .jnl files out to the plain zone files you are used to editing. All you need to do is the following:


rndc freeze
(edit the zone files)
rndc thaw

Your changes will now be available.
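If you only want to touch one zone, rndc also takes the zone name as an argument (the zone file path here is just an example):

rndc freeze homenetwork.net
vi /var/named/homenetwork.net.db
rndc thaw homenetwork.net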

Ultra Restricted Shell in Solaris

How to set up a read-only environment on Solaris:

If you want to give a specific user read-only access to your Solaris machine via ssh, and want to log everything they do, it is fairly easy to set up. Here is a quick step-by-step guide.

1. First you will need to choose which restricted shell you want to use. In this case I used bash, as I wanted the .bash_history file to contain the exact time every command was run on the system. Since Solaris does not come with an rbash command, the only thing you need to do is make a copy of /usr/bin/bash at /usr/bin/rbash.
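In other words (bash turns on its restricted mode whenever it is invoked under the name rbash):

cp /usr/bin/bash /usr/bin/rbash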

2. Make the user's shell /usr/bin/rbash; this will make them use the restricted bash shell.

3. Make their home directory owned by root.

4. Make their .profile owned by root.

5. Create a .bash_history file and make it owned by that user. This should be the only file in their directory that is owned by the user.
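Assuming the user is named unixwiz with a home directory of /home/unixwiz (the names used in the demo further down), steps 2 through 5 boil down to:

usermod -s /usr/bin/rbash unixwiz             # step 2: restricted shell
chown root /home/unixwiz                      # step 3: home directory owned by root
chown root /home/unixwiz/.profile             # step 4: .profile owned by root
touch /home/unixwiz/.bash_history             # step 5: history file exists...
chown unixwiz /home/unixwiz/.bash_history     # ...and is owned by the user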

6. Pick a location for your "restricted" binaries to reside. If this user will be logging in to multiple machines and you have a shared file system (say /home), I would suggest making the directory in /home, say /home/rbin. This way you only have to put /home/rbin in their PATH.

7. Make symbolic links in your restricted binary directory to the binaries you want them to be able to run, e.g. ls, ps, more, prstat, passwd and hostname:

lrwxrwxrwx 1 root root 17 Feb 19 20:47 hostname -> /usr/bin/hostname*
lrwxrwxrwx 1 root root 11 Feb 19 19:56 ls -> /usr/bin/ls*
lrwxrwxrwx 1 root root 13 Feb 19 19:57 more -> /usr/bin/more*
lrwxrwxrwx 1 root root 15 Feb 19 19:56 prstat -> /usr/bin/prstat*
lrwxrwxrwx 1 root root 11 Feb 19 19:56 ps -> /usr/bin/ps*
lrwxrwxrwx 1 root root 11 Feb 19 19:56 passwd -> /usr/bin/passwd*

By making these symlinks instead of copying the actual binaries, you do not have to worry when you are moving between multiple platforms (i.e. SPARC, x86): each machine resolves the link to its own native binary, with no custom logic needed.
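Creating the links is a quick loop (a sketch, using the /home/rbin location from step 6):

mkdir /home/rbin
cd /home/rbin
for cmd in hostname ls more prstat ps passwd; do
    ln -s /usr/bin/$cmd $cmd
done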

8. Create the user's .profile with the following in it:

readonly PATH=/home/rbin
readonly TMOUT=900
readonly EXTENDED_HISTORY=ON
readonly HOSTNAME="`hostname`"
export HISTTIMEFORMAT="%F %T "; readonly HISTTIMEFORMAT
export PS1='${HOSTNAME}:${PWD}> '; readonly PS1

This makes it so they cannot change any of the environment variables. It sets their PATH to /home/rbin, sets an inactivity timeout of 15 minutes, and turns extended history on (this logs the time each command was executed in their .bash_history file). Finally, it sets their prompt and makes it read-only as well.

9. The last thing you need to do is change the permissions on the scp and sftp-server binaries so that the user cannot execute them. Otherwise, they would be able to download files and go anywhere on the server they want. (The restricted shell will prevent them from cd'ing out of their home directory.) To do this, I created a group, say rdonly, and made it the user's primary group. Now I do the following:


setfacl -m group:rdonly:--- /usr/lib/ssh/sftp-server
setfacl -m group:rdonly:--- /usr/bin/scp

So the files should show up like this now:

bash-3.00# ls -la /usr/lib/ssh/sftp-server /usr/bin/scp
-r-xr-xr-x+ 1 root bin 40484 Jan 22 2005 /usr/bin/scp
-r-xr-xr-x+ 1 root bin 35376 Jan 22 2005 /usr/lib/ssh/sftp-server

And the getfacl output will look like this:


bash-3.00# getfacl /usr/bin/scp

# file: /usr/bin/scp
# owner: root
# group: bin
user::r-x
group::r-x #effective:r-x
group:rdonly:--- #effective:---
mask:r-x
other:r-x

This makes it so that when the user tries to sftp or scp into the machine, it will immediately disconnect them, as they don't have permission to run those two executables.

That is about it. Don't forget to set their password, and make sure it has a policy on it requiring it to be changed often, to contain a combination of letters, numbers and special characters, and to be at least 8 characters in length.
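On Solaris, the aging half of that policy can be set with passwd itself; the complexity rules (PASSLENGTH, MINDIGIT, MINSPECIAL and friends) live in /etc/default/passwd. A sketch, using the demo user from below:

passwd unixwiz                    # set the initial password
passwd -n 7 -x 56 -w 7 unixwiz    # minimum age 7 days, maximum 56, warn 7 days before expiry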

So now when the user logs in they will see something similar to this:

[laptop:~] unixwiz% ssh unixwiz@fozzy
Password:
Last login: Thu Feb 19 22:10:15 2009 from laptop
fozzy:/home/unixwiz> cd /
-rbash: cd: restricted
fozzy:/home/unixwiz> vi /tmp/test
-rbash: vi: command not found
fozzy:/home/unixwiz> PATH=$PATH:/usr/bin
-rbash: PATH: readonly variable
fozzy:/home/unixwiz> timed out waiting for input: auto-logout

As you can see, it gives errors when you try to do something you are not allowed to do. The last line shows the timeout message, where the connection is closed due to inactivity.

Now if the administrator goes and looks at the user's .bash_history file, they will see this:

#1235099570
cd /
#1235099577
vi /tmp/test
#1235099587
PATH=$PATH:/usr/bin

The #number is the exact time the user ran the command below it, expressed as seconds since the epoch.
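To turn one of those numbers into a human-readable date, any perl on the box will do (a quick sketch):

perl -e 'print scalar localtime(1235099570), "\n"'   # prints that epoch value as a local date and time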

Increase in Solaris patch cluster size

Justin pointed out to me tonight that the new Recommended and Security patch clusters for Solaris 10 were quite large, and right he was. It seems that in the last month or so the Solaris 10 Recommended patch cluster has grown from 400MB to over 1GB in size (not to mention that he said you need almost 2.5GB of space to unzip it).

Looking at the cluster README, I can take a guess as to why it is so large: there are multiple kernel patches. There are also updates to Firefox, Thunderbird, Adobe Acrobat Reader and the Adobe Flash plugin, multiple Java updates, and a patch for the defunct Mozilla 1.7 browser (which in my opinion should just be removed completely).

The interesting part is that the x86 patch cluster is over 300MB smaller than the SPARC one. The only major difference I can think of is that x86 doesn't have Adobe Acrobat Reader yet.

I wish Sun would just offer an ISO download of all the patches (sort of like the EIS DVD) to the public. I also need to find a faster way to download the patches.

OpenSolaris vs Solaris

This weekend I went to install the new Communications Suite with Convergence, and I decided to install OpenSolaris 2008.05 on my machine and put the Comms Suite in a zone on it (so I could easily blow it away after my testing was done).

Let me be probably not the first to say that OpenSolaris != Solaris. I have been using Solaris 10 since it was in beta, and OpenSolaris threw me for a couple of loops.

First are some of the cool things I liked:

1. The interface: it is updated and seemed a lot faster.

2. The ease of "patching": a pkg image-update only took about half an hour.

3. ZFS root made it easy to roll back changes.

Now for the parts I had problems with and did not like so well:

1. I had to download a driver for my Ethernet card, as the one Sun delivers (sk98sol) is still too old and did not support my card, which is built onto a 3+ year old motherboard.

2. To create a zone, you MUST have a network connection (and, for the time being at least, one to the Internet). This really made me mad, as I sometimes don't have access to the Internet, and if I need to create a zone I don't want to have to wait for it to download 200+ MB of packages that are already on the machine in the first place.

3. No more "full root" zones. I created a zone in the hopes of installing the Comms Suite in it, only to find out that it was not a full root zone and that stuff the Comms installer requires wasn't there, so I could not install it. Such simple things as unzip and perl are missing from a newly created zone.

In the end, I ended up reinstalling the box with Solaris 10 05/08, which was a task in itself. See, when you install OpenSolaris it makes the root drive ZFS, and it did some weird things to the VTOC. So when I went to do the install of the "older" Solaris 10 05/08, the installer would show me the disk and let me "carve" it up like I wanted, both in the GUI and via the command line, but when the install went to go on, the installer always came back saying there was not enough disk to install Solaris. What I ended up having to do was run "format -e", then fdisk, and delete the Solaris partition that was made by OpenSolaris, and let the Solaris installer create its own fdisk partition again.
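From memory, the workaround looked roughly like this (format's menus vary a bit between releases, so treat it as a sketch):

# format -e
format> fdisk     (brings up the fdisk menu for the selected disk;
                   delete the Solaris partition OpenSolaris created)
format> quit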

So after finally getting Solaris 10 installed and the latest Recommended/Security/Sun Alert patches put on, I called it a night and left the Comms install for next weekend.

Overall I think OpenSolaris is going in the right direction, but there are a lot of things that need to be fixed. The biggest is zones: there should be an option for "cloning" the already-installed OS, since it is already on a ZFS pool. The second is that there should be an option, when creating a zone, for what kind of zone it should be: full (which would load every package, so you don't have to try to do it yourself), sparse, or maybe a new kind called a jail, which has everything in it read-only.