Moving VMs between hosts

About a year ago I purchased a 1U IBM X3550 server to run VMware vSphere 5 on. While it was cool to have a server with dual quad-core procs and 8 gigs of RAM, the noise it put off was too much for my family room. (Just think of half a dozen 1-inch fans running at 15,000 RPM almost constantly.) Recently I have been spending more time in the family room, and the noise has gotten to the point where it is almost impossible to do anything in the room without hearing it (like watch TV, a movie, play a game, etc.). So I started looking at my favorite used hardware site, geeks.com, for a new “server”. Well, it finally arrived today: an HP XW8600 workstation. It is another dual quad-proc box, but it has 16GB of RAM, 12 SATA ports, a larger case, and best of all, it is almost completely silent.

So with it installed, I needed to start moving the VMs from the IBM server to the HP server. In an enterprise environment this usually isn’t a problem, as you typically have shared storage (a SAN) that each of the hosts connects to. Well, in my little home lab I don’t have shared storage. I did try to use COMSTAR in Solaris 10 to export a “disk” as an iSCSI target. While that would work, it was going to take forever to transfer 1TB of VMs from one server, through a VM running on my Mac, and back to the new server.

So a-googling I went, and what I found was a much easier way to copy the VMs over: ovftool, which runs on Windows, Linux, and Mac. It lets you export and import OVF files to and from a VMware host. The side benefit is that you can export from one host and import to another all on one command line.

So I downloaded the Mac version and started copying. The basic syntax looks like this:


./ovftool -ds=TargetDataStoreName vi://root@sourcevSphereHost/SourceVM vi://root@destvSphereHost

So say one of my VMs is called mtdew, it is thin provisioned on the source host and I want it the same on the destination host, and my datastore is called “vmwareraid”; I would run this:

./ovftool -ds=vmwareraid -dm=thin vi://root@ibmx3550/mtdew vi://root@hpxw8600

where ibmx3550 is the source server and hpxw8600 is the destination server. If you don’t specify “-dm=thin”, then when the VM is copied over its disk will become “thick”, i.e. it will take up the entire space that was allocated when it was created. (For example, a 50GB disk that only has 10GB in use would still use 50GB on the destination if -dm=thin is not used.)

There are some gotchas that you will have to look out for:

  1. Network configs. I had one VM with multiple internal networks defined. Those were not defined on the new server, so there is a “mapping” that you have to do (see the sketch after this list). I decided I didn’t need them on the new server, so I just deleted them before I copied it over.
  2. VMs must be in a powered-off state. I tried one in a “paused” (suspended) state and it did not want to run right.
  3. It takes time. Depending on the speed of the network, disks, etc., it will take a long time, and the VMs have to be down while it happens. So it is definitely not a way to move “production” VMs unless you have a maintenance window.
  4. It shows the percent complete as it goes, which is cool, but the way it does it is weird. It will sit at 11 or 12 percent, then I turn my head and all of a sudden it says it is completed.
  5. I did have an issue with one VM, and I am not sure what happened to it, but when I try to copy it I get “Error: vim.fault.FileNotFound”… It may be due to me renaming something on the VM at some point in the past.
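
Regarding the network mapping in the first gotcha: ovftool can remap networks right on the command line with its --net option, so you don’t have to delete them. A rough sketch, assuming the VM’s old internal network was called “internal1” and the destination host has a network called “VM Network” (both names are just examples):

./ovftool -ds=vmwareraid -dm=thin --net:"internal1"="VM Network" vi://root@ibmx3550/mtdew vi://root@hpxw8600

I believe you can repeat --net once per source network if the VM has more than one.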

Hope this helps some other “home lab user”…


Fixing Dynamic DNS

My Solaris machine that ran DHCP, DNS, and routing for my home network died tonight after having been running non-stop for over three and a half years. So I had to re-set up my DHCP and DNS on another machine. Luckily I had backed up the stuff that was on the old machine a while back, but some info had changed. In particular, the Dynamic DNS that I had set up and linked with the DHCP server (I use ISC’s DHCP and DNS servers). So I got the backup restored on another server and everything running, but a couple of hosts would not work. Come to find out, the backup I had was several months old (not a problem, since the machine did not change that much), but what did change was my IP address to the world (it changed some time in March or April after having been the same for over three years).

Well, I had forgotten how to update the Dynamic DNS records, so I had to go hunting. This is what I did:

1. You can update the info dynamically using nsupdate (if you have it configured to allow that, which I did). So I did the following:


#nsupdate
server 10.0.0.69
key dhcpupdate u23ove098uy2ok3n12339==
zone homenetwork.net
update delete homenetwork.net
send
update add homenetwork.net 18000 IN A 10.0.0.1
send
^D
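
To double-check that the update took, you can query the server directly with dig (which comes with BIND); something like this should return the new address:

dig @10.0.0.69 homenetwork.net A +short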

So now that part worked, but I noticed that I had screwed up one of the NS records at some point (it had the IP prepended to the domain). So again, delete the bad record and add a new NS record:


#nsupdate
server 10.0.0.69
key dhcpupdate u23ove098uy2ok3n12339==
update delete homenetwork.net. NS 10.0.0.69.homenetwork.net.
send
update add homenetwork.net. 86400 IN NS ns.homenetwork.net.
send
^D

So that is all fine and well, but I am used to editing the zone files by hand… I didn’t realize until tonight that I could actually still do that. Anyone who has used DDNS from ISC will notice that in the zones directory there are files with a .jnl extension for the zones that have dynamic DNS enabled. Those files are binary, so viewing them looks very weird. I always thought for some reason that those journal files were “it”, and that the files I used to edit were no longer used. But they are… the old zone files are updated about every 15 minutes with the info that is in the .jnl files. If you want to edit the files like you always have, but still use DDNS, you can. All you need to do is “freeze” updates, edit the files, and then “thaw” the zones. When you freeze a zone, the info in the .jnl file is flushed to the zone file you are used to editing. All you need to do is the following:


rndc freeze

Edit the files


rndc thaw

Your changes will now be available.
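
One extra note: freeze and thaw also take an optional zone name, so if you only want to touch a single dynamic zone, for my setup it would be something like:

rndc freeze homenetwork.net
(edit the zone file)
rndc thaw homenetwork.net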

Ultra Restricted Shell in Solaris

How to set up a read-only environment on Solaris:

If you want to give a specific user read-only access to your Solaris machine via ssh, and you want to log everything they do, it is fairly easy to set up. Here is a quick step-by-step guide.

1. First you will need to choose which restricted shell you want to use. In this case I used bash, as I wanted the .bash_history file to contain the exact time every command was run on the system. Since Solaris does not come with an rbash command, the only thing you need to do is make a copy of /usr/bin/bash as /usr/bin/rbash.
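
A minimal sketch of that step (bash switches into restricted mode when it is started under the name rbash, so a plain copy is all it takes):

cp -p /usr/bin/bash /usr/bin/rbash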

2. Make the user’s shell /usr/bin/rbash; this will make them use the restricted bash shell.

3. Make their home directory owned by root.

4. Make their .profile owned by root as well.

5. Create a .bash_history file and make it owned by that user. This should be the only file in their home directory that is owned by the user.
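
Here is a rough sketch of steps 2 through 5, assuming the user is called unixwiz and their home directory is /home/unixwiz (adjust names and paths to your setup):

usermod -s /usr/bin/rbash unixwiz
chown root:root /home/unixwiz
touch /home/unixwiz/.profile /home/unixwiz/.bash_history
chown root:root /home/unixwiz/.profile
chown unixwiz /home/unixwiz/.bash_history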

6. Pick a location for your “restricted” binaries to reside. If this user will be logging in to multiple machines and you have a shared file system (say /home), I would suggest making the directory in /home, say /home/rbin. This way you only have to put /home/rbin in their PATH.

7. Make symbolic links in your restricted binary directory to the binaries you want them to be able to run, i.e. ls, ps, more, prstat, passwd, and hostname:

lrwxrwxrwx 1 root root 17 Feb 19 20:47 hostname -> /usr/bin/hostname*
lrwxrwxrwx 1 root root 11 Feb 19 19:56 ls -> /usr/bin/ls*
lrwxrwxrwx 1 root root 13 Feb 19 19:57 more -> /usr/bin/more*
lrwxrwxrwx 1 root root 15 Feb 19 19:56 prstat -> /usr/bin/prstat*
lrwxrwxrwx 1 root root 11 Feb 19 19:56 ps -> /usr/bin/ps*
lrwxrwxrwx 1 root root 11 Feb 19 19:56 passwd -> /usr/bin/passwd*

By making these symlinks instead of copying the actual binaries, you do not have to worry about going between multiple platforms (i.e. SPARC and x86) or adding custom logic to pick the right binary.
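
A quick sketch of creating the directory and those links, assuming /home/rbin is the location you picked:

mkdir /home/rbin
cd /home/rbin
ln -s /usr/bin/ls /usr/bin/ps /usr/bin/more /usr/bin/prstat /usr/bin/passwd /usr/bin/hostname .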

8. Create the user’s .profile with the following in it:

readonly PATH=/home/rbin
readonly TMOUT=900
readonly EXTENDED_HISTORY=ON
readonly HOSTNAME="`hostname`"
readonly export HISTTIMEFORMAT="%F %T "
readonly export PS1='${HOSTNAME}:${PWD}> '

This makes it so they cannot change any of the environment variables. It sets their PATH to /home/rbin, sets an inactivity timeout of 15 minutes, turns extended history on (this logs the time each command was executed in their .bash_history file), and finally sets their prompt and makes it read-only as well.

9. The last thing you need to do is change the permissions on the scp and sftp-server binaries so that the user cannot execute them. Otherwise they would be able to download files and go anywhere on the server they want. (The restricted shell will prevent them from cd’ing out of their home directory.) To do this, I created a group and made it the user’s primary group. Say the group was called rdonly.
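
Creating that group and making it the user’s primary group is roughly this (again assuming the user is unixwiz):

groupadd rdonly
usermod -g rdonly unixwiz

With the group in place, deny it all access to the two binaries: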


setfacl -m group:rdonly:--- /usr/lib/ssh/sftp-server
setfacl -m group:rdonly:--- /usr/bin/scp

So the files should show up like this now:

bash-3.00# ls -la /usr/lib/ssh/sftp-server /usr/bin/scp
-r-xr-xr-x+ 1 root bin 40484 Jan 22 2005 /usr/bin/scp
-r-xr-xr-x+ 1 root bin 35376 Jan 22 2005 /usr/lib/ssh/sftp-server

And the getfacl output will look like this:


bash-3.00# getfacl /usr/bin/scp

# file: /usr/bin/scp
# owner: root
# group: bin
user::r-x
group::r-x #effective:r-x
group:rdonly:--- #effective:---
mask:r-x
other:r-x

This makes it so that when the user tries to sftp or scp into the machine, they are immediately disconnected, since they don’t have permission to run those two executables.

That is about it. Don’t forget to set their password, and make sure it has a policy on it requiring that it be changed often, contain a combination of letters, numbers, and special characters, and be at least 8 characters long.
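
On Solaris most of that policy lives in /etc/default/passwd; a rough example of the sort of settings I mean (the values are just illustrations, check the passwd man page on your release):

PASSLENGTH=8
MINNONALPHA=2
MAXWEEKS=4
HISTORY=4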

So now when the user logs in they will see something similar to this:

[laptop:~] unixwiz% ssh unixwiz@fozzy
Password:
Last login: Thu Feb 19 22:10:15 2009 from laptop
fozzy:/home/unixwiz> cd /
-rbash: cd: restricted
fozzy:/home/unixwiz> vi /tmp/test
-rbash: vi: command not found
fozzy:/home/unixwiz> PATH=$PATH:/usr/bin
-rbash: PATH: readonly variable
fozzy:/home/unixwiz> timed out waiting for input: auto-logout

As you can see, it will give you errors if you try to do something that you are not allowed to do. The last line shows the timeout message where it closes the connection due to inactivity.

Now if the administrator goes and looks at the user’s .bash_history file, they will see this:

#1235099570
cd /
#1235099577
vi /tmp/test
#1235099587
PATH=$PATH:/usr/bin

Each #number line is the exact time that the user ran the command below it, given in seconds since the epoch…
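
If you ever need to turn one of those numbers back into a human-readable date, a quick perl one-liner does it (perl ships with Solaris):

perl -e 'print scalar localtime(1235099570), "\n"'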