set default DISK$DATA:[OS.EOL]

Seeing the news the other day that HP is discontinuing OpenVMS brought back some memories, mostly of all the different operating systems I have used that are no longer around or have changed a lot. Back in my undergrad days, one of the first OSes we used to program on was OpenVMS on a VAX. It was for an engineering class and we had to use Fortran 77. I remember our quotas used to be about 5MB in size, which at the time was “huge”.

So, to list some of the other OSes I have seen go by the wayside (and other computer-related items that had a huge effect on where I am today):

1. OpenVMS (used it between 1994 and 1999)

2. VM/ESA (not really gone, now called z/VM, but haven’t touched it since about 2001)

3. Gopher (this is the “original” web…)

4. IRIX (SGI’s UNIX platform. I still have 2 SGI Indys and a copy of 5.3 and I believe 6.5, but haven’t had them on in years; maybe a vacation project some time.)

5. SunOS (not Solaris, but the old BSD-based SunOS 4.x). My, how things have changed in the 19 years that I have been doing Solaris work.

6. mSQL (mini SQL). Not really gone, but surpassed by others (MySQL, MariaDB, etc). I used mSQL for my first PHP/FI + DB + Apache installation on a Solaris 2.6 box. I wrote a network management application that controlled DNS, DHCP, etc for university dorm connection management.

7. Trumpet Winsock, for the good old Windows 3.1 days when you needed a way to do TCP/IP over modem or Ethernet.

8. NCSA Mosaic, the web browser that is credited with popularizing the WWW. Used to use this on some old SGI and DEC machines.

9. ULTRIX, DEC’s version of UNIX. It was on a lot of DECstations in the Engineering department and one computer in the CS department. I had a teacher who made us make sure everything compiled on it as well as on the Solaris or Linux hosts.

10. AltaVista, the search engine to use before Google came around. Now it is just a “front end” to Yahoo search 🙁

11. Atari 400, used to have one of these at my grandparents’ house to tinker on.

12. Commodore 64, used to have a couple of these when I lived at home. That is where I learned some BASIC programming. (Later I went on to try Visual Basic programming on Windows 3.11 on an 80486 DX4-100 AMD PC.)

13. BeOS, was a really neat idea with excellent media support; unfortunately it was around the time of the PC vs Mac battle, so getting buy-in was hard.


This all also brings back memories of how rudimentary computers were back then, and of the lack of security. There was no SSH; everything on the VM, OpenVMS and UNIX machines was done through telnet. There was no SSL, and people didn’t think twice about typing a credit card number into a web site.

I also remember doing web surfing with Lynx on various UNIX systems. And what goes along with web browsing? Email. The first GUI email client I remember using was Pegasus Mail on a Novell NetWare-based mail system. Once people started doing POP3 mail, they switched over to Eudora Mail, which I used for a while, but not a lot. For some reason I stuck with Pine, a text-based mail reader, mostly because I used it on the server that received all the mail. (And to totally geek out, there were times when I would telnet to the POP3 port on the mainframe and read my mail by issuing the POP commands by hand.)

As for personal computers, I have had quite a few since my first one. My first computer only had a 40MB hard drive in it. It was a KLH-brand 80386SX 16 that I bought from Phar-Mor. I think I had it maxed out at 4MB of RAM, which at the time was huge. I remember trying to play some game on it (I keep thinking it was SimCity, but may be wrong) and it needed more video RAM, because it only came with 128K. So I had to buy more to bring it up to, I think, 384K.

As a list of what I have had or still have, here goes:

  1. KLH 80386SX 16MHz – My first; no longer have it. It came with a 40MB hard drive and an EGA 15-inch monitor.
  2. AMD 80486DX4 100MHz – Used this to run Windows 3.11, Linux and later Solaris 2.6. It came with a 320MB hard drive. I later paid close to $300 for a 1.6GB hard drive for it. It had a VESA Local Bus video card and a Sound Blaster 16 sound card. No longer have this computer.
  3. Intel Pentium II 266MHz – Bought this in 1997 from a company called Vektron (which later went out of business, like all the fly-by-night computer places back in the early days). It had 32MB of RAM and a 500MB hard drive. It ran Windows 95, Windows NT, BeOS, Solaris and Linux. (I bought bigger and more hard drives later, I just can’t remember what all was in it.) I actually still have this machine; its most recent use was as a router for my home network running Solaris 10 with 3 NICs (one on Comcast, one on Verizon and one on my home network). The hard drive in it died a couple of years ago, so I turned it off; it is still sitting in a rack though.
  4. Sun SPARCstation 2 – This was my first “workstation”. I got it second hand from a friend’s company. It was where I cut my teeth on Solaris. It ran Solaris 2.5 when I got it, and over the years I upgraded it to Solaris 7. It only had a 40MHz processor and 64MB of RAM. It had 2 huge external 800MB disk packs and a freakishly heavy 17-inch Sony monitor that used a 13W3 connector with BNC ends. I still have this one, but the disk packs both died, so it hasn’t been on in years.
  5. Sun Ultra 5 – 360MHz, 128MB of RAM. One of the first IDE-based lower-end workstations from Sun. I still have this, but I think the power supply is bad, as I can’t get it to turn on :(. When it ran, I had Solaris 9 on it.
  6. SGI Indy – 2 of these, 133MHz with 96MB of RAM. One of the coolest “workstations” I ever owned. I believe they both still run, but they haven’t been on in years. One ran IRIX 5.3 and the other ran IRIX 6.5.
  7. Dual Intel Pentium III 933MHz – Bought this in probably 2001, I think. It is huge: a full tower with onboard IDE RAID (which only works with Windows because of driver issues). Right now it has 1.5GB of RAM in it, ~2TB of disk, and runs Solaris 10 with 7 zones on it.
  8. IBM ThinkPad i1100, Celeron 500MHz. This one was given to me as a result of work being done for a company. It was my first laptop, and I still have it today. However, its stats are very underwhelming from today’s point of view. The display is an LCD, but not TFT, so there are all kinds of shadows and the picture isn’t crisp. It also only had a 5GB hard drive in it, which means that after installing Windows 2000 on it, there was only maybe a gig free. It also had no floppy drive and no network ports. So I bought a Linksys WAP11 back in the day (probably in 2002 when I got this) for upwards of $300 so I could have wireless internet on it.
  9. ThinkPad A22p – 900MHz Pentium III. I bought this one as a replacement for the first. Side by side this one is HUGE, as it has a 15-inch display that runs at 1600×1200. It also had a 30GB hard drive (which was split into three 10GB chunks: one for Windows XP NTFS, one for Solaris 10 and one for FAT32 to share files between the two OSes).
  10. AMD 3600+ – Got this one in 2005. It currently runs a combination of Windows XP and Windows 7. Has about 2.5 TB of disk on it.
  11. Sun X2100 – This server. Currently running Solaris 10, with a surprisingly small 160GB of disk with 4 zones on it.
  12. Apple MacBook Pro 2.0GHz – This was one of the first Intel-based Macs, released in 2006. It had a dual-core 2.0GHz processor, 2GB of RAM and a 100GB hard drive. It did have its issues (mostly battery and power adapter ones), but it ran solid for about 5 years. In the fall of 2011 the logic board “died” and it will no longer run in full “user” mode. (I think it is the graphics part of the board.) I still have it, hoping for a price drop on replacement boards some day.
  13. Apple Mac Pro – Dual quad-core Xeon 2.8GHz with 10GB of RAM. This is the best desktop I have ever had. It is fast and quiet. Right now I think I have close to 13TB of disk on it (both internal and external). I also dual boot it with MacOSX 10.8 and Windows 7 (for a couple of games).
  14. Apple MacBook Pro 2.8GHz Core i7 – the replacement for the one that died above. It is hands down probably 4 to 8 times faster than the 2.0GHz one that I had before.
  15. Sun V20z – Used to run VMware ESX 3.5 with a Sun T3 fibre-connected disk array. The V20z is fully loaded with processors (2) and RAM (16GB). One loud machine…
  16. IBM x3550 – Dual quad-core Xeon with 8GB of RAM. It runs VMware vSphere 5.0, which I used to play around with virtualizing my house servers. Unfortunately it is too loud to leave running 24×7, so it is only on when needed.
  17. HP xw8600 workstation – Dual quad-core Xeon with 16GB of RAM. This is my “production” VMware server at home. It has 3TB of disk in it and runs probably 11 VMs all the time. It replaced the noisy IBM one, and it is super quiet.

As for the operating systems I keep current with, there are many, and with VMware it is possible to have “test” versions of everything sitting around, which helps a lot. Basically, the following is what I keep running:

  1. MacOSX 10.7 and 10.8
  2. Windows XP, 7, 8, 2008, 2008R2, 2012
  3. CentOS 6.3
  4. Solaris 10, 11
  5. OpenIndiana 151
  6. pfSense (FreeBSD)
  7. OpenBSD
  8. Ubuntu Linux

Well, that is about enough nostalgia for tonight. I am trying to think of other things to put back on the blog so I can start updating it more often. If you have any ideas, leave a comment (open for 30 days only, to keep the spammers away..).

ZFS + PCA, goodbye UFS

ZFS has been around for a while now. I have used it for some data partitions, but when Sun added the ability to use it as the root filesystem, I was a little hesitant to start using it there. Part of it was because I know that if a root disk on UFS crashes, I can get into it pretty easily. ZFS was different, and I was never really comfortable using it for root, until last night. I have been looking for a way to keep a lot of Solaris machines up to date with the Recommended and Security patches, and doing it with UFS seemed to take forever. Part of the problem with keeping them updated on UFS was the sheer downtime required to install the cluster in single-user mode. Multiply that by X number of machines and it is a never-ending chore to update them.

This weekend I started looking at the PCA tool, since I have seen a lot of people mention good things about it. So off to my test machine, where I installed a new VM with Solaris 10 10/09 (Update 8) in it. After the install was finished, using a ZFS root, I decided to set up a PCA proxy server on another machine. The purpose of the PCA proxy server is that it will be the one with access to the Internet to download the patches from SunSolve. It was extremely easy to do (in fact I have it running in a zone on my main server):

  1. Created a new plain zone (it can be on anything, but I wanted to keep it separate).
  2. Configure the apache2 instance on the machine by copying /etc/apache2/httpd.conf-example to /etc/apache2/httpd.conf.
  3. Edit the httpd.conf and change the line that says “Timeout 300” to “Timeout 1800”. You need to make it at least 1800, if not more, depending on the speed of your Internet connection. At 22Mb/s, 1800 was OK for me.
  4. Create a directory /var/apache2/htdocs/patches, make it owned by webservd:webservd and 755 as the permissions.
  5. Download and save a copy of pca in /var/apache2/cgi-bin and call it pca-proxy.cgi. Make it owned by webservd:webservd and 755 as the permissions.
  6. Create a file in /etc called pca-proxy.conf. In it place the following:
    xrefdir=/var/apache2/htdocs/patches
    patchdir=/var/apache2/htdocs/patches
    user=sunsolveusername
    passwd=sunsolvepassword
    
  7. In order to make the proxy run a little faster on first use, I decided to download and “cache” the latest Recommended and Security patch cluster. (You don’t need to do this; if the patches are missing, the pca proxy server will download them. Considering my machine needed 156 patches, this was faster…) Once the Recommended and Security cluster was downloaded, I placed it in a temp location and unzipped it. Once the cluster is unzipped, I needed to make zip files of each patch (so that the pca client can download the zip file). To do this, I went into tmp/10_x86_Recommended/patches and ran the following:
    for i in `cat patch_order`
    do
    zip -r $i $i
    done
    
  8. Once the zipping is done, move all the patch zip files in to the /var/apache2/htdocs/patches directory.
  9. Start up the apache2 service: “svcadm enable apache2”.
  10. Now it is time to configure the client. Copy the pca script to the client machine and put it some place; I used /root.
  11. Next create a config file, /etc/pca.conf, with the following in it:
    patchurl=http://pca-host/cgi-bin/pca-proxy.cgi
    xrefurl=http://pca-host/cgi-bin/pca-proxy.cgi
    syslog=local7
    safe=1
    

    The first two lines tell pca where to find the patches and the patchdiag.xref file. The syslog line tells it to log all activity to the local7 syslog facility. The last line, “safe=1”, means safe patch installation: it checks all files for local modifications before installing a patch, and a patch will not be installed if files with local modifications would be overwritten.

  12. Now that the config file is created, make sure that syslog is set to handle local7 info; I have mine set so local7.info goes to /var/adm/local7.log (see the syslog.conf sketch after this list). PCA will log the patch installation activity to that log, e.g.:
    Apr 11 17:10:50 zfstest2 pca: [ID 702911 local7.notice] Installed patch 124631-36 (SunOS 5.10_x86: System Administration Applications, Network, and C)
    Apr 11 19:07:04 zfstest2 pca: [ID 702911 local7.notice] Failed to install patch 118246-21 (comm_dssetup 6.4-5.05_x86: core patch) rc=15
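
For reference, the logging side of that is just a standard syslog.conf entry plus a refresh of syslogd; a minimal sketch, assuming the same log file name I used above (remember syslog.conf wants a tab between the selector and the action):

# Add to /etc/syslog.conf (selector and action separated by a tab):
#   local7.info	/var/adm/local7.log
# Then create the log file and have syslogd re-read its config:
touch /var/adm/local7.log
svcadm refresh system-log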

Now comes the part that makes ZFS worth using… We are going to create a new “boot environment” and then patch that environment.

  1. First we need to create a new BE:
    lucreate -n p20100411

    The p20100411 can be anything; I used today’s date since I patched the machine today. It makes it easy to remember when the machine was last patched.

  2. Now we need to mount it:
    lumount p20100411 /.alt.root 
  3. Now we can start patching:
     pca -i -R /.alt.root
  4. Because I cached most of the patches locally on my pca proxy, it should not take too long for it to download, unzip and install the patches in the alt root.
  5. Once the patching is done, it will give you summary lines telling you how many patches were downloaded and installed:
    Download Summary: 156 total, 156 successful, 0 skipped, 0 failed
    Install Summary : 156 total, 156 successful, 0 skipped, 0 failed
    
  6. Now we need to unmount the alt root and activate it to boot:
    luumount p20100411
    luactivate p20100411
    
  7. Now just reboot the machine. You MUST use init or shutdown; if you don’t, then it won’t boot into the new boot environment. I use:
    shutdown -g0 -i6 -y
  8. Depending on how long it takes for your machine to boot, when it comes back up it should be on the new ZFS boot environment:
    bash-3.00# df -h
    Filesystem             size   used  avail capacity  Mounted on
    rpool/ROOT/p20100411    49G   6.6G    38G    15%    /
    
  9. Now you can run that newly patched system for however long it takes to verify your patches didn’t break anything. Once you are sure everything is OK, you can delete the old install; in my case:
    ludelete s10x_u8wos_08a
    

    This should let you recover a little bit of space. In my case it was about 1.5 gig (the zfs list sketch below shows one way to check).
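
If you want to watch that space come back, a quick zfs list does it; a minimal sketch, assuming the default root pool name of rpool:

# Compare used/available space on the root pool before and after the ludelete
zfs list -o name,used,avail rpool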

The only thing left is to set up a bunch of scripts to do “pca -l” about once a month to see what patches need to be installed, and to log that. PCA has a lot of other functions beyond what I went over here; in a couple of words, it seems to be kick ass. On top of that, it is free! The ability to create new BEs will definitely help anyone with the right amount of disk space keep their system up to date.
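
As a rough idea of what that monthly check could look like in root’s crontab (the schedule and log file name are just placeholders I picked; /root/pca is where I put the script earlier):

# Run pca -l at 03:00 on the 1st of every month and keep the report
0 3 1 * * /root/pca -l >> /var/adm/pca-missing.log 2>&1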

One tip: make sure you watch the output of the luactivate command. This is what is displayed:

**********************************************************************

The target boot environment has been activated. It will be used when you
reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You
MUST USE either the init or the shutdown command when you reboot. If you
do not use either init or shutdown, the system will not boot using the
target BE.

**********************************************************************

In case of a failure while booting to the target BE, the following process
needs to be followed to fallback to the currently working boot environment:

1. Boot from Solaris failsafe or boot in single user mode from the Solaris
Install CD or Network.

2. Mount the Parent boot environment root slice to some directory (like
/mnt). You can use the following command to mount:

     mount -Fzfs /dev/dsk/c1t0d0s0 /mnt

3. Run the luactivate utility without any arguments from the Parent boot
environment root slice, as shown below:

     /mnt/sbin/luactivate

4. luactivate, activates the previous working boot environment and
indicates the result.

5. Exit Single User mode and reboot the machine.

**********************************************************************

Bad Oracle, Leave Solaris free

I just read Ben Rockwood’s “Solaris No Longer Free” post. All I can say is I am severely disappointed in how Oracle has pretty much killed Sun and its products. One of the best things that Sun ever did was allow people to use Solaris for free; the caveat was that you only got the security patches for free. About a month or so ago, Oracle decided that you couldn’t get any patches unless you had a support contract. OK, I can sort of see your position on that, but why the hell are you now going to start charging for the OS as well? You have taken one of the best server OSes in the world and basically killed it. What you are going to do is push more people to Linux (eck, I hate Linux..). I am not sure why a company that has the number one OS would ever push people away from it. Linux is still immature in many ways and can’t scale up at all, unless you want to scale out and use up more power and floor space.

I hope that the Solaris user community will come around like they did when Sun tried to kill Solaris x86, and let Oracle know what a bad idea this is.

Poor Man’s Network Traffic Meter

I set out tonight to find a way to log “network traffic” through the interfaces on my Solaris box. What I wanted was the actual amount of traffic going through the interfaces. My first thought was to use netstat, but that only shows packets, and packets can be of differing sizes. So I ended up using kstat. I wrote this simple little script to grab the interface names and then use kstat to pull the byte counters for each card:

#!/bin/ksh
# Get the list of Ethernet cards in the machine:
MyHOST="`hostname`"
OS="`uname -r`"
if [ ${OS} = "5.10" ] ; then
   MyETHERS="`/usr/sbin/dladm show-dev | awk '{print $1}'`"
else
   MyETHERS="`/usr/sbin/ifconfig -a | awk '{print $1}' | grep \":\" | awk -F':' '{print $1}' | sort -u | grep -v \"^lo0\"`"
fi
COUNT=0
while [ $COUNT -lt 800 ];
  do
  for i in `echo $MyETHERS`
  do
    # Cumulative output/input byte counters from the "net" kstat class
    OBYTES="`/usr/bin/kstat -p -c net -n $i -s obytes64 | awk '{print $2}'`"
    RBYTES="`/usr/bin/kstat -p -c net -n $i -s rbytes64 | awk '{print $2}'`"
    # Timestamp of this sample (seconds since the epoch)
    SNAPTIME="`perl -e \"print(time());\"`"
    echo "${MyHOST},${i},${SNAPTIME},${OBYTES},${RBYTES}"
    OBYTES=
    RBYTES=
    SNAPTIME=
  done
  sleep 10
  COUNT="`expr $COUNT + 1`"
done

You have to be root to run this, but that is only because of the dladm command I am using on Solaris 10. If you don’t want to run it as root, comment out the if statement and just leave the line that uses ifconfig. When you run it, it will produce output like this:

gonzo,elxl0,1252806095,37255837,715035
gonzo,rge0,1252806096,605012664015,863919572622
gonzo,elxl0,1252806106,37255837,715035
gonzo,rge0,1252806107,605012664377,863919573090

The output is formatted as hostname, Ethernet interface, time of the run, bytes sent, and bytes received. (The time is the epoch time, and the byte counts are cumulative counters.) The above script will only run 800 times, pausing 10 seconds between each run of the kstat. You can change how long it runs by changing the line:

while [ $COUNT -lt 800 ]; 

Just change the 800 to some other number. The second item to change is the “interval” time, and that is controlled by the:

sleep 10

You probably don’t want to run this every second. Every 10 seconds is about right, as it allows me to capture the traffic without much overhead.

The second script I did was a little PHP script (though it could be done in probably any language; I use PHP for just about everything). This script takes the output from the file you created above (just run the above script and redirect it to a file) and gives you human-readable output.
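
To tie the two pieces together, my workflow looks roughly like this; the script file names here are just placeholders, the only name that matters is Netstat.csv since that is what the PHP script opens:

# Collect samples with the ksh script above, then turn the counters into rates
./nettraffic.ksh > Netstat.csv
php parse_netstat.php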

Note that if you have more than one Ethernet card active in your system, you currently need to “grep” out each card into its own file, as shown below. If you have a bunch of machines, you should probably import the data from above into a MySQL DB and then modify this script to pull the info from it.
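
Splitting by card is just a grep on the interface field; for instance, with the rge0 interface from my output:

# One file per interface, so the diff logic below only ever compares like with like
grep ",rge0," Netstat.csv > Netstat-rge0.csv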

Here is the script to just parse one network card:

<?php
// Parse the collector output (hostname,interface,epoch,obytes,rbytes per line)
// and print the per-interval transfer rate.
date_default_timezone_set("EST");
$fp=fopen("Netstat.csv","r");
if ($fp) {
  $i=0;
  while (!feof($fp)) {
    $buffer=fgets($fp);
    if ($buffer) {
      list($hostname[$i],$ethernet[$i],$time[$i],$sending[$i],$receiving[$i]) = explode(",",$buffer);
      $newtime=date('r',$time[$i]);
      if ($i != 0 ) {
        $TDIFF=($time[$i]-$time[$i-1]);
        $SDIFF=($sending[$i]-$sending[$i-1])/$TDIFF/1024/1024;
        $RDIFF=($receiving[$i]-$receiving[$i-1])/$TDIFF/1024/1024;
        printf("%s|%s|%s|%3.3f|%3.3f\n",$hostname[$i],$ethernet[$i],$newtime,$SDIFF,$RDIFF);
        $SDIFF="";
        $RDIFF="";
        $TDIFF="";
      }
      $i++;
    }
  }
}
fclose($fp);
?>

In the above, I named my redirected output Netstat.csv. The script’s output will look like this:

gonzo|rge0|Sat, 12 Sep 2009 15:44:38 -0500|0.000|0.000
gonzo|rge0|Sat, 12 Sep 2009 15:44:49 -0500|0.000|0.007
gonzo|rge0|Sat, 12 Sep 2009 15:45:04 -0500|6.677|0.065
gonzo|rge0|Sat, 12 Sep 2009 15:45:18 -0500|3.148|0.027
gonzo|rge0|Sat, 12 Sep 2009 15:45:41 -0500|5.377|0.076
gonzo|rge0|Sat, 12 Sep 2009 15:45:55 -0500|8.678|0.111
gonzo|rge0|Sat, 12 Sep 2009 15:46:16 -0500|9.499|0.117
gonzo|rge0|Sat, 12 Sep 2009 15:46:30 -0500|8.861|0.117
gonzo|rge0|Sat, 12 Sep 2009 15:46:46 -0500|9.183|0.120
gonzo|rge0|Sat, 12 Sep 2009 15:47:02 -0500|10.783|0.139
gonzo|rge0|Sat, 12 Sep 2009 15:47:15 -0500|7.103|0.093
gonzo|rge0|Sat, 12 Sep 2009 15:47:29 -0500|7.165|0.100
gonzo|rge0|Sat, 12 Sep 2009 15:47:44 -0500|6.995|0.095
gonzo|rge0|Sat, 12 Sep 2009 15:48:01 -0500|6.986|0.099
gonzo|rge0|Sat, 12 Sep 2009 15:48:15 -0500|5.678|0.069
gonzo|rge0|Sat, 12 Sep 2009 15:48:28 -0500|6.530|0.090
gonzo|rge0|Sat, 12 Sep 2009 15:48:53 -0500|3.477|0.046
gonzo|rge0|Sat, 12 Sep 2009 15:49:14 -0500|6.459|0.083
gonzo|rge0|Sat, 12 Sep 2009 15:49:31 -0500|7.754|0.105
gonzo|rge0|Sat, 12 Sep 2009 15:49:58 -0500|9.416|0.121
gonzo|rge0|Sat, 12 Sep 2009 15:50:10 -0500|10.854|0.139
gonzo|rge0|Sat, 12 Sep 2009 15:50:21 -0500|11.922|0.152
gonzo|rge0|Sat, 12 Sep 2009 15:50:31 -0500|12.556|0.165
gonzo|rge0|Sat, 12 Sep 2009 15:50:43 -0500|12.813|0.170
gonzo|rge0|Sat, 12 Sep 2009 15:50:54 -0500|14.783|0.188
gonzo|rge0|Sat, 12 Sep 2009 15:51:05 -0500|12.729|0.168
gonzo|rge0|Sat, 12 Sep 2009 15:51:16 -0500|12.018|0.148
gonzo|rge0|Sat, 12 Sep 2009 15:51:27 -0500|10.786|0.141
gonzo|rge0|Sat, 12 Sep 2009 15:51:38 -0500|13.566|0.167
gonzo|rge0|Sat, 12 Sep 2009 15:51:49 -0500|11.234|0.144
gonzo|rge0|Sat, 12 Sep 2009 15:52:01 -0500|12.914|0.165

The output is: hostname, Ethernet interface, time of query, sending rate, and receiving rate. Since the script divides the byte deltas by 1024², the rates are in MB/s (megabytes per second). As you can see from the above, I was copying large amounts of data.

OpenVPN between Solaris and MacOSX

I decided to see if I could get a VPN connection working between my laptop (running MacOSX) and my home server running Solaris 10. It turned out to be pretty easy to do a simple config. I am using OpenVPN. To compile the software on my Solaris box, I needed to download 3 items:

  1. The Virtual Point-to-Point (TUN) and Ethernet (TAP) device driver. I got version 1.1 from http://vtun.sourceforge.net/tun/ in source code form.
  2. LZO version 1.08 compression software from: http://www.oberhumer.com/opensource/lzo/download/LZO-v1/
  3. The OpenVPN software. I am using version 2.1 RC because I wanted the version to match what I am going to run on the Mac. It can be downloaded from http://openvpn.net/index.php/open-source/downloads.html

Once I got everything downloaded, it was just a matter of compiling LZO, TUN, and OpenVPN. I decided to have everything related to the VPN installed under /opt/vpn. One thing to note: I tried using the newer version 2.x of LZO, and OpenVPN would not find it, so I had to use version 1 even though 2 is supposed to be supported. So I did the following to compile LZO:

gzip -d lzo-1.08.tar.gz
tar -xvf lzo-1.08.tar
cd lzo-1.08
./configure --prefix=/opt/vpn/lzo
make
sudo make install

Next was to compile TUN:

gzip -d tun-1.1.tar.gz
tar -xvf tun-1.1.tar
cd tun-1.1
./configure --prefix=/opt/vpn/tun
make
sudo make install

The only issue with tun was that it did not use the --prefix; it puts everything where it needs to be in /usr/kernel/drv on Solaris.

Next is OpenVPN:

gzip -d openvpn-2.1_rc19.tar.gz
tar -xvf openvpn-2.1_rc19.tar
cd openvpn-2.1_rc19
./configure --prefix=/opt/vpn/openvpn --with-lzo-headers=/opt/vpn/lzo/include --with-lzo-lib=/opt/vpn/lzo/lib
make
sudo make install

Once that is installed, I set up the simple 1-to-1 VPN connection (static key), just for testing to see if it would work. So in the /opt/vpn/openvpn/sbin directory I did this:

cd /opt/vpn/openvpn/sbin
./openvpn --genkey --secret static.key

I then copied that key to my client via some “secure” means.

Then I created a server.conf that looks like this:

dev tun
ifconfig 10.8.0.1 10.8.0.2
secret static.key
cipher AES-256-CBC
keepalive 10 120

On my client (MacOSX) I downloaded Tunnelblick from http://code.google.com/p/tunnelblick/downloads/list and installed it. Next I copied the static.key from the server to the client and put it in ~/Library/openvpn. I also created an openvpn.conf in that directory that looked like this:

remote a.b.c.d
dev tun
ifconfig 10.8.0.2 10.8.0.1
secret static.key
cipher AES-256-CBC
route 10.0.0.0 255.255.255.0

In the above, a.b.c.d represents my public IP address for my solaris server.

Now when you start Tunnelblick, it will search that directory, find that config file and ask if you want to load it. But we are not quite ready to start yet. The next thing I had to do was forward UDP port 1194 from my router to my OpenVPN server; I will leave that exercise to you. You will also need to make sure IP forwarding is enabled on the Solaris 10 server (because I only have 1 network card in it, but “two” different networks on the box). IP forwarding will allow your remote machine to see your local network. And since my OpenVPN server is not the router for the entire network, I had to add a static route on my router to say that 10.8.0.0 is reachable via the OpenVPN server’s local network address, i.e. 10.0.0.1.
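
For the Solaris 10 side of that, routeadm is the persistent way to flip forwarding on; a minimal sketch (the router-side static route syntax will depend on whatever gear you have):

# Enable IPv4 forwarding on the OpenVPN server and apply the change now
routeadm -e ipv4-forwarding
routeadm -u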

You should be able to start the openvpn server now:

/opt/vpn/openvpn/sbin/openvpn server.conf
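
That runs it in the foreground, which is handy while testing. Once things work, OpenVPN’s --daemon option will push it into the background (same config file as above):

/opt/vpn/openvpn/sbin/openvpn --daemon --config server.conf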

Once it is started you can use Tunnelblick to connect. Once you are connected, you should see that it is connected and the Tunnelblick menu bar icon changes from its disconnected state to its connected state.

You should also see a tun0 device show up:

ifconfig tun0
tun0: flags=8851<UP,POINTOPOINT,RUNNING,SIMPLEX,MULTICAST> mtu 1500
	inet 10.8.0.2 --> 10.8.0.1 netmask 0xffffffff
	open (pid 608)

You should now be able to see all the hosts on the “remote” network. Next up, I am going to work on the PKI infrastructure so I can hopefully link other clients, both static and dynamic.

This makes it really nice to be able to see your “home” network while you are away.