I finally got time to start playing with the Sun Ray 5 Early Access software again. My current setup probably should not be used for anything beyond simple single- or dual-user testing, but I did not want to test the software on the current working server. So I decided to install it in a VMware image on my Mac Pro. The Mac Pro is more than suited to handle it, with plenty of free memory, processor, and storage to spare, so there was no contention (I gave the VM 4 processors and 8GB of RAM).
The kicker was getting VMware Fusion to allocate the network cards the way I needed them. I gave the VM 2 NICs (the Mac Pro has 2); however, the only options VMware Fusion offers are NAT, Host-Only, and Bridged, none of which will work if I want a private network for the Sun Rays. To fix this you will need to edit some of the files that VMware Fusion uses. What I had to do was the following:
1. Open up the Terminal app
2. Edit the file /Library/Application Support/VMware Fusion/boot.sh
3. Comment out the following line:
Then add 2 lines directly below it, which tell VMware to bind the en0 physical device to the vmnet0 virtual device, and likewise en1 to vmnet2. Note that you cannot use vmnet1 or vmnet8, as those are reserved for the Host-Only and NAT connections.
"$LIBDIR/vmnet-bridge" -d /var/run/vmnet-bridge-vmnet0.pid vmnet0 en0
"$LIBDIR/vmnet-bridge" -d /var/run/vmnet-bridge-vmnet2.pid vmnet2 en1
Once done, restart VMware Fusion's networking so the new bridge mappings take effect (rebooting the Mac also works, since boot.sh runs at startup).
Now go into your Mac's System Preferences and configure the second network card for a private subnet (e.g. 192.168.128.0/24, with the IP set to something like 192.168.128.254).
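If you prefer the command line to System Preferences, something like this should do the same thing; a sketch, where the service name "Ethernet 2" is an assumption (check the list for yours, and note that some macOS versions also want a router argument on -setmanual):

```shell
# Find the exact name of the second NIC's network service
networksetup -listallnetworkservices

# Give the second NIC a static address on the private Sun Ray subnet
# ("Ethernet 2" is a guess at the service name on a Mac Pro)
sudo networksetup -setmanual "Ethernet 2" 192.168.128.254 255.255.255.0
```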
Now make sure that your VM is NOT started and is in a powered-off state. Go into the settings for that VM and add another network adapter; make sure it is marked as "Connected". It doesn't matter what the device is configured to, as we will change it later to an option that is not shown in that list.
Now you need to change the .vmx file so that the VM can use the new network device. Go into the directory where you keep your VMs and then cd into the machine's .vmwarevm directory (mine, for example, is called SolarisDev.vmwarevm).
Once in there, edit the .vmx file; mine is called SolarisDev.vmx. The first thing we are going to change is the ethernet0.connectionType property. Right now it could be any of the listed types (hostonly, bridged, nat), but we are going to change it to "custom":
ethernet0.connectionType = "custom"
Next find the entry for ethernet0.vnet; if it doesn't exist, create it and make it look like the line below. If it does exist and doesn't match, change it to match:
ethernet0.vnet = "vmnet0"
Now we need to do the same for the ethernet1 entries; the only difference is that vmnet0 changes to vmnet2. Once the changes are made you can save the file and start up your Solaris VM. Whatever network is on the en0 connection on your Mac will now be connected to the e1000g0 interface on the Solaris side; I used e1000g0 as the "public" side of the Sun Ray server. The e1000g1 interface will see whatever is connected to en1 on the Mac side; I used this adapter for the private Sun Ray LAN.
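Inside the Solaris guest you can sanity-check the mapping before going further; a quick sketch (the addresses are examples matching the private subnet above):

```shell
# Plumb and bring up the second interface on the private Sun Ray subnet
ifconfig e1000g1 plumb
ifconfig e1000g1 192.168.128.1 netmask 255.255.255.0 up

# Both e1000g0 and e1000g1 should now show up
ifconfig -a

# The private side should be able to reach the Mac's en1 address
ping 192.168.128.254
```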
You should be able to finish following the instructions on the Sun Ray wiki now and get everything configured.
To test the soft client, I set up LAN connections on the Sun Ray server.
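For reference, enabling LAN connections in Sun Ray Server Software is done with utadm; a sketch, assuming the default /opt/SUNWut install location:

```shell
# Turn on LAN connections so clients on the shared network can attach
/opt/SUNWut/sbin/utadm -L on

# Restart Sun Ray services for the change to take effect
/opt/SUNWut/sbin/utrestart
```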
I then installed the soft client in another VM on the same machine, one that only had access to the public network. You can then tell the soft client the IP of the Sun Ray server and it will connect. Pretty darn cool that the soft client works with minimal configuration.
This can probably be done on a MacBook Pro as well, using the wireless connection as the public side and the wired connection as the private side. A nice way to do a little demo on one computer.
For reference, here is what the network section of my .vmx file looks like:
ethernet0.connectionType = "custom"
ethernet0.generatedAddress = "00:0c:29:f8:29:3b"
ethernet0.generatedAddressOffset = "0"
ethernet0.linkStatePropagation.enable = "TRUE"
ethernet0.pciSlotNumber = "32"
ethernet0.present = "TRUE"
ethernet0.virtualDev = "e1000"
ethernet0.vnet = "vmnet0"
ethernet0.wakeOnPcktRcv = "FALSE"
ethernet1.addressType = "generated"
ethernet1.connectionType = "custom"
ethernet1.generatedAddress = "00:0c:29:f8:29:45"
ethernet1.generatedAddressOffset = "10"
ethernet1.linkStatePropagation.enable = "TRUE"
ethernet1.pciSlotNumber = "35"
ethernet1.present = "TRUE"
ethernet1.virtualDev = "e1000"
ethernet1.vnet = "vmnet2"
ethernet1.wakeOnPcktRcv = "FALSE"
Sun Ray Software 4.2 Wiki: http://wikis.sun.com/display/SRSS4dot2/Home
Started installing the Sun Ray 5 Early Access software tonight. A few things I found so far that weren't where I expected them to be in the wiki docs:
1. It requires Java 1.6 on the machine. Solaris 10 Update 7 does not ship with Java 1.6, so I had to download and install it.
2. It needs Apache Tomcat installed; this is for the admin GUI.
Unfortunately I only got as far as getting it installed. I was testing it in a Solaris VM on my Mac, but I was having problems getting it to bind to the second Ethernet interface. So that will have to wait for later.
I went to a presentation today by a hardware vendor. Once again I am trying to understand why people think blade server technology is such a great thing to have. For example, the vendor today had a blade system that was 10 RU high and had slots for 10 blades. So basically you have 10 1RU servers mounted vertically in a 10 RU chassis. What does this actually buy you? Not much of anything: you still have 10 independent servers to manage, and you still have all the cables running to the chassis that you would have with 10 1RU servers. The only thing you save is the number of power cords you need. I will say this about the vendor, though: if I were ever to buy a blade system, it would probably be theirs. So who was this vendor? Sun Microsystems. Some of the features I think they have that other vendors don't:
- SPARC, AMD Opteron, and Intel Xeon processor-based server modules. The SPARC modules are based on the Niagara chipset.
- 4 hot-swappable hard drives per blade. This is a MAJOR issue with other vendors, as they don't provide hot-swappable drives, which means you have to take the blade out of service to replace one.
- Twice the memory support of most other vendors (up to 64GB).
- Price: just comparing list prices between the "top 3" shows that Sun gives more bang for the buck.
I am not an advocate of blades; I think they are just a way for vendors to play buzzword bingo with people who don't know better.
One of the other cool things they talked about is the Sun StorageTek 5800, aka Honeycomb. When they first started talking about it, it almost sounded like they were describing the WinFS that was supposed to be in Longhorn. But the more they talked, the more I saw that it was not WinFS but a really cool and really fast data storage system. Wish I could win the lottery so I could buy one.
Another server I would absolutely love to have at home is the Sun Fire X4500, aka Thumper, especially now that they offer 1TB drives in it, which means it can now hold 48TB of disk. That is a lot of freaking space to have in a 4U format. If only I had the money; heck, I would settle for the 24TB model. Add Sun Virtual Desktop Infrastructure to the Thumper and you have one hell of a house server. You could hang Sun Rays off the Thumper, and each Sun Ray could run Solaris or Windows or Linux or whatever desktop you want. And because all the data is stored on the Thumper in a ZFS file system, it is automatically protected by raidz (assuming you set it up that way), and you could share all your files with the other people in your house.
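For the curious, a raidz-protected, shared home file system on ZFS really is just a couple of commands; a sketch with made-up device names:

```shell
# Create a raidz pool across six disks (device names are hypothetical)
zpool create tank raidz c0t0d0 c1t0d0 c2t0d0 c3t0d0 c4t0d0 c5t0d0

# Carve out a home file system and share it over NFS for the house
zfs create tank/home
zfs set sharenfs=on tank/home
```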
And last but not least, the coolest thing they talked about was the new Niagara 2 chips. Some notable features:
- Up to 64 simultaneous threads, which means you basically have an E10K/F12K/F15K in a single 2 RU box.
- One floating-point unit per core. Whereas the Niagara 1 chips had one floating-point unit for the whole CPU, the Niagara 2 has one per core, which makes the chip even faster.
- 10Gb Ethernet on the silicon. If Gigabit Ethernet was not fast enough for you, you can have up to two 10Gb Ethernet connections piped directly into the CPU.
I can't wait to get some of the new servers with the Niagara 2 chips in them; they are going to spank the Niagara 1 machines.
After reading ThinGuy's blog post "Are PCs Killing Health Care?", I couldn't agree more. It got me thinking about when I was in the emergency room of a local hospital last summer. (Long story, but I spent a while there.) Anyway, while I was there (I had not been to the ER in ages, and the last time I was, everything was still done on paper), they popped a little thing down from the wall, and behind it was a "Windows thin client". The nurse did nothing but b@#*h about how slow it was. I watched, and it looked to be running a Wyse client and using Windows from some place that was not local. I got to thinking about how a Sun Ray environment would work in this hospital. Here are some ideas I came up with while lying in that short bed (I am over 6'5") for 5 hours.
- Instead of paper charts, when you arrive you are "assigned" a smart card, and all your information follows you on that card no matter where you go (call it HCHD, Hospital Chart Hot Desking). For example, I ended up going to X-ray, and the X-ray tech did not have the complete orders and started taking chest X-rays instead of X-rays of my knee. (I later found out that they wanted both, but the doctor forgot to put the knee one on the order sheet; if the tech had seen my chart, he would have known that the original reason I was there was knee problems.)
- The monitoring devices in the room (BP/heart rate/oxygen/etc.) could be attached to the Sun Ray, so your info could be logged and displayed on the Sun Ray at the click of a button.
- Each patient could be given their own card for surfing the web, etc. (if they are ambulatory enough to do so).
- By using the smart card to keep track of your stats, there is no paper to accidentally get "lost" or stolen (which helps with HIPAA).
- It would be a lot faster than the Wyse terminals they were using, since there would be no waiting for them to boot.
- Security: there isn't a day that goes by that I don't read about someone losing someone else's information, e.g. the VA hospitals (which use some Sun Rays in areas near me). This would eliminate those losses, if everyone were forced to use it.
- All labs/X-rays posted directly to the person's "card".
Granted, some of the above would be a feat to pull off, but it can be done.
I think Sun Rays are the coolest thing, especially now that I have it set up so everyone in my group can pull their card out of their office Sun Ray, plug it into their home Sun Ray, and everything is still there. (If I can just get the performance problems worked out it would be really killer, but something about the combination of Solaris 10 and Sun Ray 4 is causing me some slowness, and I am not sure where exactly.)
Now if more people realized the benefits of Sun Rays over other "chubby clients", Sun Rays would take over the world.
The other day I showed Chris how to install a new Sun Ray server (this one is a 10-processor domain on an E25K, running Solaris 10, ZFS, and everything new). We switched it over to be the primary Sun Ray server last night and noticed some slowness. So we decided to install APOC Sun Desktop Manager on it to see if we could disable some stuff to make the JDS environment run a little faster. The install of the console and server seemed to go fine, but every time we tried to run "svcadm enable apocd/udp" it would fail and go into maintenance mode. It seems that when the install happened, the info for apocd/udp was never populated into the inetd configuration.
The first thing I did was look at the /var/adm/messages and saw this:
Oct 26 10:12:06 megatron inetd: [ID 702911 daemon.error] Property 'endpoint_type' of instance svc:/network/apocd/udp:default is missing, inconsistent or invalid
Oct 26 10:12:06 megatron inetd: [ID 702911 daemon.error] Property 'isrpc' of instance svc:/network/apocd/udp:default is missing, inconsistent or invalid
Oct 26 10:12:06 megatron inetd: [ID 702911 daemon.error] Property 'wait' of instance svc:/network/apocd/udp:default is missing, inconsistent or invalid
Oct 26 10:12:06 megatron inetd: [ID 702911 daemon.error] Unspecified inetd_start method for instance svc:/network/apocd/udp:default
Oct 26 10:12:06 megatron inetd: [ID 702911 daemon.error] Invalid configuration for instance svc:/network/apocd/udp:default, placing in maintenance
Interesting. We then spent a while trying to figure out what was supposed to be in there. Running "inetadm -l network/apocd/udp" produced this:
Error: Required property name is missing.
Error: Required property endpoint_type is missing.
Error: Required property proto is missing.
Error: Required property isrpc is missing.
Error: Required property wait is missing.
Error: Required property exec is missing.
Error: Required property user is missing.
What I ended up doing was this:
inetadm -m network/apocd/udp endpoint_type=dgram
inetadm -m network/apocd/udp proto=udp
inetadm -m network/apocd/udp isrpc=FALSE
inetadm -m network/apocd/udp wait=TRUE
inetadm -m network/apocd/udp exec="/usr/lib/apoc/apocd inetdStart"
inetadm -m network/apocd/udp user="daemon"
inetadm -l network/apocd/udp
# svcs apocd/udp
STATE STIME FMRI
maintenance 11:21:41 svc:/network/apocd/udp:default
# svcadm disable apocd/udp
# svcadm enable apocd/udp
# svcs apocd/udp
STATE STIME FMRI
online 11:36:52 svc:/network/apocd/udp:default
So now APOC runs, but only part of the config I set in the Desktop Manager actually works. For example, I got the splash screen not to show, but I can't get the default terminal to be dtterm instead of gnome-terminal. (dtterm uses about 7MB of RAM, whereas gnome-terminal uses about 78MB; take that, add about 20 users with 10 or 15 terminal windows open each, and you have roughly 2GB of RAM for dtterm vs. 23.4GB for gnome-terminal.) So now we are trying to figure out some other performance enhancements, and thinking about putting a less intensive graphical environment on the box for people to use.
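The back-of-the-envelope math is easy to check with shell arithmetic, using the per-window numbers above (7MB for dtterm, 78MB for gnome-terminal) and the worst case of 20 users with 15 windows each:

```shell
# Worst case from the post: 20 users x 15 terminal windows each
echo "dtterm:         $((20 * 15 * 7)) MB"    # ~2.1 GB
echo "gnome-terminal: $((20 * 15 * 78)) MB"   # ~23.4 GB
```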
Anyone have some good tips for speeding up a 10 x 1.2GHz UltraSPARC III box with 16GB of RAM running Solaris 10?