Backing up a PlayStation 3

So in my efforts to get backups of all my devices/computers and have a copy off site, I ran into a little problem with the PS3. I went to the local Best Buy and bought a couple of Seagate portable 2.5 inch USB 3.0 hard drives: a 1TB one for my PS3 and a 2TB one for the PS4. The PS4 immediately recognized its drive and backed up. Excellent. The PS3, however, had a myriad of issues. The first issue is that the drive has to be formatted as FAT32 for the PS3 to be able to read/write to it. Well, Windows 7 won’t let you format a drive that size as FAT32; it only offers NTFS and exFAT (the successor to FAT32). While the PS4 can read/write exFAT, the PS3 is stuck on FAT32.

Luckily I have some Macs and was able to format the hard drive as FAT32. Now the PS3 recognized it and I thought my backup problems were over. So I started the backup and let it run. I came back a few hours later (it said it was going to take 4 hours) and saw the error “Unable to access the drive.” Well, crap. I thought it was a fluke and started the backup again. This time it ran for about 30 minutes or so and then died with the same error. I tried it a few more times and each time it died at a different point. Since failure is not an option, but it was getting late, I decided to stop for the evening and pick it back up the next day.
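
If you want to do the formatting from Terminal instead of Disk Utility, a minimal sketch looks like the two commands below. The disk identifier /dev/disk2 and the volume name PS3BACKUP are just examples for illustration; run diskutil list first and make sure you pick the external drive, because eraseDisk wipes everything on it.

diskutil list
diskutil eraseDisk FAT32 PS3BACKUP MBRFormat /dev/disk2

Note that diskutil expects the FAT32 volume name to be in all caps.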

On the following day I started doing some google-fu to see what other people were doing for backups on the PS3. Everything I had seen I had already done, with the exception of one small sentence in one post on a message board. That sentence is what actually fixed my problem. As you’ll note from above, I bought a USB 3.0 drive (as most are 3.0 these days versus 2.0). Well, this was the actual issue. It appears that the PS3’s USB 2.0 ports don’t supply enough power for a portable hard drive such as the one I bought. If the drive had external power, I wouldn’t have had the issue. So the solution was to put a powered USB hub between the PS3 and the USB hard drive. I did that, and presto, about 4 hours later the backup was done.


Hopefully this will help other people. As for formatting the hard drive as FAT32, if you don’t have a Mac lying around, you can download GParted Live (a bootable Linux ISO), boot it, and format the USB drive from there. (It is available from http://gparted.sourceforge.net/livecd.php.)

A logging we shall go

Today I have two quick tips for logging: one for Oracle WebGates with OAM, and another for Convergence in GlassFish with OAM SSO.


If you use Oracle Access Manager and perhaps their Oracle Web Tier (WebGate, etc.), you may have found that the WebGates (Oracle HTTP Server, aka Apache) don’t log usernames in the access logs when you are using SSO with OAM. This sort of sucks if you want to pull the WebGate access logs into a log analysis tool like Splunk. A quick way to fix this is to make sure you have the mod_log_config module loaded in the httpd.conf for the OHS server. Next, either modify the common or combined CustomLog definition or create a new one where you replace the %u with %{OAM_REMOTE_USER}i (or whatever you named your OAM user header variable). This logs the OAM_REMOTE_USER header variable in place of the %u (which is for HTTP auth style usernames). The one caveat is that the value only gets logged if the header exists; if it doesn’t exist, nothing is logged for that field. So you may miss a couple of pages until the header is created, but everything after that should be logged.
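
For reference, a minimal httpd.conf sketch of what I mean is below. The module path, log file location, and the “oam_common” format name are just placeholders for illustration, and OAM_REMOTE_USER should be whatever header your OAM policies actually set:

LoadModule log_config_module modules/mod_log_config.so
LogFormat "%h %l %{OAM_REMOTE_USER}i %t \"%r\" %>s %b" oam_common
CustomLog logs/access_log oam_common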


The second tip is closely related to the first and assumes you are using a custom SSO module to single sign users on to the Convergence webmail application. When this happens, just like with the WebGates, you won’t get a username field in the access logs (if you even have them enabled; they aren’t by default). To log OAM_REMOTE_USER in GlassFish, go to where the access logging is defined on the server and add %header.OAM_REMOTE_USER% to the logging format. It takes effect immediately and you don’t have to restart GlassFish.
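
As a rough sketch, the stock GlassFish access log format with that token added would look something like this (the exact default format string varies a bit between GlassFish versions, so treat this as illustrative):

%client.name% %auth-user-name% %header.OAM_REMOTE_USER% %datetime% %request% %status% %response.length%

In GlassFish 3.x the format string lives under the HTTP Service settings (Access Logging) in the admin console, or the matching asadmin set property; double check the exact location for your version.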


I looked all over the interwebs for this, so I hope this helps you out.

Moving VMs between hosts

About a year ago I purchased a 1U IBM x3550 server to run VMware vSphere 5 on. While it was cool to have a server with dual quad-core procs and 8GB of RAM, the noise it put off was too much for my family room. (Just think of half a dozen 1 inch fans running at 15,000 RPM almost constantly.) Recently I have been spending more time in the family room, and the noise has gotten to a level where it is almost impossible to do anything in the room without hearing it (like watch TV, a movie, play a game, etc.). So I started looking at my favorite used hardware site, geeks.com, for a new “server”. Well, it finally arrived today: an HP xw8600 workstation. It is another dual quad-proc box, but it has 16GB of RAM, 12 SATA ports, a larger case, and, best of all, it is almost completely quiet.

So with it installed, I needed to start moving the VMs from the IBM server to the HP server. In an enterprise environment this usually isn’t a problem, as you typically have shared storage (a SAN) that each of the hosts connects to. Well, in my little home lab I don’t have shared storage. I did try to use COMSTAR in Solaris 10 to export a “disk” as an iSCSI target. While that would work, it was going to take forever to transfer 1TB of VMs from one server, through a VM running on my Mac, and back to the new server.

So a googling I went, and what I found was a much easier way to copy the VMs over: ovftool, which runs on Windows, Linux, and Mac. It lets you export and import OVF files to and from a VMware host. The side benefit is that you can export from one host and import to another all in one command.

So I downloaded the Mac version and started copying. The basic syntax looks like this:


./ovftool -ds=TargetDataStoreName vi://root@sourcevSphereHost/SourceVM vi://root@destvSphereHost

So if one of my VMs is called mtdew, it is thin provisioned on the source host and I want it to stay thin on the destination host, and my destination datastore is called “vmwareraid”, I would run this:

./ovftool -ds=vmwareraid -dm=thin vi://root@ibmx3550/mtdew vi://root@hpxw8600

where ibmx3550 is the source server and hpxw8600 is the destination server. If you don’t specify “-dm=thin”, then the disk becomes “thick” when it is copied over, aka it uses the entire space that was allocated when it was created. (I.e., a 50GB disk that only has 10GB in use would still take up 50GB if -dm=thin is not used.)

There are some gotchas that you will have to look out for:

  1. Network configs: I had one VM that had multiple internal networks defined. Those were not defined on the new server, so there is a “mapping” that you have to do (see the sketch after this list). I decided I didn’t need them on the new server, so I just deleted them before I copied it over.
  2. VMs must be in a powered-off state. I tried one in a “paused” state and it did not want to run right.
  3. It takes time. Depending on the speed of the network, disks, etc., it will take a lot of time to do this, and the VMs have to be down while it happens. So it is definitely not a way to move “production” VMs unless you have a maintenance window.
  4. It will show the percent complete as it goes, which is cool, but the way it does it is weird. It will sit at 11 or 12 percent, then I turn my head and all of a sudden it says it is completed.
  5. I did have an issue with one VM, and I am not sure what happened to it, but when I try to copy it I get an error: “Error: vim.fault.FileNotFound”… It may be due to me renaming something on the VM at some point in the past.
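
For the network mapping mentioned in gotcha number 1, ovftool can remap networks on the fly with its --net option. A rough sketch, assuming the source VM is on a port group called “InternalLab” that should land on “VM Network” on the destination (both names are made up for illustration):

./ovftool -ds=vmwareraid -dm=thin --net:"InternalLab"="VM Network" vi://root@ibmx3550/mtdew vi://root@hpxw8600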

Hope this helps some other “home lab user”…


LDAP and BER size

I recently came across a unique problem that didn’t stand out until I got to thinking about a couple of different situations I had tested this in. The scenario is that I needed to create a static group of unique members in LDAP (Sun Directory Server Enterprise Edition 6.3.1 and/or Oracle Directory Server Enterprise Edition 11.1.5.0) with an extremely large number of members in it. So I created an LDIF file with all 60,000+ uids in it and proceeded to run an ldapadd against the server with the file. Well, it would immediately come back with:

adding cn=testgroup,ou=group,dc=sungeek,dc=net

However, when looking in the LDAP, the group never showed up. Also, when you look at the access log on the server, you see something similar to this:

[07/Aug/2012:21:22:39 -0400] conn=3 op=-1 msgId=-1 - closing from 127.0.0.1:48160 - B1 - Client request contains an ASN.1 BER tag that is corrupt or connection aborted

Now sometimes, depending on the versions of the LDAP server and the ldapadd program, I got a “broken pipe” right after the adding output.

As you can see from the output in the access log, it is not very descriptive about what the actual error is. I know I spent about 6 hours looking into it to figure out what the problem actually was. Well, this morning I was poking around the cn=config docs and found this:

http://docs.oracle.com/cd/E19528-01/820-2495/nsslapd-maxbersize-5dsconf/index.html

What this document shows is the attribute nsslapd-maxbersize, which is:

Defines the maximum size in bytes allowed for an incoming message. This limits the size of LDAP requests that can be handled by Directory Server. Limiting the size of requests prevents some kinds of denial of service attacks.

The limit applies to the total size of the LDAP request. For example, if the request is to add an entry, and the entry in the request is larger than two megabytes, then the add request is denied. Care should be taken when changing this attribute.

So by DEFAULT it is set to 2MB. Well, my LDIF file was over 3.5MB in size, which means it was too big for the add to go through. To change it, do an ldapmodify with this LDIF:

dn: cn=config
changetype: modify
replace: nsslapd-maxbersize
nsslapd-maxbersize: ########
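
If it helps, here is a rough example of applying that LDIF with the ldapmodify that ships with DSEE; the host, port, password, and file name are placeholders for your environment:

ldapmodify -h ldaphost -p 389 -D "cn=Directory Manager" -w password -f maxbersize.ldif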

I changed mine to 6MB, or 6291456 bytes, to hopefully cover any sizable additions in the future. Once done, I restarted the directory server and tested again, and everything was good. According to the docs, the maximum size you can set this attribute to is 2GB, and a value of 0 means the default of 2MB, or 2097152 bytes. I think Oracle needs to make the error that is in the access log a little more descriptive, like “hey, your query/add is too big, yo”.
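
To double check the new value after the restart, you can read it back out of cn=config; again the connection details are placeholders:

ldapsearch -h ldaphost -p 389 -D "cn=Directory Manager" -w password -b "cn=config" -s base "(objectclass=*)" nsslapd-maxbersize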

Hope this helps someone.


How R-Studio for Mac saved my ass

I have an external Seagate FireWire 800 drive that I use on my Mac Pro, with over 700GB of VMware images on it. Pretty much anything I work on, I have an image on there: everything from a Windows XP client to Microsoft Exchange servers, Solaris, Linux, and such. I have had the drive for a couple of years and it has always been rock solid, and fast too. (I bought it when Windows 7 screwed up my internal drives.)

Well, today I wanted to run a VM off of that drive to test something and noticed that the drive did not appear on my desktop. Weird; it was plugged in and the light was flashing, but no icon. Hmm, where the hell did it go? So I unplugged it and plugged it back in. Still no go. So I tried switching power supplies; still no go. Then, if I let it sit for a while, I would get an error that it could not use the drive, or that it needed to be initialized. Holy crap, that isn’t good.

I popped open a terminal; diskutil would list that there was a drive there, but no partitions on it. The GUI Disk Utility would see the disk, again with no partitions, and wouldn’t let me do anything with it. gpt wouldn’t let me read it. So I thought to myself, did Windows 7 screw the disk up again? (It was working the other day when I had booted into Windows, but I forgot to unplug it before doing so 🙁 .) So I booted into Windows 7; it could see the drive but said it was unformatted. Double shit. So back in Mac OS X, I went out searching for data recovery programs. The first one was Data Rescue 3; while the graphics were gimmicky, it didn’t even look like the demo version could see a single file on the drive. So I uninstalled it and started looking for another program.
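
For anyone curious, the command line poking I mean was along these lines; the disk3 identifier is just an example, find yours with diskutil list first:

diskutil list
diskutil info disk3
sudo gpt -r show disk3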

In the past I have used R-Studio for NTFS and R-Studio for FAT, and both have worked wonders. I did a Google search, and they now have a Mac version. Now we are talking! So I downloaded the demo, and within about 2 minutes of starting it, it showed me the entire disk and all the files that were on it. But since it was a demo, it would only restore 10 files under 64KB, so I bought it for $79.99. Two minutes after buying it, it was busy restoring the files to another external 2TB USB drive. Six hours later, 100% of my files were restored from the dead FireWire drive, and my VM started up just like nothing had happened. Needless to say, it saved me hundreds of hours of reinstalling and setting up my VM environment. Now I just need to go get another drive to make a backup of this one.


So if you are ever needing to recover files from HFS/HFS+ (Mac OS), NTFS, FAT, UFS, or Ext file systems, definitely check out R-Tools Technology and their R-Studio products: http://www.r-tt.com/. For $79.99 it was more than worth it!