Splunk 7.0: the good and the bad

So I will preface this post by saying that I love Splunk; it is the best log aggregation application out there.

So, on with the post, and it must be a good one, right? Anyway, Splunk released version 7.0 of its Splunk Enterprise product last week during its .conf2017 conference, which I attended as an FTR. There were a few amazing new features in it, such as the new metrics index type, which is blazingly fast. So, like any “fanboy”, I decided to update my home server on the Thursday night after the conference was over. This is where the fun began.

When I started using Splunk years ago, it supported a myriad of operating systems on the server side: if you wanted Solaris, FreeBSD, AIX, HP-UX, Mac OS X, Windows, or Linux, you were golden. Over the years, however, that list has been pared back to just Linux and Windows. (Mac OS X is still supported, but only for the free and trial editions; it is basically meant for development and home use, not enterprise use.)

So now that Solaris is no longer supported, I needed to switch my home system from OpenIndiana (aka OpenSolaris) over to Linux. I spun up a new CentOS 7 VM on my home server and copied all my Splunk data from the Solaris box to the Linux one. I then removed the bin and lib directories (I use the tar installs, and those are the only places machine-specific binaries live). With that done, I untarred the Linux Splunk 7.0 build over top of the existing directory and started it up. So far everything was good, until I tried to do a search. A search over something like the last 15 minutes worked, but anything beyond that died because one of the hot buckets was corrupted; I am not sure if that happened in transit or not. So off to the fsck command to try to fix them. An hour or so later it still couldn't fix some of them, and since it was getting late, I just went to bed.
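
For reference, the migration boiled down to something like the following (the install lives in /splunk/splunk on my box; the fsck flags are from memory, so treat this as a sketch rather than a recipe):

# on the new CentOS 7 VM, after copying /splunk/splunk over from the Solaris host
cd /splunk/splunk
rm -rf bin lib                                  # the only platform-specific pieces in a tar install
cd /splunk
tar xzf /tmp/splunk-7.0.0-*-Linux-x86_64.tgz    # untar the Linux 7.0 build over top of the existing directory
/splunk/splunk/bin/splunk start

# after the corrupted hot buckets showed up, stop Splunk and let fsck scan/repair the indexes
/splunk/splunk/bin/splunk stop
/splunk/splunk/bin/splunk fsck scan --all-buckets-all-indexes
/splunk/splunk/bin/splunk fsck repair --all-buckets-all-indexes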

The next day when I returned home, I tried to log in to my Splunk instance to see how it was doing, and to my surprise I couldn't even reach it. It turned out the Linux host had crashed. I was dumbfounded, as I hadn't seen an actual kernel panic like that in a while. So I restarted the machine, started Splunk back up, and everything was working again.

A few days passed and I went to check on it again, and once again it was dead. Now I was really curious. I ended up installing the crash utility on the host and started going through the vmcore files. Yup, each time it crashed, splunkd was the task that triggered the panic. Unfortunately, I don't know much more than that about what is actually causing it; it appears to happen at random times.
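
Getting to that point on CentOS 7 is roughly the following (package names assume the stock repos plus the debuginfo repo being available; the vmlinux and vmcore paths are the ones from my dump below):

# install the crash utility and the matching kernel debuginfo (needed for a vmlinux with symbols)
yum install -y crash yum-utils
debuginfo-install -y kernel

# open the dump: crash <vmlinux with debug symbols> <vmcore>
crash /usr/lib/debug/lib/modules/3.10.0-693.2.2.el7.x86_64/vmlinux \
      /var/crash/127.0.0.1-2017-10-01-12:26:50/vmcore

# inside crash: 'sys' prints the summary, 'bt' the backtrace, 'vm' the task's memory map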

The output of crash shows this:

KERNEL: /usr/lib/debug/lib/modules/3.10.0-693.2.2.el7.x86_64/vmlinux
DUMPFILE: /var/crash/127.0.0.1-2017-10-01-12:26:50/vmcore [PARTIAL DUMP]
CPUS: 8
DATE: Sun Oct 1 12:26:43 2017
UPTIME: 1 days, 17:12:55
LOAD AVERAGE: 0.00, 0.01, 0.05
TASKS: 330
NODENAME: splunk
RELEASE: 3.10.0-693.2.2.el7.x86_64
VERSION: #1 SMP Tue Sep 12 22:26:13 UTC 2017
MACHINE: x86_64 (3399 Mhz)
MEMORY: 3 GB
PANIC: "double fault: 0000 [#1] SMP "
PID: 1420
COMMAND: "splunkd"
TASK: ffff8800bae91fa0 [THREAD_INFO: ffff880000120000]
CPU: 3
STATE: TASK_RUNNING (PANIC)

First few lines of the “bt” output from crash:
PID: 1420 TASK: ffff8800bae91fa0 CPU: 3 COMMAND: "splunkd"
#0 [ffff8800bfac4d88] machine_kexec at ffffffff8105c4cb
#1 [ffff8800bfac4de8] __crash_kexec at ffffffff81104a32
#2 [ffff8800bfac4eb8] crash_kexec at ffffffff81104b20
#3 [ffff8800bfac4ed0] oops_end at ffffffff816ad2b8
#4 [ffff8800bfac4ef8] die at ffffffff8102e97b
#5 [ffff8800bfac4f28] do_double_fault at ffffffff8102b6e2
#6 [ffff8800bfac4f50] double_fault at ffffffff816b6908
[exception RIP: page_fault+13]
RIP: ffffffff816ac52d RSP: ffff880000122fc8 RFLAGS: 00010092
RAX: 0000000000000ff8 RBX: 0000000000000000 RCX: ffffffff816ac2ac
RDX: 00001fffffffffff RSI: ffffffff81a73118 RDI: 0000000000000000
RBP: ffff880000123098 R8: ffffffff81911167 R9: ffffea00002e7b80
R10: ffffea00002e7b80 R11: 0000000000000000 R12: ffffffff81a73118
R13: ffffffff81a73118 R14: ffff880000120000 R15: ffff88008665a580
ORIG_RAX: ffffffffffffffff CS: 0010 SS: 0000
--- ---
#7 [ffff880000122fc8] page_fault at ffffffff816ac52d
#8 [ffff880000123048] spurious_fault at ffffffff816afd8e
#9 [ffff8800001230a0] __do_page_fault at ffffffff816b01ae
#10 [ffff880000123100] do_page_fault at ffffffff816b0325
#11 [ffff880000123130] page_fault at ffffffff816ac548
[exception RIP: spurious_fault+48]
RIP: ffffffff816afd8e RSP: ffff8800001231e8 RFLAGS: 00010002
RAX: 0000000000000ff8 RBX: 0000000000000000 RCX: ffffffff816ac2ac
RDX: 00001fffffffffff RSI: ffffffff81a73118 RDI: 0000000000000000
RBP: ffff880000123208 R8: ffffffff81911167 R9: ffffea00002e7b80
R10: ffffea00002e7b80 R11: 0000000000000000 R12: ffffffff81a73118
R13: ffffffff81a73118 R14: ffff880000120000 R15: ffff88008665a580
ORIG_RAX: ffffffffffffffff CS: 0010 SS: 0000
#12 [ffff880000123210] __do_page_fault at ffffffff816b01ae
#13 [ffff880000123270] do_page_fault at ffffffff816b0325
#14 [ffff8800001232a0] page_fault at ffffffff816ac548
[exception RIP: spurious_fault+48]
RIP: ffffffff816afd8e RSP: ffff880000123358 RFLAGS: 00010002
RAX: 0000000000000ff8 RBX: 0000000000000000 RCX: ffffffff816ac2ac
RDX: 00001fffffffffff RSI: ffffffff81a73118 RDI: 0000000000000000
RBP: ffff880000123378 R8: ffffffff81911167 R9: ffffea00002e7b80
R10: ffffea00002e7b80 R11: 0000000000000000 R12: ffffffff81a73118
R13: ffffffff81a73118 R14: ffff880000120000 R15: ffff88008665a580
ORIG_RAX: ffffffffffffffff CS: 0010 SS: 0000

Output from the “vm” command:
PID: 1420 TASK: ffff8800bae91fa0 CPU: 3 COMMAND: "splunkd"
MM PGD RSS TOTAL_VM
ffff88008665a580 ffff8800b87f4000 324076k 864244k
VMA START END FLAGS FILE
ffff8800a5efe5e8 563d22303000 563d248b7000 8000875 /splunk/splunk/bin/splunkd
ffff8800a5efe438 563d248b7000 563d24964000 8100873 /splunk/splunk/bin/splunkd
ffff8800a5efe288 563d24964000 563d249db000 8100073
ffff880097e90438 7f4f8d800000 7f4f8ea00000 8200073
ffff8800a1f6cd80 7f4f90400000 7f4f91c00000 8200073
ffff8800a1f6ca20 7f4f923f7000 7f4f923f8000 8100070
ffff8800a1f6c948 7f4f923f8000 7f4f925f8000 8100073
ffff8800a1f6c6c0 7f4f925f8000 7f4f925f9000 8100070
ffff8800a1f6c798 7f4f925f9000 7f4f927f9000 8100073
ffff8800a1f6cca8 7f4f927f9000 7f4f927fa000 8100070
ffff8800a1f6c870 7f4f927fa000 7f4f929fa000 8100073
ffff88009ce66948 7f4f935ff000 7f4f93600000 8100070
ffff88009ce66a20 7f4f93600000 7f4f93a00000 8100073
ffff88009466aca8 7f4f93a00000 7f4f94800000 8200073
ffff88009ca98e58 7f4f949cd000 7f4f949e3000 8000075 /usr/lib64/libresolv-2.17.so
ffff88009ca98d80 7f4f949e3000 7f4f94be3000 8000070 /usr/lib64/libresolv-2.17.so
ffff88009ca99008 7f4f94be3000 7f4f94be4000 8100071 /usr/lib64/libresolv-2.17.so
ffff88009ca990e0 7f4f94be4000 7f4f94be5000 8100073 /usr/lib64/libresolv-2.17.so
ffff88009ca98f30 7f4f94be5000 7f4f94be7000 8100073
ffff88009ca98bd0 7f4f94be7000 7f4f94bec000 8000075 /usr/lib64/libnss_dns-2.17.so
ffff88009ca98af8 7f4f94bec000 7f4f94deb000 8000070 /usr/lib64/libnss_dns-2.17.so
ffff88009ca98ca8 7f4f94deb000 7f4f94dec000 8100071 /usr/lib64/libnss_dns-2.17.so
ffff88009ca991b8 7f4f94dec000 7f4f94ded000 8100073 /usr/lib64/libnss_dns-2.17.so
ffff88009ca98798 7f4f94ded000 7f4f94df9000 8000075 /usr/lib64/libnss_files-2.17.so
ffff88009ca986c0 7f4f94df9000 7f4f94ff8000 8000070 /usr/lib64/libnss_files-2.17.so
ffff88009ca98948 7f4f94ff8000 7f4f94ff9000 8100071 /usr/lib64/libnss_files-2.17.so
ffff88009ca98a20 7f4f94ff9000 7f4f94ffa000 8100073 /usr/lib64/libnss_files-2.17.so
ffff88009ca98870 7f4f94ffa000 7f4f95000000 8100073
ffff8800a1f6ce58 7f4f95000000 7f4f95600000 8200073
ffff880036735cb0 7f4f957ef000 7f4f957f0000 8100070
ffff880036735878 7f4f957f0000 7f4f959f0000 8100073
ffff880036735a28 7f4f959f0000 7f4f959f1000 8100070
ffff880036734af8 7f4f959f1000 7f4f95bf1000 8100073
ffff880036735d88 7f4f95bf1000 7f4f95bf2000 8100070

So that means I will definitely hold off on upgrading my production servers; if this is happening on my personal one, I can only imagine what would happen to larger instances. It could also be a result of me being a fanboy and installing the .0 release of the software, which any good admin will tell you to avoid: “just say no to .0”.

WBOY, Comcast and Nexstar

So a lot has been in the “news” lately about how Monongalia and Preston Counties in WV will be losing WBOY-TV on March 14th, 2017. Both Comcast and WBOY have pointed fingers at each other as to why retransmission will cease on that date. WBOY states that its parent owner, Nexstar Media Group, is not the reason it is being dropped, while Comcast has been a little light on the information it has provided as to why it is dropping the station.

The interesting part is that most people (or I should say, all Comcast subscribers in Mon and Preston Counties) should have received a letter from Comcast stating that the station will be dropped on 3/14. I received a letter, however it was not for WBOY, but for another Nexstar station, WETM-TV in Elmira, NY. The premise was the same: the station is not in the receiver's DMA (Designated Market Area), and Comcast states that “our business agreement with the station’s owner to carry this out-of-market broadcast station has ended.”

What this reminds me of, and everyone has experienced it with cable and satellite services, is a station owner (and it doesn't have to be a broadcast station; it could also be a “cable-type” channel) deciding that the price the cable/satellite provider pays to carry the station is less than what they believe it is worth. This happens all the time; most recently, Dish Network was going to drop WBOY the Friday before the Super Bowl. So my personal opinion is that now that Nexstar owns WBOY/WETM, it asked Comcast for an increase in retransmission fees, and Comcast said no because the station is not a primary-DMA station. So they dropped the contract and the viewers are left “hurt”.

What makes it more difficult is that Mon and Preston Counties are in the Pittsburgh DMA. (Which I actually like, because I prefer the Pittsburgh broadcast stations; they seem more professional and have fewer mishaps, like running a commercial in the middle of a program when there shouldn't be one.) But I also see the viewers' point of view. WBOY has a small studio in Morgantown and does local news for the area, whereas the Pittsburgh channels may only do a story about the area if it is something major.

Another issue is that while WBOY is a broadcast channel, it has a weaker signal. It is also on VHF, where digital propagation is severely limited, especially since its effective radiated power is only 12.25 kW versus, say, WPXI in Pittsburgh (the NBC affiliate for the Pittsburgh DMA), which broadcasts on UHF 48 at an effective radiated power of 1,000 kW. Given that north central West Virginia is a dissected plateau, you need a really high, very directional antenna to pick up channel 12, or a cable/satellite provider to deliver the signal. (I also live in a valley, which means the only station I can even begin to pick up is WNPB, the PBS affiliate in Morgantown.) So viewers who want to keep WBOY only have a couple of options. The first would be to see if they can receive the signal over the air, but according to this graphic, that seems a little hard for the Morgantown area:

graphic from tvfool.com

As you can see, by the time the signal reaches Morgantown it is very much fringe reception, and you may not have the ability to put an antenna outside to pick it up. The second option would be satellite; however, those viewers fall into the same scenario as with Comcast: since Mon and Preston are part of the Pittsburgh DMA, the satellite provider will only give you those channels and none of the out-of-market ones. So you are left with not seeing the channel at all.

Now some people in the counties have been trying to get Mon and Preston reclassified into the Clarksburg DMA. (I, for one, don't want to see that, as I like the Pittsburgh channels better…) I don't see this happening, however, as the Pittsburgh stations would fight it on the grounds that they would be losing a major source of viewership.

So the interesting part about this area is that there are three “major” TV markets in very close proximity. The largest is the Pittsburgh DMA, whose major broadcast stations include KDKA (CBS), WTAE (ABC), WPXI (NBC), WQED (PBS), WNPB (PBS) [technically in the Pittsburgh DMA, but also listed in Clarksburg/Weston], WPGH (FOX), and WPNT (MyNetworkTV). The other two are smaller markets. Clarksburg/Weston has three broadcast stations: WDTV (CBS), WVFX (FOX/CW), and WBOY (NBC/ABC). The third is Wheeling, WV/Steubenville, OH, which has two broadcast stations: WTRF (CBS/ABC/MyNetworkTV) and WTOV (NBC/FOX/MeTV).

When I lived at home with my parents, we were pretty much in the middle of all three of these markets. I had several antennas in the attic that could pick up a lot of these stations, so we had several affiliates of each broadcast network. This is also where I became a video junkie and loved doing late-night DXing of stations. Now my parents can only get the Pittsburgh stations on cable, even though the area can technically receive stations from all three markets.

But the whole reason for this post is my speculation about what actually happened versus what is being told to customers. In short, I believe Nexstar asked Comcast for more money to retransmit WBOY on the Mon and Preston Counties Comcast system, and Comcast said, “No, you are a secondary station, so we will just drop you.” And that was it: no more negotiation, no looking out for the best interest of the station's viewers, just drop it. While I seldom watched WBOY, there were times that I did, when major events were going on in the state and the Pittsburgh channels were not covering them. It also raises the question: if it really was an issue of the station being outside the primary DMA, then why is Comcast keeping WDTV and WVFX, since they are out-of-market Clarksburg/Weston stations as well? Sure, people will have to switch to one of those for local news, but I guess it is better than losing all three stations.

I doubt that Comcast will change its mind, even though it makes billions of dollars a year. In reality, I am surprised that the stations have been carried for as long as I have been here (over 23 years).

Backing up a PlayStation 3

So in my effort to get backups of all my devices/computers and keep a copy off-site, I ran into a little problem with the PS3. I went to the local Best Buy and bought a couple of Seagate portable 2.5-inch USB 3.0 hard drives: a 1TB one for my PS3 and a 2TB one for the PS4. The PS4 immediately recognized its drive and backed up. Excellent. The PS3, however, had a myriad of issues. The first is that the drive has to be formatted as FAT32 for the PS3 to be able to read/write to it, and Windows 7 won't let you format a drive of that size as FAT32; for anything larger than 32GB it only offers NTFS and exFAT (the successor to FAT32). While the PS4 can read/write exFAT, the PS3 is stuck on FAT32.

Luckily I have some Macs and was able to format the hard drive as FAT32. The PS3 now recognized it, and I thought my backup problems were over. So I started the backup and let it run. I came back a few hours later (as it said it was going to take 4 hours to do the backup) and saw the error “Unable to access the drive.” Well, crap. I thought it was a fluke and tried to start the backup again. This time it ran for about 30 minutes and then died again with the same error. I tried it a few more times, and each time it died at a different point. Since failure is not an option, but it was getting late, I decided to stop for the evening and pick back up the next day.

The following day I started doing some google-fu to see what other people were doing for backups on the PS3. Everything I found I had already done, with the exception of one small sentence in one post on a message board, and that sentence is what actually fixed my problem. As you will note from above, I bought a USB 3.0 drive (as most are 3.0 these days rather than 2.0). Well, that was the actual issue: it appears the PS3's USB 2.0 ports don't supply enough power to run a bus-powered portable hard drive like the one I bought. If the drive had had external power, I wouldn't have had the issue. So the solution was to put a powered USB hub between the PS3 and the USB hard drive. I did that, and presto, about 4 hours later the backup was done.

 

Hopefully this will help other people. As for formatting the hard drive as FAT32, if you don't have a Mac lying around, you can download GParted Live (a bootable Linux ISO), boot it, and format the USB drive from there. (It is available from http://gparted.sourceforge.net/livecd.php.) A couple of command-line alternatives are sketched below.
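
If you would rather do it from a terminal than boot GParted, something like this also works; the device names (/dev/disk2 on the Mac, /dev/sdb1 on Linux) are only examples, so double-check which device is actually the USB drive before formatting:

# macOS: wipe the whole USB drive and format it FAT32 with an MBR partition table
diskutil list                                     # find the USB drive, e.g. /dev/disk2
diskutil eraseDisk FAT32 PS3BACKUP MBRFormat /dev/disk2

# Linux (e.g. from a GParted Live session): format the drive's first partition as FAT32
mkfs.vfat -F 32 -n PS3BACKUP /dev/sdb1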

OpenSSL and Solaris 10

So if you are still running Solaris 10 and haven't looked at the patches recently, Oracle has bundled OpenSSL 1.0.1 in as a patch. While it was awesome to see an updated version, now that everyone should be running only TLSv1.2 on their websites, there are some issues. The main one is that the GCC build supplied with Solaris 10, for whatever reason, has /usr/sfw/include and /usr/sfw/lib hardcoded into the gcc binary as the first include and library paths. While I can understand why that was probably done to begin with, it makes compiling any software that needs a newer OpenSSL practically impossible.
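
If you want to see what your particular gcc has baked in, you can ask it directly; this is just an inspection sketch, and the exact paths it prints will depend on which GCC package is installed:

# show the built-in include search order
echo | /usr/sfw/bin/gcc -E -v - 2>&1 | sed -n '/search starts here/,/End of search list/p'

# show the built-in library search directories
/usr/sfw/bin/gcc -print-search-dirs | grep '^libraries'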

For example, say you want to compile a new *AMP stack and want the latest version of Apache with SSL enabled. Well, it won't work: by default it will pick up the old 0.9.7 OpenSSL libraries and include files. I spent days on this; I even compiled my own version of OpenSSL and tried to link against it, and it still kept linking against the 0.9.7 ones.

So how do you fix this? Well, you could build your own version of GCC, which in and of itself is not an easy task. You also can't remove the OpenSSL 0.9.7 libraries, as a lot of things depend on them. The patch notes for the new OpenSSL give some “recipes” for ways to maybe, possibly fix it at compile time, which for me did not work. So what I ended up doing was this (a shell sketch of the same steps follows the list):

 

  1. In /usr/sfw/include, I moved the openssl directory to .openssl. (The new 1.0.1 headers are in /usr/include/openssl.)
  2. In /usr/sfw/lib, I removed the libssl.so and libcrypto.so symlinks, and did the same in /usr/sfw/lib/amd64. This lets the apps that use the actual libssl.so.0.9.7 keep running, while compile-time linking can no longer find the old libraries.
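
Roughly, those two steps amount to the following (run as root, and verify that the .so files really are symlinks on your system before removing anything):

# hide the old 0.9.7 headers; the new 1.0.1 headers live in /usr/include/openssl
mv /usr/sfw/include/openssl /usr/sfw/include/.openssl

# remove only the compile-time symlinks; the real libssl.so.0.9.7/libcrypto.so.0.9.7 stay in place
rm /usr/sfw/lib/libssl.so /usr/sfw/lib/libcrypto.so
rm /usr/sfw/lib/amd64/libssl.so /usr/sfw/lib/amd64/libcrypto.so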

Once I did that, I went back to the Apache directory and redid the compile, and yippee, it linked against /usr/lib/libcrypto.so.1.0.1 and /usr/lib/libssl.so.1.0.1, which also meant I could limit Apache to TLSv1.2 only.
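
A quick way to confirm which OpenSSL the resulting httpd actually linked against, plus the matching mod_ssl directive, looks something like this (the install prefix is just an example):

# confirm httpd picked up the 1.0.1 libraries rather than 0.9.7
ldd /usr/local/apache2/bin/httpd | egrep -i 'libssl|libcrypto'

# then in the SSL config (e.g. conf/extra/httpd-ssl.conf), allow only TLSv1.2
#   SSLProtocol -all +TLSv1.2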

 

So if this helps just one person, great. This was something that took me a few weeks to figure out. I should also note that if I had fully read the README in the patch that introduced OpenSSL, it probably would have gone a little faster… If you are worried about breaking anything, once your compile is done you can put the symlinks back and move the openssl include directory back to its original place.

N.B. I am not sure if the gcc in Solaris 11 has the same quirk.

Cash for Cache

I decided to build a new VMware host for my home “lab” last week to replace the HP workstation I had been using. (The real motive was to turn the HP workstation into a large NAS, since it has 12 SATA ports on it, but more on that later.) So, off to part out my new server. What I ended up purchasing was the following (prices as of 3/24/2015, in USD):

 

The plan was to set this system up with VMware vSphere 6 and then migrate everything over from my VMware 5.1 system. I began building it as the parts arrived last Friday night. Everything was going swimmingly until I was reminded that the onboard LSI2308 SAS/SATA RAID controller doesn't have any cache. The two 480GB SSDs in a RAID 1 on that controller were fast, extremely fast, as in I could boot a Windows 7 or Windows 2012 R2 VM in about 3 seconds. However, the two 2TB SATA drives I put in a RAID 1 on it were slow as hell. (The same issue I was having with the HP xw8600 system.) I originally thought it was just the RAID rebuilding, so I left it sitting at the RAID BIOS overnight rebuilding the array.

After going to bed with it at 51% complete and waking up 8 hours later to find it only at 63%, I knew I would never be able to use the SATA drives as a hardware mirror on that controller. So I powered the box down, disconnected them from the LSI2308, and moved them over to the Intel SATA ports on the motherboard. This is where things get interesting, as I really wanted a large 2TB mirrored datastore for the test VMs I don't run 24×7 (the ones I do are on the SSD RAID 1). In order to achieve this, I had to do some virtualization of my storage…

The easiest way I could get the “mirrored” datastore to work was to do the following (a rough command-line sketch follows the list):

  1. Install a FreeNAS VM on the SSD datastore (pretty simple: a small 8GB disk with 8GB of RAM, which leaves 24GB of RAM for my other VMs).
  2. On each of the 2TB disks, create a VMware datastore; I called them nas-1 and nas-2, but they can be named anything you want.
  3. Next, create a VMDK that takes up nearly the full 2TB (or smaller; in my case I created two 980GB VMDKs on each 2TB disk).
  4. Now present the VMDKs to the FreeNAS VM.
  5. Next, create a new RAID 1 volume in FreeNAS using the 2 disks (or 4 in my case) presented to it.
  6. Create a new iSCSI share of the new RAID 1 volume.
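
From the ESXi shell, step 3 looks roughly like this; the datastore and file names match mine but are arbitrary, and I actually built the FreeNAS mirror and iSCSI share (steps 5 and 6) through the FreeNAS web GUI rather than its command line:

# create two 980GB thick VMDKs on the nas-1 datastore (repeat on nas-2)
vmkfstools -c 980G -d zeroedthick /vmfs/volumes/nas-1/freenas-data1.vmdk
vmkfstools -c 980G -d zeroedthick /vmfs/volumes/nas-1/freenas-data2.vmdk
# then add the new VMDKs to the FreeNAS VM as existing virtual disks (step 4)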

Now comes the part that gets a little funky. Because I didn't want the iSCSI traffic to affect the physical 1GbE NIC on the motherboard, I created a new vSwitch but didn't assign any physical adapters to it. I then created a VMkernel port on it and gave the local vSphere host a new IP in a different subnet. Finally, I added another Ethernet (e1000) card to the FreeNAS VM, placed it in that same vSwitch, and assigned it an IP in the same subnet as the VMkernel port.
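
If you prefer the command line, the vSwitch and VMkernel side can also be done with esxcli; the names and the 192.168.100.0/24 subnet below are just examples, not requirements:

# internal-only vSwitch (no uplinks) plus a port group for the iSCSI traffic
esxcli network vswitch standard add --vswitch-name=vSwitch-iSCSI
esxcli network vswitch standard portgroup add --portgroup-name=iSCSI --vswitch-name=vSwitch-iSCSI

# VMkernel port for the host side of the iSCSI connection
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=iSCSI
esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=192.168.100.1 --netmask=255.255.255.0 --type=static

# the FreeNAS VM's second e1000 NIC then goes in the iSCSI port group with an address like 192.168.100.2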

With the networking “done”, it is time to add the iSCSI software adapter (the esxcli equivalent is sketched after the list):

  1. In the vSphere Client, click on the vSphere host and then the Configuration tab.
  2. Under Hardware, select Storage Adapters, then click Add in the upper right.
  3. Select the iSCSI adapter and hit OK. You should now have another adapter called iSCSI Software Adapter; in my case it was vmhba38.
  4. Click on the new adapter and then click Properties.
  5. Next, click on the Dynamic Discovery tab and click Add.
  6. In the iSCSI Server address, enter the IP address you gave the FreeNAS box on its second interface (the one on the “internal” vSwitch).
  7. Click OK (assuming you didn't change the port from 3260).
  8. Now if you go back and click Rescan All at the top, you should see your iSCSI device.
  9. Now we just need to make a datastore out of it, so click on Storage under the Hardware box.
  10. Then Add Storage…
  11. Then follow through adding the Disk/LUN and the naming.
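
The same thing from the ESXi shell looks roughly like this; vmhba38 and 192.168.100.2 are my values, so substitute your adapter name and the FreeNAS IP:

# enable the software iSCSI initiator
esxcli iscsi software set --enabled=true

# point dynamic discovery at the FreeNAS portal (default port 3260)
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba38 --address=192.168.100.2:3260

# rescan so the new LUN shows up, then build the datastore from the Add Storage wizard
esxcli storage core adapter rescan --all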

You should now have a new iSCSI datastore on the 2 disks that were not able to be “hardware” mirrored. Using HD Tune in a Windows 7 VM on that datastore I got this:

HD Tune running in Windows 7

As you can see, the left side of the huge spike was actually the write portion of the test, which got drowned out by the read side. Needless to say, the cache on the FreeNAS side makes reads extremely fast. As an example, a cold boot of this Windows 7 VM took about 45 seconds to get to the login screen from power-on, while a reboot takes about 15 seconds or less.

Now on the FreeNAS side here is what the CPU utilization looked like during the test:

FreeNAS CPU usage

You can see that it barely touched the CPUs while the test was running. So let's look at the disks to see how they dealt with it:

FreeNAS disks

It looks like the writes were averaging around 17MB/s, which is a little slow for a SATA 6Gbps drive, but we are also doing software RAID, with caching handled in memory on the FreeNAS side. The reads looked to be about double the writes, which is expected in a RAID 1 config.

The final graph I have from the FreeNAS is the internal network card:

FreeNAS Network

Here we can see the transfer rates appear to be pretty close to those on the disk side. This is, however, on the e1000 card; I have yet to try the VMXNET3 driver to see whether I get any faster speeds.

While the above may not show very “high” transfer speeds, the real test was transferring the VMs from the HP box to the new one. Before I created the iSCSI datastore, when I was using the straight LSI2308 RAID 1 on the two 2TB disks, the write speed was so bad that it was going to take hours to move a simple 10GB VM. After making the switch, it was down to minutes. In fact, the largest one I moved was 123GB and took 138 minutes to copy using the ovftool method.
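
For anyone who has not used it, the ovftool transfer is essentially a host-to-host copy along these lines (hostnames, credentials, VM name, and target datastore are made up for the example):

# copy a powered-off VM straight from the old 5.1 host to the new 6.0 host
ovftool --datastore=iscsi-nas \
  "vi://root@old-esxi.local/MyTestVM" \
  "vi://root@new-esxi.local/"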

So why did I title this post Cash for Cache? Quite simple: if I had had more cash to spend on a RAID controller with a decent amount of cache on it, and a BBU, I wouldn't have had to go the virtualized FreeNAS route. I should also mention that I would NEVER recommend doing this in a production environment, as there is a HUGE catch-22. If you only have one vSphere host and no shared storage, when you power off the vSphere host (and consequently the FreeNAS VM), you will lose the iSCSI datastore, which is expected. The problem is that when you power it back on, you have to rescan to find the iSCSI datastore(s) after you boot the FreeNAS VM back up. Sure, you could have FreeNAS start automatically, but I have not tested that yet to see whether vSphere will automatically rescan and find the FreeNAS share.

 

Looking to the future, if SSDs drop in price to where they are about equal to today's spinning disks, I will likely replace all the SATA hard drives with SSDs, and then this would be the fastest VMware server ever.