Raspberry Pi’ing

Recently I decided I needed a better way to monitor the temperature and humidity in various parts of my house. The main reason is that the thermostat is located in a fairly closed-in hallway, so while it may show 75 degrees, the rest of the house may only be 70 degrees or less. After the winter we have had, I also needed a good way to monitor the humidity in the house. The only way I was able to do that was with a little Oregon Scientific thermometer I bought at Target, but it only covered one room, didn’t seem to be very accurate, and gave me no way of logging the values over time.

In comes the Raspberry Pi. Along with a DHT22 temperature/humidity sensor and Splunk, I can now monitor, record, and graph the temperature and relative humidity in various parts of the house (and outside) in real time.

What I got was this:

  1. 3 x Raspberry Pi 2 CanaKits from Amazon.com
  2. 5 x DHT22 Digital Sensors from Amazon.com
  3. 1 x DHT11 Digital Sensor from Amazon.com

Now the DHT11 was what I purchased in the first round, along with just one of the Raspberry Pi’s. It is not as accurate as the DHT22, but since it was just for the original test it was fine for what I needed. In the second round I bought the other two Raspberry Pi’s and the 5 DHT22 sensors.

What I intend to do is wire the DHT22’s into some of the pre-existing CAT5 runs through the house and terminate the other ends at a Raspberry Pi in the garage. That way I can put multiple sensors on one device instead of having a device in every room; a rough sketch of the idea is below.
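Once the Adafruit script from the steps below is set up, polling several sensors from one Pi is basically a loop over GPIO pins. Here is a minimal sketch of the idea, using the Fahrenheit-modified Adafruit script that shows up later in this post; the pin numbers and room names are just placeholders for whatever the CAT5 runs end up wired to:

#!/bin/bash
# Rough sketch: one Pi polling a DHT22 on several GPIO pins.
# The pin numbers and room names are placeholders, not the final wiring.
DATE="`date \"+%Y-%m-%d %H:%M:%S\"`"
for ENTRY in "4:Garage" "17:FamilyRoom" "27:MasterBedroom"; do
    PIN="${ENTRY%%:*}"
    ROOM="${ENTRY##*:}"
    READING="`python /root/TempLogger/Adafruit_DHT-f.py 22 ${PIN} | grep Temp`"
    echo "${DATE} ROOM=${ROOM} ${READING}"
done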

 

Some of the benefits of the Raspberry Pi CanaKits I bought are:

  1. A clear case is included with the correct cutouts for the Raspberry Pi.
  2. A USB WiFi dongle is included, and the drivers are pre-loaded in the OS.
  3. It comes with a pre-loaded 8GB microSD card.
  4. It comes with a miniature breadboard, along with a 40-pin cable and a breakout board that plugs perfectly into the breadboard.
  5. It comes with various resistors, LEDs, and pushbuttons.
  6. An HDMI cable is included, which made it easy to hook into my monitor.
  7. Various jumper cables for the breadboard are included.

 

Overall, I would say that the total time to get a base monitor up and running is a few minutes. But that is based on me already having Splunk, the network, DHCP, DNS, etc. already set up. So here are the basic steps I used to get it up and running:

  1. Unbox the raspberry pi, place the heatsinks on the two “large” chips on the top side, and then place it in the clear case.
  2. Hook up the HDMI, keyboard, mouse, and WIFI dongle.
  3. Insert the microSD card.
  4. Hook up the USB power cable and watch it boot NOOBS.
  5. Once it is booted, select Raspbian to install. This probably takes the longest of all the steps, as it is expanding the operating system on to the microSD card.
  6. Once that is done, it will reboot and bring up a text-based config. I set the hostname, enabled ssh, set the timezone, and finally set the locale.
  7. At the login prompt, you can log in with the userid pi and the password of raspberry.
  8. Next, set up the network. If you are using Ethernet, it should already have an IP address, assuming you have DHCP running on your network. If you are using the WiFi dongle, edit the /etc/wpa_supplicant/wpa_supplicant.conf file as root and put the following in it:
     network={
         ssid="YOURWIRELESSSSID"
         psk="YOURWIRELESSPASS"
     }

     Where YOURWIRELESSSSID is the SSID of the AP you want to connect to and the psk value is the password for that SSID/AP. (If you are doing MAC filtering, you can get the MAC address by running ifconfig -a as root and looking at the wlan0 entry.)

  9. Once you save the file in the item above, issue the following commands:
    wpa_action wlan0 stop
    ifup wlan0
    ifconfig -a
    
  10. By now, if everything is working correctly, you should have an IP address and network connectivity. You can use wpa_cli status to verify the network connectivity.
  11. Now that the network is up and running I needed to download some software:
    sudo su -
    apt-get update
    apt-get upgrade
    apt-get install python-dev
    git clone git://github.com/adafruit/Adafruit-Raspberry-Pi-Python-Code.git
    wget http://www.airspayce.com/mikem/bcm2835/bcm2835-1.42.tar.gz
    
  12. Now that we have the software downloaded, it is time to do a little compiling:
    tar -zxvf bcm2835-1.42.tar.gz
    cd bcm2835-1.42
    ./configure
    make
    make install
    

     That should build and install the bcm2835 library used to access the Pi’s GPIO.

  13. Next we need to do the python code setup:
    cd Adafruit-Raspberry-Pi-Python-Code
    cd Adafruit_DHT_Driver_Python
    python ./setup.py install
    
  14. At this point the code should be done. You can now power down (shutdown -h now) the Raspberry Pi and hook in the DHT22 sensors. (Make sure to disconnect the power before connecting the 40-pin cable.)
  15. The way I hooked the sensor in for testing was to connect the 40-pin cable to the Raspberry Pi and the other end to the breakout board, which was attached to the mini breadboard. Once that was done, I hooked a jumper from 3.3V to the first pin on the DHT22. Then I placed a 10K resistor between another 3.3V connection and the second pin. In addition, a jumper was run from GPIO4 to the second pin of the DHT22. The third pin is left unconnected and the fourth pin is connected to ground. I will post a picture later.
  16. Once everything is connected, power the Pi back up and log in and switch to the root account.
  17. Next to see if everything is working change in to the Adafruit-Raspberry-Pi-Python-Code/Adafruit_DHT_Driver_Python directory.
  18. Then run python ./Adafruit_DHT.py 22 4. The 22 is the type of the sensor, so if you are using a DHT11 use 11, and if a DHT22 use 22. The number 4 is the GPIO pin that the sensor’s data pin is connected to. Once you run it you should see something like this:
    root@rpi2:~/Adafruit-Raspberry-Pi-Python-Code/Adafruit_DHT_Driver_Python# python ./Adafruit_DHT.py 22 4
    using pin #4
    Temp = 20.2999992371 *C, Hum = 40.4000015259 %
    
  19. In the above, we can see that the temp is about 20.3 C and the humidity is about 40.4 %. If you want the temp output in Fahrenheit, like I did, make a copy of the Adafruit_DHT.py file (I made my edits in a copy called Adafruit_DHT-f.py, keeping the original as a backup) and then add a new line at line 37 with the following:
    tf = (( t * 9 ) / 5.0 ) +32;
    

    Then on line 39, you will want to change the *C to *F, and then in the format(t,h) you will want to change the t to a tf, so the line would look like this now:

    print("Temp = {0} *F, Hum = {1} %".format(tf,h))
    
  20. Now if you re-run, it will look like this:
    root@rpi2:~/Adafruit-Raspberry-Pi-Python-Code/Adafruit_DHT_Driver_Python# python Adafruit_DHT-f.py 22 4
    using pin #4
    Temp = 68.1800006866 *F, Hum = 40.0 %
    
  21. Now that we have the data being output in the format we like, the only thing left was to log it. What I did was create a shell script that is run by cron every minute (* * * * *) and outputs the values to a log file called /var/log/temp+humid.log (a sample crontab entry is shown just after this list). This log file is then pulled in by Splunk for graphing and other fun stuff that will be another post.
  22. The script I wrote looks like this:
    #!/bin/bash
    PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
    export PATH
    # Read the DHT22 on GPIO4 and keep just the line with the readings
    RESULTS="`python /root/TempLogger/Adafruit_DHT-f.py 22 4 | grep Temp `"
    # Pull the temperature and humidity fields out of
    # "Temp = 68.18 *F, Hum = 40.0 %"
    TEMP="`echo ${RESULTS} | awk '{print $3}'`"
    HUMID="`echo ${RESULTS} | awk '{print $7}'`"
    DATE="`date \"+%Y-%m-%d %H:%M:%S\"`"
    echo "${DATE} ROOM=FamilyRoom TEMP=${TEMP} RH=${HUMID}" >> /var/log/temp+humid.log
    
  23. The output that gets logged looks like this:
    2015-03-17 21:44:02 ROOM=FamilyRoom TEMP=68.1800006866 RH=39.7000007629
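
As mentioned in step 21, the script is kicked off by cron every minute. The crontab entry (in root’s crontab, in my case) looks roughly like this; the script name and path are just examples, use whatever you saved the script as:

    # Log the temperature/humidity once a minute (script path is an example)
    * * * * * /root/TempLogger/log-temp.sh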
    

 

Sometimes, and I haven’t figured out why yet, it will log null values for the TEMP and RH. I need to add some more checking to the script to make it more robust (something like the guard sketched below), but for now it is working.
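
A minimal version of that check, assuming the same variable names as the script above, would just skip the log write when the sensor read comes back empty:

    # Sketch of a null-value guard for the logging script above
    if [ -z "${TEMP}" ] || [ -z "${HUMID}" ]; then
        # Bad read from the sensor; skip this sample instead of logging nulls
        exit 0
    fi
    echo "${DATE} ROOM=FamilyRoom TEMP=${TEMP} RH=${HUMID}" >> /var/log/temp+humid.log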

In the next post I will cover what I do with the data in Splunk, and how I get the outside temps from the local airport and add them to Splunk as well.

So you want to be an IT Superstar?

Today is one of those days that I have to wonder why I took a career in Information Technology (IT)… You see, I have been doing IT for almost 20 years now and it is nothing like the commercials for ITT Tech or any of those other “tech” trade schools. The commercials make it look like it is just an easy 9 to 5 job, where everything is cool, calm, and collected.

What I am going to tell you is that it is the exact opposite. You will work all types of hours, sometimes days on end without sleep when something dies. You will have unrealistic expectations assigned to your projects by people who more than likely have never even touched a computer, or don’t know how anything works on it other than sending an email or doing an Excel spreadsheet. You will also probably give up one weekend a month for the famous “patching day,” which can be at any time your management decides they want it to be. And because they love to do it, it is usually at something like 1AM on a Sunday morning, which means you lose the entire weekend because you are trying to get sleep and rest up to work that one 8 hour shift that is not your normal work time.

Once you get past all that stuff, unless you are eager to learn on your own time, you can probably kiss any further training goodbye. In these days of tight budgets and very high workloads, your best bet at training is some computer-based training on “what’s new in Windows 7,” or something totally unrelated to your actual job.

So now that we have talked about that, what provoked me to say this stuff? Well, one company: Microsoft. Today was one of those days where I needed to patch some Windows 2008 servers because of the monthly release of “security” patches, because Microsoft and other vendors are in this mode of getting shit out as fast as possible and not checking the code. So as normal, I approved the 7 or 8 patches for the July cycle in WSUS; so far so good. The part that blows is that the patches applied and the servers said, hey, I need to reboot. This was no big surprise, because how often have you applied a Windows patch and not had to reboot? So off to reboot the servers, and this is where the shit hit the fan. All of a sudden the server went into a boot loop. On the off chance that you could catch the blue screen of death in the fraction of a second it was on the screen, you would see that it mentioned something about an error 0x0000007B and that you may have a virus.

Well, I can guarantee you that the machines don’t have viruses on them. Investigating the error further, it appears that 0x7B is an error that says the OS can’t find the hard drive. Which is ironic, because it had booted off of it to get that far. This then starts the oh-shit moment. Luckily this was only 1 of 2 Active Directory servers. I spent a while trying to get it to boot by following all these different articles. To no avail, I could not get it to boot up.

The biggest thing that pissed me off was that Microsoft used to have a boot mode where you could step through each driver as it was loading and say whether to load it or not. Unfortunately, I can’t find that anywhere in the F8 menu or with any amount of Google-fu. So I tried each of the safe mode options, each of which BSoD’d. I tried Debug Mode: BSoD. I tried to have it log the startup to ntbtlog.txt; nope, it doesn’t even write to it. So now I am extremely pissed, to the point where I just said F@#K it and started a reinstall of Windows 2008R2 (in the environment this was in, I could do that). But before I did, I tested the other AD server. Yup, it bit the dust too.

Luckily reinstalling W2K8 doesn’t take terribly long. However, it is a pain in the ass getting an entire environment set back up because one patch blew up your servers. So while I was reinstalling these two servers, I decided to test another, less critical server on a different network. Guess what, it died too, with the same error. So now I am thinking about how bad this could have been if I were doing some heavily used servers. (Once again, this stuff isn’t shown in the “tech school” commercials.)

So how do you go forward from this? Well, there are two different types of “tech” people. There are those who go home and start testing every single possibility in their own private lab. Then there are those who don’t give a F and wait for other people to fix their problems, as they don’t have the first clue how to fix stuff if a reboot doesn’t fix it.

Can you guess which type of tech person I am? If you guessed the former, you are correct. The first thing I did when I got home from work was create a new W2K8R2 VM and start installing the OS, trying to get it up to the patch level I had the machines at work at. But because this is Windows, that takes FOREVER with all the reboots and waiting for it to “see” the patches offered to it.

The group in the latter (those who don’t care and wait for others to fix it) really starts to make me mad nowadays. Now, I can say that I spend a lot of my own free time teaching myself practically everything I know about IT, as when I went through school, none of this stuff was taught. (Shit, I am a UNIX person, but I bought a Microsoft TechNet subscription just to learn as much as I can about Windows Server, etc.) But some “IT” people seem to get pissed when I suggest that they need to learn this stuff on their own at home. It is almost a “how dare you ask me to do something in my free time to better myself when I can sit here and do nothing.” Well, that is the only way you are going to better yourself and learn from your mistakes without affecting something at work that may affect your pay…

 

As I said at the beginning I have been doing IT for close to 20 years now. In that time I have had my hands on the following:

  • Every version of SunOS/Solaris from 4.1.1 up to the current (11)
  • Every version of Microsoft Windows from 3.11 through Server 2012
  • IBM AIX 3.1.2 through 6
  • VM/ESA
  • OpenVMS
  • SGI IRIX
  • Various distributions of Linux (and this is one of my huge pet peeves, but that is for another post)
  • Every version of MacOS from 7 through the current 10.9
  • Practically every version of VMware, from the original VMware Workstation 1.0 on Linux, to vSphere 5.1, to VMware Fusion 6.
  • BeOS
  • OS/2 Warp
  • Novell Netware

And that is just operating systems, some of which don’t even exist anymore. The hardware side is so numerous that it is hard to even keep track of, but let’s just say I got into computers when an 8MHz 80286 was considered fast and bleeding edge, not to mention a Commodore 64 and an Atari 800.

 

So what is the moral of this post? Really think about whether you want to get into IT, and whether you have the thirst for learning and teaching yourself. If you don’t have that and don’t want to spend sometimes hours a night learning how stuff works, or if spending an entire weekend at work on a nice summer day doing patches is not your thing, please don’t take that type of job. IT almost requires dedication and devotion; if you don’t have the time to put into it, you probably shouldn’t start it.

Joyent SmartOS network monitoring

My free trial period on my SmartMachine ended, so now I was trying to find a way to monitor my bandwidth usage on it. There isn’t an “easy” way of doing that (like logging in to the portal to look at your account), so I devised a way to do it on my own.

The first part of it will be discussed in this post, and I will do another about how to actually view the results.

First off the easiest way I have found to “watch” network traffic is using the kstat command. On my SmartMachine, I have 2 network interfaces, one that has the public interface on it, and one that has the private interface on it. For my purposes I am only currently watching “net1” which is the external interface.

So the small script I have runs every 10 minutes, and logs the information in to a MySQL table. That table is defined like this:

CREATE TABLE `vmnet` (
`interface` char(10) DEFAULT NULL,
`time` bigint(20) DEFAULT NULL,
`obytes` bigint(20) DEFAULT NULL,
`rbytes` bigint(20) DEFAULT NULL,
`htime` datetime DEFAULT NULL,
KEY `tidx` (`time`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

 

The columns are as follows:

  • interface: which interface we are getting the stats from, right now everything just says net1. But if I were to add net0 it would fit right in.
  • time: time in seconds since the epoch
  • obytes: bytes leaving the interface
  • rbytes: bytes received on the interface
  • htime: human readable time. (Yes, I realize I am storing the time twice, and that I can do everything with just time, but what the heck, it is just a little extra storage ;-)…

 

Now that the table in the DB is defined, set the permissions on it. In my case I created a database just for the “netstats” and there is just the one table in it, called vmnet. I created 2 users that have access to the vmnet table: one just for writing the data in from the script, and another for reading the data for part 2 of this. A rough sketch of the grants is below.
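
The exact statements will depend on your setup, but the grants look roughly like this (the user names and passwords here are placeholders, not what I actually use):

/opt/local/bin/mysql -uroot -p <<'EOF'
-- writer account used by the collection script
CREATE USER 'netwrite'@'localhost' IDENTIFIED BY 'writepass';
GRANT INSERT ON netstats.vmnet TO 'netwrite'@'localhost';
-- reader account for pulling the data back out (part 2)
CREATE USER 'netread'@'localhost' IDENTIFIED BY 'readpass';
GRANT SELECT ON netstats.vmnet TO 'netread'@'localhost';
EOF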

 

Now for the script, it is pretty simplistic:

#!/bin/bash
#Use kstat to grab interface stats
#Define the interface to look at:
INTF="net1"
VALUES="`kstat -c net -n ${INTF} | egrep \"(obytes64|rbytes64)\"`"
SNAPTIME="`perl -e \"print(time());\"`"
OBYTES="`echo ${VALUES} | grep obytes64 | awk '{print $2}'`"
RBYTES="`echo ${VALUES} | grep rbytes64 | awk '{print $4}'`"
echo "insert into vmnet values ('${INTF}',${SNAPTIME},${OBYTES},${RBYTES},NOW());" | /opt/local/bin/mysql -uUUUUUU -pPPPPPPPPP netstats

 

In its simplest form, the script runs the kstat command on the requested interface ${INTF} and then uses egrep to grab the obytes64 and rbytes64 counters. It then takes those two values, builds a SQL insert, and pipes that into the mysql command, where UUUUUU is the username and PPPPPPPP is the password for the insert user on the netstats database.

I then run this every 10 minutes from cron (the entry is shown after the table below). And what you end up with is data in the table that looks like this:

+-----------+------------+------------+------------+---------------------+
| interface | time       | obytes     | rbytes     | htime               |
+-----------+------------+------------+------------+---------------------+
| net1      | 1388373702 | 3123241114 | 3977125001 | 2013-12-29 22:21:42 |
| net1      | 1388374200 | 3123381303 | 3977326242 | 2013-12-29 22:30:00 |
| net1      | 1388374457 | 3140146411 | 3977725426 | 2013-12-29 22:34:17 |
| net1      | 1388374800 | 3140170245 | 3977843340 | 2013-12-29 22:40:00 |
| net1      | 1388375400 | 3140526526 | 3978051264 | 2013-12-29 22:50:00 |
+-----------+------------+------------+------------+---------------------+
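
The cron entry for it is nothing fancy, something like this, with the script path just being an example (the minutes are spelled out rather than using */10, which not every cron understands):

# Collect the interface counters every 10 minutes (script path is an example)
0,10,20,30,40,50 * * * * /root/bin/vmnet-stats.sh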

Next time I will show how to take the data and make something out of it:

(images: graph of network traffic, netstat output)

Comcast vs TiVO Roamio

As most of the world knows, TiVO released their new DVR, called the Roamio, which by all appearances is the most awesome DVR yet. With the ability to run 6 tuners and stream live TV to the TiVO Mini, it alone will save me hundreds of dollars a year in stupid hardware rental fees from Comcast. So before I put down nearly $1000 for the new Roamio and a lifetime subscription, I decided to ask Comcast if their Morgantown, WV system would support it. (I had seen some people on the interwebs saying there were issues with some cable systems not supporting all 6 tuners yet.)

So the first place I went was to twitter to ask them (@comcastcares) if they supported it and if there were any hoops I had to jump through to switch it from my Premiere to the Roamio. Well they wrote back and said to contact [email protected]. So I sent them an email asking about the support for the Roamio and whether it was required for a tech to come out to do the install.

So I got the typical boilerplate email back saying they would review my concerns. Pretty typical.

Today I got a call from their Executive relations group while I was at work. So I called them when I got home and here is roughly how the conversation went:

ER: Hello, I was calling to address the email you sent us.

ME: Ok, well I am looking at getting a Roamio and was wanting to make sure it was supported before buying it.

ER: Well does it support 3 cable cards? We only have cable cards that support 2 tuners, so if it doesn’t have 3 cable card slots then it probably won’t work.

ME: No, it only has one slot for an M-CARD. (Thinking to myself, yeah, if I had to get 3 cards that is extra money to you.)

ER: Oh, ok. well we didn’t even know that TiVO had a DVR that did 6 tuners. (Thinking well, they have had a 4 tuner one for a couple of years now.) I have some calls in to our warehouses to verify if we have a cable card that supports that many tuners, but right now the only thing we support is 2 tuners.

ME: Ok, well from what I was reading it is just a firmware issue.

ER: Hmm, hmm, hmm, ok, ok, ok (don’t have a clue what he was doing) {he then repeats about checking with the warehouse people}

 

He then addressed my issue with doing a self install and said yes you can do it, but you have to call them to activate it. (Which I knew but was confirming it again.)

I then brought up the issue where the website says that for each customer-owned piece of equipment you should get a $2.50 credit on your bill. I told him I had 2 TiVO’s and therefore I should see a $5.00 credit on it. To which he explained that I do get the credit but it isn’t reflected on the bill. He then told me that the cable card fee is actually the same as the other box fees ($9.95), but they subtract the $2.50 from it (which is the “cost of the box”) to make it $7.45 (which is the cost of the “service”). I told him that the Comcast website doesn’t say that, and even the paper that comes with the bill doesn’t show that the cable cards are $9.95.

The funny thing was that I told him that my friend sees the $2.50 credit on his bill. He immediately said “well different parts of the country does billing a different way.” I sort of laughed and said “well, he lives 2 miles away from me. So your hypothesis doesn’t work.” He couldn’t figure out why mine didn’t show it but others did.

He ended the call with saying he would call me back once he hears back from the warehouse and whether they would or ever support a TiVO with 6 tuners. I said “well I sure hope you do as it is going to save me hundreds of dollars a year in rental fees.” He didn’t really say anything after I said that.

 

So long story short, TiVO has released something that is far superior to anything Comcast could ever offer their own customers. So now they are probably going to give out false information to make sure that customers don’t purchase the new Roamio. Just another reason why Comcast is evil, making billions a year off people in hardware rental fees. Shit, I have had 2 Scientific Atlanta 3100 standard def boxes since 2001. The interface is slow, they put ads on the guide screen, and I have paid probably close to $1,500 in rental fees on them since then.

New server

So the server that I bought back in April of 2006 to host this site died Wednesday, September 18th, 2013. I am not sure exactly what happened, but I found it unresponsive around 22:00. I went over to where it was hosted and it was still running, but the ethernet card lights were both on solid. After trying for about an hour to get it to boot and show something on either the video card or the serial port, I finally turned it off, got a screwdriver out, and removed it from the rack.

I had been expecting this day for a while, since the server was 7+ years old. So I brought it home and left it on the floor. The next night I tried to boot it to see if I could get into it. No go, something was hosed in it. As soon as I plugged in the power the fans all went to 100%, and again there was no output on the video. Great… So I pulled one of the drives out, attached it to a SATA/USB adapter, and mounted it on a Solaris VM on my Mac. Awesome, all the data was still there. After spending close to 8 hours copying the data off, the hunt was on for a new place to host my site.

The three “ideas” I had were the following:

  1. Joyent – They run servers running SmartOS (nee Solaris). So this would be my primary choice, cause hey, I love Solaris, and really hate Linux.
  2. Amazon Web Services – They only support Linux and Windows. So I would have to switch to Linux or Windows (not really wanting to do that)
  3. Host it at home and upgrade my cable modem to a business class one.

 

So I set out to look at the cost. Both Joyent and AWS were pretty close for the “same” amount of “hardware”. Comcast Business class was going to be WAY more than hosting it someplace else. So now it was between Joyent and AWS.

Free Trials Away….

Amazon Web Services will let you use one of their “micro” instances for free for a year. So I decided to set one up and see how it would go. I chose to do a SUSE Linux instance, since they didn’t support Solaris. About 15 minutes after clicking the “go” button, I had a SUSE “VM” on the Internet and root access to it.

While the Amazon VM was being provisioned, I went to Joyent.com and decided to sign up for one of their free 2 month trials. Unfortunately it wasn’t as smooth as the Amazon sign up. During the registration process, it places a phone call to give you a PIN to type in to finish the registration (I assume to stop hackers from spawning machines automagically). Well, I put in the phone number and it called, but it barely rang once and then hung up. It then changed the status page to an “invalid account” and locked it so I couldn’t do anything.

I tried calling them, and they said I had to submit a support request through the Internet. I did and some emails went back and forth, and then it was time to go to bed. The next day I received an email saying that the account had been updated and to try to log in. I also received an email from an account exec asking how it was going. (More than I received from Amazon…)

After work I logged in and tried to create my first “SmartMachine”. Well, that sort of failed since I had not finished the registration part the night before. So I added my CC number to the billing info, but it still would not let me create one, as it said I had no billing info set… Ha! I logged out and back in and it was much better; it let me pick the size of machine I wanted to create, and away it went. About 10 minutes later I had a root account on a zone on the machine.

So the work began on trying to get my site back up and running between the AWS SUSE VM and the Joyent SmartOS zone. Surprisingly, the SmartOS machine I had picked had Apache, PHP, MySQL, etc. already installed. BUT PHP did not appear to have been compiled with MySQL support. So I just decided to do my own compile of Apache+PHP+MySQL.

As you can see, it is all up and working now.

So here is my quick comparison of Joyent (Standard 64) and AWS (t1.micro), given the one day of use so far:

  1. Ease of signing up:
    • Amazon: Pretty painless. No issues that I had to contact some one for.
    • Joyent: Minor issue, and it may not be their fault, but it did take extra time to get it fixed
  2. OS Selection:
    • Amazon: They have a variety of Linux (7 different distros) and Windows (2003, 2008, 2012) instances. However, neither would be my first choice of OS for my site. I decided on SUSE Linux in the end.
    • Joyent: They offer 3 different OSes: Linux, Windows, and SmartOS. SmartOS is a fork of Solaris from when it was “open sourced”. Therefore I chose SmartOS, as I would much prefer it over Linux.
  3. Speed of provisioning
    • Amazon: Roughly 15 minutes from start to when I had root access
    • Joyent: Roughly 15 minutes from start to when I had root access.
  4. Processors:
    • Amazon: 1 Processor (Intel Xeon CPU E5-2650 @ 2.00GHz)
    • Joyent: 24 Running at 2.4GHz, 1vCPU
  5. Memory:
    • Amazon: 658MB
    • Joyent: 2GB
  6. Disk Space:
    • Amazon: 10GB
    • Joyent: 66GB
  7. Networking:
    • Amazon: 15GB out
    • Joyent: First GB out free, each additional GB up to 10TB is $0.12

 

Right now the cost of both of these VPS’s is roughly around $47 a month. But we will see how that works out with the network costs.

I will update in a month after seeing how they both perform.