Workstation, Netlogon, DFS Namespace won’t start?

While trying to disable SMB version 1 on my Windows machines, I thought what better way than to do it through the registry GPO settings: all machines in one fell swoop. After creating the appropriate registry keys for the machines, I thought everything was good. That was until I rebooted. All of a sudden the Workstation service, Netlogon service, and DFS Namespace service (on my AD server) failed to start, and nothing I did would start them.

They always gave an error about not being able to start because their group failed to start. I debugged this for days and finally decided that the one AD box had simply become corrupted. So I seized all the FSMO roles to my other server, then built a new AD server and added it to the forest. Well, as soon as I rebooted the new FSMO master, it started having the same problems the first one did.

By now I was mad. So what was the issue? When I created the registry keys and pushed them out, I had mistakenly set mrxsmb to disabled instead of mrxsmb10. So on the broken machine I pulled up regedit, set mrxsmb back to enabled and mrxsmb10 to disabled, removed the entries from the GPO registry settings, and rebooted the machine. This time it booted, and Workstation, Netlogon, and DFS Namespace all started.

This is the link to use to learn how to disable SMB v1: https://support.microsoft.com/en-us/help/2696547/detect-enable-disable-smbv1-smbv2-smbv3-in-windows-and-windows-server
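For reference, the change I was trying to make boils down to one registry value. Here is a minimal sketch in .reg form (Start=4 means disabled; double-check against the Microsoft article before pushing anything via GPO):

```
Windows Registry Editor Version 5.00

; mrxsmb10 is the SMB1 client driver; disabling it (Start=4) is the goal.
; mrxsmb is the common redirector that Workstation/Netlogon depend on,
; so leave its Start value alone.
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\mrxsmb10]
"Start"=dword:00000004
```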

CentOS 7 + PostgreSQL 10 + Patroni

Lately I have been doing a lot of learning about PostgreSQL and how it can be used to replace other databases, such as Oracle RDBMS, MySQL, etc. One of the things I have been looking into most recently is how to make PostgreSQL highly available. In Oracle you would use RAC; PostgreSQL instead has streaming replication, which allows only a single master server and a technically unlimited number of slaves. In reality, most web-based applications these days are about 90% read and 10% write, so having tons of slaves for read-only queries is awesome, but what if your master goes down?

That is where Patroni comes in. Patroni is a framework that handles automatic failover of the master PostgreSQL instance between multiple servers. However, Patroni alone won't do this for you; you will need some other software as well. The other two pieces of software that I used were etcd and haproxy: etcd acts as the quorum system, and haproxy acts as the "load balancer" so that your applications only have to point at one hostname.

As a small example setup, I used 3 machines running PostgreSQL 10, 1 machine running etcd, and 1 machine running haproxy. The rest of this post is about setting up those machines and the software on them.


How to set up CentOS 7 + PostgreSQL 10 + Patroni

All Machines Software Installs:

The first thing to do is install CentOS 7, fully patch it and record the IP address of each machine if they are on DHCP. I used the minimal install so there wasn’t a lot of extra software on the machines.


PostgreSQL 10 Servers:

For all the PostgreSQL servers the following packages will need to be installed: gcc, python-devel, epel-release

yum install -y gcc python-devel epel-release

After you have those installed (in particular epel-release) you can install the following: python2-pip

yum install -y python2-pip

Next you need to add the PostgreSQL Yum Repo from: https://download.postgresql.org/pub/repos/yum/10/redhat/rhel-7-x86_64/pgdg-centos10-10-2.noarch.rpm

yum install -y https://download.postgresql.org/pub/repos/yum/10/redhat/rhel-7-x86_64/pgdg-centos10-10-2.noarch.rpm

Once you have that RPM installed you can install the following packages: postgresql10-server postgresql10

yum install -y postgresql10-server postgresql10


etcd server:

The following packages need to be installed on the etcd server: gcc, python-devel, epel-release

yum install -y gcc python-devel epel-release

After epel is installed, you can then install etcd:

yum install -y etcd 


haproxy server:

The following packages are needed on the haproxy server: epel-release and then haproxy.

yum install -y epel-release
yum install -y haproxy


Software Configuration

Postgresql Servers

Now that the software has been installed, it is time to configure the components. (Everything here is run as root.)

We will start with the PostgreSQL servers; mine are named pg01 (10.0.2.124), pg02 (10.0.2.125), and pg03 (10.0.2.126). The following needs to be done on each of the 3 servers:

First some pip items:

pip install --upgrade setuptools
pip install patroni
pip install python-etcd
pip install psycopg2-binary

Next we are going to create a systemd service for patroni. So edit the file /etc/systemd/system/patroni.service to contain the following:


[Unit]
Description=Runners to orchestrate a high-availability PostgreSQL
After=syslog.target network.target

[Service]
Type=simple

User=postgres
Group=postgres

ExecStart=/bin/patroni /etc/patroni.yml

KillMode=process

TimeoutSec=30

Restart=no

[Install]
WantedBy=multi-user.target

After creating the unit file, run systemctl daemon-reload so systemd picks it up. Next, create a /etc/patroni.yml file. This file controls the startup/shutdown, etc. of the Postgres instance. Here is an example from my pg01 server; in it, you will need to replace the IP addresses, usernames, and passwords with your own.


scope: postgres
name: pg01

restapi:
    listen: 10.0.2.124:8008
    connect_address: 10.0.2.124:8008

etcd:
    host: 10.0.2.128:2379

bootstrap:
    dcs:
        ttl: 30
        loop_wait: 10
        retry_timeout: 10
        maximum_lag_on_failover: 1048576
        postgresql:
            use_pg_rewind: true

    initdb:
    - encoding: UTF8
    - data-checksums

    pg_hba:
    - host replication replicator 127.0.0.1/32 md5
    - host replication replicator 10.0.2.124/32 md5
    - host replication replicator 10.0.2.125/32 md5
    - host replication replicator 10.0.2.126/32 md5
    - host all all 0.0.0.0/0 md5

    users:
        admin:
            password: admin
            options:
                - createrole
                - createdb

postgresql:
    listen: 10.0.2.124:5432
    bin_dir: /usr/pgsql-10/bin
    connect_address: 10.0.2.124:5432
    data_dir: /data/patroni
    pgpass: /tmp/pgpass
    authentication:
        replication:
            username: replicator
            password: PASSWORD
        superuser:
            username: postgres
            password: PASSWORD
    parameters:
        unix_socket_directories: '.'

tags:
    nofailover: false
    noloadbalance: false
    clonefrom: false
    nosync: false
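Since the only differences between the three servers' /etc/patroni.yml files are the node name and IP address, you can stamp them out from a shared template. Here is a minimal sketch; the @NAME@/@IP@ placeholders and file names are my own invention, and the template is truncated to the first few keys (the real one would carry the full config above):

```shell
# Sketch: generate per-node Patroni configs from a shared template.
# The template below is deliberately truncated; in practice it would
# hold the whole /etc/patroni.yml with @NAME@ and @IP@ placeholders.
cat > patroni.yml.tmpl <<'EOF'
scope: postgres
name: @NAME@

restapi:
    listen: @IP@:8008
    connect_address: @IP@:8008
EOF

make_conf() {  # usage: make_conf NAME IP
    sed -e "s/@NAME@/$1/g" -e "s/@IP@/$2/g" patroni.yml.tmpl > "patroni-$1.yml"
}

make_conf pg01 10.0.2.124
make_conf pg02 10.0.2.125
make_conf pg03 10.0.2.126
```

You would then copy each patroni-pgNN.yml into place as /etc/patroni.yml on the matching host.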

etcd server

Now let's move on to the etcd server. The only thing that needs to be edited there is the /etc/etcd/etcd.conf file. Here is what I changed in mine:

ETCD_LISTEN_PEER_URLS="http://10.0.2.128:2380"
ETCD_LISTEN_CLIENT_URLS="http://localhost:2379,http://10.0.2.128:2379"
ETCD_NAME="etcd0"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://10.0.2.128:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://10.0.2.128:2379"
ETCD_INITIAL_CLUSTER="etcd0=http://10.0.2.128:2380"
ETCD_INITIAL_CLUSTER_TOKEN="cluster1"
ETCD_INITIAL_CLUSTER_STATE="new"

Some of these lines may be commented out in the stock file; if so, uncomment them and replace the values.
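If you are scripting the setup, the "set or uncomment" step can be done with a small helper. This is only a sketch, demonstrated on a scratch copy with made-up stock contents; on the real server you would point CONF at /etc/etcd/etcd.conf instead:

```shell
# Sketch: set (or uncomment) KEY=VALUE lines in an etcd.conf-style file.
# Demonstrated on a scratch copy so it is safe to run anywhere.
CONF=./etcd.conf.demo
printf '%s\n' '#ETCD_NAME="default"' '#ETCD_LISTEN_PEER_URLS="http://localhost:2380"' > "$CONF"

set_key() {  # usage: set_key KEY VALUE
    if grep -q "^#\{0,1\}$1=" "$CONF"; then
        # Key present (possibly commented out): rewrite the whole line.
        sed -i "s|^#\{0,1\}$1=.*|$1=\"$2\"|" "$CONF"
    else
        # Key missing entirely: append it.
        printf '%s="%s"\n' "$1" "$2" >> "$CONF"
    fi
}

set_key ETCD_NAME etcd0
set_key ETCD_LISTEN_PEER_URLS http://10.0.2.128:2380
set_key ETCD_INITIAL_CLUSTER etcd0=http://10.0.2.128:2380

cat "$CONF"
```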

HAProxy Server

Now to configure the HAProxy server. Replace /etc/haproxy/haproxy.cfg with the following. Note that the health check against port 8008 hits Patroni's REST API, which answers HTTP 200 only on the current master, so haproxy only sends connections to the master:


global
        maxconn 100
        log     127.0.0.1 local2

defaults
        log global
        mode tcp
        retries 2
        timeout client 30m
        timeout connect 4s
        timeout server 30m
        timeout check 5s

listen stats
        mode http
        bind *:7000
        stats enable
        stats uri /

listen postgres
        bind *:5000
        option httpchk
        http-check expect status 200
        default-server inter 3s fall 3 rise 2 on-marked-down shutdown-sessions
        server postgresql_pg01_5432 10.0.2.124:5432 maxconn 100 check port 8008
        server postgresql_pg02_5432 10.0.2.125:5432 maxconn 100 check port 8008
        server postgresql_pg03_5432 10.0.2.126:5432 maxconn 100 check port 8008


N.B. The logging setup in the haproxy config requires that you configure rsyslog to accept log messages from haproxy and to write the local2.* facility to a file.
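A minimal sketch of such an rsyslog drop-in, assuming the stock CentOS 7 rsyslog (the file name is my own choice; restart rsyslog after adding it):

```
# /etc/rsyslog.d/haproxy.conf -- accept haproxy's UDP syslog on localhost
# and write the local2 facility to its own file.
$ModLoad imudp
$UDPServerAddress 127.0.0.1
$UDPServerRun 514
local2.*    /var/log/haproxy.log
```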


Starting it all up

So assuming that I haven't missed anything above, and there are no typos on my part or yours, we can start things up. But first, some notes. Something in SELinux breaks some of this and I haven't had enough time to look into it, so disable SELinux for now (setenforce 0). In addition, I did not add any ports to firewalld; since this was just a test setup, I disabled the firewall on all 5 machines. You can leave it enabled and, using the configs above, allow the appropriate ports between the appropriate machines (5432 and 8008 on the PostgreSQL hosts, 2379 on the etcd host, and 5000 and 7000 on the haproxy host).

So easy starts first:

Start the etcd server:

systemctl start etcd

Start the haproxy server:

systemctl start haproxy

Now for the PostgreSQL servers there are a few things to do before we start them. On each PostgreSQL host do the following:

mkdir -p /data/patroni
chown -R postgres:postgres /data
chmod -R 700 /data

Now we can start Patroni and see if everything works. When you start Patroni, it will start PostgreSQL in the background, create the first database, and set the password for the postgres user to what you specified in the /etc/patroni.yml file.

systemctl start patroni

On the first host you start Patroni on, running systemctl status patroni should show it active and running, and in /var/log/messages you should see something like:

Sep 2 19:53:08 pg01 patroni: 2018-09-02 19:53:08,628 INFO: Lock owner: pg01; I am pg01
Sep 2 19:53:08 pg01 patroni: 2018-09-02 19:53:08,636 INFO: no action. i am the leader with the lock

When you start the two slaves, you should see something similar to this in /var/log/messages:

Sep 2 19:54:17 pg02 patroni: 2018-09-02 19:54:17,828 INFO: Lock owner: pg01; I am pg02
Sep 2 19:54:17 pg02 patroni: 2018-09-02 19:54:17,829 INFO: does not have lock
Sep 2 19:54:17 pg02 patroni: 2018-09-02 19:54:17,836 INFO: no action. i am a secondary and i am following a leader

Now if you switch to the web interface of HAProxy you should see something like this:

[Image: haproxy web interface]

What this image shows is that the pg01 server is the active master server. If you were to shut down pg01, then one of the other two will become the new master and remain the master until you either promote another server or the new master goes down.


Now that everything is running, you would point your clients to the haproxy address on port 5000.
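For instance, assuming the haproxy box is at 10.0.2.127 (an address I made up for this example; substitute your own), a client connection would look like this:

```shell
# Assumed address of the haproxy machine -- replace with your own.
PGHOST=10.0.2.127
PGPORT=5000    # the haproxy front-end above; always routes to the master

# Build the connection string a client would use:
CONNSTR="host=$PGHOST port=$PGPORT user=postgres dbname=postgres"
echo "$CONNSTR"

# With the PostgreSQL client installed you would then connect with:
#   psql "$CONNSTR"
```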

Closing Comments

While this describes how to set up an HA environment for PostgreSQL, there are 2 single points of failure in this example setup: the etcd server, which should be clustered, and the haproxy server, which should be made redundant as well. In addition, I didn't cover setting up read-only slaves, which I may do at some point in the future.

Making in house HD Channels

For the longest time I have been looking for a modulator that would do HD signals. I have used standard-def modulators for probably a good 25+ years and always loved making my own "cable system" in the house with various channels for different things. However, with the advent of HDTV, the SD modulators were just not going to cut it for a good HD picture.

In recent years I have had a security DVR outputting to an SD modulator so that it could be viewed on any TV in the house. While it was "ok", I always wanted the HD version of it. So one night, while I was contemplating running HDMI cables from the security DVR to every TV in the house, I stumbled on the VeCOAX HD modulators from Pro Video Instruments, which sell for $495.

Previously, when I had searched for HD modulators for either ATSC or QAM, the only ones that were even sort of "cheap" were from a company called ZeeVee. However, they were still more expensive than I wanted to pay; the cheapest I saw was around $1,200 USD. So I just put up with the SD modulators until I found the VeCOAX ones.

VeCOAX has a modulator called the MiniMod-2 which will take one HDMI source and put it on any ATSC or CATV QAM channel you would like. It supports any frequency in the normal TV/CATV bands: ATSC to mix with OTA channels, or QAM to mix with CATV channels. It also supports PSIP, so you can add a 4-character label to the channel and make it appear as any channel number. For example, I have my modulator on CATV channel 14, but the PSIP says it is channel 1-1.

Initially I tried to use 1080p output, but all of my TVs (2 Samsungs and 1 Sony) had some issues. Either the signal from the security DVR into the modulator was not clean, or the TVs' tuners just couldn't handle it. There were artifacts at the top of the screen, and after a few hours or more the channel would just scramble and become unviewable until I reset the modulator.

What I ended up having to do was set the source to 720p and then set the modulator to attenuate the signal some, since it was also overpowering the tuners in the TVs. Once I did that, the signal was stable, and it has remained so for a few weeks now.

The next thing to test is hooking it (or another one) up to a TiVo to see if I can send the TiVo signal throughout the house as well. Then I may also try some amateur (ham) TV with it, since I can set the frequency to anything.

Hey Comcast thanks for nothing

Recently I found out that Comcast has decided to drop all 1080i content and scale all HD content down to 720p. Why on earth would you do this? The only thing I can think of is to compress more channels / internet bandwidth into the same space that is currently being used.

What makes me maddest is that even the premium channels, such as HBO, Showtime, and Cinemax, were all scaled down to 720p. So now the only channels "still" in 1080i format are the local CBS and NBC stations.

I believe this happened sometime late last year, as I started noticing that some of the channels were not looking as "good" as they used to, but I never checked to see that they had been switched from 1080i to 720p. What this means is that Comcast has basically said you don't need any HDTV above a 720p model, as anything else is just going to have to be up-converted.

What is even funnier is that I started tweeting @Comcast on Twitter to ask them about it; they didn't know and would only offer help if I gave them my name and phone number. But since they have done this to everyone, it was pointless to keep the conversation going. I wouldn't be surprised if, in the future, they come out with a new price tier for "Full HD".

Needless to say, if I could get the same thing with Dish Network I would go, but then Comcast, being the ass they are, would increase my internet by $20 a month because I dropped the cable side.

Just say no to Western Digital MyCloud Home

Today I was looking at consolidating some of my various Western Digital (WD) NAS devices, so I picked up one of the WD MyCloud Home 8TB "NAS" units at Best Buy. TL;DR: it is not a NAS device in the sense of the other NAS devices they previously sold, so if you are looking for SMB, NFS, etc., it won't do it.


Full story: Thinking that I could consolidate a few 2TB WD Live and WD MyCloud devices onto this one 8TB device, I picked it up along with another 8TB USB 3 drive to connect to it. (Previous incarnations of the WD MyCloud devices have a USB port on the back that you can use to either extend the storage or "mirror" the NAS drive to it for "backups".) A quick check of the specs showed the new WD MyCloud Home device had a USB port too, but what the box doesn't say is that it is only for copying files TO the device, not "mirroring" like the previous ones.

So I got home, unboxed the new 8TB MyCloud Home device, and put the MAC address in my DHCP server (yes, I do IT stuff for a job, so I have static reservations for specific devices). Then I plugged in the "NAS" and it powered on. The first thing I noticed was that when I went to its web interface, it just printed a JSON error, which I thought was weird. Since I hadn't read any of the instructions that came with it (and not much did come with it), I tried a few other URLs, and all of them gave the same error. I then pulled out the little one-piece instruction card, and it said I had to go to WD's web site to set the device up. Reading further, I learned it requires a connection to WD at all times. That was a BIG no-no for me. The previous devices never needed this, so why now?

Well, it seems that WD has completely re-branded their MyCloud line of products. The new "Home" line requires a 64-bit version of Windows or macOS, and you have to use their software to talk to the device. Gone are the days of doing a CIFS/SMB/NFS-style mount of the disk from any operating system. By the time I read this I was getting pretty mad, because I had just spent over $300 on a device that I assumed would operate like the previous incarnations of the MyCloud/Live devices. I quickly boxed it back up, took it back to Best Buy, and asked for my money back, which they did give me.


So if you want a device that acts like the old ones and is available via SMB or NFS, etc., you need to buy one that doesn't have "Home" in the name; supposedly the "Pro" or "Expert" lines still work that way. But I think from now on I will be looking at a different vendor. I understand they want to make it easy for non-technical people, but those who know what they are doing should be able to interact with the device the way they want to.