GlusterFS - Replicate a volume over two nodes

When you are using a load balancer with two or more backend nodes (web servers), you will probably need some data to be mirrored between them. GlusterFS offers a high-availability solution for this.

In this article, I am going to show how you can set up volume replication between two CentOS 7 servers.

Let's assume this:

  • node1.domain.com - 172.31.0.201
  • node2.domain.com - 172.31.0.202

First, we edit /etc/hosts on each of the servers and append this:

172.31.0.201     node1.domain.com     node1
172.31.0.202     node2.domain.com     node2

We should now be able to ping between the nodes.
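
For example, from node1 (this matches the three-packet output below):

ping -c 3 node2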

PING node2.domain.com (172.31.0.202) 56(84) bytes of data.
64 bytes from node2.domain.com (172.31.0.202): icmp_seq=1 ttl=64 time=0.482 ms
64 bytes from node2.domain.com (172.31.0.202): icmp_seq=2 ttl=64 time=0.261 ms
64 bytes from node2.domain.com (172.31.0.202): icmp_seq=3 ttl=64 time=0.395 ms

--- node2.domain.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2001ms
rtt min/avg/max/mdev = 0.261/0.379/0.482/0.092 ms

Installation:

Run these on both nodes:

yum -y install epel-release yum-priorities

Add priority=10 to the [epel] section in /etc/yum.repos.d/epel.repo:

[epel]
name=Extra Packages for Enterprise Linux 7 - $basearch
#baseurl=http://download.fedoraproject.org/pub/epel/7/$basearch
mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-7&arch=$basearch
failovermethod=priority
enabled=1
priority=10
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7

Update packages and install:

yum -y update
yum -y install centos-release-gluster
yum -y install glusterfs-server

Start the glusterd service and enable it to start at boot:

systemctl start glusterd
systemctl enable glusterd

You can use systemctl status glusterd and glusterfsd --version to check that everything is working properly.

Remember, all the installation steps should be executed on both servers!

Setup:

On the node1 server, run:

[root@node1 ~]# gluster peer probe node2
peer probe: success.
[root@node1 ~]# gluster peer status
Number of Peers: 1

Hostname: node2
Uuid: 42ee3ddb-e3e3-4f3d-a3b6-5c809e589b76
State: Peer in Cluster (Connected)

On the node2 server, run:

[root@node2 ~]# gluster peer probe node1
peer probe: success.
[root@node2 ~]# gluster peer status
Number of Peers: 1

Hostname: node1.domain.com
Uuid: 68209420-3f9f-4c1a-8ce6-811070616dd4
State: Peer in Cluster (Connected)
Other names:
node1

Now we need to create the shared volume, which can be done from either of the two servers.

[root@node1 ~]# gluster volume create shareddata replica 2 transport tcp node1:/shared-folder node2:/shared-folder force
volume create: shareddata: success: please start the volume to access data
[root@node1 ~]# gluster volume start shareddata
volume start: shareddata: success
[root@node1 ~]# gluster volume info

Volume Name: shareddata
Type: Replicate
Volume ID: 30a97b23-3f8d-44d6-88db-09c61f00cd90
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: node1:/shared-folder
Brick2: node2:/shared-folder
Options Reconfigured:
transport.address-family: inet
nfs.disable: on

This creates a shared volume named shareddata, with two replicas on the node1 and node2 servers, under the /shared-folder path. It will also silently create the shared-folder directory if it doesn't exist. If there are more servers in the cluster, adjust the replica count in the command above accordingly. The force parameter was needed because we placed the bricks on the root partition; it is not needed when creating them on a separate partition.
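
If you want to confirm that both bricks are online before mounting, you can also check the volume status:

gluster volume status shareddata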

Mount:

For the replication to work, the volume must be mounted on each node. Create a mount point on both servers:

mkdir /mnt/glusterfs

On node1 run:

[root@node1 ~]# echo "node1:/shareddata    /mnt/glusterfs/  glusterfs       defaults,_netdev        0 0" >> /etc/fstab
[root@node1 ~]# mount -a
[root@node1 ~]# df -h
Filesystem                       Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00   38G  1.1G   37G   3% /
devtmpfs                         236M     0  236M   0% /dev
tmpfs                            245M     0  245M   0% /dev/shm
tmpfs                            245M  4.4M  240M   2% /run
tmpfs                            245M     0  245M   0% /sys/fs/cgroup
/dev/sda2                       1014M   88M  927M   9% /boot
tmpfs                             49M     0   49M   0% /run/user/1000
node1:/shareddata                 38G  1.1G   37G   3% /mnt/glusterfs

On node2 run:

[root@node2 ~]# echo "node2:/shareddata    /mnt/glusterfs/  glusterfs       defaults,_netdev        0 0" >> /etc/fstab
[root@node2 ~]# mount -a
[root@node2 ~]# df -h
Filesystem                       Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00   38G  1.1G   37G   3% /
devtmpfs                         236M     0  236M   0% /dev
tmpfs                            245M     0  245M   0% /dev/shm
tmpfs                            245M  4.4M  240M   2% /run
tmpfs                            245M     0  245M   0% /sys/fs/cgroup
/dev/sda2                       1014M   88M  927M   9% /boot
tmpfs                             49M     0   49M   0% /run/user/1000
node2:/shareddata                 38G  1.1G   37G   3% /mnt/glusterfs

Testing:

On node1:

touch /mnt/glusterfs/file01
touch /mnt/glusterfs/file02

On node2:

[root@node2 ~]# ls /mnt/glusterfs/ -l
total 0
-rw-r--r--. 1 root root 0 Sep 24 19:35 file01
-rw-r--r--. 1 root root 0 Sep 24 19:35 file02

This is how you mirror one folder between two servers. Just keep in mind that your projects must read and write through the mount point /mnt/glusterfs for the replication to work; changes made directly to the brick directory /shared-folder are not replicated.
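
For example, if a web application keeps user uploads in /var/www/uploads (a hypothetical path, assuming the directory does not already exist), you could point it at the mounted volume so writes are replicated:

ln -s /mnt/glusterfs /var/www/uploads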


How To Install A .pfx Windows SSL Certificate On Your Linux Web Server

Windows uses .pfx for a PKCS #12 file. PFX stands for Personal Information eXchange. It is like a bag containing multiple pieces of cryptographic information: it can store private keys, certificate chains, certificates, and root authority certificates. It is password protected to preserve the integrity of the contained data.
In order to install it on our Apache/Nginx web server, we need to convert it to PEM.

First, upload the .pfx file to your Linux server. You will need OpenSSL installed.

On CentOS run:

yum install openssl

On Ubuntu run:

sudo apt-get update
sudo apt-get install openssl

To decrypt the .pfx use:

openssl pkcs12 -in cert.pfx -out cert.pem

You will be prompted for the password that was used to encrypt the certificate. After providing it, you will need to enter a new password that will encrypt the private key.
The resulting .pem file will contain the encrypted private key, the certificate, and some other information we will not use. Copy the key block from inside and paste it into a new .key file.
Also copy the certificate block from the .pem and put it in a new .cert file.

Remember to copy the whole blocks, including the dashed lines.
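
If you would rather not copy the blocks by hand, openssl can also extract each piece directly from the .pfx (the cert.key and cert.cert file names simply match the ones used above):

openssl pkcs12 -in cert.pfx -nocerts -out cert.key
openssl pkcs12 -in cert.pfx -clcerts -nokeys -out cert.cert
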
Either way, the private key file is still encrypted, so we have to decrypt it with:

openssl rsa -in cert.key -out cert.key

You will now be prompted for the password you set to encrypt the key. This will decrypt the private key file in place.

To install the certificate in Nginx, reference your .key and .cert files in the Nginx configuration like this:

ssl_certificate /path/to/your/cert.cert;
ssl_certificate_key /path/to/your/cert.key;

For Apache use:

SSLCertificateFile /path/to/your/cert.cert
SSLCertificateKeyFile /path/to/your/cert.key
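
After updating the configuration, test and reload the web server. On a systemd-based system such as CentOS 7, something like this should work:

nginx -t && systemctl reload nginx

or, for Apache:

apachectl configtest && systemctl reload httpd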

Mysqldump Through an HTTP Request with Golang

So, in a previous post I explained how one can back up all databases on a server, each in its own dump file. Let's take it to the next level and write a Golang program that will let us run the dump process with an HTTP request.

Assuming you already have Go installed on the backup server, first create a project directory, in your home folder for example. Copy the mysqldump script from that post and save it as dump.sh in your project folder. Modify ROOTDIR="/backup/mysql/" inside dump.sh to reflect the current project directory.

We will write a Golang program with two functions. One will launch the backup script when a specific HTTP request is made. The other will put the HTTP call behind authentication, so only people with credentials will be able to make the backup request.

package main

import (
	"encoding/base64"
	"log"
	"net/http"
	"os"
	"os/exec"
	"strings"
)

var username = os.Getenv("DB_BACKUP_USER")
var password = os.Getenv("DB_BACKUP_PASSWORD")

// BasicAuth validates the HTTP Basic Authorization header against user/pass.
func BasicAuth(w http.ResponseWriter, r *http.Request, user, pass string) bool {
	s := strings.SplitN(r.Header.Get("Authorization"), " ", 2)
	if len(s) != 2 {
		return false
	}
	b, err := base64.StdEncoding.DecodeString(s[1])
	if err != nil {
		return false
	}
	pair := strings.SplitN(string(b), ":", 2)
	if len(pair) != 2 {
		return false
	}
	return pair[0] == user && pair[1] == pass
}

// handler runs dump.sh and sends its output back to the client.
func handler(w http.ResponseWriter, r *http.Request) {
	if BasicAuth(w, r, username, password) {
		cmd := exec.Command("bash", "dump.sh")
		stdout, err := cmd.Output()
		if err != nil {
			// Report the failure instead of killing the whole server.
			log.Println(err)
			http.Error(w, "backup failed", http.StatusInternalServerError)
			return
		}
		w.Write(stdout)
		return
	}
	w.Header().Set("WWW-Authenticate", `Basic realm="Protected Page"`)
	w.WriteHeader(http.StatusUnauthorized)
	w.Write([]byte("401 Unauthorized\n"))
}

func main() {
	http.HandleFunc("/backup", handler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}

This uses DB_BACKUP_USER and DB_BACKUP_PASSWORD, which you will have to set as environment variables. Just append this to your ~/.bashrc file:

export DB_BACKUP_USER="hello"
export DB_BACKUP_PASSWORD="password"

Now run source ~/.bashrc to load them.

Build the executable with go build http-db-backup.go, where http-db-backup.go is the name of your Go file. You then need to run the executable with sudo, but while preserving the environment: sudo -E ./http-db-backup

Now if you open http://111.222.333.444:8080/backup in your browser (where 111.222.333.444 is your backup machine's IP), the backup process will start, and you will get the output of dump.sh in your browser when the backup finishes.
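
You can trigger the same endpoint from a terminal; for example with curl, using the credentials set in ~/.bashrc above:

curl -u hello:password http://111.222.333.444:8080/backup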

We can furthermore add another function that lists the backup directory in the browser, so you can download the needed backup or backups.

// lister serves the backup directory listing, behind the same basic auth.
func lister(w http.ResponseWriter, r *http.Request) {
	if BasicAuth(w, r, username, password) {
		http.FileServer(http.Dir(".")).ServeHTTP(w, r)
		return
	}
	w.Header().Set("WWW-Authenticate", `Basic realm="Protected Page"`)
	w.WriteHeader(http.StatusUnauthorized)
	w.Write([]byte("401 Unauthorized\n"))
}

All you need to do is add http.HandleFunc("/", lister) to your main() and navigate to http://111.222.333.444:8080/ . You will be able to browse the backup directory and download the dump files.
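
The listing also works from the command line. For example, to download one of the compressed dumps with curl (the date-based path here is hypothetical, following the YEAR/MONTH/DAY/HOUR layout the backup script creates):

curl -u hello:password -O http://111.222.333.444:8080/2018/01/15/03/mydatabase.sql.gz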


Mysqldump Command - Useful Usage Examples

One of the tasks a sysadmin will always have on their list is backing up databases. These backups are also called dump files because, usually, they are generated with the mysqldump command.

I am going to share a few mysqldump tricks that will help when handling servers with many relatively small databases.

The simplest way to back up databases would be using the mysqldump command with the --all-databases attribute. But I find having each database saved in its own file more convenient to use.

Let's first suppose that you need to run a script that alters data in the databases, and that you just need a simple way to have a rollback point, just in case. I used to run something like this before:

for i in \
`ls /var/lib/mysql/`; \
do mysqldump -u root -p*** --skip-lock-tables --skip-add-locks --quick --single-transaction $i > $i.sql; done

where *** is your root password. The additional parameters --skip-lock-tables --skip-add-locks --quick --single-transaction ensure availability and consistency of the dump files for InnoDB databases (the default storage engine as of MySQL 5.5.5).

MySQL stores each database in a folder with the same name as the database under /var/lib/mysql. The command picks database names from the listing of the /var/lib/mysql folder and exports each to a file of the same name, with .sql appended.

There are two issues with the above command:

  1. It will try to execute a dump for every file/folder listed in /var/lib/mysql. So if you have error logs or any other files there, it will create .sql dumps for them too. The following sends only directory names as database names to export:
    for i in \
    `find /var/lib/mysql/ -type d | sed 's/\/var\/lib\/mysql\///g'`;\
    do mysqldump -u root -p*** --skip-lock-tables --skip-add-locks --quick --single-transaction $i > $i.sql; done

    I find this hard to type and prefer the one I explain in point 2, since it also covers this case.

  2. When database names contain characters like -, the folder name will contain an encoded sequence (@002d for -) instead. If that is the case, you can use something like:
    for i in \
    `mysql -u root -p*** -e 'show databases'`;\
    do mysqldump -u root -p*** --skip-lock-tables --skip-add-locks --quick --single-transaction $i > $i.sql;done

    This picks the database names to export from the mysql show databases command.

But one time I had to export databases with / in their names, and there is no way to export them as shown above, since / can't be used in file names (it is the directory separator). So I did this:

for i in \
`mysql -u root -p*** -e 'show databases'`;\
do mysqldump -u root -p*** --skip-lock-tables --skip-add-locks --quick --single-transaction $i > `echo $i | sed "s/\//_/g"`.sql;done

This will replace / with _ in the dump file names.

For all of the above, we should (for obvious reasons) avoid using the root MySQL user. We could also run the backups from a different machine. In either case, we need to create a MySQL user with the right privileges on the machine we want to back up.

create user 'backupuser'@'111.222.333.444' identified by 'backuppassword';

grant select, show view, trigger, lock tables, reload, show databases on *.* to 'backupuser'@'111.222.333.444';
flush privileges;

where 111.222.333.444 is the IP of the remote machine that will run the backups.
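
Before scripting anything, it is worth verifying that the grants work from the remote machine; a quick test using the user created above:

mysql -u backupuser -pbackuppassword -h 444.333.222.111 -e 'show databases'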

Now you can issue the mysqldump command from the other machine like this:

for i in \
`mysql -u backupuser -pbackuppassword -e 'show databases'`;\
do mysqldump -u backupuser -pbackuppassword -h 444.333.222.111 --skip-lock-tables --skip-add-locks --quick --single-transaction $i > `echo $i | sed "s/\//_/g"`.sql;done

where 444.333.222.111 is the IP of the machine we want to back up.

Let's take it to the next step and put all our knowledge into a shell script.

#!/bin/bash
echo "Starting the backup script..."
ROOTDIR="/backup/mysql/"
YEAR=`date +%Y`
MONTH=`date +%m`
DAY=`date +%d`
HOUR=`date +%H`
SERVER="444.333.222.111"
BLACKLIST="information_schema performance_schema"
ADDITIONAL_MYSQLDUMP_PARAMS="--skip-lock-tables --skip-add-locks --quick --single-transaction"
MYSQL_USER="backupuser"
MYSQL_PASSWORD="backuppassword"

# Read MySQL password from stdin if empty
if [ -z "${MYSQL_PASSWORD}" ]; then
 echo -n "Enter MySQL ${MYSQL_USER} password: "
 read -s MYSQL_PASSWORD
 echo
fi

# Check MySQL credentials
echo exit | mysql --user=${MYSQL_USER} --password=${MYSQL_PASSWORD} --host=${SERVER} -B 2>/dev/null
if [ "$?" -gt 0 ]; then
 echo "MySQL ${MYSQL_USER} - wrong credentials"
 exit 1
else
 echo "MySQL ${MYSQL_USER} - was able to connect."
fi

#creating backup path
if [ ! -d "$ROOTDIR/$YEAR/$MONTH/$DAY/$HOUR" ]; then
    mkdir -p "$ROOTDIR/$YEAR/$MONTH/$DAY/$HOUR"
    chmod -R 700 $ROOTDIR
fi

echo "running mysqldump"
dblist=`mysql -u ${MYSQL_USER} -p${MYSQL_PASSWORD} -h $SERVER -e "show databases" | sed -n '2,$ p'`
for db in $dblist; do
    echo "Backuping $db"
    isBl=`echo $BLACKLIST |grep $db`
    if [ $? == 1 ]; then
        mysqldump ${ADDITIONAL_MYSQLDUMP_PARAMS} -u ${MYSQL_USER} -p${MYSQL_PASSWORD} -h $SERVER $db | gzip --best > "$ROOTDIR/$YEAR/$MONTH/$DAY/$HOUR/`echo $db | sed 's/\//_/g'`.sql.gz"
        echo "Backup of $db ends with $? exit code"
    else
        echo "Database $db is blacklisted, skipped"
    fi
done
echo 
echo "dump completed"

This will also compress the dump files to save storage.

Save the script as backup-mysql.sh somewhere on the machine where you want the backups saved, and ensure the MySQL user with the right privileges exists on the server hosting MySQL. You will also need the mysql client installed on the backup server. Execute sudo chmod 700 backup-mysql.sh, then run the script with sudo sh backup-mysql.sh. After making sure it works properly, you can also add it to your crontab, so that it runs on a regular schedule, as shown below.
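
For example, a crontab entry for a nightly run at 3 AM might look like this (the script path and log file location are assumptions; adjust them to your setup):

0 3 * * * /home/youruser/backup-mysql.sh >> /var/log/backup-mysql.log 2>&1

Add the entry with sudo crontab -e so the job runs with the same privileges as the manual sudo invocation.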


How to Save a File in Vim After Forgetting to Use Sudo

Many of you have probably experienced this. You edit a file in Linux with Vim or Nano or whatever other text editor you might be using, and when you try to save, you realise you forgot to launch the editor with sudo privileges, so the file can't be written. Exiting without saving and editing again with sudo privileges is not always an option, especially if you would lose a lot of work.

Solutions

There are some workarounds. You can open a new terminal and grant write permissions on the file with:

sudo chmod 666 <filename>

Now you can go back to your editor and saving will be possible. Don't forget to restore the file's original permissions afterwards.
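
For example (644 here is just a common mode for config files; note the file's actual mode first and restore that):

stat -c '%a' <filename>      # shows the current mode, e.g. 644
sudo chmod 666 <filename>    # make it writable, save in the editor
sudo chmod 644 <filename>    # restore the original mode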

Also, you could save to a new file and then copy the newly created file over the old one.

But these are workarounds. Vim actually offers a real solution.

In Vim's normal mode, you can issue:

:w !sudo tee %

And that works like magic. For the geeks, here is how the "magic" works:

  • :w - writes the contents of the buffer
  • !sudo - pipes it to the stdin of sudo
  • tee % - sudo executes tee, which writes its input to the file %
  • % - Vim expands % to the current file name

So Vim actually makes magic happen.