Monday, December 16, 2019

Replacing Single Line Spacing with Double Line Spacing in VIM

Thanks to this post, I learned how to replace each single blank line in my text document with 2 blank lines, to make it neater and clearer.

The original document, with single line spacing


My objective is to replace every single blank line separating paragraphs with 2 blank lines. To do that in vim, I just need to press "escape" to go back to normal mode and type the command below, which is ":%s/^$/\r/g", where
%s applies the substitution to every line in the file
^$ matches a blank line
\r inserts a carriage return, turning each single blank line into two
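If you prefer doing the same replacement from the shell instead of inside vim, a one-liner like below should give an equivalent result (this assumes GNU sed, which allows \n in the replacement; myfile.txt is just a placeholder name)
$ sed -i 's/^$/\n/' myfile.txt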



The result should look like below


Save and quit as usual, and you get yourself a neater document with double line spacing. Enjoy :)

Tuesday, October 22, 2019

Upgrading ubuntu server 16.04 to 18.04

The process is very simple, but quite time consuming; a fast internet connection will help speed it up.

To start, upgrade all packages to the latest version, and reboot if necessary
$ sudo apt update -y
$ sudo apt upgrade -y
$ sudo reboot

Once rebooted, and all packages are updated to the latest version, issue a 
$ sudo do-release-upgrade

Answer yes (y) to all questions, and choose "Keep the local version currently installed" whenever you are asked whether to change a configuration file, to avoid breaking currently installed applications

Once completed, press y to restart

Login and check your current version
$ cat /etc/os-release
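If the upgrade went well, the output should look something like below (the point release shown is just an example)
NAME="Ubuntu"
VERSION="18.04.3 LTS (Bionic Beaver)"
ID=ubuntu
PRETTY_NAME="Ubuntu 18.04.3 LTS"
...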


Thursday, October 17, 2019

SSH Too many authentication failures error

I encountered this error one day, while trying to ssh into one of my client's machines.

$ ssh web
Received disconnect from 192.168.0.36 port 22:2: Too many authentication failures
Disconnected from 192.168.0.36 port 22

After searching around, I found an article showing that I can solve this issue just by adding one flag to my ssh command
$ ssh web -o IdentitiesOnly=yes 
sam@192.168.0.36's password: 

Now we're talking. It turned out that the reason for this behavior is that I accidentally offered too many private keys to the server, causing the server to hit its MaxAuthTries limit and terminate the connection. You can see this happening by adding -v (for verbose) to the ssh command.

To solve this issue for a single session, just add the IdentitiesOnly=yes option to your ssh command
$ ssh -o IdentitiesOnly=yes web

To make it permanent, edit the ~/.ssh/config file, and add the lines below
$ cat >> ~/.ssh/config <<EOF
Host web
  IdentitiesOnly yes
EOF
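If the host needs one specific key, you can also pin it with IdentityFile in the same block, so only that key is offered (the key path below is just an example; adjust it to your own)
Host web
  IdentitiesOnly yes
  IdentityFile ~/.ssh/id_rsa_web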

And you are good to go :)

Tuesday, October 1, 2019

Adding GPU as Resource for Slurm

To make a GPU a resource that can be managed by Slurm, create the /etc/slurm-llnl/gres.conf file with definitions of the GPUs available on the node. GRES stands for generic resources, and they need to be declared so that slurm can manage them.


Below example is for a node with an nvidia tesla v100 gpu, where:
Name - name of the resource; it can be gpu, nic or mic
Type - an arbitrary string identifying the type of the device
File - fully qualified pathname of the device file associated with the resource
Cores - the specific cpu core numbers which can use this resource
$ sudo cat /etc/slurm-llnl/gres.conf
Name=gpu Type=v100 File=/dev/nvidia0 Cores=0,1


Add GresTypes and the gres resources in slurm.conf.
The format for a gres resource is grestype:optional-type:number-of-resources
$ sudo cat /etc/slurm-llnl/slurm.conf
...
GresTypes=gpu
NodeName=mynode CPUs=12 RealMemory=64091 Sockets=1 CoresPerSocket=6 ThreadsPerCore=2 State=UNKNOWN Gres=gpu:v100:1
...


Restart slurm services for the changes to take effect
$ sudo systemctl restart slurmd


Check the availability of the gres
$ scontrol show node
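To confirm that jobs can actually request the gpu, a quick test job like below should work (assuming the gres name and Type=v100 configured above)
$ srun --gres=gpu:v100:1 nvidia-smi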

Installing Slurm Workload Manager & Job Scheduler on Ubuntu 18.04

Enable universe repository
$ echo "deb http://archive.ubuntu.com/ubuntu bionic universe" | sudo tee -a /etc/apt/sources.list

Update package list
$ sudo apt update

Install slurm-wlm
$ sudo apt install slurm-wlm -y

Install the slurm documentation. This is useful for generating slurm.conf using the configurator.easy.html page
$ sudo apt install slurm-wlm-doc -y

Get a machine with a web browser, and open /usr/share/doc/slurm-wlm-doc/html/configurator.easy.html to easily generate slurm.conf.

You can also access the configurator online at https://slurm.schedmd.com/configurator.easy.html, but depending on your slurm version, the online version might not be suitable.

Fill up the form; some of the information can be retrieved using the command below
$ slurmd -C
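On my node, the output looks roughly like below (the values shown are illustrative; yours will differ)
NodeName=myserver CPUs=12 Boards=1 SocketsPerBoard=1 CoresPerSocket=6 ThreadsPerCore=2 RealMemory=64091
UpTime=0-01:23:45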

Some of the configuration that I changed from the default
- Make sure ControlMachine and NodeName are set to the hostname of the system
- State Preservation: set StateSaveLocation to /var/spool/slurm-llnl
- Process tracking: use Pgid instead of Cgroup
- Process ID logging: set this to /var/run/slurm-llnl/slurmctld.pid and /var/run/slurm-llnl/slurmd.pid

Once done, click submit, and copy the generated config file to /etc/slurm-llnl/slurm.conf. Below is my sample config, with only one node
# slurm.conf file generated by configurator easy.html.
# Put this file on all nodes of your cluster.
# See the slurm.conf man page for more information.
#
ControlMachine=myserver
#ControlAddr=
#
#MailProg=/bin/mail
MpiDefault=none
#MpiParams=ports=#-#
ProctrackType=proctrack/pgid
ReturnToService=1
SlurmctldPidFile=/var/run/slurm-llnl/slurmctld.pid
#SlurmctldPort=6817
SlurmdPidFile=/var/run/slurm-llnl/slurmd.pid
#SlurmdPort=6818
SlurmdSpoolDir=/var/spool/slurmd
SlurmUser=slurm
#SlurmdUser=root
StateSaveLocation=/var/spool/slurm-llnl
SwitchType=switch/none
TaskPlugin=task/none
#
#
# TIMERS
#KillWait=30
#MinJobAge=300
#SlurmctldTimeout=120
#SlurmdTimeout=300
#
#
# SCHEDULING
FastSchedule=1
SchedulerType=sched/backfill
SelectType=select/linear
#SelectTypeParameters=
#
#
# LOGGING AND ACCOUNTING
AccountingStorageType=accounting_storage/none
ClusterName=cluster
#JobAcctGatherFrequency=30
JobAcctGatherType=jobacct_gather/none
#SlurmctldDebug=3
#SlurmctldLogFile=
#SlurmdDebug=3
#SlurmdLogFile=
#
#
# COMPUTE NODES
NodeName=myserver CPUs=1 State=UNKNOWN
PartitionName=debug Nodes=myserver Default=YES MaxTime=INFINITE State=UP
DebugFlags=NO_CONF_HASH

Create slurm spool directory
$ sudo mkdir /var/spool/slurm-llnl
$ sudo chown -R slurm.slurm /var/spool/slurm-llnl

Create slurm pid directory
$ sudo mkdir /var/run/slurm-llnl/
$ sudo chown -R slurm.slurm /var/run/slurm-llnl

Start and enable the slurm manager on boot
$ sudo systemctl start slurmctld
$ sudo systemctl enable slurmctld

Start slurmd and enable on boot
$ sudo systemctl start slurmd
$ sudo systemctl enable slurmd

If somehow slurmctld or slurmd fails to start, run the applications interactively with debug options to check for any errors. If there is an error, adjust slurm.conf accordingly.
$ sudo -u slurm slurmctld -Dcvvv
$ sudo slurmd -Dcvvv

Check slurm nodes using the scontrol command
$ scontrol show node
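As a final smoke test, a trivial job should run and print the node's hostname (assuming the single node setup above, where the node is named myserver)
$ srun -N1 hostname
myserver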

Wednesday, September 25, 2019

3 Ways To Locate Your php.ini File

Sometimes a php installation comes with a bunch of php.ini files, and we do not know which one is actually being used by the currently running php. Below are 3 ways to locate your php.ini file


1. Using "php --ini" command
$ php --ini
Configuration File (php.ini) Path: /etc
Loaded Configuration File:         /etc/php.ini
...


2. Running phpinfo()
$ php -r 'phpinfo();' | grep ini
...
Configuration File (php.ini) Path => /etc
Loaded Configuration File => /etc/php.ini
...


3. Using php -i command
$ php -i | grep ini
Configuration File (php.ini) Path => /etc
Loaded Configuration File => /etc/php.ini
...


Sunday, September 15, 2019

Quickly testing live usb using qemu-system

You will need qemu for this. Install qemu-system for x86 (intel based) machines
$ sudo apt install qemu-system-x86

Attach your usb, check the device name using lsblk
$ lsblk

Test run your live usb using qemu-system-x86_64 command. Let's say your usb device is /dev/sdb
$ sudo qemu-system-x86_64 -hda /dev/sdb

If you get a kernel panic in qemu when you run the above command, assign more memory to the command, because qemu by default only assigns 128MiB
$ sudo qemu-system-x86_64 -m 512 -hda /dev/sdb 

Wednesday, August 14, 2019

Using socks proxy to tunnel apt command in ubuntu 18.04

This is useful when we have a machine (m1) that does not have an internet connection to do apt update/upgrade, but we have another machine (m2) that is able to access the internet, and is accessible via ssh from m1.

Create a dynamic tunnel (socks5 proxy) from m1 to m2, using port 8888
$ ssh myuser@m2 -fN -D 8888

Set apt to use the socks proxy created above
$ echo "Acquire::http::proxy "socks5h://localhost:8888";"  | sudo tee -a /etc/apt/apt.conf.d/12proxy

Run apt command as usual in m1, and your apt command will be tunneled via the socks5h proxy
$ sudo apt update
...
0% [Connecting to SOCKS5h proxy (socks5h://localhost:8888)] [Connecting to SOCKS5h proxy (socks5h://localhost:8888)]
...

Once you are done, do not forget to kill the tunnel (note that this kills all of your running ssh sessions, not just the tunnel)
$ kill `pidof ssh`

and remove the proxy option from apt
$ sudo sed -i 's/Acquire/#Acquire/' /etc/apt/apt.conf.d/12proxy 

Tuesday, August 13, 2019

Installing NFS server and client on ubuntu 18.04

Assuming the nfs server's ip address is 10.20.30.40 and the client is 10.20.30.50.

NFS Server

Install nfs-kernel-server
$ sudo apt update; sudo apt install nfs-kernel-server -y

Start nfs service
$ sudo systemctl start nfs-server

Create the export directory
$ sudo mkdir /sharing

Change permission and ownership of export directory
$ sudo chown nobody.nogroup /sharing
$ sudo chmod 777 /sharing

Allow the export directory to be accessed by the client. The options are rw for read-write, sync to force the server to commit changes to disk before replying, and no_subtree_check to disable subtree checking
$ echo "/sharing 10.20.30.50(rw,sync,no_subtree_check)" | sudo tee -a /etc/exports

Export the above setting to NFS table of exports
$ sudo exportfs -a

Check if the NFS table of exports has been updated
$ sudo exportfs 
/sharing           10.20.30.50

NFS Client

Install nfs-common
$ sudo apt update; sudo apt install nfs-common

Mount the nfs export directory
$ sudo mount 10.20.30.40:/sharing /mnt

Check that the client can write to the export directory, and that the written file appears on the server
$ mount | grep nfs
$ touch /mnt/newfile
$ rm /mnt/newfile

To make the mount point permanent, append it to /etc/fstab
$ echo "10.20.30.40:/sharing /mnt nfs4 defaults 0 0" | sudo tee -a /etc/fstab

Test mount from /etc/fstab
$ sudo umount /mnt; sudo mount -a



Friday, June 14, 2019

Access Your Android Phone's Storage, using Commands

There are a lot of apps for android that enable file sharing over wifi. Some of them are free, and some of them are paid. But in this article, I will be sharing a somewhat geeky way to share your android phone's internal storage with your linux machine.

Install an app called termux from the play store. Download the app here: https://play.google.com/store/apps/details?id=com.termux . This app will add terminal emulation to your android
Once installed, allow termux to access storage. You can do that by going to Settings --> Apps & Notifications --> Termux --> Permissions, and enabling Storage

Optional: to make your typing experience better, I recommend this keyboard app: https://play.google.com/store/apps/details?id=org.pocketworkstation.pckeyboard

The http server of our choice is python's built-in http.server module. To use it, you first need to install python. Run below in your termux console
$ pkg install python -y

Get your phone's ip address. This can be easily done in termux
$ ip a | grep wlan0

Run http.server to share your phone's internal storage over the wifi. /storage/emulated/0 is the default root directory for an android phone's internal storage, and http.server serves whatever directory it is started from
$ cd /storage/emulated/0
$ python -m http.server

From your PC/laptop/other android, just use a browser to access the phone's ip address on port 8000

Thursday, June 13, 2019

Accessing Android Phone via ssh

Sometimes, for whatever reason, you need to access your phone from your laptop. This can be done by installing openssh inside your phone.

First, you need an app called termux, to provide a terminal emulator inside android. The app can be downloaded from the google play store.

Once installed, fire up termux, and install openssh 
$ pkg install openssh

You need a password to login via ssh, so set a password for your current user
$ passwd

You also need to know the current username
$ whoami

Run sshd. By default sshd in termux runs on port 8022
$ sshd

Get the ip address of your android phone
$ ip addr show wlan0

Connect to your phone (replace 192.168.43.200 with the output of the ip addr command), using standard ssh client on linux/mac terminal, or putty on windows
$ ssh u0_a211@192.168.43.200 -p 8022 

In order to browse the whole internal storage of your phone, you need to enable storage permission for termux. That can be done by running:
$ termux-setup-storage

and clicking allow



Friday, May 31, 2019

Update time and date using chrony

Chrony is an application that keeps a machine's clock in sync with network time protocol (ntp) servers.

To install chrony
# yum install chrony -y

Then, make sure to include some legit ntp servers in /etc/chrony.conf. In this case, we are using centos ntp pool servers.
# cat /etc/chrony.conf
...
server 0.centos.pool.ntp.org iburst
server 1.centos.pool.ntp.org iburst
server 2.centos.pool.ntp.org iburst
server 3.centos.pool.ntp.org iburst
...

Start chronyd (chrony daemon)
# systemctl start chronyd

Chrony will gradually adjust the system clock to follow the ntp servers once chronyd is started. But if you want chrony to update the time immediately, just use below command
# chronyc makestep

To start chronyd on boot:
# systemctl enable chronyd

To check if the time has been synced (make sure one of the servers has ^* on its left hand side)
# chronyc sources -v
...
^* time.cloudflare.com           0   6     0     -     +0ns[   +0ns] +/-    0ns
...
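For more detail on the current offset and stratum, chronyc also has a tracking subcommand
# chronyc tracking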

Friday, May 24, 2019

Installing Joomla 3.9.6 on Centos 7 with httpd, php 7.3 and mysql 8.0

MYSQL

Install mysql 8.0 repository
# rpm -Uvh https://repo.mysql.com/mysql80-community-release-el7-3.noarch.rpm

Install mysql-server 8.0
# yum install -y mysql-community-server

Start mysql server
# systemctl start mysqld

Secure the mysql server installation, answering yes to all questions in the mysql_secure_installation procedure. The temporary root password can be found in mysqld.log
# grep 'temporary password' /var/log/mysqld.log
# mysql_secure_installation

Change mysql default authentication plugin to mysql_native_password. Refer here for more information
# cat >> /etc/my.cnf <<EOF
[mysqld]
default-authentication-plugin=mysql_native_password
EOF

Restart mysql
# systemctl restart mysqld

Create database for joomla
# mysql -u root -p
mysql> create database joomla;
mysql> create user joomla@localhost identified by 'MyJoomla123!';
mysql> grant all privileges on joomla.* to joomla@localhost;


PHP

Install epel and remi repository
# yum install epel-release -y
# rpm -Uvh https://rpms.remirepo.net/enterprise/remi-release-7.rpm

Install php73 and required components
# yum --enablerepo=remi-php73 install php php-zlib php-xml php-json php-mcrypt php-mysqlnd -y


HTTPD

Install httpd server
# yum install -y httpd

Start httpd
# systemctl start httpd
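If the centos firewall is running, allow http so that the joomla installation wizard can be reached from your browser (same pattern as elsewhere on this blog)
# firewall-cmd --add-service http
# firewall-cmd --add-service http --permanent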


JOOMLA

Download joomla source code
# yum install -y wget
# wget https://downloads.joomla.org/cms/joomla3/3-9-6/Joomla_3-9-6-Stable-Full_Package.tar.gz

Create a directory for joomla in httpd's root directory
# mkdir /var/www/html/joomla

Extract the code into the directory created above
# tar -xvf Joomla_3-9-6-Stable-Full_Package.tar.gz -C /var/www/html/joomla

Give proper owner to joomla directory
# chown -R apache.apache /var/www/html/joomla

Restart httpd
# systemctl restart httpd

Browse to http://your.ip.add.ress/joomla, to access the installation wizard. Fill in your site's preferences, and click Next

Fill in database details, as per MYSQL section above, and click Next


Fill in ftp configurations, if applicable, and click Next

Click Install

Joomla is now installed

Copy the code in Notice, and paste it in a new file called /var/www/html/joomla/configuration.php


Remove the installation folder
# rm -rf /var/www/html/joomla/installation

Click on the Site button to view your joomla main page, and click on the Administrator button to view your joomla administrator site.

Tuesday, May 7, 2019

Install openshift origin 3.11 cluster on a single virtualbox VM running CentOS 7

The minimum requirement for openshift origin (OKD) 3.11 is 16GB of memory, but since my machine does not have that much capacity, I just use 8GB of memory and exclude all hardware checks in my inventory file

For the openshift installation to run smoothly, you need a proper, separate DNS server. Refer to my previous post on how to set up a very easy DNS server. The DNS server can be installed in another VM with as little as 512MB of memory.

Prepare a VM, with:
- 8GB memory
- 50GB hard disk
- 1 vcpu
- bridged network

Install centos 7 on the VM

Since we are going to use ansible, passwordless ssh is necessary, even though there is only one machine
# ssh-keygen
# ssh-copy-id localhost

Update the operating system, and install base packages
# yum update -y; yum install wget git net-tools bind-utils yum-utils iptables-services bridge-utils bash-completion kexec-tools sos psacct -y; reboot

Install epel repository, and disable the repo by default
# yum install -y epel-release; sed -i 's/enabled=1/enabled=0/' /etc/yum.repos.d/epel.repo

Install ansible and pyOpenSSL
# yum install -y --enablerepo=epel ansible pyOpenSSL

Install docker
# yum install -y docker-1.13.1

Install, enable and restart NetworkManager
# yum install NetworkManager -y
# systemctl enable NetworkManager
# systemctl start NetworkManager

Clone the openshift-origin repository in github. This repository will provide required playbooks and configuration files
# cd
# git clone https://github.com/openshift/openshift-ansible
# cd openshift-ansible
# git checkout release-3.11

Generate a hashed password for your first user
# openssl passwd -apr1 typeyourpasswordhere

Prepare your inventory file. You can refer here for the meaning of each options in below inventory file. Make sure that every hostname used in this file is DNS resolvable 
# cat > ~/openshift-ansible/inventory.ini <<EOF
[OSEv3:children]
masters
nodes
etcd

[OSEv3:vars]
ansible_ssh_user=root
openshift_deployment_type=origin
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}]
openshift_master_htpasswd_users={'admin': '$apr1$qpJB3Cls$PN7/HlUNqBXikBl.jnrHF.'}

openshift_public_hostname=console.local.my
openshift_master_default_subdomain=apps.console.local.my
openshift_disable_check=disk_availability,docker_storage,memory_availability,docker_image_availability

[masters]
osmaster.local.my openshift_schedulable=true 

[etcd]
osmaster.local.my 

[nodes]
osmaster.local.my openshift_schedulable=true openshift_node_group_name="node-config-all-in-one"
EOF

Run the prerequisites.yml playbook. This playbook will install the packages required for the openshift installation
# cd ~/openshift-ansible
# ansible-playbook -i inventory.ini playbooks/prerequisites.yml

Run the deployment playbook to deploy your openshift cluster
# ansible-playbook -i inventory.ini playbooks/deploy_cluster.yml

Once installation is complete, verify your installation by checking on the nodes
# oc get nodes

and logging in to the openshift webconsole, which in this case is https://console.local.my:8443, with the username and password from your inventory.ini file

Monday, May 6, 2019

Postgresql replication on CentOS

I will use 2 machines to do this. For the sake of practicing, even 2 containers will do. The IP addresses are:
- master 10.10.10.1
- slave 10.10.10.2

Install postgresql repo on both machines
# yum install -y https://download.postgresql.org/pub/repos/yum/reporpms/EL-7-x86_64/pgdg-redhat-repo-latest.noarch.rpm

Install postgresql on both machines, in this example, I am using postgres 9.6
# yum install -y postgresql96-server

Initialize both postgres
# su - postgres
$ /usr/pgsql-9.6/bin/initdb -D /var/lib/pgsql/9.6/data/

On master, put in below config
# su - postgres
$ cat >> /var/lib/pgsql/9.6/data/postgresql.conf <<EOF
wal_level = hot_standby
max_wal_senders = 1 # number of slave servers
wal_keep_segments = 100
synchronous_standby_names = 'pgslave'
EOF

On master, create a user for replication, called replica
$ psql -c "create user replica replication;"

Allow slave to access master as replica
$ cat >> /var/lib/pgsql/9.6/data/pg_hba.conf <<EOF
host    replication     replica          10.10.10.2/32         trust
EOF

Restart postgres on master server
# systemctl restart postgresql-9.6

On slave server, stop postgresql
# systemctl stop postgresql-9.6

Clear slave server postgresql data directory
# mv /var/lib/pgsql/9.6/data/ /var/lib/pgsql/9.6/data-old
# sudo -u postgres mkdir /var/lib/pgsql/9.6/data

Copy data from master
# su - postgres
$ pg_basebackup -D /var/lib/pgsql/9.6/data -h 10.10.10.1 -U replica --verbose

Create recovery.conf in slave server
$ cat > /var/lib/pgsql/9.6/data/recovery.conf <<EOF
standby_mode=on
trigger_file='/tmp/promotedb'
primary_conninfo='host=10.10.10.1 port=5432 user=replica application_name=pgslave'
EOF

Turn hot_standby to on, on the slave server
$ sed -i 's/#hot_standby = off/hot_standby = on/' /var/lib/pgsql/9.6/data/postgresql.conf

Start postgres on slave
# systemctl start postgresql-9.6

To check replication status, run below in master server
# su - postgres
$ psql -c "select client_addr, state, sent_location, write_location,flush_location, replay_location from pg_stat_replication;"

Test your replication by adding data/database into master server, and check whether the data/database is replicated to slave.

If the master goes down, you need to promote the current slave to master, to allow it to be writable
# su - postgres
$ /usr/pgsql-9.6/bin/pg_ctl promote -D /var/lib/pgsql/9.6/data/
  

Friday, May 3, 2019

Setup easy DNS server using dnsmasq on CentOS 7

Install dnsmasq
# yum install dnsmasq -y

Put an upstream dns server in /etc/resolv.conf. In this case, I want to use opendns as my upstream dns server.
# cat >> /etc/resolv.conf <<EOF
nameserver 208.67.222.222
EOF

For dns records, just use /etc/hosts
# cat >> /etc/hosts <<EOF
192.168.0.99 mydns.local
192.168.0.100 myportal.local
192.168.0.101 myworkspace.local
EOF

With just these 2 settings, you are good to go. Start dnsmasq, and your dns server should be able to resolve those 3 domains.
# systemctl start dnsmasq

Allow on firewall
# firewall-cmd --add-service dns
# firewall-cmd --add-service dns --permanent
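To have dnsmasq start automatically on boot
# systemctl enable dnsmasq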

Test with dig
# dig +short @localhost myportal.local
192.168.0.100

Test from other machine
# dig +short @192.168.0.99 myworkspace.local
192.168.0.101


It can even forward to upstream DNS
# dig +short @192.168.0.99 www.google.com
216.58.196.36

Tuesday, April 30, 2019

How to install phpmyadmin for mysql 8 in Centos 7

To install mysql 8 on centos 7, please follow here.

Install epel-release
# yum install -y epel-release

Install phpmyadmin, httpd and php
# yum install -y phpmyadmin httpd php 

Change the ip range allowed to access phpmyadmin by changing 127.0.0.1 to your network's prefix (in my case, I want to allow all machines with a 192.168.0.x ip to access the phpmyadmin page)
# sed -i 's/127.0.0.1/192.168.0/g' /etc/httpd/conf.d/phpMyAdmin.conf

Start httpd
# systemctl start httpd

Allow http on firewall
# firewall-cmd --add-service http
# firewall-cmd --add-service http --permanent

Change your mysql user's password to use mysql_native_password
# mysql -u root -p
mysql> ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY 'MyNewPassword!123';

You should be able to access phpmyadmin, by pointing your browser to the server's ip address, followed by /phpmyadmin

And then you can log in to phpmyadmin by supplying a mysql username and password



How to install mysql 8 in Centos 7

Install mysql yum repo
# yum install -y https://dev.mysql.com/get/mysql80-community-release-el7-3.noarch.rpm

Install mysql-community-server
# yum install -y mysql-community-server

Start mysql server
# systemctl start mysqld

Get the default mysql root password from mysqld.log
# grep 'temporary password' /var/log/mysqld.log

Login to the mysql console, and change the root password
# mysql -u root -p
mysql> ALTER USER 'root'@'localhost' IDENTIFIED BY 'MyNewP@ssword!123';
mysql> exit

If you are using centos internal firewall, allow access to mysql port
# firewall-cmd --add-service mysql
# firewall-cmd --add-service mysql --permanent


Wednesday, April 10, 2019

Create a mini lab for practicing ansible using docker

To practice ansible, you need to have at least 2 machines. I suggest using containers rather than VM, since containers can be quickly spawned, and are light on the resources.

First, make sure you have docker Community Edition installed. If not, follow the install guide here.

Check your docker version
# docker version

Start the docker engine
# sudo systemctl start docker 

In this exercise, we will use ubuntu as our base operating system. So, run a container using the ubuntu image from docker hub. The options are -i for interactive, -t to allocate a pseudo TTY and -d to run the container in the background
# docker run -it -d --name="ansible-master" ubuntu

The ubuntu image does not come with ssh, which is needed by ansible, so we need to install it, together with the vim text editor
# docker exec -it ansible-master bash -c "apt update; apt install vim openssh-server -y"

Change the root password
# docker exec -it ansible-master passwd 

Permit root login for ssh
# docker exec -it ansible-master /bin/bash
ansible-master: # cat >> /etc/ssh/sshd_config <<EOF
PermitRootLogin yes
EOF

Start ssh
ansible-master: # service ssh start; exit

Create an image based on ansible-master. This image will be used later to create ansible-client1 container
# docker commit -m "ubuntu with vim and openssh-server" ansible-master myubuntu:2019041001

Run a container called ansible-client1 from the image created above
# docker image ls
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
myubuntu            2019041001          17f43a3ef384        10 minutes ago      265MB
# docker run -d -it --name="ansible-client1" myubuntu:2019041001

Start ssh service on ansible-client1
# docker exec -it ansible-client1 service ssh start

Try to ssh into both machines. Get the ip address of the containers using "docker inspect" command
# docker inspect ansible-client1 | grep -w IPAddress
            "IPAddress": "172.17.0.3",
                    "IPAddress": "172.17.0.3",
# docker inspect ansible-master | grep -w IPAddress
            "IPAddress": "172.17.0.2",
                    "IPAddress": "172.17.0.2",
# ssh root@172.17.0.2
# ssh root@172.17.0.3

Install ansible on ansible-master
# docker exec -it ansible-master apt install ansible -y

Check ansible version
# docker exec -it ansible-master ansible --version

Create an ssh key without a password
# docker exec -it ansible-master ssh-keygen

Transfer the key to ansible-client1
# docker exec -it ansible-master ssh-copy-id 172.17.0.3

Edit /etc/ansible/hosts to include all nodes. Every host in the inventory is automatically a member of the built-in "all" group, so there is no need for an explicit [all] section
# docker exec -it ansible-master /bin/bash
ansible-master: # cat >> /etc/ansible/hosts <<EOF
localhost
ansible-client1 ansible_host=172.17.0.3
EOF


Test ansible using ping module
# docker exec -it ansible-master ansible -m ping all
localhost | SUCCESS => {
    "changed": false, 
    "ping": "pong"
}
ansible-client1 | SUCCESS => {
    "changed": false, 
    "ping": "pong"
}

Congratulations, now you have your own mini ansible lab, using docker. You can add more clients as you wish later.

Tuesday, April 9, 2019

Troubleshooting sshd fail to start

I encountered this issue, and journalctl just did not give enough information on what was stopping sshd from starting. After searching around, I found that I can check /etc/ssh/sshd_config for any syntax error just by running sshd with the extended test (-T) flag. What this flag does is check the validity of /etc/ssh/sshd_config, throw out an error if there is any, and exit. So to check for issues in the configuration file, just run:

# /usr/sbin/sshd -T

Another way is to start sshd manually with the debug flag (-d), and it will throw out any error that is stopping it from starting:

# /usr/sbin/sshd -d


Monday, April 8, 2019

/boot Keeps Filling Up On Kernel Update

This is an issue I encountered on one of my friend's ubuntu 16.04 boxes. He tried to do a kernel update, but /boot kept filling up with old initramfs image files, making the update process fail. Then I found a post here, which says that if /var/lib/initramfs-tools is not cleaned of old kernel files, /boot will keep being filled with old initramfs images. So to clean it up:

# uname -r 
4.15.0-46-generic
# cd /var/lib/initramfs-tools
# rm `ls | grep -v 4.15.0-46`

Once cleaned up, update your current initramfs, using:

# update-initramfs -u -k all

where -u updates the existing initramfs instead of creating a new one, and '-k all' applies this to all installed kernel versions.

Once that is done, you can safely reboot your machine. It will boot using the latest kernel.

Friday, March 22, 2019

Rename and Remount LVM Logical Volume

A usual scenario where renaming and remounting an LVM logical volume is needed, is when you install a box with CentOS and use the default LVM based partitioning scheme. This scheme takes 50G for your / partition, and the rest is allocated to /home, which is not always practical. In this example, we will remount the logical volume originally mounted on /home to /var/lib/elasticsearch, renaming it along the way.

The original partition scheme 
# df -Th
Filesystem                   Type      Size  Used Avail Use% Mounted on
/dev/mapper/cl-root          xfs        50G  3.0G   48G   6% /
/dev/sda1                    xfs      1014M  179M  836M  18% /boot
/dev/mapper/cl-home          xfs       965G   42G  924G   5% /home

Stop the service that might be using the partition that we are going to change
# systemctl stop elasticsearch

Rename the original directory to other name
# mv /var/lib/elasticsearch /var/lib/elasticsearch-old

Create a new directory to replace the one we already renamed above
# mkdir /var/lib/elasticsearch

Unmount /home
# umount /home

Check the name of volume group and logical volume
# lvs

Rename the logical volume to the name we desire. This is totally optional. You can use the -t flag to test out the renaming process before proceeding
# lvrename -t cl home elasticsearch
# lvrename cl home elasticsearch

Change /etc/fstab to reflect the new logical volume name and mount point. Test it out without the -i option, before permanently making the change using the -i option
# sed 's/cl-home/cl-elasticsearch/g;s/\/home/\/var\/lib\/elasticsearch/g' /etc/fstab
# sed -i 's/cl-home/cl-elasticsearch/g;s/\/home/\/var\/lib\/elasticsearch/g' /etc/fstab

Mount it
# mount -a

Check to see if your new directory and logical volume mounted properly
# df -Th
Filesystem                   Type      Size  Used Avail Use% Mounted on
/dev/mapper/cl-root          xfs        50G  3.0G   48G   6% /
/dev/sda1                    xfs      1014M  179M  836M  18% /boot
/dev/mapper/cl-elasticsearch xfs       965G   42G  924G   5% /var/lib/elasticsearch

Move the old home data (now mounted under /var/lib/elasticsearch) back to /home
# mv /var/lib/elasticsearch/* /home

Move the content of /var/lib/elasticsearch-old to /var/lib/elasticsearch
# mv /var/lib/elasticsearch-old/* /var/lib/elasticsearch

Remove /var/lib/elasticsearch-old
# rmdir /var/lib/elasticsearch-old

Set proper permission for /var/lib/elasticsearch
# chmod 750 -R /var/lib/elasticsearch
# chown -R elasticsearch.elasticsearch /var/lib/elasticsearch

Start your service
# systemctl start elasticsearch

Done.

Thursday, March 7, 2019

Run an Application with GUI with Docker

There are a few ways to achieve this, like vnc and ssh X forwarding. But in this post, I will show how to run firefox by sharing the .Xauthority file with the docker container.

First, create a Dockerfile in a directory called docker, that uses an ubuntu image from dockerhub and installs the firefox application

$ mkdir docker
$ cd docker
$ cat > Dockerfile
FROM ubuntu
RUN apt update && apt install -y firefox
CMD ["/usr/bin/firefox"]

Press ctrl-d to save the file


Next, build an image using the above Dockerfile

$ sudo docker build --tag=firefox-app docker/


Then, run the image with the additional options --network and --env, sharing our host's .Xauthority file with the container that we are going to run

$ sudo docker run --network=host --env="DISPLAY" --volume="$HOME/.Xauthority:/root/.Xauthority:rw" --name=firefox1 firefox-app


You should get a working firefox, running from a docker container, separate from the firefox installed on your host machine

Credit: https://medium.com/@SaravSun/running-gui-applications-inside-docker-containers-83d65c0db110

Tuesday, February 26, 2019

Install vncserver on Ubuntu 18.04 desktop

In this guide, we will be using tigervnc server

Install the vncserver
$ sudo apt update; sudo apt install tigervnc-standalone-server tigervnc-xorg-extension -y

Setup a password for vncserver
$ vncpasswd

Once you have provided the password, make sure a passwd file is created
$ ls ~/.vnc/
passwd

Run below command to put some settings in ~/.vnc/xstartup, so that gnome will be started when vnc is used
$ cat > ~/.vnc/xstartup <<EOF
#!/bin/sh
# Start Gnome 3 Desktop 
[ -x /etc/vnc/xstartup ] && exec /etc/vnc/xstartup
[ -r $HOME/.Xresources ] && xrdb $HOME/.Xresources
vncconfig -iconic &
dbus-launch --exit-with-session gnome-session &
EOF

Press ctrl-d to save the above file

Start a vncserver instance
$ vncserver

List the displays to get the port number to connect to; in this example the port number would be 5901, since the display number is 1
$ vncserver -list 

TigerVNC server sessions:

X DISPLAY # PROCESS ID
:1 5375


In the client machine, connect with using vncviewer like below, replacing the x.x.x.x with your server's ip number
$ vncviewer x.x.x.x:5901


To terminate the vnc session, use below command to kill the first instance of the vncserver
$ vncserver -kill :1

Friday, February 15, 2019

Install xrdp on Ubuntu 18.04 desktop

Install xrdp
$ sudo apt-get -y install xrdp

Next, one may adjust the configuration file:
$ sudo nano /etc/xrdp/xrdp.ini

Set encryption level to high, and save xrdp.ini:
encrypt_level=high

Allow just RDP through the local firewall:
$ sudo ufw allow 3389/tcp

Create a polkit configuration file:
$ sudo nano /etc/polkit-1/localauthority.conf.d/02-allow-colord.conf 

and put below settings into the file:

polkit.addRule(function(action, subject) {
    if ((action.id == "org.freedesktop.color-manager.create-device" ||
         action.id == "org.freedesktop.color-manager.create-profile" ||
         action.id == "org.freedesktop.color-manager.delete-device" ||
         action.id == "org.freedesktop.color-manager.delete-profile" ||
         action.id == "org.freedesktop.color-manager.modify-device" ||
         action.id == "org.freedesktop.color-manager.modify-profile") &&
        subject.isInGroup("{group}")) {
        return polkit.Result.YES;
    }
});


Restart xrdp
$ sudo systemctl restart xrdp

Logout all user from your desktop

Connect using remote desktop client. For linux, you can use remmina.

P/S: If you are still unable to login using a remote desktop client, check whether you have the xorgxrdp package installed. If not, install it and restart xrdp. Then try to connect again

$ sudo apt install xorgxrdp -y
$ sudo systemctl restart xrdp


Tuesday, January 15, 2019

Monitor the progress of mysql import

The mysql command does not have any means of monitoring the progress of an sql file import. One method that I found useful for monitoring a mysql import is the pipe viewer (pv) command. To install pipe viewer on a centos box:

Install epel repository
# yum install epel-release -y

Install pv
# yum install pv -y

To use the pv command to monitor mysql import progress

# pv mydatabase.sql | mysql -u myusername -p mydatabase

You will get a progress bar showing how much data has been imported from the sql file into mysql.
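The same trick works for compressed dumps, with pv showing progress through the compressed file (the filename below is just an example)
# pv mydatabase.sql.gz | gunzip | mysql -u myusername -p mydatabase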