Thursday, December 22, 2022

How to Install Podman on Ubuntu 22.04

Podman is a daemonless, open source, Linux native tool designed to make it easy to find, run, build, share and deploy applications using Open Containers Initiative (OCI) Containers and Container Images. 

Podman provides a command line interface (CLI) familiar to anyone who has used the Docker Container Engine.

Some of the advantages of podman over docker:
1. Podman is daemonless
2. Podman is compatible with docker, so existing docker images run without modification
3. Most podman commands can be run as a regular user, without requiring additional privileges

To install podman on ubuntu 22.04:

1. Update apt database 
$ sudo apt update

2. Install podman
$ sudo apt install podman

3. Check podman version
$ podman -v

4. Test podman
$ podman run

5. If you get output like below, your podman installation is successful and it is able to pull and run images from dockerhub

Saturday, December 3, 2022

Extending XFS Partition Based LVM by Increasing the Disk Size

This is the original disk layout before the disk extension

We can see that the LVM is built on /dev/sda2, which is a partition, not a whole disk. The disk, /dev/sda, is then extended by 100G at the VM manager level. After a reboot, we can see that the disk size is now 130G. 


To include this new space into existing LVM, create a new partition on the new disk space using a tool called cfdisk. Please refer to this post on how to use cfdisk to create a new partition.

The first step is to become root. If you prefer using sudo, please prepend sudo to each of these commands.

Once we can see that the partition is available, create a new physical volume (PV) using the new partition. 
# pvcreate /dev/sda3 

We should see the message as per below

Then, check the name of our volume group (VG)
# vgs 

Extend our VG by including the new PV
# vgextend centos /dev/sda3 

Check our logical volume (LV) path
# lvdisplay | grep Path 

Extend our LV to use the whole free space available on the VG
# lvextend -l +100%FREE /dev/centos/root

Check that our LV's size is now expanded 
# lvs 

Determine our filesystem type; at the same time, we can see that the filesystem is not yet aware of the new size 
# df -Th / 

Grow xfs filesystem to suit the new partition size
# xfs_growfs /  

We can see that the filesystem has now grown to the full size of our LV
# df -Th / 


Monday, November 14, 2022

Creating Partition Using cfdisk

Manipulating and partitioning hard disks can be a daunting task, especially for a new sysadmin, and even more so if the disk already contains data. Luckily, there is a tool that makes this task easier, and that tool is called cfdisk.

First, we need to identify our disk name. This can be achieved by running lsblk
$ lsblk

Above is a sample output of lsblk. We can see that our disk name is sda (append /dev/ to the name for the full path) and that it is 130GB in size. We can also see that only around 30GB has been used, so we should have about 100GB of free space for our new partition.

To start using cfdisk, just run it against the path of our disk, in this case /dev/sda
$ sudo cfdisk /dev/sda

We should be able to see this interface

Move the cursor using the down arrow key on our keyboard, so that the cursor lies on the 107374.19MB line

Make sure your cursor is at "New" button, and press Enter.

Choose "Primary" and press Enter.

We want to use the whole free space, so just leave the size as it is, and press Enter.

We now have a partition ready. To actually write the partition to the partition table, move your cursor using the right arrow key on the keyboard until it reaches the "Write" button, and press Enter.

We will get  a warning message. Type "yes" and press Enter to confirm this action.

We will get this message once the partitioning is complete.

Quit cfdisk by choosing "Quit" and press Enter

Even though we have created the partition, sometimes the partition table is not automatically updated. Run "partprobe" to inform the OS of the partition table change.
$ sudo partprobe

Use "lsblk" to see that the new partition has been created
$ lsblk

Tuesday, November 1, 2022

Extending LVM By Adding New Disk To a Virtual Machine

In a previous post, we covered how to make LVM and the filesystem aware of a disk size increase that happens at the virtual machine layer. 

There is another way to increase LVM volume capacity: adding a new disk to the system. 

1. First, shutdown the virtual machine

2. Add a new virtual disk to the virtual machine using the virtual machine manager

3. Start the virtual machine

4. Check the new disk availability for LVM
$ sudo lvmdiskscan

5. To be able to use the new disk, it needs to be converted into physical volume (PV)
$ sudo pvcreate <path to the new disk>

6. Check the list of PVs, and note the new PV name
$ sudo pvs

7. Extend the volume group using the new PV
$ sudo vgextend <VG name> <PV name>

We can get the name of the VG by using this command, which also shows how much free space is now available in the VG.
$ sudo vgs

8. Increase the size of the logical volume, to use 100% of the free space available inside the VG, using this command
$ sudo lvextend -l +100%FREE <LV PATH>

We can get the full path of the LV using this command
$ sudo lvdisplay | grep Path

9. Now, make the filesystem aware that the volume size has been increased. The command differs from filesystem to filesystem, but here are the 2 most used (based on our experience)
For xfs (xfs_growfs takes the mountpoint):
$ sudo xfs_growfs <mountpoint>

For ext4 (note that resize2fs takes the LV device path, not the mountpoint):
$ sudo resize2fs <LV path>

10. Verify that the mountpoint has increased in size
$ df -Th

Saturday, October 15, 2022

Extending Virtual Disk in a Linux Virtual Machine Using LVM

To increase a disk size in a virtual machine running a Linux operating system configured with LVM, below are the steps (these steps were tested using VirtualBox):

1. Power off the virtual machine

2. Increase the virtual disk size in the virtual machine using the virtual machine manager

3. Restart the virtual machine

4. Even though the virtual disk has been increased in size, LVM is not aware of the change. To make LVM aware of the change, run the pvresize command
$ sudo pvresize <pv name>

You can get the physical volume (PV) name by running "pvs" command
$ sudo pvs

5. Once the physical volume (PV) has been resized, run "vgs" to see the new volume group (VG) size
$ sudo vgs

6. Resize the logical volume (LV) to make the new space available for use. Use below command to resize the LV to use 100% of the free space available in the VG
$ sudo lvextend -l +100%FREE <LV path>

You can get the LV path by running lvdisplay
$ sudo lvdisplay | grep Path

7. Now the volume is aware of the size change, but the filesystem is not. Check the filesystem type and mountpoint using below command
$ df -Th

8. Depending on the filesystem type, extend your filesystem to use the new volume size. 
For xfs:
$ sudo xfs_growfs <mountpoint>

and for ext4 (note that resize2fs takes the LV device path, not the mountpoint):
$ sudo resize2fs <LV path>

9. Verify that your filesystem is now using the new size
$ df -Th

Saturday, October 1, 2022

Splitting video using linux command line

To easily split (or cut out part of) a video using the command line, a tool called ffmpeg can be used.

Why use the command line? It is lighter on resources. Video editing is a resource-heavy activity, and by using the command line we can reduce the load on our machine while splitting, which matters especially on low-resource machines. 

To install ffmpeg on a debian-based machine, just run below command
$ sudo apt -y install ffmpeg

To do the splitting, just use below command
$ ffmpeg -i mymovie.mp4 -ss 00:01:00 -t 00:00:30 myeditedmovie.mp4

-i : the input video to edit
-ss : the timestamp in the video where the output should start
-t : the duration of the output video

So in the above example, you will get an output file called myeditedmovie.mp4, which starts from the first minute of the original video and lasts 30 seconds.
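When scripting many cuts, it is handy to compute the HH:MM:SS values rather than type them by hand. Below is a minimal sketch; to_hms is our own helper (not an ffmpeg feature), and -c copy is ffmpeg's stream-copy option, which splits without re-encoding.

```shell
# to_hms: convert a number of seconds into HH:MM:SS, the format
# expected by ffmpeg's -ss and -t flags
to_hms() {
  printf '%02d:%02d:%02d' $(($1 / 3600)) $(($1 % 3600 / 60)) $(($1 % 60))
}

# e.g. cut 45 seconds starting at the 90-second mark, without re-encoding:
# ffmpeg -i mymovie.mp4 -ss "$(to_hms 90)" -t "$(to_hms 45)" -c copy clip.mp4
to_hms 3755   # prints 01:02:35
```

The helper keeps the offsets readable when generating many segments in a loop.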

Thursday, September 1, 2022

Testing SSL Certs Using Apache on Docker

Sometimes we need to test our ssl certs before deploying them to production. If we have a development or staging environment, we can test there. But if we do not, we can always rely on trusty old docker to test the ssl on our own machine. Please follow along to learn how to do it.

Prerequisite: have docker installed 

1. Put our ssl cert, intermediate cert (if we have one) and key into our current directory, renamed as server.crt, server-ca.crt and server.key respectively

2. Prepare a configuration file like below inside our current directory, and save it as https.conf
Listen 443
<VirtualHost _default_:443>
  DocumentRoot "/usr/local/apache2/htdocs"
  ErrorLog /proc/self/fd/2
  TransferLog /proc/self/fd/1
  SSLEngine on
  SSLCertificateFile "/ssl/server.crt"
  SSLCertificateKeyFile "/ssl/server.key"
  SSLCertificateChainFile "/ssl/server-ca.crt"
</VirtualHost>
3. Run a container based on the httpd image from dockerhub, and mount the current folder with our ssl key and certs into /ssl in the container
docker run -dit --name apache -v ${PWD}:/ssl httpd
4. Copy /usr/local/apache2/conf into /ssl, so that we can edit it inside our host machine
docker exec -it apache cp /usr/local/apache2/conf/httpd.conf /ssl
5. Enable ssl in the apache config by adding these lines into httpd.conf. We can just edit the file on our host machine, since no text editor is installed by default inside the apache image. The first 2 lines enable ssl support for apache, and the last line tells apache to include our https.conf in its configuration
LoadModule ssl_module modules/
LoadModule socache_shmcb_module modules/
Include conf/extra/https.conf
6. Copy the edited httpd.conf file back into its original location
docker exec -it apache cp /ssl/httpd.conf /usr/local/apache2/conf/httpd.conf
7. Create a symlink from /ssl/https.conf into /usr/local/apache2/conf/extra/
docker exec -it apache ln -s /ssl/https.conf /usr/local/apache2/conf/extra
8. Test the configuration file
docker exec -it apache httpd -t
9. If no error was found from the above command, restart the container
docker restart apache
10. Open a new terminal, and get the ip address of the container
docker inspect apache | grep IPAddress
11. Put the ip address and your hostname inside your machine's /etc/hosts
echo "" | sudo tee -a /etc/hosts 

12. Try to access the above domain using a web browser, and check the ssl cert information

13. If the ssl certs are working fine inside docker, you can be sure that it will work just fine in your production server
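Before wiring certs into Apache at all, it is worth confirming that server.crt and server.key actually belong together, since a mismatched pair is a common cause of startup failures. A hedged sketch follows: it generates a throwaway self-signed pair in a scratch directory (stand-ins for your real files, so nothing is overwritten) and compares the RSA moduli, which must be identical for a matching cert and key.

```shell
# work in a scratch directory; the pair below is a throwaway stand-in
# for your real server.crt / server.key
cd "$(mktemp -d)"
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=test.local" -keyout server.key -out server.crt 2>/dev/null

# a certificate and key match when their RSA moduli are identical
cert_mod=$(openssl x509 -noout -modulus -in server.crt)
key_mod=$(openssl rsa -noout -modulus -in server.key)
[ "$cert_mod" = "$key_mod" ] && echo "cert and key match"
```

Run the same two modulus commands against your real files before mounting them into the container.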

Wednesday, August 31, 2022

Connect to Fortinet VPN using Openfortivpn

Fortinet does offer 2 VPN clients for linux, one for the redhat family and the other for the ubuntu/debian family. You can download the installers from here

But for those who want to use an open source vpn client to connect to a Fortinet VPN, we can use openfortivpn. Please follow below steps to connect using openfortivpn

1. Install openfortivpn
$ sudo apt install openfortivpn

2. We can connect just by using openfortivpn with some options, like below
$ sudo openfortivpn myvpnserver.local:10443 -u vpnuser -p mypass 
-u : the vpn username
-p : the vpn password
myvpnserver.local:10443 : the vpn server address and port

3. We can also use a configuration file with content like below
host = myvpnserver.local
port = 10443 
username = vpnuser
password = mypass

Save the above file as myvpn.config, and connect using below command so that openfortivpn uses the configuration inside the file to connect to the vpn
$ sudo openfortivpn -c myvpn.config 

4. All the configuration options for the file are listed in the manual page of openfortivpn. We can access the manual by running below command
$ man openfortivpn

Saturday, August 20, 2022

Another Way To Check UDP Port to a Linux Server

In a previous post, I have shared a way to check for udp port allowance to a linux server using netcat and ngrep.

I have found an even easier way to accomplish this, using just netcat, without the need to install additional software like ngrep.

To do this, first set up netcat to listen on the udp port on the target machine. For example, to test udp port 10000, just run below command on the target machine
$ nc -klu 10000
The command will hang there, waiting for a connection to be sent to it.

In the client machine, just use netcat to send some text over to the target machine's udp port 10000, like below
$ echo "testing udp" | nc -u
If the udp port is not blocked, we will see the "testing udp" text printed on the terminal of the target machine, where we are listening on udp 10000, like the example below
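As an aside, the same round trip can be rehearsed locally without netcat at all: bash can write straight to a UDP socket through its built-in /dev/udp path. The sketch below is a stand-in under stated assumptions (loopback only, port 10000, bash as the shell, and a short Python one-off as the listener, since plain sh has no way to receive a datagram):

```shell
# listener: wait (max 5s) for one datagram on UDP 10000 and print it
timeout 5 python3 - <<'EOF' > /tmp/udp_msg &
import socket
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.bind(("127.0.0.1", 10000))
data, addr = s.recvfrom(1024)
print(data.decode().strip())
EOF
sleep 1

# sender: bash's /dev/udp pseudo-device, no netcat required
echo "testing udp" > /dev/udp/127.0.0.1/10000
wait
cat /tmp/udp_msg
```

Against a remote host you would still use the netcat commands above; /dev/udp only covers the sending side.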

Tuesday, August 9, 2022

Run A Mysql Query From Command Line

To run a mysql query directly from command line, without entering the interactive mode, use -e flag, like below

$ mysql -u user -p -e 'show tables;' mydbname

In the above example, the output would be a list of tables inside mydbname, displayed on the command line after you have entered the mysql user password.

Sunday, July 31, 2022

Test UDP Port to Linux Server

To test if a udp port is allowed to a linux server, and not blocked by any firewall, we need ngrep on the server side, and nc (netcat) on the client side.

First, install ngrep on the server
$ sudo apt install ngrep -y

And start to watch the udp 10000 traffic (for example)
$ ngrep -q "accessible" udp port 10000

In the client side, we need to install netcat-openbsd 
$ sudo apt install netcat-openbsd

We are now going to test udp port 10000 on the server, from the client (the "yes, accessible" message could be anything, as long as it contains the accessible keyword)
$ echo "yes, accessible" | nc -u server-ip 10000

If the port is open (not blocked by any firewall), you will get a message like below in the server terminal
U client-ip:39062 -> server-ip:10000 #1
  yes, accessible...    

If the port is blocked, you won't get any output on the server terminal

Friday, July 1, 2022

Change Docker Data Location

The default location that docker uses to store all its components, such as images and containers, is /var/lib/docker.

In some linux installations, the / or /var partitions are not that big, and we would like docker to save all the images and containers in another directory.

To set docker to use another directory:

1. Create the new directory (let's say we are using /data/docker )
$ sudo mkdir /data/docker

2. Stop docker daemon
$ sudo systemctl stop docker

3. Create a file called /etc/docker/daemon.json (edit it if the file already exists)
$ sudo touch /etc/docker/daemon.json

4. Edit and put in below content into the file
{
    "data-root": "/data/docker"
}

5. Save and exit the editor

6. Start docker
$ sudo systemctl start docker

7. Verify that docker is running
$ sudo systemctl status docker
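Because a malformed daemon.json stops the docker daemon from starting, it is worth validating the JSON before restarting docker. A small sketch, writing the snippet to a scratch path (the real file lives at /etc/docker/daemon.json) and checking it with python3's json.tool:

```shell
# write the snippet to a scratch file first; copy it to
# /etc/docker/daemon.json only once it validates
cat > /tmp/daemon.json <<'EOF'
{
  "data-root": "/data/docker"
}
EOF

# a malformed file would make this command fail with a parse error
python3 -m json.tool /tmp/daemon.json > /dev/null && echo "daemon.json is valid JSON"
```

After restarting, `docker info` should report the new path under "Docker Root Dir".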

Thursday, June 16, 2022

Getting IP Geolocation Information Using Curl

The only tool we are going to use is curl. We just need to access the url of a website that provides geolocation information for an IP address. Let's get to it.

To get the geolocation of an ip address from the first provider:
$ curl

For example, to get the geolocation information of google dns, we can type:
$ curl

And we should be getting some output like this:
{
    "ip": "",
    "network": "",
    "version": "IPv4",
    "city": "Mountain View",
    "region": "California",
    "region_code": "CA",
    "country": "US",
    "country_name": "United States",
    "country_code": "US",
    "country_code_iso3": "USA",
    "country_capital": "Washington",
    "country_tld": ".us",
    "continent_code": "NA",
    "in_eu": false,
    "postal": "94043",
    "latitude": 37.42301,
    "longitude": -122.083352,
    "timezone": "America/Los_Angeles",
    "utc_offset": "-0800",
    "country_calling_code": "+1",
    "currency": "USD",
    "currency_name": "Dollar",
    "languages": "en-US,es-US,haw,fr",
    "country_area": 9629091.0,
    "country_population": 327167434,
    "asn": "AS15169",
    "org": "GOOGLE"
}

We can also specify which information we want to be displayed specifically:
$ curl

For the second provider, similar to the above example, we can just use curl like below to get the geolocation information of a certain ip address:
$ curl

For example, to get the information of the ip, we can issue this command:
$ curl

and we should get output like this
{
  "ip": "",
  "hostname": "",
  "anycast": true,
  "city": "Mountain View",
  "region": "California",
  "country": "US",
  "loc": "37.4056,-122.0775",
  "org": "AS15169 Google LLC",
  "postal": "94043",
  "timezone": "America/Los_Angeles",
  "readme": ""
}

Like the above example, we can also specify what information we want to be shown, by using:
$ curl

And we should get something like this:
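Since these responses are plain JSON, they can also be post-processed locally once saved. Below is a sketch using python3 against a trimmed sample (the field values are copied from the example output shown earlier); jq would work equally well if installed:

```shell
# save a trimmed copy of the sample response shown above
cat > /tmp/geo.json <<'EOF'
{
  "city": "Mountain View",
  "region": "California",
  "country": "US",
  "timezone": "America/Los_Angeles"
}
EOF

# pick out individual fields with nothing beyond python3
python3 -c 'import json; d = json.load(open("/tmp/geo.json")); print(d["city"] + ", " + d["region"])'
```

This is convenient when you want to query once and extract several fields, instead of hitting the provider once per field.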

Saturday, June 4, 2022

Running Mongodb Replication Using Docker

For a proper mongodb replication, we are going to start 3 containers for this exercise.

First, start the first container; we will call it mongorep1. We need to give it a hostname, configure it to listen on all interfaces, and set a replSet for it called myrepl
docker run -dit --name mongorep1 --hostname mongorep1 mongo:6 --bind_ip_all --replSet myrepl

Once running, we need to get the ip address of mongorep1
docker inspect mongorep1 | grep -w IPAddress
            "IPAddress": "",

Then, we will start the second container. We need to feed the ip address of the first container to the second container as hosts so that mongo will not have issue setting up the replication
docker run -dit --name mongorep2 --hostname mongorep2 --add-host mongorep1: mongo:6 --bind_ip_all --replSet myrepl

Start the third and final container, with command almost similar to the second container.
docker run -dit --name mongorep3 --hostname mongorep3 --add-host mongorep1: mongo:6 --bind_ip_all --replSet myrepl

Once we have all the nodes running, access mongosh on the first container, and initiate replica set
docker exec -it mongorep1 mongosh
test> rs.initiate()

Add the other nodes into the replica set
myrepl [direct: secondary] test> rs.add("")
myrepl [direct: primary] test> rs.add("")

Check the status of the replica set, make sure the first node is the primary node, and the other 2 are the secondary nodes
myrepl [direct: primary] test> rs.status()


  members: [
    {
      _id: 0,
      name: 'mongorep1:27017',
      health: 1,
      state: 1,
      stateStr: 'PRIMARY',
      ...
    },
    {
      _id: 1,
      name: '',
      health: 1,
      state: 2,
      stateStr: 'SECONDARY',
      ...
    },
    {
      _id: 2,
      name: '',
      health: 1,
      state: 2,
      stateStr: 'SECONDARY',
      ...
    }
  ]

Check if the other nodes are lagging in replicating data
myrepl [direct: primary] test> db.printSecondaryReplicationInfo()



{
  syncedTo: 'Mon Dec 19 2022 15:48:01 GMT+0000 (Coordinated Universal Time)',
  replLag: '0 secs (0 hrs) behind the primary '
}
{
  syncedTo: 'Mon Dec 19 2022 15:48:01 GMT+0000 (Coordinated Universal Time)',
  replLag: '0 secs (0 hrs) behind the primary '
}


We can test the replication by adding data into the first node, then checking if the data is replicated into the second and third nodes. 
docker exec -it mongorep1 mongosh
myrepl [direct: primary] test> use mynewdb
myrepl [direct: primary] mynewdb> db.people.insertOne( { name: "John Rambo", occupation: "Soldier" } ) 
Now access mongosh on the second node and view the data. The data should match what is on mongorep1
docker exec -it mongorep2 mongosh
myrepl [direct: secondary] test> show dbs
myrepl [direct: secondary] test> use mynewdb
myrepl [direct: secondary] test> db.people.find()
[
  {
    _id: ObjectId("63a08880e1c97fba6959ec15"),
    name: 'John Rambo',
    occupation: 'Soldier'
  }
]

If you encounter this error:

MongoServerError: not primary and secondaryOk=false - consider using db.getMongo().setReadPref() or readPreference in the connection string

Run below command to enable read on the secondary nodes
myrepl [direct: secondary] test> db.getMongo().setReadPref("secondary")

Do the same for the third node, the data should also be the same.

docker exec -it mongorep3 mongosh
myrepl [direct: secondary] test> use mynewdb
myrepl [direct: secondary] test> db.people.find() 
[
  {
    _id: ObjectId("63a08880e1c97fba6959ec15"),
    name: 'John Rambo',
    occupation: 'Soldier'
  }
]

Friday, May 27, 2022

Excellent pdf editor in ubuntu

One of the fields where I find linux quite lacking is pdf editing. But a few weeks ago, a friend of mine recommended an excellent tool called xournal++ (or xournalpp). It is actually a journalling tool, but its pdf editing feature is so good it beats all the tools I previously used.

This application is available not just in Linux, but in Windows and MacOS as well. To install xournal++ in ubuntu, just follow the steps below.

Installing using snap

First, make sure you have snapd installed. If you do not have snap, you can install it by running
$ sudo apt install snapd -y

Then, install xournal++ using snap
$ sudo snap install xournalpp

Installing using apt (for ubuntu 22.04 and above)

If you are not a fan of snap, worry not, xournal++ is also available in the ubuntu repository for ubuntu 22.04 and above. Please follow below steps to install xournalpp using apt.

Install xournalpp
$ sudo apt install xournalpp -y

Installing using apt (for older ubuntu)

Let's say you are using an older version of ubuntu (for example 20.04). Worry not, just download the deb package from the release page, and install it using apt.

Browse the release page at 

Click on the "Tags" tab, and choose the release you are interested in. In this example we will choose v1.1.3. Click on the v1.1.3 tag.

Get the download link from the list of assets. Choose the one suitable for your version of operating system. 

Download the deb file
$ wget

And install it using apt
$ sudo apt install ./xournalpp-1.1.3-Ubuntu-focal-x86_64.deb -y
Once installed, just launch xournal++ from your application launcher, 

or you can also launch it from terminal by running
$ xournalpp

Thursday, May 12, 2022

Change metadata in PDF file using exiftool

To change the metadata in PDF files, use a command line tool called exiftool. This tool can manipulate metadata in many file types, but in this post we will focus on changing the metadata in a pdf file.

To install this tool in ubuntu, run below command
$ sudo apt install libimage-exiftool-perl -y

Then, use the exiftool command to list out all the metadata in a pdf file
$ exiftool mypdf.pdf

Some details like below will be shown
ExifTool Version Number         : 11.88
File Name                       : mypdf.pdf
Directory                       : .
File Size                       : 1 MB
File Modification Date/Time     : 2022:12:08 07:46:39+08:00
File Access Date/Time           : 2022:12:08 07:46:43+08:00
File Inode Change Date/Time     : 2022:12:08 07:46:39+08:00
File Permissions                : rw-rw-r--
File Type                       : PDF
File Type Extension             : pdf
MIME Type                       : application/pdf
PDF Version                     : 1.3
Linearized                      : No
Page Count                      : 15
XMP Toolkit                     : Image::ExifTool 11.88
Title                           : mypdf.pdf
Producer                        : Nitro PDF PrimoPDF
Create Date                     : 2022:09:30 16:57:06-08:00
Modify Date                     : 2022:09:30 16:57:06-08:00
Creator                         : PrimoPDF
Author                          : andre

To see just one tag, we can specify it when running exiftool. Let's say I want to see just the Author
$ exiftool -Author mypdf.pdf
Author                          : andre

To change the value of the tag, just provide the new value to the tag in the exiftool command. Let's say I want to change the author from andre to john
$ exiftool -Author=john mypdf.pdf

We can verify that the change has been implemented
$ exiftool -Author mypdf.pdf
Author                          : john

Once we are satisfied with the change, delete the original backup file that exiftool created prior to changing the metadata
$ rm mypdf.pdf_original

Sunday, May 1, 2022

Synchronize commands across panes in tmux

One of the neat features of tmux is its ability to synchronize commands typed in one pane across all panes in the same window. This trick will help you run a command across multiple terminals while typing it only once. 

One example where this might come in handy: if you have 4 servers to be updated, you can just fire up tmux, split the window into 4 panes, and ssh to each server in its own pane. Set the synchronize-panes option, and you just have to type the command once for it to be repeated in all panes.

To use this, we need a window split into at least 2 panes.

First, start a tmux session
$ tmux

Once inside tmux, do a horizontal split by pressing 
ctrl-b "

To turn on synchronize-panes
ctrl-b :

Then type
setw synchronize-panes on

Now you can type in one pane, and the command will be repeated in other panes as well.

To turn off synchronize-panes mode, type
ctrl-b :

Then type
setw synchronize-panes off

Good luck

Monday, April 4, 2022

Changing the boot order in linux

A standard linux system uses grub (GRand Unified Bootloader) to manage its booting process. To change the boot order in linux, there is one file that you need to change, which is /etc/default/grub.

To check which number your operating system resides at, just run grub-reboot and press tab twice after the command to get the list. The list starts from 0, so if your operating system of choice is third in the list, the number is 2.
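The zero-based index can also be read straight out of grub.cfg. Below is a sketch against a made-up fragment (your real file is /boot/grub/grub.cfg; note that entries nested inside a submenu need GRUB's "X>Y" notation rather than a single number):

```shell
# a made-up grub.cfg fragment, for illustration only
cat > /tmp/grub.cfg <<'EOF'
menuentry 'Ubuntu' {
}
menuentry 'Ubuntu, with Linux 5.15.0-56-generic' {
}
menuentry 'Windows Boot Manager' {
}
EOF

# list each top-level entry with the zero-based index GRUB_DEFAULT expects
grep '^menuentry' /tmp/grub.cfg | awk -F"'" '{print NR-1 ": " $2}'
# 0: Ubuntu
# 1: Ubuntu, with Linux 5.15.0-56-generic
# 2: Windows Boot Manager
```

Run the same grep/awk pipeline against /boot/grub/grub.cfg to find the number for the entry you want as default.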

1. Open /etc/default/grub with 
sudo nano /etc/default/grub
2. Change the GRUB_DEFAULT line to suit your need, for example GRUB_DEFAULT=2
3. Save and exit

4. Update grub
sudo update-grub
That's all. Try rebooting your machine and see if grub actually follows the configuration that you have set up.

Wednesday, March 30, 2022

Installing postgresql 9.6 on RHEL/CentOS 7 without repository

Postgres released the final version of postgresql 9.6 in November 2021, and this version is no longer supported. Installing out-of-support software on a production server is not recommended.

But for anyone who still wants postgresql 9.6 on CentOS 7, here is how to install it (the official pg repo no longer allows installation of postgresql versions below 10)

1. Using your browser, browse to the postgresql download page at

2. Search for your version and architecture, in my case I needed version 9.6 for a centos 7 x86_64 machine. So my url would be

3. Download the necessary packages: usually one for the server, one for the libs, and one for the client (optional).
wget -c
wget -c 
wget -c

4. Install the packages. If any additional packages are needed, just download them from the repo url above.

sudo yum install ./postgresql96-libs-9.6.22-1PGDG.rhel7.x86_64.rpm ./postgresql96-9.6.22-1PGDG.rhel7.x86_64.rpm ./postgresql96-server-9.6.22-1PGDG.rhel7.x86_64.rpm

5.  Initialize the database

sudo /usr/pgsql-9.6/bin/postgresql96-setup initdb

6. Enable the database startup on boot, and start the service

sudo systemctl enable --now postgresql-9.6 

Friday, March 25, 2022

Running singularity without installing using docker

Singularity is another container platform, similar to docker. It is widely used in high performance computing world, due to better security and portability.

But many of us are already familiar with docker, since that is the most widely used container technology. The easiest way to start learning singularity is to use the docker we already have on our machine and launch singularity from there. 

We can run the singularity image by running below command
docker run --privileged --rm --version
singularity-ce version 3.10.0
In order to download an image from docker and convert it into sif, we can use this
docker run --privileged --rm -v ${PWD}:/home/singularity pull /home/singularity/alpine_latest.sif docker://alpine
Once downloaded, we can run a command using the newly downloaded image
docker run --privileged --rm -v ${PWD}:/home/singularity exec /home/singularity/alpine_latest.sif cat /etc/os-release
NAME="Alpine Linux"
PRETTY_NAME="Alpine Linux v3.15"
Even though this is probably the easiest way to use singularity on a machine with docker installed, the commands can get pretty confusing. It is highly advisable, once you have tested enough and decided to use singularity, to actually install it on your system.

Sunday, March 20, 2022

Run an apache webserver with php using docker

This is actually very easy, just run below command to start it

docker run -d -p 8000:80 --mount type=bind,source="$(pwd)/htdocs",target=/var/www/html php:apache

The options are:

-d : run this container in a detached mode (in the background)

--mount : mount the htdocs folder in the current directory into /var/www/html in the container (create htdocs first with mkdir, since --mount does not create the source directory)

-p 8000:80 : will map port 8000 in localhost to port 80 in the container

Once started, create a simple php script inside the htdocs directory

cd htdocs

cat > index.php <<EOF
<?php
echo "This is my php script";
?>
EOF

And browse using a normal web browser to http://localhost:8000. You should see "This is my php script" shown in your web browser 

Tuesday, March 15, 2022

Running a postgresql database using singularity

First, we need to pull the postgresql image from dockerhub

singularity pull docker://postgres:14.2-alpine3.15

The image will be saved as postgres_14.2-alpine3.15.sif. Now, create an environment file
cat >> pg.env <<EOF
export TZ=Asia/Kuala_Lumpur
export POSTGRES_USER=pguser
export POSTGRES_PASSWORD=mypguser123
export POSTGRES_DB=mydb
export POSTGRES_INITDB_ARGS="--encoding=UTF-8"
EOF

Create 2 directories for data and run
mkdir pgdata
mkdir pgrun

Run the container. The options are: -B to bind-mount a local directory into the container, -e to clean the environment before running the container, -C to contain the PID, IPC and environment, and --env-file to pass the environment variables in the file to the container
singularity run -B pgdata:/var/lib/postgresql/data -B pgrun:/var/run/postgresql -e -C --env-file pg.env postgres_14.2-alpine3.15.sif

The postgresql will be listening on localhost at port 5432. To test it out, just open another terminal, and use the same postgres_14.2-alpine3.15.sif to run psql
singularity exec postgres_14.2-alpine3.15.sif psql -h localhost -p 5432 -U pguser -d mydb
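A typo in the env file only surfaces later as a container misconfiguration, so a cheap check is to source the file in a subshell first. A sketch, recreating the file above with a properly terminated heredoc:

```shell
# recreate the env file with a terminated heredoc
cat > pg.env <<'EOF'
export TZ=Asia/Kuala_Lumpur
export POSTGRES_USER=pguser
export POSTGRES_PASSWORD=mypguser123
export POSTGRES_DB=mydb
export POSTGRES_INITDB_ARGS="--encoding=UTF-8"
EOF

# source it in a subshell and echo two variables to confirm it parses cleanly
( . ./pg.env && echo "db=$POSTGRES_DB user=$POSTGRES_USER" )
```

If the echo prints the expected values, the file is safe to hand to singularity's --env-file option.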


Thursday, March 10, 2022

Running a simple nginx web server with custom index file using singularity

First, create a directory to house our index.html file
mkdir web

Create our custom index file
cat >> web/index.html <<EOF
<h1>This is my index</h1>
EOF

Then, download the image from dockerhub. The image will be downloaded as nginx_latest.sif.
singularity pull docker://nginx

Run the instance, and mount the web directory to /usr/share/nginx/html in the instance. The options are -B, to bind the web directory on the host machine to /usr/share/nginx/html in the container, and --writable-tmpfs, to allow the container to write temporary files during execution. The container will be listening on localhost port 80.
sudo singularity run -B web/:/usr/share/nginx/html --writable-tmpfs nginx_latest.sif

Check if our webserver is running fine using a standard web browser:

Saturday, March 5, 2022

Running a simple nginx web server using singularity

In this example, we will use the nginx web server image from docker hub.

1. Pull the nginx image from dockerhub. The image will be saved as nginx_latest.sif
singularity pull docker://nginx

2. Run an instance of nginx. We need to put --writable-tmpfs option so that the instance can write temporary files to disk.
sudo singularity run --writable-tmpfs nginx_latest.sif

3. To test, open a new terminal, and use curl to access http://localhost. We should be able to access the landing page of nginx running inside a singularity container 
curl localhost

<!DOCTYPE html>
...
<title>Welcome to nginx!</title>
...
html { color-scheme: light dark; }
...

The terminal running the container also logs the request:

- - [05/Mar/2022:15:45:10 +0800] "GET / HTTP/1.1" 200 615 "-" "curl/7.68.0" "-"

 4. We can also use a web browser and browse to localhost

Tuesday, March 1, 2022

Running docker "hello-world" image using singularity

One of the advantages of singularity is that it does not require any service (daemon) to run containers. Also, the images that you download are saved as normal files in your filesystem, rather than in a cache directory as docker does.

To run dockerhub's hello-world image using singularity:

1. Pull the image from dockerhub

$ singularity pull docker://hello-world

2. The image will be saved as hello-world_latest.sif

$ ls 


3.1 To run a container based on that image, just use "singularity run" against the sif file

$ singularity run  hello-world_latest.sif


Hello from Docker!      

This message shows that your installation appears to be working correctly.


3.2 Or you can just "./" the sif file
$ ./hello-world_latest.sif


Hello from Docker!      

This message shows that your installation appears to be working correctly.


Monday, February 21, 2022

Installing singularity in ubuntu 20.04

SingularityCE is a container platform. It allows you to create and run containers that package up pieces of software in a way that is portable and reproducible. 

You can build a container using SingularityCE on your laptop, and then run it on many of the largest HPC clusters in the world, local university or company clusters, a single server, in the cloud, or on a workstation down the hall.

Your container is a single file, and you don’t have to worry about how to install all the software you need on each different operating system.

In short, singularity is an alternative to docker.

To install singularity in ubuntu 20.04:

1. Update repositories
$ sudo apt update

2. Download the installer. Please refer to the github page for the latest version; 3.9.7 was the latest version when this guide was written.
$ wget

3. Install singularity
$ sudo apt install ./singularity-ce_3.9.7-bionic_amd64.deb

4. Test singularity
$ singularity version

Thursday, February 10, 2022

How to install go in linux

Go is a programming language, designed by engineers at Google in 2007 for building dependable and efficient software. Go is most similarly modeled after C.

To install go on linux, the steps are very easy.

1. Download the go package from the official download page

$ wget

2. Extract the tar package

$ tar xvf go1.18.linux-amd64.tar.gz 

3. Include the go bin directory into PATH

echo "export PATH=\$PATH:/home/user/go/bin" >> ~/.bashrc

source ~/.bashrc

4. Test your go command
$ go version
go version go1.18 linux/amd64

Tuesday, February 1, 2022

Testing SSL configuration using testssl.sh

SSL is an important part of web application security nowadays. Many tools are available to test our SSL configuration, but almost all of them are web based. One of the great tools that I found that can be used from a terminal is called testssl.sh.

Some of the benefits of using testssl.sh:
  1. easy installation, even available as docker image
  2. easy usage
  3. fast
  4. clear and detailed output
  5. free
  6. open source
  7. privacy - your test, your result, only you can see it
To use this tool, simply download it:
$ wget

And deploy it anywhere on your linux machine

$ tar xvf

Make it easier to access

$ ln -s testssl

And we are good to go. To use it, just run the command and provide the url that we want to test

$ cd testssl 

$ ./testssl.sh <url-to-test>

Once we have the result, just fix the "NOT ok" parts, and rerun the above command. Rinse and repeat until you are fully satisfied with your ssl configuration. 

To get visually better results with grading, just run the qualys ssl server test once you have fully tuned your ssl configuration with testssl.sh.

Friday, January 28, 2022

Disabling old TLS in nginx

To increase nginx security, one of the things that we can configure is disabling old TLS versions. At this moment, TLSv1.3 is the gold standard, and TLSv1 and TLSv1.1 should not be enabled on a production nginx.

To disable TLSv1 and TLSv1.1, just edit /etc/nginx/nginx.conf, find the ssl_protocols line, and change it to look like below

ssl_protocols TLSv1.2 TLSv1.3;
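In context, this directive usually sits inside a server block alongside the certificate configuration; a minimal sketch is shown below (the server name and certificate paths are placeholders, not from the original post):

```nginx
server {
    listen 443 ssl;
    server_name example.com;

    # placeholder paths -- substitute your real certificate and key
    ssl_certificate     /etc/nginx/ssl/example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com.key;

    # allow only modern protocol versions
    ssl_protocols TLSv1.2 TLSv1.3;
}
```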

Test your configuration for any syntax error

sudo nginx -t

And restart your nginx to activate the setting

sudo systemctl restart nginx

In order to quickly check that our nginx no longer supports TLSv1 and TLSv1.1, use an nmap command like the one below (substitute your server's hostname)

nmap --script ssl-enum-ciphers -p 443 <server-name>

Or, we can use one of the free web based SSL test tools: