Monday, December 6, 2021

Create a local repository in CentOS 7 over http

To create a local repository in CentOS 7 over http, you need a web server (httpd) and the createrepo command. Let's assume the IP address of this server is 10.10.10.10.

# yum install httpd createrepo yum-utils -y

Then, create a directory to store the repository files

# mkdir /var/www/html/repos

Create the XML-based rpm metadata for the repository; it acts like an index file that points yum to the rpm files in our repository

# createrepo /var/www/html/repos

Configure httpd to follow symbolic links (FollowSymLinks)

# diff -u /etc/httpd/conf/httpd.conf.ori /etc/httpd/conf/httpd.conf
--- httpd.conf.ori      2021-12-06 12:23:36.704096479 +0800
+++ httpd.conf  2021-12-06 12:21:37.423092872 +0800
@@ -141,7 +141,8 @@
     # http://httpd.apache.org/docs/2.4/mod/core.html#options
     # for more information.
     #
-    Options Indexes FollowSymLinks
+    #Options Indexes FollowSymLinks
+    Options All Indexes FollowSymLinks
 
     #
     # AllowOverride controls what directives may be placed in .htaccess files.


Check for any error
# httpd -t
Restart httpd
# systemctl restart httpd
Sync files from the official repo using reposync (provided by yum-utils, installed earlier), then re-run createrepo to index the newly downloaded packages
# reposync -g -l -d -m --repoid=base --newest-only --download-metadata --download_path=/var/www/html/repos/
# createrepo --update /var/www/html/repos
Allow port 80 in firewall
# firewall-cmd --add-service http
# firewall-cmd --add-service http --permanent
Create a yum repo configuration file
# cat >> /etc/yum.repos.d/local.repo <<EOF
[local]
name=CentOS Apache
baseurl=http://10.10.10.10/repos
enabled=1
gpgcheck=0
EOF

Test the repo. You should see the local repository listed. Then run a yum search command to confirm yum can get package info from the local repo.
# yum repolist
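
To confirm that package information really comes from the local repository, you can, for example, search while enabling only the new repo:

# yum --disablerepo="*" --enablerepo="local" search httpd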

Monday, November 22, 2021

Forward local connection to a remote server that is accessible to public using ssh

When we use standard ssh remote forwarding, the listening ip address on the remote side will always be 127.0.0.1 or localhost, and cannot be accessed using the remote machine's IP address. If you have no idea what this is about, please refer to this guide on how to create a reverse ssh tunnel.


In order to make the remote port accessible from any ip address available on the remote machine, we can use the -g option. It allows remote hosts to connect to locally forwarded ports, which in turn makes our forwarded port available on the non-loopback network interfaces.


Just use this command to achieve that:
$ ssh -R 18080:localhost:8080 myremotemachine -t 'ssh -g -L 8080:localhost:18080'


The options mean:

"ssh -R 18080:localhost:8080 myremotemachine" means that connections to port 18080 on the remote host (myremotemachine) will be forwarded to port 8080 on the local machine

"-t" means force pseudo-terminal allocation, which allows running a command in the remote ssh session

"ssh -g -L 8080:localhost:18080" is run on the remote machine, and re-exposes its local port 18080 on port 8080, on all interfaces (thanks to -g)


To verify, just run the ss command on the remote machine. You will see that port 18080 is listening only on localhost, while port 8080 is listening on all interfaces (0.0.0.0).

$ ss -tulpn | grep 8080 
tcp    LISTEN   0        128               0.0.0.0:8080           0.0.0.0:*      users:(("ssh",pid=20656,fd=4))
tcp    LISTEN   0        128             127.0.0.1:18080          0.0.0.0:*
tcp    LISTEN   0        128                  [::]:8080              [::]:*      users:(("ssh",pid=20656,fd=5))
tcp    LISTEN   0        128                 [::1]:18080             [::]:*

Now you can connect to port 8080 on the remote machine, and you will be tunneled to port 8080 on the local machine via ssh.
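
From any other host that can reach myremotemachine, the tunnel can now be tested; for example, assuming a web service is listening on the local port 8080:

$ curl http://myremotemachine:8080/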


Thursday, November 18, 2021

Checking the operating system of the machine in your network

When you need to know the operating system of the machines connecting to your network, nmap can help. First, install nmap if you have not installed it already.


Then, run below command to perform a TCP scan (-sT) with OS detection (-O)
$ sudo nmap -sT -O 192.168.0.0/24

You will get an output like below (your result will be completely different, this is just an example)
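
For example (a mock-up of nmap's report format, with the MAC address masked):

Nmap scan report for 192.168.0.110
Host is up (0.049s latency).
MAC Address: AC:C1:EE:XX:XX:XX (Xiaomi Communications)
...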

From the above result, we know that a Xiaomi device (probably a phone) is using 192.168.0.110. 




Monday, November 15, 2021

Scanning used ip addresses in your network

To do this, a tool named nmap can be used. This tool can easily be installed using below command:

Ubuntu

$ sudo apt install nmap -y

CentOS/RHEL/Fedora

$ sudo yum install nmap -y

Once installed, to scan your network for used IP addresses, just run below command. Please change the network address to suit your environment.

$ nmap -sn 192.168.0.0/24 

You will get output like below, and in this case you know that 5 IPs in your network have been used.
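
For example (a mock-up of the report format):

$ nmap -sn 192.168.0.0/24
Nmap scan report for 192.168.0.1
Host is up (0.0010s latency).
...
Nmap done: 256 IP addresses (5 hosts up) scanned in 2.50 seconds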





Tuesday, November 9, 2021

"ERROR 1049 (42000): Unknown database mydatabasename" when importing sql data into mysql/mariadb

Mysqldump is a tool frequently used to create a backup of a mariadb or mysql database. Using it is pretty straightforward; just run below command:

$ mysqldump -u root -p mydatabasename > mydatabasename.sql 

The above command is fine, and we can always restore the data from the sql file into a database provided we have the database already in place, using below command:

$ mysql -u root -p mydatabasename <  mydatabasename.sql

A problem appears when we transfer the sql file to another server which does not have the database already created. If we try to import the sql file without the database already existing, we will get below error:

ERROR 1049 (42000): Unknown database mydatabasename

We can prevent this by adding an option to our mysqldump command: "--databases", or "-B" for short. To test it out, we can use below commands (dump the db, drop the db, and import the db back from the sql file):

$ mysqldump -u root -p --databases mydatabasename > mydatabasename.sql

$ mysqladmin -u root -p drop mydatabasename

$ mysql -u root -p < mydatabasename.sql     

This time, you would not get the above error, since the "--databases" option adds a "CREATE DATABASE" statement to the sql file, and that statement will create the database if it does not already exist.
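
To see the statement that makes the difference, you can grep the dump file:

$ grep -i "CREATE DATABASE" mydatabasename.sql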

 

Tuesday, November 2, 2021

Combining video and audio file into one using ffmpeg

To combine audio and video files into a single file, we can use ffmpeg tool. 

First, we need to install ffmpeg

$ sudo apt update && sudo apt install ffmpeg

Then we can combine both files into a single file (-codec copy tells ffmpeg to copy the audio and video streams from the sources into the combined file without re-encoding)

$ ffmpeg -i audio.mp3 -i video.mp4 -codec copy audiovideo.mp4
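
To confirm the combined file carries both streams, inspect it with ffprobe, which is installed together with ffmpeg:

$ ffprobe audiovideo.mp4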

 

 

Monday, October 18, 2021

Connect to a wifi by scanning a QR code in Linux

There is a project on github that contains a script to easily accomplish this task. The project is called wifi-qr, by kokoye2007.


To use the script, we need to clone the project from github.

$ git clone https://github.com/kokoye2007/wifi-qr

Once cloned, go into the project directory.

$ cd wifi-qr

To scan a QR code, and connect to the wifi information contained in the code 

$ ./wifi-qr s

A camera interface will be displayed. Show our wifi QR code to the computer's webcam, and the script will read the QR code and automatically connect to the wifi using the information in the QR code.


Apart from scanning a QR code and connecting to the wifi, this script has a lot more functions. You can read about all of them in the project's README. Thank you to kokoye2007 for this excellent tool.

 

Friday, October 15, 2021

Scanning QR Code in Ubuntu Linux

To get the information out of a qr code, first and foremost we need to save the qr code as an image file. One of the ways is to use a webcam, and an app called cheese to take the picture. Then we need to install zbar-tools to extract the information from the qr code image.


Install cheese

$ sudo apt update; sudo apt install cheese -y

Install zbar-tools

$ sudo apt install zbar-tools -y


Then, take a picture of the qr code using cheese via our webcam.


Once we have the qr code saved in a file, use zbarimg to extract the information from the qr code

$ zbarimg Downloads/frame.png 
QR-Code:https://www.linuxwave.info
scanned 1 barcode symbols from 1 images in 0.04 seconds


You can see in the above output that the qr code contains a url to this website.

 


 


Tuesday, October 12, 2021

Create QR code for wifi in linux

To easily share wifi credentials, a QR code can be used. A QR code can contain the information of the wifi, and can be scanned easily using any qr code reader in mobile phones. To make the QR code, we need a tool called qrencode.


First, prepare the wifi information using below format:

WIFI:S:{SSID name of your network};T:{security type - WPA or WEP};P:{the network password};;

For example, my wifi SSID is mysecurewifi, it is using WPA, and the password is mysecurewifipassword. 

 WIFI:S:mysecurewifi;T:WPA;P:mysecurewifipassword;;


Install qrencode

$ sudo apt update && sudo apt install qrencode


Provide the above information against qrencode command to generate a qr code image file called mywifi.png

$ qrencode -o mywifi.png 'WIFI:S:mysecurewifi;T:WPA;P:mysecurewifipassword;;'

 

If the generated image is too small, you can increase the size of each dot using the '-s' option

$ qrencode -o mywifi.png -s 10 'WIFI:S:mysecurewifi;T:WPA;P:mysecurewifipassword;;'
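
To double check the generated image, it can be decoded with zbarimg, from the zbar-tools package covered in another post on this blog:

$ zbarimg mywifi.png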


Share the file to anyone, or print it for the people who would like to use your wifi. 

Cheers! 

Thursday, September 30, 2021

Accessing android phone via ssh using termux

The ability to access your phone remotely is very useful. I frequently use this technique to copy files to and from my phone, and to check the battery status while the phone is charging, without actually holding the phone.

First and foremost, we have to install termux. You can install it from the google play store.


Once installed, fire up termux, and install openssh:

$ pkg install openssh

Then, create a password for the default user (the username varies per installation; on my phone it is u0_a274).

$ passwd

Then, launch the sshd daemon using below command to start the openssh server on the phone, on port 1234 (the port can be any unprivileged port, i.e. above 1024).

$ sshd -p 1234

Check your ip address using below command.

$ ip address 

Let's say our phone's ip address is 192.168.0.179. Make sure your desktop/laptop is connected to the same network as the phone's wifi connection. Fire up a terminal, and ssh to the ip address.

$ ssh u0_a274@192.168.0.179 -p 1234 



Congratulations, you are now connected to your phone, via ssh.
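
Once connected, you can also copy files with scp over the same port (the file path here is just an illustration, and termux must have been granted storage permission):

$ scp -P 1234 u0_a274@192.168.0.179:/sdcard/DCIM/Camera/photo.jpg .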

 

Monday, September 6, 2021

Enabling passwordless ssh for Synology DSM version 6

Synology DSM 6.0 does not allow ssh by default, let alone passwordless ssh. In order to enable it, so that we can have automated backups to the Synology NAS, below are the steps:

1. Enable User Home in User → Advanced


 
2. Enable SSH in "Terminal & SNMP" → Terminal


3. (Recommended) Create a user, and add the user into administrator group by going to User → Create. As an example, I created a user called dbbackup.



4. Change the permission of the user's home directory to 755. You can do this by ssh'ing into synology as admin, and running below commands
$ ssh administrator@synology-ip
$ sudo chmod 755 /var/services/homes/dbbackup

$ exit 


5. Ssh into synology as your new user. Edit below lines in /etc/ssh/sshd_config, and save the file.
$ ssh dbbackup@synology-ip
$ sudo vi /etc/ssh/sshd_config
...
PubkeyAuthentication yes
AuthorizedKeysFile .ssh/authorized_keys
ChallengeResponseAuthentication no
...
6. Restart sshd service
$ sudo synoservicectl --restart sshd

7. Exit from the synology shell, create an ssh key pair on the client machine, and press enter for all the questions

$ ssh-keygen

8. Transfer the public key into synology

$ ssh-copy-id dbbackup@synology-ip

9. Test your ssh connection to synology. You should be able to ssh into synology without a password.

$ ssh dbbackup@synology-ip 

Friday, August 6, 2021

Sharing linux directory with windows through remote desktop protocol

Rdesktop is a remote desktop protocol (RDP) client for linux, used to connect to windows machines. With the usual windows remote desktop client, users can easily drag and drop files between the remote and local machines, but not with rdesktop.

To overcome this, rdesktop comes with an option to share a local linux directory with the remote windows machine, so that windows can access the directory once rdesktop is connected.

To do that, just specify -r while connecting using rdesktop

$ rdesktop -u administrator -r disk:download=/home/mine/Download mywindows:3389

The above command will share /home/mine/Download with the mywindows machine, where it shows up as a drive named download.
I can now open the download folder, and it will show me the Download folder in my linux machine (I call it workmachine).

If you do not have rdesktop command, you can easily install it in ubuntu using 

$ sudo apt install rdesktop -y

Sunday, July 11, 2021

Setup an openssh-server in a docker container

This is mainly used for testing only. 

First, create a Dockerfile. I am using the ubuntu:20.04 image for this

$ cat >> Dockerfile <<EOF
FROM ubuntu:20.04
RUN apt update && apt install openssh-server -y
RUN useradd myuser && echo "myuser:123456" | chpasswd && echo "root:123456" | chpasswd && mkdir /run/sshd
EXPOSE 22
CMD /usr/sbin/sshd -D
EOF

Then, build the image

$ docker build -t mysshimage .

Finally, run a container based on the image, and ssh into it. Use 123456 as password.

$ docker run -dit -p 1022:22 mysshimage

$ ssh myuser@localhost -p 1022 

To become root, just use the su - command once you are logged in as myuser.


Monday, July 5, 2021

Combining pdf files in linux

One of the tools I always use to combine pdf files is pdftk.

To install pdftk, just run apt install

$ sudo apt update && sudo apt install pdftk -y

To combine pdf files into one:

$ pdftk file1.pdf file2.pdf file3.pdf cat output combined.pdf

where file1.pdf, file2.pdf and file3.pdf are the input files, and combined.pdf is the result of the combination process.

Thursday, July 1, 2021

Git clone over ssh socks proxy

This is useful for a machine that needs to clone a repository from github but does not have an internet connection.


First, we must identify another machine that can access github.com; we can call this machine proxy-server.

Then, establish a socks proxy from our no-internet-server
$ ssh -qN -D 1234 proxy-server

The above command will create a socks proxy at localhost port 1234

Use the git command with socks proxy. Let's say we want to clone the 30-seconds-of-code repository, run below command in a new shell
$ git -c http.proxy=socks5h://localhost:1234 clone https://github.com/30-seconds/30-seconds-of-code

Once done, press ctrl-c in the first shell, to terminate the socks proxy
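
If many repositories need to go through the proxy, the setting can also be stored in the git configuration instead of being passed per command (remember to unset it when done):

$ git config --global http.proxy socks5h://localhost:1234
$ git config --global --unset http.proxy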

Tuesday, June 29, 2021

Converting putty formatted ppk private key into ssh formatted private key

Putty uses a different private key format than openssh. To use a putty private key (usually with a .ppk extension) with openssh, we need to convert it into an openssh formatted private key.

To do this, we need putty tools. To install putty tools:

# apt install putty-tools -y


To convert, just use a command called puttygen, which is part of the putty-tools package

# puttygen myprivatekey.ppk -O private-openssh -o myprivatekey.priv

whereby myprivatekey.ppk is the private key in putty format, -O specifies what output type we want puttygen to produce, and -o specifies the output file.


Once produced, we can test the private key using ssh command

# ssh myuser@myserver -i myprivatekey.priv
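
puttygen can also emit the matching openssh public key from the same ppk file, handy for populating authorized_keys on servers:

# puttygen myprivatekey.ppk -O public-openssh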

Saturday, June 26, 2021

Adding custom nameserver in systemd-resolve

The old /etc/resolv.conf is now managed by the systemd-resolved service, which is part of systemd. In order to add a new nameserver, please follow below steps


1. Create a directory named /etc/systemd/resolved.conf.d/

# mkdir /etc/systemd/resolved.conf.d


2. Add a new configuration file for your new dns servers. Let's say we want to add google's dns ip addresses, which are 8.8.8.8 and 8.8.4.4 

# cat >> /etc/systemd/resolved.conf.d/mynameserver.conf <<EOF
[Resolve]
DNS=8.8.8.8 8.8.4.4
EOF


3. Restart the service

# systemctl restart systemd-resolved


4. Verify that your dns is now being used by the system

# systemd-resolve --status
Global
       LLMNR setting: no
MulticastDNS setting: no
  DNSOverTLS setting: no
      DNSSEC setting: no
    DNSSEC supported: no
         DNS Servers: 8.8.8.8
                      8.8.4.4
...


For more information about which options can be included in the configuration file, please refer to the resolved.conf man page.
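
To confirm that resolution actually goes through the new servers, resolve a name via the service (on newer systemd versions the equivalent command is resolvectl query):

# systemd-resolve www.google.com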

Friday, June 25, 2021

Testing ssl certificate and key using nginx docker

This is assuming our certificate is for www.mydomain.com, our key is mydomain.key and our domain cert is mydomain.crt.

1. Get the domain certificate and your private key. The key was generated when you created the CSR to apply for the ssl certificate, and the certificate is sent to you by your ssl provider

$ ls 

mydomain.crt mydomain.key


2. If your provider does not give you a bundled certificate, you need to get the root and intermediate certificates from the provider, since nginx needs the root, intermediate and domain certificates to be in the same file for ssl to work.


3. Combine the domain certificate, intermediate certificate and root certificate into one file; let's call the file combined.crt

$ cat mydomain.crt intermediate.crt root.crt > combined.crt


4. Remove any ^M (carriage return) characters from the combined.crt file

$ sed -i 's/\r$//' combined.crt


5. Start an nginx docker container

$ docker run -dit --name nginx -v ${PWD}:/ssl nginx:latest


6. Get the ip address of the docker container

$ docker inspect nginx | grep -w IPAddress

            "IPAddress": "172.17.0.2",

                    "IPAddress": "172.17.0.2",


7. Put the reference of our domain to the container's ip address in /etc/hosts
# cat >> /etc/hosts <<EOF
172.17.0.2 www.mydomain.com
EOF

8. Prepare an nginx config file with ssl setting
$ cat >> mydomain.com.conf << EOF
server {
    listen 80;
    server_name  www.mydomain.com;
    location / {
        root   /usr/share/nginx/html;
        index  index.html index.htm;
    }
}

server {
    listen 443 ssl;
    server_name  www.mydomain.com;
    ssl_certificate /ssl/combined.crt;
    ssl_certificate_key /ssl/mydomain.key;
    location / {
        root   /usr/share/nginx/html;
        index  index.html index.htm;
    }
}
EOF

9. Create a symlink for the configuration file into /etc/nginx/conf.d inside the container
$ docker exec -it nginx ln -s /ssl/mydomain.com.conf /etc/nginx/conf.d

10. Test the configuration inside the container
$ docker exec -it nginx nginx -t

11. Restart nginx container, if the above command returned no error
$ docker restart nginx

12. Make sure the container restarted successfully
$ docker ps

13. Open up a browser and browse to https://www.mydomain.com. If all is good, you should see the padlock icon beside the domain name, and the connection status will be secure
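
The certificate chain can also be checked from the command line on the docker host (where /etc/hosts was modified):

$ echo | openssl s_client -connect www.mydomain.com:443 -servername www.mydomain.com 2>/dev/null | openssl x509 -noout -subject -issuer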


Thursday, June 24, 2021

Checking if private key matches ssl certificate

To check whether our private key and ssl certificate match each other, we need to compare two command outputs:


1. Run below command against the private key
$ openssl rsa -noout -modulus -in private.key | openssl md5

2. Run below command against the ssl certificate
$ openssl x509 -noout -modulus -in server.cert | openssl md5

The output of both commands should be the same, showing that the key and cert match.
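
The comparison can also be scripted into a single check, using the same file names:

$ [ "$(openssl rsa -noout -modulus -in private.key | openssl md5)" = "$(openssl x509 -noout -modulus -in server.cert | openssl md5)" ] && echo match || echo MISMATCH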

Wednesday, June 23, 2021

Increase file upload size in wildfly

To increase file upload size limit in wildfly, the steps are as follows:

1. Go to the bin directory in wildfly (assuming your wildfly installation is located in /opt/wildfly), and connect to the wildfly CLI

# cd /opt/wildfly/bin

# ./jboss-cli.sh -c


2. Go to /subsystem=undertow/server=default-server/http-listener=default

[standalone@localhost:9990 /] cd /subsystem=undertow/server=default-server/http-listener=default


3. Increase max-header-size to a higher value

[standalone@localhost:9990 /] :write-attribute(name=max-header-size,value=30000000)
{
    "outcome" => "success",
    "response-headers" => {
        "operation-requires-reload" => true,
        "process-state" => "reload-required"
    }
}


4. Increase max-post-size to a higher value

[standalone@localhost:9990 /] :write-attribute(name=max-post-size,value=30000000)
{
    "outcome" => "success",
    "response-headers" => {
        "operation-requires-reload" => true,
        "process-state" => "reload-required"
    }
}


5. Check that both values are now increased
[standalone@localhost:9990 /] ls
max-header-size=30000000
...
max-post-size=30000000  

You are all set. Since the responses above report "reload-required", reload or restart wildfly, then test an upload using your application to verify the change.
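
The reload can be issued from the same CLI session:

[standalone@localhost:9990 /] reload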

Saturday, June 19, 2021

Sending email from command line

To send a simple email easily from the command line, you can use the sendmail command. This command is part of postfix, and should not be confused with the sendmail mail server. You can get this command by installing postfix.

# yum install postfix -y

Using the sendmail command is very simple: just run sendmail with a recipient's email address, type your message, and end with a dot (.) on a line by itself to send the message.
# sendmail myemail@myserver.com
SUBJECT: This is a test email 
Please do not reply, this is just a test email.
.

Check maillog to see if the email is being sent
# tail /var/log/maillog
...
Jun 19 08:19:08 myotherserver postfix/smtp[541113]: 35B81402B95B: to=<myemail@myserver.com>, ..., dsn=2.0.0, status=sent (250 2.0.0 Ok: queued as 92D95960492)
Jun 19 08:19:08 myotherserver postfix/qmgr[541108]: 35B81402B95B: removed
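
sendmail also reads the message from standard input until end-of-file, so the same email can be sent non-interactively; for example, assuming message.txt holds the subject header and body:

# sendmail myemail@myserver.com < message.txt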

Tuesday, June 8, 2021

Putting a currently running process in the background

Sometimes a process seems to hang and the terminal becomes unresponsive; you know it is running, but you need to log out and want the process to continue running in the background.

This calls for a "kill" command with SIGSTOP and SIGCONT signals.

1. First, find the pid of the process

$ ps -ef | grep myprocess

1234

2. Once you have the pid, issue a kill with SIGSTOP signal to stop the process, assuming the process id is 1234.

$ sudo kill -SIGSTOP 1234

3. Issue a kill with SIGCONT to continue the process back in the background

$ sudo kill -SIGCONT 1234

4. You can check the backgrounded process using the jobs command, in the shell the process was started from

$ jobs -l

[1]+ 1234 Stopped (signal)        myprocess
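
Since the goal is to log out with the process still running, also tell the shell not to send the process a SIGHUP when you exit; one way, in the same shell, is bash's disown builtin:

$ disown -h %1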

5. To bring the process back to the foreground, just use the fg command, where %1 refers to the job number shown by the jobs command.

$ fg %1


You can refer to kill and signal man pages for more information.

Monday, May 17, 2021

Cleaning docker overlay2 disk usage

After using docker for a while, one thing I noticed is that the disk space usage in /var/lib/docker/overlay2 is quite high. For just 3 running containers and a couple of images, my overlay2 disk usage was around 30GB, which is quite high. To reclaim the space, we can clear off unused containers, images, volumes and other docker components, but doing that by hand is a daunting task.

Fortunately, docker comes with some tools that can ease up our work maintaining the software. The command to run it as per below:

$ docker system prune --all --volumes

This command will clear all unused components in docker, including unused volumes and images. Rest assured that running containers, and the images used by them, will be spared. From the man page, we can see that this command's purpose is cleaning up unused data in docker.
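
To see how much space docker components are taking, before and after pruning, docker can report its own disk usage:

$ docker system df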

Sunday, May 16, 2021

Downloading torrent using command line

To download torrents using the command line, the easiest tool to use is transmission-cli. This tool is available in debian based and redhat based distros alike.

To use this tool, you have to install it first.

In debian based distro:

$ sudo apt install transmission-cli -y

In redhat based distro:

$ sudo yum install transmission-cli -y 

To use it with torrent file, first download the torrent file, and then run transmission-cli against the file

$ wget https://releases.ubuntu.com/20.04/ubuntu-20.04.2-live-server-amd64.iso.torrent

$ transmission-cli ubuntu-20.04.2-live-server-amd64.iso.torrent


To use it with a magnet link, just run transmission-cli against the magnet link. Quote the link, otherwise the & in it will be interpreted by the shell.

$ transmission-cli 'magnet:?xt=urn:btih:eb6354d8d9b9427458af8bee90457101a4c1e8e3&dn=archlinux-2021.05.01-x86_64.iso'


Saturday, May 8, 2021

Listing docker containers according to fields

A command like 'docker ps' is a good tool to check on your running containers. It is visually pleasant if you do not have many containers, but what if you have hundreds, and you just want to print the names of the containers, or better, the names and ids?


We can use the --format flag in this situation. This flag is available on pretty much every docker command that produces output to stdout, so you can filter what you want to see.

To use this flag, you just need to follow below format:
$ docker ps --format '{{json .Names}}'

"hardcore_carson" 


whereby ".Names" is the field that you want displayed. For example, if you want to list just the ID of all the running containers, you can use:
$ docker ps --format '{{json .ID}}'

"e914bd4963d4" 


You can see that the field names differ from the column headers displayed without the --format flag.
$ docker ps
CONTAINER ID   IMAGE     COMMAND     CREATED          STATUS          PORTS     NAMES
e914bd4963d4   alpine    "/bin/sh"   29 minutes ago   Up 29 minutes             hardcore_carson

To know which fields are available to be used:
$ docker ps --format '{{json .}}'

{"Command":"\"/bin/sh\"","CreatedAt":"2021-05-08 11:32:12 +0800 +08","ID":"e914bd4963d4","Image":"alpine","Labels":"","LocalVolumes":"0","Mounts":"","Names":"hardcore_carson","Networks":"bridge","Ports":"","RunningFor":"33 minutes ago","Size":"0B (virtual 5.61MB)","State":"running","Status":"Up 33 minutes"} 


To list 2 (or more) fields:
$ docker ps --format '{{json .ID}} {{json .Names}}'

"e914bd4963d4" "hardcore_carson" 


You can also use --format without the json keyword; the only difference is that the output will not be double quoted (the quotes are not easy on the eyes if you have many fields)

$ docker ps --format '{{.ID}} {{.Names}}'

e914bd4963d4 hardcore_carson
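
The format string also understands a table directive, which brings the column headers back while keeping only the chosen fields:

$ docker ps --format 'table {{.ID}}\t{{.Names}}'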

 

Thursday, April 29, 2021

Checking if our password is based on a dictionary word on linux command line

There is a package in linux that can be used to check whether our password is based on a dictionary word, and that is cracklib (or libcrack2 in debian based distros).


To install the package:
  • In redhat family:
# yum install cracklib -y
  • In debian family:
# apt install libcrack2

To use it, just run cat and pipe it to the cracklib-check command, then type your password. Do not forget to press ctrl-d or ctrl-c to get back to the shell once done.
$ cat | cracklib-check

password 

password: it is based on a dictionary word
$ cat | cracklib-check

Xeir3oongex*

Xeir3oongex*: OK
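
The check can also be done in a single line by piping the candidate password in (keeping in mind it will end up in your shell history):

$ echo "Xeir3oongex*" | cracklib-check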

Thursday, April 22, 2021

Adding Rules to Firewalld before Starting It

This is really useful if we want to start firewalld on a machine we are connected to via ssh. If we start firewalld without allowing ssh, we will be locked out of the machine.

The solution is to use a command called firewall-offline-cmd. This tool behaves like firewall-cmd, except that it works while the daemon is not running.

To avoid being locked out of a remotely accessed machine, we should first allow ssh in firewalld

$ sudo firewall-offline-cmd --add-service ssh

We are now safe to start firewalld

$ sudo systemctl start firewalld

Once started, we can make the rule permanent across firewalld restarts 

$ sudo firewall-cmd --add-service ssh --permanent

Make firewalld start automatically on every server boot 

$ sudo systemctl enable firewalld
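
To confirm the rule is in place once the daemon is up:

$ sudo firewall-cmd --list-services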


Wednesday, April 21, 2021

Changing Kernel on Next Boot

To choose which kernel version you want to boot into on next reboot, below are the steps


1. Check what kernel version is available

# grep ^menuentry /boot/grub2/grub.cfg

2. Choose which kernel you want to boot from; remember that the list from the above command starts from 0. Let's say we want the second kernel

# grub2-set-default 1

3. Rebuild grub.cfg

# grub2-mkconfig -o /boot/grub2/grub.cfg

4. Reboot the server

# reboot 


Your server will reboot into the kernel version that you chose above. 
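
After the reboot, you can confirm the running kernel version with:

# uname -r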

Sunday, April 18, 2021

Running php-fpm and Nginx in Docker

Php-fpm is an advanced and highly efficient processor for php. In order for your php files to be viewable in a web browser, php-fpm needs to be coupled with a web server, such as nginx. In this tutorial we will show how to set up php-fpm and nginx in docker.

1. Create a directory for your files 

$ sudo mkdir phpfpm

2. Create a network for the containers to use. This makes sure that we can refer to the containers by name in the configuration file.

$ docker network create php-network

3. Create nginx config file

$ cd phpfpm

$ cat > default.conf <<'EOF'
server {
    listen  80;
    # this path MUST match the document root inside the fpm
    # container (the volume mount below), even though it
    # doesn't exist in the nginx container itself.
    root /var/www/html;
    location / {
        try_files $uri /index.php$is_args$args;
    }
    location ~ ^/.+\.php(/|$) {
        fastcgi_pass fpm:9000;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}
EOF

4. Create an index.php file with some random php code (we are using phpinfo() to make it easier)

$ cat > index.php <<EOF

<?php phpinfo(); ?> 

EOF

5. Run a php-fpm container (here we use the official php:fpm image), in detached and interactive mode, using php-network, and mount /home/user/phpfpm to /var/www/html in the container

$ docker run -dit --name fpm --network php-network -v /home/user/phpfpm:/var/www/html php:fpm

6. Run an nginx container in detached and interactive mode, using php-network, and mount /home/user/phpfpm/default.conf to /etc/nginx/conf.d/default.conf in the container

$ docker run -dit --name nginx --network php-network -v /home/user/phpfpm/default.conf:/etc/nginx/conf.d/default.conf -p 80:80 nginx

7. Open a browser and browse to http://localhost; you should now be able to see the phpinfo page. 

Of course, there is an easier way to set this up using docker-compose. We will cover that in another post.


Saturday, April 17, 2021

Hiding Apache2 (httpd) Version in HTTP Header

One of the basic concepts of cybersecurity is to hide as much information about your system from public view as possible. For apache2 (httpd), this is pretty easy to do.

1. First, open /etc/httpd/conf/httpd.conf

$ sudo vi /etc/httpd/conf/httpd.conf

2. Then, append below lines to the file

...

ServerTokens Prod

ServerSignature Off

3. Save the file

4. Test the configuration, to make sure there is no typo that could cause httpd to fail to start
$ sudo httpd -t

5. Restart httpd to activate the settings

$ sudo systemctl restart httpd

6. Finally, you can verify that the webserver's version number is hidden using curl or wget 

$ curl --head http://www.mydomainname.com

...

Server: Apache

... 

 

Thursday, April 15, 2021

Checking Web Server Version Using Command Line

We usually use these methods to verify what is being exposed in our HTTP headers to the public. There are 2 tools that can be used: curl and wget.


To use wget:
$ wget --server-response --spider http://www.mydomain.com
...
Server: Apache/2.4.6 (CentOS) OpenSSL/1.0.2k-fips mod_fcgid/2.3.9
...

To use curl:
$ curl --head http://www.mydomain.com
...
Server: Apache/2.4.6 (CentOS) OpenSSL/1.0.2k-fips mod_fcgid/2.3.9
...

Wednesday, April 14, 2021

Accessing MTP Mounted Device Via Command Line

When you connect your android phone to a linux box using a usb cable, the storage of the phone will appear in your file manager (thanks to automount). It is easily accessible from there, but what if you want to access it via the command line? Where is it located?

To know the location of the MTP mounted storage, you need to know your user id

$ id

uid=1000(user) gid=1000(user) groups=1000(user),4(adm),24(cdrom),27(sudo),30(dip),46(plugdev),114(lpadmin),134(sambashare)

From the above command, the user ID is 1000. The MTP mounted device can be accessed at /run/user/<user ID>/gvfs

$ ls /run/user/1000/gvfs/

'mtp:host=Android_Android_28f5d3440504'

Just go into the 'mtp:host=Android_Android_28f5d3440504' directory (the name will differ from device to device), and you will see your phone's storage.

Thursday, April 1, 2021

Getting Access Denied Error when Using systemctl as root

I got this error on one of our servers, when trying to restart nginx

# systemctl status nginx

Failed to get properties: Access denied


This did not make sense, since I am the root user. After some searching, a few suggestions came up. 

The first suggestion was to re-execute the systemd manager:

# systemctl daemon-reexec


That did not work for me. Another suggestion was to disable selinux temporarily, but this also did not work for me:

# setenforce 0 


The last thing that I tried (and that actually worked) was sending SIGTERM to systemd, which makes it restart by itself:

# kill -TERM 1


If you happen to encounter this sort of error, you can try all of the above. Some might suit you better than others.

Installing elinks in CentOS 8

Elinks is a text based web browser, and it is now available in the powertools repository. Powertools is not enabled by default, thus elinks cannot be installed just by using a standard yum install command.


List all available repositories

$ sudo yum repolist --all

...

powertools

...


Install elinks while enabling powertools repository temporarily

$ sudo yum install --enablerepo=powertools elinks -y

...

Installed:

  elinks-0.12-0.58.pre6.el8.x86_64                                                     gpm-libs-1.20.7-15.el8.x86_64                                                    

Complete!


Congratulations, you can now use elinks to view any website, but only in text based mode.
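
If you plan to install more packages from powertools, the repository can also be enabled permanently using the config-manager plugin (a quick sketch):

$ sudo yum install dnf-plugins-core -y

$ sudo yum config-manager --set-enabled powertools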

Wednesday, March 10, 2021

Fixing Wrong Timezone in Docker Logs

I had this issue where my running container's logs were using UTC as the timezone, which made the troubleshooting process quite troublesome.

After some experimenting, I found that we have to set the timezone when we first run the container, to make the logs record timestamps in the timezone we prefer.

For a currently running container, the safest way is to commit it to an image, and run a new container based on that image.

1. My current container's logs

$ docker logs mycontainer

...

2021-03-09 12:01:08.939 UTC [1283819] WARNING:  terminating connection because of crash of another server process

...

2. Back up the current container to an image called mycontainer-backup:20210310

$ docker commit mycontainer mycontainer-backup:20210310

3. Stop the current container, to avoid port clash (if any)

$ docker stop mycontainer

4. Run a new container, based of the backup image, this time we need to specify the timezone variable

$ docker run -dit --name mynewcontainer -e TZ=Asia/Kuala_Lumpur mycontainer-backup:20210310

5. Done, we can verify the log's timezone, just to be sure

$ docker logs mynewcontainer

...

2021-03-10 08:03:14.429 +08 [1] LOG:  listening on IPv4 address "0.0.0.0",

...
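
The container's clock can also be checked directly (assuming the image ships the date utility):

$ docker exec mynewcontainer date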

Monday, March 8, 2021

Postgresql 13 Streaming Replication using Docker

In this setup, we will create a 2 nodes postgresql streaming replication, in docker. This article will make use of postgresql image version 13.

1. Create a network, and take note of the network ip range
$ docker network create mynet

$ docker network inspect mynet 

2. Make a directory for pgmaster's data

$ sudo mkdir pgmasterdata

3. Create a container called pgmaster

$ docker run -dit -v "$PWD"/pgmasterdata/:/var/lib/postgresql/data -e POSTGRES_PASSWORD=abc -p 5432:5432 --restart=unless-stopped --network=mynet --name=pgmaster postgres 
4. Backup and edit pgmaster's postgresql.conf with below settings
$ sudo cp pgmasterdata/postgresql.conf pgmasterdata/postgresql.conf.ori
$ cat > postgresql.conf <<EOF
listen_addresses = '*'
port = 5432
max_connections = 50
ssl = off
shared_buffers = 32MB
# Replication Settings - Master
wal_level = hot_standby
max_wal_senders = 3
EOF

$ sudo cp postgresql.conf pgmasterdata/postgresql.conf 

5. Login to pgmaster and create a user for replication
$ docker exec -it pgmaster psql -U postgres -h localhost -d postgres
postgres=# create role replicator with replication password 'abc';
postgres=# \q
 
6. Backup and edit pgmaster's pg_hba.conf with ip range from step 1
$ sudo cp pgmasterdata/pg_hba.conf pgmasterdata/pg_hba.conf.ori
$ echo "host    replication  all  172.16.0.0/16  trust" | sudo tee -a pgmasterdata/pg_hba.conf 
7. Restart pgmaster container
$ docker restart pgmaster
8. Run a backup of the master into /pgslavedata inside the pgmaster container

$ docker exec -it pgmaster bash

# mkdir /pgslavedata 

# pg_basebackup -h pgmaster -D /pgslavedata -U replicator -v -P --wal-method=stream

9. Copy /pgslavedata from pgmaster to the host

$ docker cp pgmaster:/pgslavedata pgslavedata

10. Tell pgslave that it is a slave

$ sudo touch  pgslavedata/standby.signal

11. Edit postgresql.conf in pgslavedata

$ sudo cp pgslavedata/postgresql.conf pgslavedata/postgresql.conf.ori

$ cat > postgresql.conf <<EOF
listen_addresses = '*'
port = 5432
max_connections = 50
ssl = off
shared_buffers = 32MB
# Replication Settings - Slave
hot_standby = on
primary_conninfo = 'host=<master ip> port=5432 user=replicator password=abc'
EOF

$ sudo cp postgresql.conf pgslavedata/postgresql.conf

12. Start pgslave

$ docker run -dit -v "$PWD"/pgslavedata/:/var/lib/postgresql/data -e POSTGRES_PASSWORD=abc -p 15432:5432 --network=mynet --restart=unless-stopped --name=pgslave postgres

13. Check replication state in pgmaster

$ docker exec -it pgmaster psql -h localhost -U postgres -d postgres -c "select usename,state from pg_stat_activity where usename = 'replicator';"

  usename   | state  
------------+--------
 replicator | active 

14. Verify the setup by creating a database in pgmaster, and checking that the same database appears in pgslave. 
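
Another quick check: pg_is_in_recovery() should return t on the slave and f on the master

$ docker exec -it pgslave psql -U postgres -c "select pg_is_in_recovery();"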

15. To promote pgslave if pgmaster is down, simply run the "pg_ctl promote" command as the postgres user

$ docker exec -it -u postgres pgslave pg_ctl promote

Saturday, March 6, 2021

Configuring Mysql Asynchronous Replication

In this exercise, we will use one master and one slave. The addresses are as follow:

master: 10.0.0.10

slave: 10.0.0.20


1. Make sure mysql-server is installed in both machines

2. In master, check the server_id

$ mysql -u root -p

mysql> show variables like 'server_id';

3. In master, create a user for replication

mysql> create user replicator@'%' identified with 'mysql_native_password' by 'mypassword';

4. In master, grant the user the replication slave privilege

mysql> grant replication slave on *.* to replicator@'%';

5. In slave, change the server_id to a number different from the master's

$ mysql -u root -p

mysql> set global server_id = 20;

6. Configure the slave with the master's details

mysql> change master to master_host='10.0.0.10',master_user='replicator',master_password='mypassword';

7. Start the slave

mysql> start replica;

8. Check the slave status. Make sure Slave_IO_Running and Slave_SQL_Running both show Yes.

mysql> show slave status\G

9. Test the setup. Create a database, add a table and some data on the master, and check that the data gets replicated to the slave. 
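
For example (replitest is just an illustrative name), on the master:

mysql> create database replitest;

Then on the slave:

mysql> show databases like 'replitest';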

Sunday, February 14, 2021

Connecting to GlobalProtect VPN in Linux Mint

I encountered the GlobalProtect (GP) vpn while working on a project, and somehow the vpn portal does not have any linux client for me to connect to the server. They have windows and mac clients though, so I searched around for a solution.


After a while, I stumbled upon a couple of posts stating that the openconnect client can connect to a GP vpn.

Installing openconnect is fairly easy. Just fire up your terminal, and use below command to install the openconnect client
$ sudo apt install openconnect -y

Once installed, you just have to use below command to connect to your GP vpn
$ sudo openconnect --protocol gp -u foo vpn.server.com

You will get a warning about "Certificate failed verification"; just answer yes
Certificate from VPN server "vpn.server.com" failed verification.
Reason: signer not found
To trust this server in future, perhaps add this to your command line:
    --servercert pin-sha256:1YWmjjGL3wppl245dRc3/p+mytteBnvaVz456DQY+wutt=
Enter 'yes' to accept, 'no' to abort; anything else to view: yes 

It will later ask for your password; just put in your password
Connected to HTTPS on vpn.server.com
Enter login credentials
Password: 

You will know that you are connected if you see something resembling the line below
Connected as 192.168.100.72, using SSL, with ESP in progress

Try to access your internal server, and you should be able to.

Rejoice!

Saturday, February 6, 2021

Displaying Text on Remote Desktop over SSH

I use this trick to warn my kids that their computer playing time is almost over. The application is called zenity, and we need to set our DISPLAY environment variable to :0 beforehand so that the message appears on their screen.

1. ssh into the machine

$ ssh foo@machine

2. Set the DISPLAY environment

$ export DISPLAY=:0

3. Use zenity to display a message on their screen. In this case, I use a warning style message, displayed for 2 seconds

$ zenity --warning --timeout=2 --text="Computer will be shut down in 15 minutes"

Monday, January 25, 2021

Replacing Newline with Space

Let's say we have a list of words like below

$ cat animals
cat
fish
zebra
monkey

and we want it arranged on one horizontal line, separated by spaces. We can achieve that by using the tr command.
$ cat animals | tr "\n" " " 
cat fish zebra monkey

What happened is that \n (the symbol for newline) was replaced with " ", the symbol for a space. 
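
The same command works in the other direction too; for example, to put each word back on its own line:

$ echo "cat fish zebra monkey" | tr " " "\n"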

Sunday, January 24, 2021

Generating Certificate Signing Request (CSR) for Multi Domain

For a multi domain certificate, we have to create a config file for the openssl command to refer to, since the interactive mode does not, by default, ask for the extra domains (subjectAltName entries) during CSR creation.


To create the config file, please follow below command (this example is for mydomain.com)

$ cat >> www-portal.mydomain.conf <<EOF
[req]
distinguished_name = req_distinguished_name
req_extensions = v3_req
prompt = no
[req_distinguished_name]
C = MY
ST = Selangor
L = Cyberjaya
O = MyCompany
OU = Software Development Division
CN = www.mydomain.com
[v3_req]
keyUsage = keyEncipherment, dataEncipherment
extendedKeyUsage = serverAuth
subjectAltName = @alt_names
[alt_names]
DNS.1 = www.mydomain.com
DNS.2 = portal.mydomain.com
EOF


Run openssl CSR creation command against the config file

$ openssl req -new -newkey rsa:2048 -nodes -keyout www-portal.mydomain.key -out www-portal.mydomain.csr -config www-portal.mydomain.conf


Once generated, we can send the CSR to the Certificate Authority (usually the SSL provider) to get our cert. This one CSR covers 2 domains: www.mydomain.com and portal.mydomain.com.
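
Before sending the CSR out, the requested names can be verified, for example:

$ openssl req -noout -text -in www-portal.mydomain.csr | grep -A1 "Subject Alternative Name"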


Generating a Certificate Signing Request (CSR) for a Single Domain

To generate a certificate signing request (CSR), you need to have the openssl package installed. Please refer to the post on installing openssl below for instructions.


Once you have openssl installed, please use below command to create a CSR with key for mydomain.com. 

$ openssl req -new -newkey rsa:2048 -nodes -keyout mydomain.com.key -out mydomain.com.csr

Press Enter, and you will need to provide some information for the CSR. The information is as follows:

  1. Common Name: The FQDN (fully-qualified domain name) you want to secure with the certificate. For example: mydomain.com
  2. Organization: The full legal name of your organization including the corporate identifier. For example: MyCompany Co
  3. Organization Unit (OU): Your department, such as 'Information Technology' or 'Website Security'.
  4. City or Locality: The locality or city where your organization is legally incorporated. Do not abbreviate. For example: Cyberjaya
  5. State or Province: The state or province where your organization is legally incorporated. For example: Selangor
  6. Country: The official two-letter country code where your organization is legally incorporated. For example: MY

Once the CSR has been generated, we can provide it to the SSL provider, so that they can use it to issue the SSL certificate for your domain. Please be mindful to keep the key file, because we will need it during the SSL setup.

Friday, January 22, 2021

Installing Openssl Application to Use SSL Functions

The openssl program is a command line tool for using the various cryptography functions of OpenSSL's crypto library from the shell. 


To install openssl in ubuntu or debian:

$ sudo apt install openssl -y


To install openssl in RHEL, CentOS or Fedora:

$ sudo yum install openssl -y


Tuesday, January 19, 2021

How to Redirect HTTP Traffic to HTTPS in httpd on CentOS

The easiest way is to do it in the VirtualHost configuration, if you have control over it. 


Edit your virtualhost configuration for that domain, in CentOS it is usually located in /etc/httpd/conf.d/mydomain.conf:

<VirtualHost *:80>
   ServerName www.mydomain.com
   Redirect / https://www.mydomain.com
</VirtualHost>

<VirtualHost _default_:443>
   ServerName www.mydomain.com
   DocumentRoot /usr/local/apache2/htdocs
   SSLEngine On
...
</VirtualHost>

The most important line is the "Redirect" line, which redirects all http traffic to https.

Once done, save the file.

Do not forget to run syntax test of the configuration files.
# httpd -t

And reload the service
# systemctl reload httpd
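
The redirect can be verified with curl; the response to a plain http request should carry a 3xx status and a Location header pointing at the https url:

$ curl --head http://www.mydomain.com
HTTP/1.1 302 Found
...
Location: https://www.mydomain.com
...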

Sunday, January 3, 2021

Force Logout Other User from Linux

First, we need to know the username. This can easily be done using the w command

# w

12:34:31 up 36 days, 14:14,  2 users,  load average: 0.07, 0.02, 0.00

USER     TTY      FROM             LOGIN@   IDLE   JCPU   PCPU WHAT

ironman   pts/0    ::1              12:34    2.00s  0.01s  0.01s w

root     pts/3    10.29.25.230     12:22    7.00s  0.13s  0.00s bash


Let's say we want to log ironman out of our server. What we have to do is run a command called pkill against that user.

# pkill -u ironman 


All processes that are owned by that user will be killed. 
# w

12:34:31 up 36 days, 14:14,  2 users,  load average: 0.07, 0.02, 0.00

USER     TTY      FROM             LOGIN@   IDLE   JCPU   PCPU WHAT

root     pts/3    10.29.25.230     12:22    7.00s  0.13s  0.00s bash


The ironman user is no longer logged in. Easy peasy.