Showing posts with label docker.

Thursday, October 10, 2024

Installing docker and docker compose on almalinux 9

Docker does not officially support AlmaLinux, so we have to use the CentOS repository instead (AlmaLinux is binary compatible with RHEL/CentOS, so the packages work fine).


Below are the steps:

1. Update system
sudo dnf --refresh update
sudo dnf upgrade -y

2. Enable docker repository
sudo dnf install yum-utils
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

3. Install docker
sudo dnf install docker-ce docker-ce-cli containerd.io docker-compose-plugin -y

4. Enable and start docker
sudo systemctl enable --now docker

5. Add your user to the docker group (replace myuser with your username)
sudo usermod -aG docker myuser

6. Refresh the group list
newgrp docker

7. Check docker and docker compose version
docker version
docker compose version          
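
To verify the installation end to end, we can also run a quick test container (hello-world is the official test image on Docker Hub):
docker run hello-world

If the greeting message appears, docker can successfully pull and run images.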

Wednesday, July 12, 2023

Transferring docker image to another machine via network without a registry

We can use "docker save" and "docker load" commands to achieve this, combined with ssh.

These are the steps:

Save your image to a file

$ docker save -o filename imagename:tag

To get a smaller file size, we can use xz, bzip2 or gzip compression

$ docker save imagename:tag | xz > filename.xz

$ docker save imagename:tag | bzip2 > filename.bz2

$ docker save imagename:tag | gzip > filename.gz 

Then, transfer the file over ssh to another machine (note the trailing colon, which tells scp the destination is remote)

$ scp filename.xz user@anothermachine:

Load the image back on the other machine. "docker load" will automatically decompress the file if it is compressed with xz, bzip2 or gzip.

$ docker load -i filename.xz

We can also use redirection, instead of the -i option

$ docker load < filename.xz

All of that can also be done in a one-liner

$ docker save imagename:tag | xz | ssh user@anothermachine docker load
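
To confirm the image arrived, we can list it on the remote machine; "docker images" accepts a repository name (imagename here, as above) as a filter:

$ ssh user@anothermachine docker images imagename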

Wednesday, May 24, 2023

Unable to ssh into docker playground virtual machine (Permission denied (publickey) error)

Docker playground is a very useful place to learn how to use docker. However, the web interface can sometimes be quite difficult to use, especially if we are trying to copy long commands into the virtual machine.


A good solution to this is to connect to the virtual machine using ssh. We can copy the link in the ssh column of the virtual machine, and paste it into our terminal.

One of the issues that we encounter when trying to ssh into the virtual machine is a permission denied (publickey) error.

The reason this happens is that the ssh server inside the playground's virtual machine expects the client to connect with an ed25519 key; this can be seen in the sshd configuration inside the playground's virtual machine.

To get around that, simply create an ed25519 key on our machine, using ssh-keygen
$ ssh-keygen -t ed25519

We should be able to ssh into the playground's virtual machine now


Tuesday, May 9, 2023

Exiting a docker container running in interactive mode

To exit from a docker container while in interactive mode (using the -it option without -d), there are 2 options:


1. Press ctrl-d to exit the shell (if you are in one), which also stops the container

2. Press ctrl-p, then ctrl-q to detach from the container, leaving it running in the background without occupying the terminal
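
A quick way to try both, using alpine purely as a small example image: start an interactive container, detach with the key sequence, then reattach later with docker attach:

$ docker run -it --name mytest alpine sh
(press ctrl-p, then ctrl-q to detach; the container keeps running)
$ docker attach mytest

Note that the detach key sequence only works when the container was started with both -i and -t.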

Tuesday, January 31, 2023

Sharing Files Over http Using Nodejs In Docker

Sometimes we just need to share some files over network to some friends, and need a solution that is easy and fast to setup, provided we already have docker installed in our machine.


First, prepare a directory and put all the files that we want to share inside it. Then change into it, since we will mount the current directory into the container
$ mkdir files
$ cd files

Then, run a container based on the node:slim image, and mount the current directory to our container, which is named "fileshare" in this example
$ docker run -dit -p 8080:80 --name fileshare -v $PWD:/files -w /files node:slim

Install http-server inside the container
$ docker exec -it fileshare npm install -g http-server 

Run http-server inside the container
$ docker exec -it fileshare http-server -p 80 .
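
Alternatively, the run, install and serve steps above can be folded into a single docker run using npx, which downloads and runs http-server in one go (use this instead of the separate steps to avoid a container name clash; it also assumes the container can reach the npm registry):

$ docker run -dit -p 8080:80 --name fileshare -v $PWD:/files -w /files node:slim npx -y http-server -p 80 .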

You should now be able to view your files using a web browser. Just browse to your ip on port 8080, e.g. http://localhost:8080.

Once you are done, just press control-C on the terminal where the http-server is running, and the http-server will be terminated

Thursday, December 22, 2022

How to Install Podman on Ubuntu 22.04

Podman is a daemonless, open source, Linux native tool designed to make it easy to find, run, build, share and deploy applications using Open Containers Initiative (OCI) Containers and Container Images. 

Podman provides a command line interface (CLI) familiar to anyone who has used the Docker Container Engine.

Some of the advantages of podman over docker:
1. Podman is daemonless
2. Podman is compatible with docker images, so one can run images, even from docker.io, without any modification
3. Most podman commands can be run as a regular user, without requiring additional privileges.

To install podman on ubuntu 22.04:

1. Update apt database 
$ sudo apt update

2. Install podman
$ sudo apt install podman

3. Check podman version
$ podman -v

4. Test podman
$ podman run docker.io/hello-world

5. If you get output containing "Hello from Docker!", your podman installation is successful and it is able to pull and run images from dockerhub
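
To see advantage number 3 in action, a regular user can run a service and publish an unprivileged port with no sudo involved (nginx here is just an arbitrary example image):

$ podman run --rm -d -p 8080:80 docker.io/library/nginx
$ curl -s http://localhost:8080 | head -n 4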


Thursday, September 1, 2022

Testing SSL Certs Using Apache on Docker

Sometimes we need to test our SSL certs before deploying them to production. If we have a development or staging environment, we can test them there. But if we do not have one, we can always rely on trusty old docker to test the ssl on our own machine. Please follow along to learn how to do it.


Pre-requisite:
- have docker installed 

1. Put our ssl cert, intermediate cert (if we have one) and key into our current directory, renamed as server.crt, server-ca.crt and server.key respectively

2. Prepare a configuration file like below inside our current directory, and save it as https.conf
Listen 443
<VirtualHost _default_:443>
  DocumentRoot "/usr/local/apache2/htdocs"
  ServerName linuxwave.info
  ServerAdmin me@linuxwave.info
  ErrorLog /proc/self/fd/2
  TransferLog /proc/self/fd/1
  SSLEngine on
  SSLCertificateFile "/ssl/server.crt"
  SSLCertificateKeyFile "/ssl/server.key"
  SSLCertificateChainFile "/ssl/server-ca.crt"
</VirtualHost>
3. Run a container based on the httpd image from dockerhub, and mount the current folder with our ssl key and certs into /ssl in the container
docker run -dit --name apache -v ${PWD}:/ssl httpd
4. Copy /usr/local/apache2/conf/httpd.conf into /ssl, so that we can edit it from our host machine
docker exec -it apache cp /usr/local/apache2/conf/httpd.conf /ssl
5. Enable ssl in the apache config by adding these lines into httpd.conf. We can just edit the file on our host machine, since no text editor is installed by default inside the apache image. The first 2 lines enable ssl support in apache, and the last line tells apache to include our https.conf into its configuration
LoadModule ssl_module modules/mod_ssl.so
LoadModule socache_shmcb_module modules/mod_socache_shmcb.so
Include conf/extra/https.conf
6. Copy the edited httpd.conf file back into its original location
docker exec -it apache cp /ssl/httpd.conf /usr/local/apache2/conf
7. Create a symlink from /ssl/https.conf into /usr/local/apache2/conf/extra/
docker exec -it apache ln -s /ssl/https.conf /usr/local/apache2/conf/extra
8. Test the configuration file
docker exec -it apache httpd -t
9. If no error was found from the above command, restart the container
docker restart apache
10. Open a new terminal, and get the ip address of the container
docker inspect apache | grep IPAddress
11. Put the ip address and your hostname inside your machine's /etc/hosts
echo "172.17.0.2 linuxwave.info" | sudo tee -a /etc/hosts 
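
Optionally, we can also inspect the certificate that apache serves straight from the command line, before opening the browser (this assumes openssl is installed on the host):

openssl s_client -connect linuxwave.info:443 -servername linuxwave.info </dev/null | openssl x509 -noout -subject -issuer -dates

The subject, issuer and validity dates printed should match the cert we supplied.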

12. Try to access the above domain using a web browser, and check the ssl cert information

13. If the ssl certs work fine inside docker, you can be confident that they will work just fine on your production server


Friday, July 1, 2022

Change Docker Data Location

The default location that docker uses to store all its components, such as images and containers, is /var/lib/docker.


In some linux installations, the / or /var partitions are not that big, and we would like docker to save all the images and containers in another directory.

To set docker to use other directory:

1. Create the new directory (let's say we are using /data/docker)
$ sudo mkdir -p /data/docker

2. Stop docker daemon
$ sudo systemctl stop docker

3. Create a file called /etc/docker/daemon.json (or edit it, if the file already exists)
$ sudo touch /etc/docker/daemon.json

4. Put the below content into the file
{
        "data-root": "/data/docker"
}

5. Save and exit the editor

6. Start docker
$ sudo systemctl start docker

7. Verify that docker is running
$ sudo systemctl status docker
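
To confirm docker picked up the new location, check the data root reported by docker info (note that existing images will not show up under the new root unless you copy the old /var/lib/docker contents over before starting docker):

$ docker info | grep "Docker Root Dir"
 Docker Root Dir: /data/docker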

Saturday, June 4, 2022

Running Mongodb Replication Using Docker

For a proper mongodb replication, we are going to start 3 containers for this exercise.


First, start the first container; we will call it mongorep1. We need to give it a hostname, configure it to listen on all interfaces, and set a replSet for it called myrepl
docker run -dit --name mongorep1 --hostname mongorep1 mongo:6 --bind_ip_all --replSet myrepl

Once running, we need to get the ip address of mongorep1
docker inspect mongorep1 | grep -w IPAddress
            "IPAddress": "172.17.0.2",

Then, we will start the second container. We need to feed the ip address of the first container to the second container as a host entry, so that mongo will not have issues setting up the replication
docker run -dit --name mongorep2 --hostname mongorep2 --add-host mongorep1:172.17.0.2 mongo:6 --bind_ip_all --replSet myrepl

Start the third and final container, with a command almost identical to the second container's.
docker run -dit --name mongorep3 --hostname mongorep3 --add-host mongorep1:172.17.0.2 mongo:6 --bind_ip_all --replSet myrepl

Once we have all the nodes running, access mongosh on the first container, and initiate the replica set
docker exec -it mongorep1 mongosh
test> rs.initiate()

Add the other nodes into the replica set
myrepl [direct: secondary] test> rs.add("172.17.0.3")
myrepl [direct: primary] test> rs.add("172.17.0.4")

Check the status of the replica set, make sure the first node is the primary node, and the other 2 are the secondary nodes
myrepl [direct: primary] test> rs.status()

...
  members: [
    {
      _id: 0,
      name: 'mongorep1:27017',
      health: 1,
      state: 1,
      stateStr: 'PRIMARY',
...
    {
      _id: 1,
      name: '172.17.0.3:27017',
      health: 1,
      state: 2,
      stateStr: 'SECONDARY',
...
    {
      _id: 2,
      name: '172.17.0.4:27017',
      health: 1,
      state: 2,
      stateStr: 'SECONDARY',
...


Check if the other nodes are lagging in terms of replicating data
myrepl [direct: primary] test> db.printSecondaryReplicationInfo()
source: 172.17.0.3:27017
{
  syncedTo: 'Mon Dec 19 2022 15:48:01 GMT+0000 (Coordinated Universal Time)',
  replLag: '0 secs (0 hrs) behind the primary '
}
---
source: 172.17.0.4:27017
{
  syncedTo: 'Mon Dec 19 2022 15:48:01 GMT+0000 (Coordinated Universal Time)',
  replLag: '0 secs (0 hrs) behind the primary '
}

We can test the replication by adding data into the first node, and checking if the data is replicated into the second and third nodes.
docker exec -it mongorep1 mongosh
myrepl [direct: primary] test> use mynewdb
myrepl [direct: primary] mynewdb> db.people.insertOne( { name: "John Rambo", occupation: "Soldier" } ) 
exit
Now access mongosh in the second node and view the data. The data should be identical to mongorep1's
docker exec -it mongorep2 mongosh
myrepl [direct: secondary] test> show dbs
myrepl [direct: secondary] test> use mynewdb
myrepl [direct: secondary] test> db.people.find()
[
  {
    _id: ObjectId("63a08880e1c97fba6959ec15"),
    name: 'John Rambo',
    occupation: 'Soldier'
  }
]

If you encounter this error:

MongoServerError: not primary and secondaryOk=false - consider using db.getMongo().setReadPref() or readPreference in the connection string

Run below command to enable read on the secondary nodes
myrepl [direct: secondary] test> db.getMongo().setReadPref("secondary")

Do the same for the third node; the data should also be the same.

docker exec -it mongorep3 mongosh
myrepl [direct: secondary] test> use mynewdb
myrepl [direct: secondary] test> db.people.find() 
[
  {
    _id: ObjectId("63a08880e1c97fba6959ec15"),
    name: 'John Rambo',
    occupation: 'Soldier'
  }
]

Friday, March 25, 2022

Running singularity without installing using docker

Singularity is another container platform, similar to docker. It is widely used in the high performance computing world, due to better security and portability.


But many of us are already familiar with docker, since it is the most widely used container technology. The easiest way to try out singularity is to use the docker that we already have on our machine and launch singularity from there.

We can run the singularity image from quay.io with the below command
docker run --privileged --rm quay.io/singularity/singularity:v3.10.0 --version
singularity-ce version 3.10.0
In order to download an image from docker hub and convert it into sif, we can use this
docker run --privileged --rm -v ${PWD}:/home/singularity quay.io/singularity/singularity:v3.10.0 pull /home/singularity/alpine_latest.sif docker://alpine
Once downloaded, we can run a command using the newly downloaded image
docker run --privileged --rm -v ${PWD}:/home/singularity quay.io/singularity/singularity:v3.10.0 exec /home/singularity/alpine_latest.sif cat /etc/os-release
NAME="Alpine Linux"
ID=alpine
VERSION_ID=3.15.4
PRETTY_NAME="Alpine Linux v3.15"
HOME_URL="https://alpinelinux.org/"
BUG_REPORT_URL="https://bugs.alpinelinux.org/"
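
Since those invocations get long, one convenience is to wrap the docker run prefix in a shell alias (the name singularity is just a choice here; adjust the version tag as needed):

alias singularity='docker run --privileged --rm -v ${PWD}:/home/singularity quay.io/singularity/singularity:v3.10.0'
singularity exec /home/singularity/alpine_latest.sif cat /etc/os-release
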
Even though this is probably the easiest way to use singularity on a machine with docker installed, the commands can get pretty confusing. It is highly advisable, once you have tested enough and decided to use singularity, to actually install it on your system.

Sunday, March 20, 2022

Run an apache webserver with php using docker

This is actually very easy; just run the below command to start it

docker run -d -p 8000:80 --mount type=bind,source="$(pwd)/htdocs",target=/var/www/html php:apache

The options are:

-d : run this container in a detached mode (in the background)

--mount : mount the htdocs folder in the current directory into /var/www/html in the container (create it first with mkdir htdocs; unlike -v, --mount does not create a missing host path)

-p 8000:80 : will map port 8000 in localhost to port 80 in the container


Once started, create a simple php script inside the htdocs directory

cd htdocs

cat > index.php <<EOF
<?php

echo "This is my php script";

?>

EOF


And browse using a normal web browser to http://localhost:8000. You should see "This is my php script" shown in your web browser 

Tuesday, March 1, 2022

Running docker "hello-world" image using singularity

One of the advantages of singularity is that it does not require any service to run containers. And the images that you download are saved as normal files in your filesystem, rather than in some cache directory like docker.


To run dockerhub's hello-world image using singularity:


1. Pull the image from dockerhub

$ singularity pull docker://hello-world


2. The image will be saved as hello-world_latest.sif

$ ls 

hello-world_latest.sif


3.1 To run a container based on that image, just use "singularity run" against the sif file

$ singularity run  hello-world_latest.sif

...

Hello from Docker!      

This message shows that your installation appears to be working correctly.

...

3.2 Or you can just "./" the sif file
$ ./hello-world_latest.sif

...

Hello from Docker!      

This message shows that your installation appears to be working correctly.

...

Sunday, July 11, 2021

Setup an openssh-server in a docker container

This is mainly used for testing only. 

First create a Dockerfile. I am using the ubuntu:20.04 image for this (note the -m on useradd below, which creates a home directory for the ssh login)

$ cat > Dockerfile <<EOF

FROM ubuntu:20.04

RUN apt update && apt install openssh-server -y

RUN useradd -m myuser && echo "myuser:123456" | chpasswd && echo "root:123456" | chpasswd && mkdir /run/sshd

EXPOSE 22

CMD /usr/sbin/sshd -D

EOF

Then, build the image

$ docker build -t mysshimage .

Finally, run a container based on the image, and ssh into it. Use 123456 as the password.

$ docker run -dit -p 1022:22 mysshimage

$ ssh myuser@localhost -p 1022 

To become the root user, just use the su - command once you are logged in as myuser.
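
If you later rebuild the image and rerun the container, ssh may refuse to connect because the container generated a new host key. Removing the stale entry fixes it (the bracketed host:port form is how OpenSSH records non-standard ports in known_hosts):

$ ssh-keygen -R "[localhost]:1022"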


Friday, June 25, 2021

Testing ssl certificate and key using nginx docker

This is assuming our cert is for www.mydomain.com, our key is mydomain.key and our domain cert is mydomain.crt.

1. Get the domain certificate and your private key. The key was generated when you created the CSR to apply for the ssl cert, and the certificate is sent to you by your ssl provider

$ ls 

mydomain.crt mydomain.key


2. If your provider does not provide you with a bundled certificate, you need to get the root and intermediate certificates from the provider, since nginx needs the root, intermediate and domain certificates to be in the same file for ssl to work.


3. Combine the domain certificate, intermediate certificate and root certificate into a file; let's call the file combined.crt

$ cat mydomain.crt intermediate.crt root.crt > combined.crt


4. Remove any ^M (carriage return) characters from the combined.crt file

$ sed -i 's/\r$//' combined.crt


5. Start an nginx docker container

$ docker run -dit --name nginx -v ${PWD}:/ssl nginx:latest


6. Get the ip address of the docker container

$ docker inspect nginx | grep -w IPAddress

            "IPAddress": "172.17.0.2",

                    "IPAddress": "172.17.0.2",


7. Point our domain to the container's ip address in /etc/hosts
# cat >> /etc/hosts <<EOF
172.17.0.2 www.mydomain.com
EOF

8. Prepare an nginx config file with ssl settings
cat > mydomain.com.conf <<EOF
server {
    listen 80;
    server_name  www.mydomain.com;
    location / {
        root   /usr/share/nginx/html;
        index  index.html index.htm;
    }
}

server {
    listen 443 ssl;
    server_name  www.mydomain.com;
    ssl_certificate /ssl/combined.crt;
    ssl_certificate_key /ssl/mydomain.key;
    location / {
        root   /usr/share/nginx/html;
        index  index.html index.htm;
    }
}
EOF

9. Create a symlink for the configuration file into /etc/nginx/conf.d inside the container
docker exec -it nginx ln -s /ssl/mydomain.com.conf /etc/nginx/conf.d

10. Test the configuration from inside the container
docker exec -it nginx nginx -t

11. Restart nginx container, if the above command returned no error
$ docker restart nginx

12. Make sure the container restarted successfully
$ docker ps

13. Open up a browser and browse to https://www.mydomain.com. If all is good, you should see the padlock icon beside the domain name, and the status of the connection is secure
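
We can also verify from the command line, without relying on the /etc/hosts entry, using curl's --resolve flag to pin the domain to the container ip (assuming curl is installed on the host):

$ curl -v --resolve www.mydomain.com:443:172.17.0.2 https://www.mydomain.com

During the TLS handshake, curl prints the certificate subject, issuer and verification result.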


Monday, May 17, 2021

Cleaning docker overlay2 disk usage

After using docker for a while, one thing that I noticed is that the disk space usage on /var/lib/docker/overlay2 can get quite high. For just 3 running containers and a couple of images, my overlay2 disk usage was around 30GB, which is quite high. To reclaim the space, we can clear off unused containers, images, volumes and other docker components, but doing that by hand is a daunting task.

Fortunately, docker comes with a tool that eases the maintenance work. The command to run is as below:

$ docker system prune --all --volumes

This command will clear all unused components in docker, including unused volumes and images. Rest assured that running containers, and the images used by them, will be spared. From the man page, we can see that this command's purpose is to clean up unused data in docker.
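
To see where the space is going, before and after pruning, docker ships a df-style summary:

$ docker system df

It breaks usage down into images, containers, local volumes and build cache, with a RECLAIMABLE column showing roughly what pruning can free.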

Saturday, May 8, 2021

Listing docker containers according to fields

A command like 'docker ps' is a good tool to check on your running containers. It is visually pleasant if you do not have many containers. But what if you have hundreds of containers, and you just want to print the names of the containers, or even better, the names and ids of the containers?


We can use the --format flag in this situation. This flag is available on pretty much every docker command that produces some kind of output to stdout, so you can filter what you want to see as the output.

To use this flag, you just need to follow the below format:
$ docker ps --format '{{json .Names}}'

"hardcore_carson" 


whereby ".Names" is the field that you want displayed. For example, if you want to list out just the ID of all the running containers, you can use:
$ docker ps --format '{{json .ID}}'

"e914bd4963d4" 


You can see that the format field names differ from the column headers displayed without the --format flag.
$ docker ps
CONTAINER ID   IMAGE     COMMAND     CREATED          STATUS          PORTS     NAMES
e914bd4963d4   alpine    "/bin/sh"   29 minutes ago   Up 29 minutes             hardcore_carson

To know which fields are available to be used:
$ docker ps --format '{{json .}}'

{"Command":"\"/bin/sh\"","CreatedAt":"2021-05-08 11:32:12 +0800 +08","ID":"e914bd4963d4","Image":"alpine","Labels":"","LocalVolumes":"0","Mounts":"","Names":"hardcore_carson","Networks":"bridge","Ports":"","RunningFor":"33 minutes ago","Size":"0B (virtual 5.61MB)","State":"running","Status":"Up 33 minutes"} 


To list 2 (or more) fields:
$ docker ps --format '{{json .ID}} {{json .Names}}'

"e914bd4963d4" "hardcore_carson" 


You can also use --format without the json keyword; the only difference is that the output will not be double quoted (the quotes are not easy on the eyes if you have many fields)

$ docker ps --format '{{.ID}} {{.Names}}'

e914bd4963d4 hardcore_carson
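
The format string also supports a table directive, handy when you still want column headers (fields separated by \t):

$ docker ps --format 'table {{.ID}}\t{{.Names}}'
CONTAINER ID   NAMES
e914bd4963d4   hardcore_carson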

 

Sunday, April 18, 2021

Running php-fpm and Nginx in Docker

Php-fpm is an advanced and highly efficient processor for php. In order for your php files to be viewable in a web browser, php-fpm needs to be coupled with a web server, such as nginx. In this tutorial we will show how to set up php-fpm and nginx in docker.

1. Create a directory for your files 

$ sudo mkdir phpfpm

2. Create a network for the containers to use. This makes sure that we can refer to a container by name in the configuration file.

$ docker network create php-network

3. Create nginx config file

$ cd phpfpm

$ cat > default.conf <<'EOF'
server {
    listen 80;

    # this path MUST match the path mounted into the fpm container,
    # even if it doesn't exist in the nginx container
    root /var/www/html;

    location / {
        try_files $uri /index.php$is_args$args;
    }

    location ~ ^/.+\.php(/|$) {
        fastcgi_pass fpm:9000;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}
EOF

4. Create an index.php file with some random php code (we are using phpinfo() to make it easier)

$ cat > index.php <<EOF

<?php phpinfo(); ?> 

EOF

5. Run a php-fpm container, in detached and interactive mode, using php-network, and mount /home/user/phpfpm to /var/www/html in the container

$ docker run -dit --name fpm --network php-network -v /home/user/phpfpm:/var/www/html php:fpm

6. Run an nginx container, in detached and interactive mode, using php-network, and mount /home/user/phpfpm/default.conf to /etc/nginx/conf.d/default.conf in the container

$ docker run -dit --name nginx --network php-network -v /home/user/phpfpm/default.conf:/etc/nginx/conf.d/default.conf -p 80:80 nginx

7. Open a browser and browse to http://localhost. You should now be able to see the phpinfo page.

Of course, there is an easier way to set this up using docker-compose. We will cover that in another post.


Wednesday, March 10, 2021

Fixing Wrong Timezone in Docker Logs

I had this issue where my running container's logs were using UTC as their timezone, which makes the troubleshooting process quite troublesome.

After some experimenting, I found that we have to set the timezone early, when we first run the container, for the logs to be recorded in the timezone we prefer.

For a currently running container, the safest way is to commit it to an image, and rerun a new container based on that image.

1. My current container's logs

$ docker logs mycontainer

...

2021-03-09 12:01:08.939 UTC [1283819] WARNING:  terminating connection because of crash of another server process

...

2. Back up the current container to an image called mycontainer-backup:20210310

$ docker commit mycontainer mycontainer-backup:20210310

3. Stop the current container, to avoid port clash (if any)

$ docker stop mycontainer

4. Run a new container based on the backup image; this time we specify the timezone variable

$ docker run -dit --name mynewcontainer -e TZ=Asia/Kuala_Lumpur mycontainer-backup:20210310
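
As a quick sanity check that the container picked up the variable, date inside the container should print local time (this assumes the image ships timezone data; very minimal images may need tzdata installed):

$ docker exec mynewcontainer date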

5. Done, we can verify the log's timezone, just to be sure

$ docker logs mynewcontainer

...

2021-03-10 08:03:14.429 +08 [1] LOG:  listening on IPv4 address "0.0.0.0",

...

Monday, March 8, 2021

Postgresql 13 Streaming Replication using Docker

In this setup, we will create a 2-node postgresql streaming replication in docker. This article makes use of the postgresql image version 13.

1. Create a network, and take note of the network ip range
$ docker network create mynet

$ docker network inspect mynet 

2. Make a directory for pgmaster

$ sudo mkdir pgmasterdata

3. Create a container called pgmaster
$ docker run -dit -v "$PWD"/pgmasterdata/:/var/lib/postgresql/data -e POSTGRES_PASSWORD=abc -p 5432:5432 --restart=unless-stopped --network=mynet --name=pgmaster postgres:13
4. Backup and edit pgmaster's postgresql.conf with below settings
$ sudo cp pgmasterdata/postgresql.conf pgmasterdata/postgresql.conf.ori
$ cat > postgresql.conf <<EOF
listen_addresses = '*'
port = 5432
max_connections = 50
ssl = off
shared_buffers = 32MB
# Replication Settings - Master
wal_level = replica
max_wal_senders = 3
EOF

$ sudo cp postgresql.conf pgmasterdata/postgresql.conf 

5. Login to pgmaster and create a user for replication
$ docker exec -it pgmaster psql -U postgres -h localhost -d postgres
postgres=# create role replicator with replication password 'abc';
postgres=# \q
 
6. Backup and edit pgmaster's pg_hba.conf with ip range from step 1
$ sudo cp pgmasterdata/pg_hba.conf pgmasterdata/pg_hba.conf.ori
$ echo "host    replication  all  172.16.0.0/16  trust" | sudo tee -a pgmasterdata/pg_hba.conf 
7. Restart pgmaster container
$ docker restart pgmaster
8. Run a backup of the master to /pgslavedata inside pgmaster

$ docker exec -it pgmaster bash

# mkdir /pgslavedata 

# pg_basebackup -h pgmaster -D /pgslavedata -U replicator -v -P --wal-method=stream

9. Copy /pgslavedata from pgmaster to the host

$ docker cp pgmaster:/pgslavedata pgslavedata

10. Tell pgslave that it is a slave

$ sudo touch  pgslavedata/standby.signal

11. Edit postgresql.conf in pgslavedata

$ sudo cp pgslavedata/postgresql.conf pgslavedata/postgresql.conf.ori

$ cat > postgresql.conf <<EOF

listen_addresses = '*'

port = 5432

max_connections = 50

ssl = off

shared_buffers = 32MB

# Replication Settings - Slave

hot_standby = on

primary_conninfo = 'host=<master ip> port=5432 user=replicator password=abc'

EOF

$ sudo cp postgresql.conf pgslavedata/postgresql.conf

12. Start pgslave

$ docker run -dit -v "$PWD"/pgslavedata/:/var/lib/postgresql/data -e POSTGRES_PASSWORD=abc -p 15432:5432 --network=mynet --restart=unless-stopped --name=pgslave postgres:13

13. Check replication state in pgmaster

$ docker exec -it pgmaster psql -h localhost -U postgres -d postgres -c "select usename,state from pg_stat_activity where usename = 'replicator';"

  usename   | state
------------+--------
 replicator | active

14. Verify the setup by creating a database in pgmaster, and check that the same database appears in pgslave, as sketched below
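
A minimal check along those lines (testdb is just an arbitrary name):

$ docker exec -it pgmaster psql -h localhost -U postgres -c "create database testdb;"
$ docker exec pgslave psql -h localhost -U postgres -c "\l" | grep testdb

The database created on the master should appear on the slave within moments.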

15. To promote pgslave if pgmaster is down, simply run the "pg_ctl promote" command as the postgres user (pg_ctl refuses to run as root)

$ docker exec -it -u postgres pgslave pg_ctl promote

Thursday, December 24, 2020

Using Volume in Docker to Store Persistent Data

A docker container is not persistent; that means the data stored in a docker container will be gone if the container is destroyed. One of the ways to store persistent data while using docker containers is to use the docker volume feature.


To use the volume, we need to create the volume first. 

$ docker volume create myvolume


We can inspect the volume's detailed information using the inspect command. This is how we find out where exactly in the filesystem the volume resides, along with all other information about the volume.

$ docker volume inspect myvolume

[

    {

        "CreatedAt": "2020-12-24T09:18:03+08:00",

        "Driver": "local",

        "Labels": {},

        "Mountpoint": "/var/lib/docker/volumes/myvolume/_data",

        "Name": "myvolume",

        "Options": {},

        "Scope": "local"

    }

]


To make use of the volume, we will use the --volume flag of the docker run command. For example, we want to create a container from an alpine linux image, run a command in it (in this example, a date command), store the output of the command in a volume, exit, and delete the container after that.

$ docker run --rm --volume=myvolume:/tmp alpine sh -c "date > /tmp/currenttime"


Once the container has finished running, we can go to the location of the volume in the filesystem and retrieve the file that was created by the container, even though that container no longer exists.

$ sudo ls /var/lib/docker/volumes/myvolume/_data

currenttime

$ sudo cat  /var/lib/docker/volumes/myvolume/_data/currenttime

Thu Dec 24 01:25:34 UTC 2020
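
Another way to read the data back, without needing sudo on the host, is to mount the same volume into a fresh container (alpine again, purely as a small convenient image):

$ docker run --rm --volume=myvolume:/tmp alpine cat /tmp/currenttime
Thu Dec 24 01:25:34 UTC 2020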


So this, my friend, is one of the ways to store persistent data generated using docker containers.