Mur's castle inside

In this last post of the series (see the previous one), I’m going to talk about Jenkins: how to configure it and what I use it for.


To install Jenkins on Ubuntu:

wget -q -O - | sudo apt-key add -
sudo sh -c 'echo deb binary/ > /etc/apt/sources.list.d/jenkins.list'
sudo apt update
sudo apt install openjdk-11-jdk-headless
sudo apt install jenkins
sudo systemctl enable --now jenkins

I have this file server-setup/jenkins/nginx.conf already created:

upstream jenkins {
  server fail_timeout=0;
}

server {

  ssl_certificate /etc/letsencrypt/live/;
  ssl_certificate_key /etc/letsencrypt/live/;

  location / {
    proxy_set_header        Host $host:$server_port;
    proxy_set_header        X-Real-IP $remote_addr;
    proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header        X-Forwarded-Proto $scheme;
    proxy_redirect          http:// https://;
    proxy_pass              http://jenkins;
    # Required for new HTTP-based CLI
    proxy_http_version 1.1;
    proxy_request_buffering off;
    proxy_buffering off; # Required for HTTP-based CLI to work over SSL
    # workaround for
    add_header 'X-SSH-Endpoint' '' always;
  }
}

Copy that file into the nginx configuration and reload nginx:

sudo cp server-setup/jenkins/nginx.conf /etc/nginx/sites-available/
sudo rm /etc/nginx/sites-enabled/
sudo ln -s /etc/nginx/sites-available/ /etc/nginx/sites-enabled/
sudo systemctl reload nginx

With that, Jenkins is exposed behind the nginx proxy. Now I can open it in the browser and create my user.

After that, install the Github hook and credentials plugins. Then:

  • Add ssh credentials to github:
sudo su - jenkins

# copy /var/lib/jenkins/.ssh/ to github ssh new credential
  • Add the pachatary env.list file as a secret file named pachatary_env_list, and the db.env.list file as a secret file named pachatary_db_env_list (these files hold the env and secret vars that the jobs will use).
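
A secret file credential shows up inside a job as an environment variable holding the path to a temporary copy of the file; the jobs below then materialize it with `cat`. A minimal sketch of that pattern, using a temp file as a stand-in for the Jenkins-provided path (an assumption for illustration, not the real secret):

```shell
# Simulate the env var Jenkins injects for a secret file credential:
# it contains a path, not the file's contents.
pachatary_env_list=$(mktemp)
echo 'SOME_VAR=value' > "$pachatary_env_list"

# What the deploy job does with it: materialize the file locally.
cat "$pachatary_env_list" > env.list
grep -q 'SOME_VAR=value' env.list && echo "env.list ready"
```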


Here I’m going to explain the jobs I’ve created for my blog and Pachatary.



  • Deploy


This job is almost the same as the script I use to deploy my blog (see the previous post). The difference is that I configured it to run on every git push to the master branch of the project’s Github repository. To do that, go to the job’s Configure page and fill in the Github forms.



  • Deploy
  • Test
  • Test and deploy
  • Restore db
  • Backup db
  • Reindex all experiences


Deploy

This job is not hooked to github because we will trigger it from the test_and_deploy pipeline. The only configuration that has to be added here is the env.list secret file, which I added under the name pachatary_env_list. Here is the job script:

cat $pachatary_env_list > env.list

sudo docker build -t pachatary/api .

sudo docker update --restart=no pachatary-nginx-01
sudo docker stop pachatary-nginx-01
sudo docker rm pachatary-nginx-01
sudo docker update --restart=no pachatary-api-01
sudo docker stop pachatary-api-01
sudo docker rm pachatary-api-01
sudo docker run -d --restart=always --env-file env.list --net pachatary-net -v pachatary-statics-01:/code/pachatary/staticfiles --name pachatary-api-01 -e INTERNAL_IP= -t pachatary/api
sudo docker run --name pachatary-nginx-01 -v pachatary-statics-01:/usr/share/nginx/html/static:ro -v /etc/nginx/sites-available/ -p --net pachatary-net --restart=always -d nginx

response=000
while [ $response -gt 499 -o "${response}" = 000 ]; do
    sleep 1
    response=$(curl --write-out %{http_code} --silent --output /dev/null )
    echo $response
done

sudo docker update --restart=no pachatary-nginx-02
sudo docker stop pachatary-nginx-02
sudo docker rm pachatary-nginx-02
sudo docker update --restart=no pachatary-api-02
sudo docker stop pachatary-api-02
sudo docker rm pachatary-api-02
sudo docker run -d --restart=always --env-file env.list --net pachatary-net -v pachatary-statics-02:/code/pachatary/staticfiles --name pachatary-api-02 -e INTERNAL_IP= -t pachatary/api
sudo docker run --name pachatary-nginx-02 -v pachatary-statics-02:/usr/share/nginx/html/static:ro -v /etc/nginx/sites-available/ -p --net pachatary-net --restart=always -d nginx

It is similar to the one used in the previous post, but in this case we cat the env.list file from a secret file. I’ve also added a wait loop between the update of containers 01 and 02, polling until the new one responds, to avoid downtime (HAProxy does the rest).
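
The wait loop in the script above can be sketched on its own. Here `health_check` is a mock standing in for the real `curl --write-out %{http_code}` call against the freshly started container (an assumption, since the real URL is internal):

```shell
# Poll until the service answers with a non-5xx code; 000 means curl could
# not connect at all. health_check mocks the curl health probe and pretends
# the new container came up healthy on the first retry.
health_check() { echo 200; }

response=000
while [ "$response" -ge 500 ] || [ "$response" = 000 ]; do
    sleep 1
    response=$(health_check)
done
echo "service up with status $response"
```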


Test

I run the tests using docker-compose. I’ve also used Django testing tags to exclude the elasticsearch ones (the elasticsearch docker container weighs 2GB and causes out-of-memory exceptions).

sudo docker-compose down
sudo docker-compose build
sudo docker-compose run api bash -c "pytest && python manage.py test --exclude-tag=elasticsearch"
sudo docker-compose down

Test and deploy (pipeline)

This pipeline launches test and deploy jobs:

stage('test') {
    build 'test'
}
stage('deploy') {
    build 'deploy'
}

The intention is to run it when a git push occurs on the master branch, but I couldn’t get Github hooks working with a Jenkins pipeline, so I added another simple job that triggers this pipeline (ugly, but it does the trick):

Test and deploy trigger

Hook it to github and make it trigger the test_and_deploy job (the previous one).

Backup db

Here I also needed the db.env.list secret file to hide some variables; I added it under the name pachatary_db_env_list. What this job does is make a pg_dump and upload it to aws s3 (with the aws cli previously installed and profiles configured) as latest.dump (which always keeps the last version) and 2020-08-10.dump (for example):

eval "$(cat $pachatary_db_env_list)" 
date=$(date +%F)
sudo docker exec pachatary-postgres pg_dump -U postgres --verbose $PACHATARY_DB > $date.dump
sudo aws s3 cp $date.dump s3://pachatary-db/$date.dump --profile pachatary
sudo aws s3 cp s3://pachatary-db/$date.dump s3://pachatary-db/latest.dump --profile pachatary
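
The `eval "$(cat $pachatary_db_env_list)"` line deserves a note: it turns a KEY=VALUE secret file into shell variables for the rest of the script. A minimal sketch with a throwaway file (the values here are placeholders, not the real secrets):

```shell
# Build a throwaway KEY=VALUE file shaped like db.env.list (placeholder values).
cat > db.env.list.example <<'EOF'
PACHATARY_DB=pachatary
PACHATARY_DB_ROLE=pachatary_role
EOF

# Same pattern as the job: evaluate the assignments in the current shell.
eval "$(cat db.env.list.example)"
echo "db=$PACHATARY_DB role=$PACHATARY_DB_ROLE"
```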

Restore db

This is the inverse job of the previous one: it takes the latest.dump from aws s3 and recreates the postgres database with it:

eval "$(cat $pachatary_db_env_list)" 
sudo aws s3 cp s3://pachatary-db/latest.dump latest.dump --profile pachatary
sudo docker run --rm -v $PWD:/src -v pachatary-pgdata:/dest -w /src alpine cp latest.dump /dest
sudo docker exec -t pachatary-postgres psql -U postgres -c "SELECT pid, pg_terminate_backend(pid) FROM pg_stat_activity WHERE datname = '$PACHATARY_DB' AND pid <> pg_backend_pid();"
sudo docker exec -t pachatary-postgres psql -U postgres -c "drop database $PACHATARY_DB"
sudo docker exec -t pachatary-postgres psql -U postgres -c "create database $PACHATARY_DB with owner $PACHATARY_DB_ROLE"
sudo docker exec -t pachatary-postgres bash -c "psql -U $PACHATARY_DB_ROLE -d $PACHATARY_DB < /var/lib/postgresql/data/latest.dump"

It uses docker to copy the file into a volume shared with the postgres docker container. Then it drops and recreates the database from that file, also via docker.

Reindex all experiences

And the last job is an example of how to launch a routine Django command, in this case the one that reindexes all experience entities from the database into elasticsearch:

sudo docker exec -t pachatary-api python manage.py reindex_all_experiences

Extra trick:

I’ve versioned and backed up the Jenkins config files so I can work with them more comfortably and restore them if needed. Take a look at the server-jenkins repository. Be careful not to put any sensitive file under version control; use .gitignore!
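
For example, a .gitignore along these lines keeps the usual sensitive Jenkins files out of the repository (a sketch; adjust the patterns to what your Jenkins home actually contains):

```
# Jenkins secrets and runtime state that must never be committed
secrets/
secret.key
secret.key.not-so-secret
credentials.xml
workspace/
logs/
```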

I hope you enjoyed this journey, where I’ve explained how I migrated 4 of my projects to a server. I’m a noob in the systems world, so I know there are a lot of things that can be improved. Feel free to comment anything. Thanks for reading!