
· 6 min read
TheBidouilleur

Since the DevOps movement started (or rather, platform engineering), the topic of high availability has been brought to the forefront. And one of the most versatile ways to achieve high availability is to build application clusters (and therefore: containers).

I ran a Swarm cluster for a few years and recently switched to Kubernetes (k3s, to be precise). And when clusters hold several hundred containers, it's easy to forget about maintenance and updates.

And in this article, we will talk about updates.

Out-of-cluster container upgrade solutions

WatchTower

I think the best-known solution is Watchtower.

Watchtower is easy to use and is based (like many others) on labels. A label lets you set some parameters and enable (or disable) update monitoring for a container.

Updating is not always good...

Be careful not to update sensitive programs automatically! We can't check what an update contains, or whether it will break something. It's up to you to choose which applications to monitor, and whether to trigger an update.

WatchTower will notify you in several ways:

  • email
  • slack
  • msteams
  • gotify
  • shoutrrr

And not all of these methods are proprietary solutions: you're free to host a shoutrrr or a gotify instance, or to use your own SMTP server, so that this information never leaves your information system! *(I am very critical of using msteams, slack or discord to receive notifications.)*

WatchTower will scan for updates on a regular basis (configurable).
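
As an example, here is a minimal sketch of a Watchtower deployment in docker-compose (assuming the usual containrrr/watchtower image and its label/environment names; double-check them against the Watchtower documentation):

```yaml
version: "3"
services:
  watchtower:
    image: containrrr/watchtower
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      - WATCHTOWER_POLL_INTERVAL=3600   # scan interval, in seconds
      - WATCHTOWER_LABEL_ENABLE=true    # only update explicitly labeled containers

  # an application opted in to automatic updates
  whoami:
    image: traefik/whoami
    labels:
      - "com.centurylinklabs.watchtower.enable=true"
```

With WATCHTOWER_LABEL_ENABLE set, anything without the label is left alone, which is exactly what you want for the sensitive applications mentioned above.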

container-updater (from @PAPAMICA)

The most feature-packed (or most complex) solution is not always the best. Papamica has put together a bash script to meet his specific needs (which many other people probably share): an update system that notifies him through Discord and Zabbix.

It is also based on labels, and it also handles the case where you want to update via docker-compose (instead of doing a docker pull + docker restart like Watchtower):

```yaml
labels:
  - "autoupdate=true"
  - "autoupdate.docker-compose=/link/to/docker-compose.yml"
```

Even though I don't use it, there was a time when I was running Zabbix and needed the notifications in Zabbix (which then notified me by mail/Gotify).

Papamica says he plans to add support for private registries (for now, only the GitHub registry and Docker Hub are supported) as well as other notification methods.

Solutions for Swarm

Swarm is probably the container orchestrator I enjoyed the most: it's **simple**! You learn fast, you discover fast and you get results fast. But I've already written about Swarm in another article...

Shepherd

What I like in Papamica's program (and the same goes for Shepherd) is that bash stays the central language: a language most of us know thanks to Linux, and one we can read and modify if we take the time.

Shepherd's code is only ~200 lines of bash, and it works fine that way.

version: "3"
services:
...
shepherd:
build: .
image: mazzolino/shepherd
volumes:
- /var/run/docker.sock:/var/run/docker.sock
deploy:
placement:
constraints:
- node.role == manager

It also accepts several private registries, which is a nice advantage over the other solutions presented. Example:

```yaml
    deploy:
      labels:
        - shepherd.enable=true
        - shepherd.auth.config=blog
```

Shepherd does not include a notification system by default. That's why its creator offers an Apprise sidecar as an alternative, which can relay to many targets: Telegram, SMS, Gotify, mail, Slack, msteams, etc.
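
A sketch of that pattern (I'm writing the variable and image names from memory, so treat them as assumptions and check the Shepherd README for the exact names):

```yaml
services:
  shepherd:
    image: mazzolino/shepherd
    environment:
      # assumed variable name: where Shepherd sends its notification events
      - APPRISE_SIDECAR_URL=apprise:8000
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock

  # hypothetical Apprise sidecar relaying update events to your own services
  apprise:
    image: caronc/apprise   # assumed image; any Apprise microservice should do
    environment:
      - APPRISE_URLS=gotify://gotify.example.com/token
```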

I think this is the simplest and most versatile solution, and I hope to see it used in other contexts (but I won't go into too much detail on the subject; I'd like to write an article about it).

I used Shepherd for a long time and I had no problems.

Solutions for Kubernetes

With Kubernetes, things get less simple. Although with the imagePullPolicy: Always option, you just have to restart a pod to pull the latest image under the same tag, as sketched below.
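
A minimal sketch of that trick (the deployment name and registry are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp                    # placeholder name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: registry.example.com/myapp:latest
          imagePullPolicy: Always   # re-pull the tag on every pod (re)start
```

A 'kubectl rollout restart deployment/myapp' is then enough to pull whatever the tag currently points to.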

For a long time, I used ArgoCD to update my configurations and redeploy my images on every Git update. But ArgoCD only updates the configuration, not the image itself: the methodology doesn't fit, and a dedicated tool is needed for that.

Keel.sh

Keel is a tool that meets the same need: updating pod images. But it incorporates several features not found elsewhere.


If you want to keep the same behavior as the alternatives (i.e. regularly checking for updates), that's possible:

```yaml
metadata:
  annotations:
    keel.sh/policy: force
    keel.sh/trigger: poll
    keel.sh/pollSchedule: "@every 3m"
```

But where Keel excels is that it offers triggers and approvals.

A trigger is an event that tells Keel to update. We can imagine a webhook coming from GitHub, Docker Hub or Gitea triggering the update on the server. *(This avoids a regular crontab, and saves resources, traffic and time.)* As webhooks have become widespread in CI/CD systems, this can be coupled with many use cases.

Approvals are the little gem that was missing from the other tools. I said earlier that updating images is risky and that sensitive applications should not be targeted by automatic updates. Keel developed approvals precisely in response to that.

The idea is that Keel asks for permission before updating the pod: we choose the moment and can check things manually.
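
A sketch of what this looks like as annotations (the policy and count here are illustrative; keel.sh/approvals takes the number of approvals required):

```yaml
metadata:
  annotations:
    keel.sh/policy: major      # example policy: react to major version bumps
    keel.sh/approvals: "1"     # wait for one manual approval before updating
```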

I think it's a pity that Slack or MSTeams is imposed for approvals; because of that, it's a feature I won't use.

A UI

So for now, I use Keel without its web interface. It may bring new features, but I'd rather avoid yet another interface to manage.

Conclusion

Updating a container is not that easy when you want both automation and safety. Even if Keel currently matches my needs, I have the impression that the tools all resemble one another without offering real innovations. (I'm thinking of tackling the canary idea one day.) I hope to discover new solutions soon, ideally ones that fit my needs even better.

· 5 min read
TheBidouilleur

[ This article is from my old-blog, it will also be available in the "Documentation" section of the site ]

Docker Swarm

Introduction

The world of containerization has brought many things to system administration, and has refreshed the concept of DevOps. But one of the main things that containers (and especially Docker) bring us is automation.

And although Docker already handles service deployment well, we can go a little further by automating container management! To answer that, Docker Inc. offers a tool for automatic instance orchestration: Docker Swarm.

What is Docker Swarm?

As previously stated, Docker Swarm is an orchestration tool. With it, we can manage our containers automatically, with rules favoring the high availability and scalability of your services. We can therefore imagine two entirely compatible scenarios:

  • Your site has a peak load and requires several containers: Docker Swarm manages replication and load balancing
  • A machine hosting your containers goes down: Docker Swarm replicates them on other machines.

So we'll see how to configure all that, and take a quick look at the features on offer.

Create Swarm Cluster

For testing, I will use PWD (Play With Docker) to avoid setting all this up on my own infrastructure :)

So I have 4 machines running Alpine on which I will start a Swarm cluster.

The first step is to define a manager: it will be the head of the cluster, as well as the access point to the different machines. In our case, we'll keep it very simple: the manager will be Node1.

To start the Swarm on the manager, simply use the 'docker swarm init' command. But if your system has more than one network interface (fairly common on a server), you must specify the listening IP. In my case, the IP of the LAN interface (where the VMs communicate) is 192.168.0.8. So the command I'm going to run is:

docker swarm init --advertise-addr 192.168.0.8

Docker says:

Swarm initialized: current node (cdbgbq3q4jp1e6espusj48qm3) is now a manager.
To add a worker to this swarm, run the following command:
docker swarm join --token SWMTKN-1-5od5zuquln0kgkxpjybvcd45pctp4cp0l12srhdqe178ly8s2m-046hmuczuim8oddmk08gjd1fp 192.168.0.8:2377
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

In summary: the cluster has started properly, and it gives us the exact command for joining the cluster from the other machines! Since Node1 is the manager, I just need to run the docker swarm join command on Node2 through Node4.

docker swarm join --token SWMTKN-1-5od5zuquln0kgkxpjybvcd45pctp4cp0l12srhdqe178ly8s2m-046hmuczuim8oddmk08gjd1fp 192.168.0.8:2377

Once that's done, you can view the result on the manager with the command 'docker node ls'.

Deploy a simple service

If you are a docker run user and you refuse docker-compose, you should know one thing: I don't like you. Since you're being nice to me, here is a piece of information that won't help you: the equivalent of 'docker run' in Swarm is 'docker service'. But we're not going to get into that in this article.

Instead, we will use the docker-compose equivalent, which is docker stack. So first of all, here's the .yml file:

version: "3"
services:
viz:
image: dockersamples/visualizer
volumes:
- "/var/run/docker.sock:/var/run/docker.sock"
ports:
- "8080:8080"
deploy:
replicas: 1
placement:
constraints:
- node.role == manager

Before you start it, you'll probably notice the deploy section, which lets you give instructions to Swarm. We can add constraints to deploy this on the manager(s), ask the host to limit resource usage, or manage replicas for load balancing.
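
For instance, here is a sketch of a deploy section combining those options (the values are illustrative):

```yaml
    deploy:
      replicas: 2                  # two copies of the service, load-balanced
      resources:
        limits:
          cpus: "0.50"             # cap each replica at half a CPU
          memory: 128M             # and 128 MB of RAM
      placement:
        constraints:
          - node.role == worker    # keep these replicas off the manager
```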

This first container gives us a simple dashboard showing where the containers are placed, so we don't have to go to the CLI just for that.

We will deploy this compose with the following command:

docker stack deploy --compose-file docker-compose.yml swarm-visualize

Once the command completes, simply open the manager's web server on port 8080.

So we now have a web panel to follow what happens to our containers.

Simplified management of replicas

When you access a container, you go through the manager. But nothing prevents the manager from redirecting you to Node3 or Node4. That's why the load can be balanced with a system similar to HAProxy, i.e. by redirecting users to a different container each time a page is loaded.

Here is a docker-compose automatically creating replicas:

```yaml
version: '3.3'
services:
  hello-world:
    container_name: web-test
    ports:
      - '80:8000'
    image: crccheck/hello-world
    deploy:
      replicas: 4
```

And the result is surprising:

We can also adjust the number of replicas. Decreasing it:

docker service scale hello-world_hello-world=2

Or by increasing it:

docker service scale hello-world_hello-world=20

What about High Availability?

I focused this article on Swarm's features and how to use them. If I didn't address this point first, it's because every container created in this post is already managed in HA! For example, I will forcibly stop the 10th replica of the "Hello world" container, which is on Node1. And it will be restarted immediately.
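
You can watch the rescheduling from the manager by listing the service's tasks and the nodes they run on (service name as deployed above):

docker service ps hello-world_hello-world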

Okay, but Docker could already restart containers automatically in case of a problem. How is Swarm different?

And to answer that, I'm going to stop Node4.

Notice that the other nodes automatically (and without any intervention) redistribute the stopped containers. And since we only access services through the managers, they will only redirect us to containers that are running. One of the servers can catch fire: the service will remain redundant, balanced and accessible.

Conclusion

Docker Swarm is a gateway to application clusters, which are incredibly complex to build without a suitable tool. Swarm makes it easy to address specific needs without deep technical expertise. In a production environment, though, it is advisable to move to Kubernetes or Nomad, which are much more complete and powerful alternatives.

I encourage you to try this kind of technology that will govern our world of tomorrow!

Thanks for reading