Simplifying Work with Container Technology
Container technology has made an impact
“Significant advantages over virtual machines”
Four years ago, we reported in one of our blog posts (in German only) on the evolution of virtualization through container technology. Since then, container technology has found its way into many companies, including medium-sized ones. The reason: it offers significant advantages in the deployment of applications compared to the virtual machines that have been in use for many years. While a virtual machine contains both the applications and a complete operating system, a container packages only the applications and the components they need to access an operating system. The operating system itself is not included; it is provided by the host system on which the container runs, and the container engine supplies the virtualization layer. A popular container engine is the Docker engine from the US company Docker Inc., probably the most widely used container engine worldwide. On such a host, multiple containers can run independently of each other, all sharing the same operating system. A change to the operating system therefore applies to all containers at once: it only needs to be updated a single time, e.g. when a security vulnerability has to be fixed.
The basic advantages of containers over virtual machines are that they require less CPU and memory. Containers are also very flexible, since they can run on different systems. Furthermore, several containers of the same type can run in parallel, which improves the scalability of an application.
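As a minimal sketch of this parallel operation (using the public nginx web-server image from Docker Hub as a stand-in for any application), two containers can be started from the same image side by side, each published on its own host port:

```sh
# Start two independent containers from the same image,
# each mapped to a different port on the host.
docker run -d --name web1 -p 8081:80 nginx
docker run -d --name web2 -p 8082:80 nginx

# Both instances share the host's operating system kernel,
# so each starts in seconds and adds little overhead.
docker ps
```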
Container technology simply explained
The basic technical terms around containers are quickly explained: to run an application in a container, the application is first delivered as a template, technically called an image. A running instance of an image is referred to as a container. Images are obtained from a repository called a registry. A registry can be private, so that only a restricted group of people can access it, or public. One such public registry is Docker Hub, operated by Docker Inc., which offers over 100,000 applications from many different vendors.
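These terms map directly onto the Docker command line. A brief sketch, again using a public image from Docker Hub as an example:

```sh
# Pull an image (the template) from a registry, here Docker Hub.
docker pull nginx

# Start a container, i.e. a running instance of that image.
docker run -d --name demo nginx

# List local images and running containers.
docker images
docker ps
```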
Data is stored outside of the containers in a volume. This prevents data loss when a container is upgraded to a new version. To make getting started with container technology easy, Docker provides the Docker Desktop application. In addition to the container engine, it includes a graphical user interface for managing installed images and the containers based on them.
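How a volume decouples data from a container's lifetime can be sketched as follows; the volume name appdata, the mount path /data and the nginx image are only illustrative:

```sh
# Create a named volume managed by Docker.
docker volume create appdata

# Mount the volume into the container; data written to /data
# survives when the container is removed or replaced.
docker run -d --name app -v appdata:/data nginx

# Removing the container leaves the volume (and its data) intact.
docker rm -f app
docker volume ls
```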
Workflow Management via Docker Hub
With Docker Desktop, even a technically savvy citizen developer (power user) can set up an executable environment in their own infrastructure with simple means. For example, they can set up an executable low-code platform focused on workflow management in three simple steps. Such a platform contains easy-to-use tools, such as graphical editors, that allow the development of custom applications with only minimal programming knowledge. Since citizen developers know the processes in their own department, they can map them electronically in a time-saving manner and at least test their feasibility by implementing a prototype.
GBS Workflow Manager is a low-code platform
An example of such a low-code platform is the GBS Workflow Manager, which has been available on Docker Hub since September 2020. Apart from the Docker image of the GBS Workflow Manager, only the images of the required database systems are needed, and these are loaded automatically via a provided configuration file.
Three steps to an executable setup
1. The first step is to download and install Docker Desktop, which can be obtained from the Docker website.
2. The second step is to load the GBS Workflow Manager server image from Docker Hub. This is easily done by entering the command “docker pull gbseuropagmbh/workflowmanager” in the command prompt on Microsoft Windows, or in a terminal on Linux or macOS.
3. The third step is to create the configuration file (“docker-compose.yaml”), as specified in the description on the Docker Hub page (a skeleton is sketched below), and execute it with the command “docker-compose up -d”.
The third step also sets up the required database systems and starts the GBS Workflow Manager. After that, the GBS Workflow Manager can be accessed directly via a web browser.
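The authoritative docker-compose.yaml is documented on the Docker Hub page; the skeleton below only illustrates its general shape. The service names, the database image, the port mapping and the volume name are placeholders, not the actual configuration:

```yaml
version: "3"
services:
  workflowmanager:
    image: gbseuropagmbh/workflowmanager   # the image pulled in step 2
    ports:
      - "8080:8080"                        # example port mapping only
    depends_on:
      - database
  database:
    image: example/database                # placeholder for the required database system
    volumes:
      - dbdata:/var/lib/data               # data kept outside the container
volumes:
  dbdata:
```

Running “docker-compose up -d” in the directory containing this file pulls any missing images and starts all services in the background.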
To update to a new version of the GBS Workflow Manager, the container must first be stopped and shut down, e.g. via Docker Desktop. Then the image can be replaced with a newer version, which can again be obtained from Docker Hub, and the container can be restarted from the new image. The data is preserved because it is stored outside the container.
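Assuming the compose-based setup sketched above, one possible update sequence on the command line looks like this:

```sh
# Stop and remove the running containers (data in volumes is preserved).
docker-compose down

# Fetch the newer version of the image from Docker Hub.
docker pull gbseuropagmbh/workflowmanager

# Recreate the containers from the updated image.
docker-compose up -d
```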
Productive use
If the developed application is to be used productively later on, it can be deployed in the company's IT infrastructure. Once containers are in use across a company, the number of containerized applications can grow quickly, and it becomes easy to lose track of them. Container management software prevents this. Well-known examples are Swarm from Docker, Kubernetes (originally developed by Google) and Amazon ECS. Kubernetes is probably the best-known container management software: it is open-source software for automating containers and provides centralized management and orchestration capabilities for deploying and scaling a larger number of containers.
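To give an impression of how Kubernetes describes such deployments declaratively, here is a minimal, generic Deployment manifest; the names, the image and the replica count are illustrative only, not a configuration for any particular product:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 3                    # Kubernetes keeps three identical containers running
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
        - name: example-app
          image: example/app:1.0   # placeholder image
          ports:
            - containerPort: 8080
```

Such a manifest would typically be applied with “kubectl apply -f”, after which Kubernetes continuously ensures that the desired number of containers is running.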
At GBS, we have been working with container technologies and container management software for several years. In addition to providing the GBS Workflow Manager as a Docker image, we offer another solution based on a container infrastructure with Docker and Kubernetes deployments: iQ.Suite 360, which provides holistic protection for collaboration platforms such as Microsoft SharePoint and Microsoft Teams.