Let’s peep into Docker a bit!

Nidhi Chaurasia
9 min read · Jul 11, 2021
“If deploying software is hard, time-consuming, and requires resources from another team, then developers will often build everything into the existing application in order to avoid suffering the new deployment penalty.”
Karl Matthias, Docker: Up and Running

As we all know, Docker is an open platform for developers and sysadmins to build, ship, and run distributed applications, whether on laptops, data center VMs, or the cloud. So today we will discuss containerization in Docker and how to get started with it.

As an open source containerization platform, Docker enables developers to package applications into containers: standardized executable components that combine application source code with the operating system (OS) libraries and dependencies required to run that code in any environment.

Think of a containerized application as the top layer of a multi-tier cake:

Sounds Interesting!

  1. At the bottom, there is the hardware of the infrastructure in question, including its CPU(s), disk storage and network interfaces.
  2. Above that, there is the host OS and its kernel — the latter serves as a bridge between the software of the OS and the hardware of the underlying system.
  3. The container engine and its minimal guest OS, which are particular to the containerization technology being used, sit atop the host OS.
  4. At the very top are the binaries and libraries (bins/libs) for each application and the apps themselves, running in their isolated user spaces (containers).

Coming to the various Linux commands used with Docker, the systemctl command is the modern tool to control the systemd system and service manager. It replaces the old SysV init management commands. Most modern Linux operating systems, such as CentOS 7, Ubuntu 16.04 or later, and Debian 9, use this new tool.
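
For instance, on a systemd-based distribution the Docker service is typically managed like this (a hedged sketch; it assumes Docker is installed and registered as the docker service):

$ sudo systemctl start docker     # start the Docker daemon
$ sudo systemctl enable docker    # start the daemon automatically at boot
$ sudo systemctl status docker    # check whether the daemon is running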

The screenshot above shows that Docker loaded successfully in the environment, and its additional client details can be observed.
The sudo (Super User DO) command in Linux is generally used as a prefix to a command that only the superuser is allowed to run.

The sudo command in Linux is the equivalent of the “run as administrator” option in Windows. The option of sudo enables us to have multiple administrators.
Users who can use the sudo command need to have an entry in the sudoers file located at “/etc/sudoers”. Example: sudo docker image ls lists all images present in Docker.

docker image ls [OPTIONS] [REPOSITORY[:TAG]] 
docker pull command pulls an image or a repository from a registry.
docker pull [OPTIONS] NAME[:TAG|@DIGEST]
The “ping” command, followed by the web address or IP address of a host, is used to check whether that host is reachable over the network.
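
Putting those commands together, a quick hedged sketch (the image name and host are only illustrative):

$ sudo docker image ls                 # list images already on the machine
$ sudo docker pull busybox:latest      # pull an image from Docker Hub
$ ping -c 4 docker.com                 # check network reachability of a host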

Docker can build images automatically by reading the instructions from a Dockerfile. A Dockerfile is a text document that contains all the commands a user could call on the command line to assemble an image. Using docker build users can create an automated build that executes several command-line instructions in succession.

What is busybox? The Swiss Army Knife of Embedded Linux !

Coming in somewhere between 1 and 5 MB on disk (depending on the variant), busybox is a very good ingredient for crafting space-efficient distributions.

busybox combines tiny versions of many common UNIX utilities into a single small executable. It provides replacements for most of the utilities you usually find in GNU fileutils, shellutils, etc. The utilities in busybox generally have fewer options than their full-featured GNU cousins; however, the options that are included provide the expected functionality and behave very much like their GNU counterparts. It provides a fairly complete environment for any small or embedded system.
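
A hedged way to try busybox through Docker (the --rm flag removes the container when the shell exits):

$ docker run -it --rm busybox sh
/ # ls                      # BusyBox's own ls
/ # echo hello from busybox
/ # exit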

docker ps [OPTIONS] is used to list containers.
docker inspect [OPTIONS] NAME|ID [NAME|ID...] command returns low-level information on Docker objects.
Run a command in a running container by using sudo docker exec -it.
docker exec [OPTIONS] CONTAINER COMMAND [ARG...]
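For example, a hedged sketch of opening a shell inside a running container (the container name is only illustrative):

$ sudo docker ps                          # find the running container
$ sudo docker exec -it my-container sh    # open an interactive shell inside it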
The cd command in Linux is known as the change directory command. It is used to change the current working directory. In the example above, we checked the directories in our home directory and moved into the Documents directory by using cd Documents.
The docker build command builds an image from a Dockerfile and a context. The build’s context is the set of files at a specified location PATH or URL. The PATH is a directory on your local filesystem. The URL is a Git repository location.

The build context is processed recursively. So, a PATH includes any subdirectories and the URL includes the repository and its submodules. This example shows a build command that uses the current directory (.) as build context:

$ docker build .

The build is run by the Docker daemon, not by the CLI. The first thing a build process does is send the entire context (recursively) to the daemon. In most cases, it’s best to start with an empty directory as context and keep your Dockerfile in that directory. Add only the files needed for building the Dockerfile.

To use a file in the build context, the Dockerfile refers to the file specified in an instruction, for example, a COPY instruction. To increase the build’s performance, exclude files and directories by adding a .dockerignore file to the context directory. For details on how to write one, see the .dockerignore documentation.
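
A minimal hedged .dockerignore sketch (the entries are only illustrative of things you rarely want in the context):

node_modules
.git
*.log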

Traditionally, the Dockerfile is called Dockerfile and located in the root of the context. We use the -f flag with docker build to point to a Dockerfile anywhere in the file system.

$ docker build -f /path/to/a/Dockerfile .

We could specify a repository and tag at which to save the new image if the build succeeds:

$ docker build -t nidhi/myapp .

To tag the image into multiple repositories after the build, add multiple -t parameters when you run the build command:

$ docker build -t nidhi/myapp:1.0.2 -t nidhi/myapp:latest .

Before the Docker daemon runs the instructions in the Dockerfile, it performs a preliminary validation of the Dockerfile and returns an error if the syntax is incorrect:

$ docker build -t nidhi/myapp .
[+] Building 0.3s (2/2) FINISHED
=> [internal] load build definition from Dockerfile 0.1s
=> => transferring dockerfile: 60B 0.0s
=> [internal] load .dockerignore 0.1s
=> => transferring context: 2B 0.0s
error: failed to solve: rpc error: code = Unknown desc = failed to solve with frontend dockerfile.v0: failed to create LLB definition:
dockerfile parse error line 2: unknown instruction: RUNCMD

The Docker daemon runs the instructions in the Dockerfile one-by-one, committing the result of each instruction to a new image if necessary, before finally outputting the ID of your new image. The Docker daemon will automatically clean up the context you sent.

Note that each instruction is run independently, and causes a new image to be created — so RUN cd /tmp will not have any effect on the next instructions.
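
A small hedged illustration of this (the base image and paths are arbitrary): each RUN starts from a fresh working directory, while WORKDIR persists for later instructions.

FROM alpine:3.14
RUN cd /tmp       # affects only this single instruction
RUN pwd           # prints / , not /tmp
WORKDIR /tmp      # WORKDIR, by contrast, persists
RUN pwd           # prints /tmp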

Whenever possible, Docker uses a build cache to accelerate the docker build process significantly. This is indicated by the CACHED message in the console output. (For more information, see the Dockerfile best practices guide.)

$ docker build -t svendowideit/ambassador .
[+] Building 0.7s (6/6) FINISHED
=> [internal] load build definition from Dockerfile 0.1s
=> => transferring dockerfile: 286B 0.0s
=> [internal] load .dockerignore 0.1s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/alpine:3.2 0.4s
=> CACHED [1/2] FROM docker.io/library/alpine:3.2@sha256:e9a2035f9d0d7ce 0.0s
=> CACHED [2/2] RUN apk add --no-cache socat 0.0s
=> exporting to image 0.0s
=> => exporting layers 0.0s
=> => writing image sha256:1affb80ca37018ac12067fa2af38cc5bcc2a8f09963de 0.0s
=> => naming to docker.io/svendowideit/ambassador 0.0s

By default, the build cache is based on results from previous builds on the machine on which we are building. The --cache-from option allows us to use a build cache that is distributed through an image registry; refer to the “specifying external cache sources” section in the docker build command reference.
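
A hedged sketch of that workflow with BuildKit (the registry and image names are only illustrative):

# build and push an image that embeds inline cache metadata
$ docker build --build-arg BUILDKIT_INLINE_CACHE=1 -t registry.example.com/myapp:latest .
$ docker push registry.example.com/myapp:latest

# on another machine, reuse the pushed image as a cache source
$ docker build --cache-from registry.example.com/myapp:latest -t myapp .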

When we’re done with our build, we’re ready to look into scanning our image with docker scan, and pushing our image to Docker Hub.
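
As a hedged sketch of those two steps (the repository name is illustrative, and docker scan relies on Docker’s Snyk integration being available):

$ docker scan getting-started                        # scan the image for known vulnerabilities
$ docker tag getting-started nidhi/getting-started   # tag it for Docker Hub
$ docker push nidhi/getting-started                  # push it to the registry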

In order to build the application, we need to use a Dockerfile. A Dockerfile is simply a text-based script of instructions that is used to create a container image.

We need to create a file named Dockerfile in the same folder as the file package.json with the following contents.

# syntax=docker/dockerfile:1
FROM node:12-alpine
RUN apk add --no-cache python g++ make
WORKDIR /app
COPY . .
RUN yarn install --production
CMD ["node", "src/index.js"]
Ensure that the file Dockerfile has no file extension like .txt. Some editors may append this file extension automatically and this would result in an error in the next step.
  1. Then open a terminal and go to the app directory with the Dockerfile. Now build the container image using the docker build command.
docker build -t getting-started .
This command used the Dockerfile to build a new container image. You may have noticed that a lot of “layers” were downloaded. This is because we instructed the builder that we wanted to start from the node:12-alpine image. But since we didn’t have that on our machine, the image needed to be downloaded.

After the image was downloaded, we copied in our application and used yarn to install our application’s dependencies. The CMD directive specifies the default command to run when starting a container from this image.

Finally, the -t flag tags our image. Think of this simply as a human-readable name for the final image. Since we named the image getting-started, we can refer to that image when we run a container.

The . at the end of the docker build command tells Docker to look for the Dockerfile in the current directory.

The client and daemon API must both be at least 1.21 to use this command. We use the docker version command on the client to check the client and daemon API versions.

The command lists all the networks the Engine daemon knows about. This includes the networks that span across multiple hosts in a cluster -

docker network ls [OPTIONS]
The screenshot above shows stopping one or more running containers.

The syntax goes like this:

docker stop [OPTIONS] CONTAINER [CONTAINER...]
Here we have connected a container to a network and then inspected the network’s JSON description.

API 1.21+ The client and daemon API must both be at least 1.21 to use this command. Use the docker version command on the client to check your client and daemon API versions.

docker network connect [OPTIONS] NETWORK CONTAINER
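A hedged sketch of that flow (the network and container names are only illustrative):

$ docker network create my-net                # create a user-defined bridge network
$ docker network connect my-net my-container  # attach a running container to it
$ docker network inspect my-net               # view the network's JSON description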
The cat command reads data from files and prints their contents as output. It helps us to create, view, and concatenate files.

Here, after creating the Dockerfile content through the vim editor, we viewed the file using the cat command.

The sudo docker-compose up command aggregates the output of each container.
The docker-compose down command stops containers and removes the containers, networks, volumes, and images created by up.

By default, the only things removed are:

  • Containers for services defined in the Compose file
  • Networks defined in the networks section of the Compose file
  • The default network, if one is used

Networks and volumes defined as external are never removed.

Anonymous volumes are not removed by default. However, as they don’t have a stable name, they will not be automatically mounted by a subsequent up. For data that needs to persist between updates, use host or named volumes.
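
To make this concrete, a minimal hedged docker-compose.yml sketch (the service names, image, and port mapping are only illustrative):

version: "3"
services:
  web:
    build: .
    ports:
      - "6000:6000"
  redis:
    image: redis:alpine

With that file in place, sudo docker-compose up -d starts both services in the background, and sudo docker-compose down tears them down again.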

After running docker-compose up, we can access the app with the curl command: curl <server-ip or hostname>:6000/

And the app is deployed successfully and running on the server we selected. In my case I have used Redis, which is an open source key-value store that functions as a data structure server.

For connecting via redis-cli (the command-line interface):

$ docker run -it --network some-network --rm redis redis-cli -h some-redis

To start a Redis instance:

$ docker run --name some-redis -d redis

For the ease of accessing Redis from other containers via Docker networking, “protected mode” is turned off by default. This means that if we expose the port outside of our host (e.g., via -p on docker run), it will be open without a password to anyone. It is strongly recommended to set a password (by supplying a config file) if we plan on exposing our Redis instance to the internet.
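
One hedged way to do that without a separate config file is to pass --requirepass to redis-server when starting the container (the password here is only a placeholder):

$ docker run --name some-redis -d redis redis-server --requirepass "changeme"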

Thus, the simplified configuration and enabling technology of Docker help us with:

  • Code Pipeline Management.
  • Developer Productivity.
  • App Isolation.
  • Server Consolidation.
  • Debugging Capabilities.
  • Multi-tenancy.
  • Rapid Deployment.

Keep Learning and Exploring the Ocean of Innovations !

Meticulous Efforts By -

Nidhi Chaurasia

College -Maharaja Agrasen Institute Of Technology ,New Delhi.

Task#2

#regexsoftware #regexsoftwareservices #linux #docker #masterClassbyregex #dockerfile #dockercompose #commands Regex
