Docker in Linux


Docker Installation
You need a 64-bit machine; then follow the steps in the link below.



What is Docker?
Docker is a tool that promises to easily encapsulate the process of creating a distributable artifact for any application, deploying it at scale into any environment, and streamlining the workflow and responsiveness of agile software organizations.
In a nutshell, here is what Docker can do for you: it can get more applications running on the same hardware than other technologies; it makes it easy for developers to quickly create ready-to-run, containerized applications; and it makes managing and deploying applications much easier.

Difference between hypervisor and containers
  • The key difference between containers and VMs is that while the hypervisor abstracts an entire device, containers just abstract the operating system kernel.
  • They are much more efficient than hypervisors in terms of system resources. Instead of virtualizing hardware, containers rest on top of a single Linux instance, which means you can “leave behind the useless 99.9% VM junk, leaving you with a small, neat capsule containing your application.”
  • With a perfectly tuned container system, you can run as many as four to six times the number of server application instances as with Xen or KVM VMs on the same hardware.

Docker Isolation:
It’s often the case that many containers share one or more common filesystem layers. That’s one of the more powerful design decisions in Docker, but it also means that if you update a shared image, you’ll need to re-create a number of containers.
Containerized processes are also just processes on the Docker server itself. They are running on the same exact instance of the Linux kernel as the host operating system. They even show up in the ps output on the Docker server. That is utterly different from a hypervisor where the depth of process isolation usually includes running an entirely separate instance of the operating system for each virtual machine.
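As a quick sketch of this (the container name and the sleep command are arbitrary placeholders), you can start a container and then find its process directly in the host's process table:

$ docker run -d --name isolation-demo ubuntu:latest sleep 1000
$ ps aux | grep "sleep 1000"    # the containerized process is visible on the Docker server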

To simplify this a bit, remember that a Docker image contains everything required to run your application. If you change one line of code, you certainly don’t want to waste time rebuilding every dependency your code requires into a new image. Instead, Docker will use as many base layers as it can so that only the layers affected by the code change are rebuilt.

Every Docker container is based on an image, which provides the basis for everything that you will ever deploy and run with Docker. To launch a container, you must either download a public image or create your own. Every Docker image consists of one or more filesystem layers that generally have a direct one-to-one mapping to each individual build step used to create that image.
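If you want to see those layers for a given image, one way (assuming the image has already been pulled) is the docker history command:

$ docker history ubuntu:latest   # lists each layer, its size, and the build step that created it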

What is a container?
Containers are a fundamentally different approach, where all containers share a single kernel and isolation is implemented entirely within that one kernel. This is called operating system virtualization. The libcontainer project gives a good, short definition of a container: “A container is a self-contained execution environment that shares the kernel of the host system and which is (optionally) isolated from other containers in the system.” The major advantage is resource efficiency, because you don’t need a whole operating system for each isolated function. Since you are sharing a kernel, there is one less layer of indirection between the isolated task and the real hardware underneath. When a process runs inside a container, there is only a thin shim inside the kernel, rather than a whole second kernel being called into while bouncing in and out of privileged mode on the processor.
But the container approach means that you can only run processes that are compatible with the underlying kernel.

Creating a container:
docker run is really a convenience command that wraps two separate steps into one. The first thing it does is create a container from the underlying image. This is accomplished separately using the docker create command. The second thing docker run does is execute the container, which we can also do separately with the docker start command.
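A minimal sketch of the equivalent two-step sequence (the image and command are placeholders):

$ CID=$(docker create ubuntu:latest sleep 1000)   # step 1: create the container from the image
$ docker start "$CID"                             # step 2: actually start it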

Container Name:
When you create a container, it is built from the underlying image, but various command-line arguments can affect the final settings. Settings specified in the Dockerfile are always used as defaults, but you can override many of them at creation time.

docker create --name="awesome-service" ubuntu:latest

You can only have one container with any given name on a Docker host. If you run the above command twice in a row, you will get an error. You must either delete the previous container using docker rm or change the name of the new container.

Labels:
Labels are key-value pairs that can be applied to Docker images and containers as metadata. When new Docker containers are created, they automatically inherit all the labels from their parent image. It is also possible to add new labels to containers so that you can apply metadata that might be specific to that single container.

$ docker run -d --name labels -l deployer=Ahmed -l tester=Asako \
    ubuntu:latest sleep 1000

You can then search for and filter containers based on this metadata, using commands like docker ps.

$ docker ps -a -f label=deployer=Ahmed
CONTAINER ID IMAGE COMMAND ... NAMES
845731631ba4 ubuntu:latest "sleep 1000" ... labels
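You can also read a container's labels back directly; one way is docker inspect with a Go-template format string:

$ docker inspect -f '{{json .Config.Labels}}' labels
{"deployer":"Ahmed","tester":"Asako"}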

Hostname:
By default, when you start a container, Docker will copy certain system files on the host, including /etc/hostname, into the container’s configuration directory on the host, and then use a bind mount to link that copy of the file into the container. We can launch a default container with no special configuration like this:

$ docker run --rm -ti ubuntu:latest /bin/bash

This command uses docker run, which, as described above, wraps docker create and docker start. Since we want to be able to interact with the container that we are going to create for demonstration purposes, we pass in a few useful arguments. The --rm argument tells Docker to delete the container when it exits, the -t argument tells Docker to allocate a pseudo-TTY, and the -i argument tells Docker that this is going to be an interactive session and that we want to keep STDIN open. The final argument in the command is the executable that we want to run within the container, which in this case is the ever-useful /bin/bash.



When you see any examples with a prompt that looks something like root@hostname, it means that you are running a command within the container instead of on the Docker host.

To see the container's hostname:

root@ebc8cf2d8523:/# hostname -f
ebc8cf2d8523
root@ebc8cf2d8523:/# exit

To set the hostname specifically, we can use the --hostname argument to pass in a more specific value.
$ docker run --rm -ti --hostname="mycontainer.example.com" ubuntu:latest /bin/bash



Domain Name Service (DNS):
Just like /etc/hostname, the resolv.conf file is managed via a bind mount between the host and the container. Running mount inside the container shows an entry like:

/dev/sda9 on /etc/resolv.conf type ext4 (rw,relatime,data=ordered)
By default, this is an exact copy of the Docker host’s resolv.conf file. If we didn’t want this, we could use a combination of the --dns and --dns-search arguments to override this behavior in the container:
$ docker run --rm -ti --dns=8.8.8.8 --dns=8.8.4.4 \
    --dns-search=example1.com --dns-search=example2.com ubuntu:latest /bin/bash

Media Access Control (MAC) Address:
Another important piece of information that you can configure is the MAC address for the container. Without any configuration, a container will receive a calculated MAC address that starts with the 02:42:ac:11 prefix.
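If you need to override this, you can pass an address in explicitly with the --mac-address argument; a sketch, with a made-up address from the locally administered range:

$ docker run --rm -ti --mac-address="a2:11:aa:22:bb:33" ubuntu:latest /bin/bash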

Storage Volumes:
There are times when the default disk space allocated to a container or its ephemeral nature is not appropriate for the job at hand and it is necessary to have storage that can persist between container deployments.

Mounting storage from the Docker host is not generally an advisable pattern, because it ties your container to a particular Docker host for its persistent state. But for cases like temporary cache files or other semi-ephemeral state, it can make sense.
For the times when we need to do this, we can use the -v argument to mount filesystems from the host server into the container. In the following example, we are mounting /mydata/session_data to /data within the container:

$ docker run --rm -ti -v /mydata/session_data:/data ubuntu:latest /bin/bash

For example, suppose we mount /mnt1/session_data to /data in the container and create a file there. After exiting the container, the file test.sh is still visible in /mnt1/session_data on the host, as sketched below.
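A sketch of that sequence (the prompt's hostname is a placeholder, and the file contents are arbitrary):

$ docker run --rm -ti -v /mnt1/session_data:/data ubuntu:latest /bin/bash
root@e8f5ff8fcbd2:/# echo 'echo hello' > /data/test.sh
root@e8f5ff8fcbd2:/# exit
$ ls /mnt1/session_data
test.sh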



Resource Quotas:
When people discuss the types of problems that you must often cope with when working in the cloud, the concept of the “noisy neighbor” is often near the top of the list. The basic problem this term refers to is that other applications, running on the same physical system as yours, can have a noticeable impact on your performance and resource availability.

Traditional virtual machines have the advantage that you can easily and very tightly control how much memory and CPU, among other resources, are allocated to the virtual machine. When using Docker, you must instead leverage the cgroup functionality in the Linux kernel to control the resources that are available to a Docker container. The docker create command directly supports configuring CPU and memory restrictions when you create a container.

Constraints are applied at the time of container creation. Constraints that you apply at creation time will exist for the life of the container. In most cases, if you need to change them, then you need to create a new container from the same image and change the constraints, unless you manipulate the kernel cgroups directly under the /sys filesystem.

There is an important caveat here. While Docker supports CPU and memory limits, as well as swap limits, you must have these capabilities enabled in your kernel in order for Docker to take advantage of them. You might need to add these as command-line parameters to your kernel on startup. To figure out if your kernel supports these limits, run docker info. If you are missing any support, you will get warning messages at the bottom, like:
WARNING: No swap limit support



CPU shares:
Docker thinks of CPU in terms of “cpu shares.” The computing power of all the CPU cores in a system is considered the full pool of shares, and Docker assigns the number 1024 to represent that full pool. By configuring a container’s CPU shares, you can dictate how much CPU time the container gets. If you want the container to be able to use at most half of the computing power of the system, you would allocate it 512 shares. Note that these are not exclusive shares: assigning all 1024 shares to a container does not prevent other containers from running. Rather, it is a hint to the scheduler about how long each container should be able to run each time it is scheduled.

If we have one container that is allocated 1024 shares (the default) and two that are allocated 512, they will all get scheduled the same number of times. But if the normal amount of CPU time for each process is 100 microseconds, the containers with 512 shares will run for 50 microseconds each time, whereas the container with 1024 shares will run for 100 microseconds.
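For example, to give a container half of the system's CPU shares, you could pass --cpu-shares (short form -c) at creation or run time; a sketch:

$ docker run --rm -ti --cpu-shares 512 ubuntu:latest /bin/bash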

CPU Pinning:
It is also possible to pin a container to one or more CPU cores. This means that work for this container will only be scheduled on the cores that have been assigned to this container.
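A sketch using the --cpuset-cpus argument (older Docker releases used --cpuset instead) to pin a container to the first two cores:

$ docker run --rm -ti --cpuset-cpus=0,1 ubuntu:latest /bin/bash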

Memory:
We can control how much memory a container can access in a manner similar to constraining the CPU. There is, however, one fundamental difference: while constraining the CPU only impacts the application’s priority for CPU time, the memory limit is a hard limit. Even on an unconstrained system with 96 GB of free memory, if we tell a container that it may only have access to 24 GB, then it will only ever get to use 24 GB regardless of the free memory on the system. Because of the way the virtual memory system works on Linux, it’s possible to allocate more memory to a container than the system has actual RAM. In this case, the container will resort to using swap in the event that actual memory is not available, just like a normal Linux process.
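For example, to cap a container at 512 MB of RAM, you could pass the --memory argument; a sketch (if you also set --memory-swap, that value covers memory plus swap combined):

$ docker run --rm -ti --memory 512m ubuntu:latest /bin/bash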

Ulimits:
Another common way to limit the resources available to a process in Unix is through the application of user limits (ulimits).

Before the release of Docker 1.6, all containers inherited the ulimits of the Docker daemon. This is usually not appropriate because the Docker server requires more resources to perform its job than any individual container.
It is now possible to configure the Docker daemon with the default user limits that you want to apply to every container. The following command would tell the Docker daemon to start all containers with a hard limit of 150 open files and 20 processes:
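A sketch of such a daemon invocation, using dockerd's --default-ulimit flag (the value syntax is soft:hard; the soft limits of 50 and 10 here are assumptions):

$ sudo dockerd --default-ulimit nofile=50:150 --default-ulimit nproc=10:20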

Useful Docker Commands
  • To remove all containers on a host: sudo docker rm $(sudo docker ps -a -q)
  • To build a Docker image from a Dockerfile: sudo docker build -t "opengrok:latest" .
  • To save a Docker image to a file: sudo docker save -o opengrokimage opengrok
  • To load a Docker image from a file: sudo docker load -i opengrokimage
  • To get the Docker version: sudo docker version
  • To get server information: sudo docker info
  • To list running containers: sudo docker ps
  • To get a shell inside a running container: sudo docker exec -t -i 2b7485dc185f /bin/bash

Difference between Entrypoint and CMD
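In brief: ENTRYPOINT sets the executable that always runs when the container starts, while CMD supplies the default command, or default arguments to the ENTRYPOINT, which can be overridden by arguments to docker run. A minimal Dockerfile sketch (the image and arguments are placeholders):

FROM ubuntu:latest
RUN apt-get update && apt-get install -y iputils-ping   # ping is not in the base image
ENTRYPOINT ["ping"]            # always runs; docker run arguments are appended after it
CMD ["-c", "4", "localhost"]   # default arguments; replaced by any docker run arguments

With this image, docker run myimage runs ping -c 4 localhost, while docker run myimage -c 2 example.com replaces only the CMD portion.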

CGroups
Cgroups allow processes to be grouped together and ensure that each group gets a share of memory, CPU, and disk I/O, preventing any one container from monopolizing these resources.

The implementation of user namespaces allows a process to have its own set of users and, in particular, allows a process to have root privileges inside a container but not outside of it.

Containers do not emulate a hardware layer; they use cgroups and namespaces in the Linux kernel to create lightweight, virtualized OS environments with near bare-metal speed.
Docker is designed around running a single process per container. The default Docker base-image OS template is not designed to support multiple applications, processes, or services such as init, cron, syslog, and ssh.

To run multiple processes in Docker you need a shell script or a separate process manager like runit or supervisor. But this is considered an 'anti-pattern' by the Docker ecosystem and the whole architecture of Docker is built around single process containers.
