“Docker containers are created by using [base] images. An image can be basic, with nothing but the operating-system fundamentals, or it can consist of a sophisticated pre-built application stack ready for launch.
When you build an image with Docker, each action taken (e.g. a command executed such as apt-get install) forms a new layer on top of the previous one. These base images can then be used to create new containers.
In this DigitalOcean article, we will look at automating this process as much as possible, and demonstrate best practices and methods for making the most of Docker and containers via Dockerfiles: scripts to build containers, step-by-step, layer-by-layer, automatically from a source (base) image…”
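As an illustrative sketch of such a Dockerfile-driven build (the base image, package and tag names here are assumptions, not taken from the article):

```shell
# Write a minimal Dockerfile; each instruction becomes one image layer.
cat > Dockerfile <<'EOF'
FROM ubuntu:12.04
RUN apt-get update
RUN apt-get install -y nginx
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
EOF

# Build the image step-by-step, layer-by-layer; Docker caches each layer,
# so unchanged steps are not re-executed on the next build.
docker build -t myrepo/nginx .
```

Because each instruction is cached as its own layer, reordering the Dockerfile so that rarely-changing steps come first makes rebuilds much faster.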
“Deis (pronounced DAY-iss) is an open source PaaS that makes it easy to deploy and scale LXC containers and Chef nodes used to host applications, databases, middleware and other services. Deis leverages Chef, Docker and Heroku Buildpacks to provide a private PaaS that is lightweight and flexible…”
“A recurring question on the Docker mailing list and on the Docker IRC channel is “how can I change the network range used by Docker?”. While Docker itself doesn’t have a configuration option to change this network range (yet!), it is very easy to change it, and here is how…”
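The era-appropriate workaround looks roughly like this (the bridge name and address range are illustrative; requires root and the bridge-utils package):

```shell
# Stop the Docker daemon and remove its default bridge.
service docker stop
ip link set docker0 down
brctl delbr docker0

# Create a replacement bridge on the range you want (172.30.0.0/16 is illustrative).
brctl addbr bridge0
ip addr add 172.30.0.1/16 dev bridge0
ip link set bridge0 up

# Start the Docker daemon against the new bridge instead of docker0.
docker -d -b bridge0
```

Containers started after this will get addresses allocated from the new bridge's subnet.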
“A few days ago we met with Jérôme Petazzoni from Docker and discussed some interesting technical issues. I already mentioned one idea here: Run Docker on any OS in a Headless Hypervisor. Today I’ll write about an idea suggested by Jérôme — CRIU snapshots for LXC containers…”
“In this DigitalOcean article, written especially with those who host multiple web applications in mind (e.g. multiple WordPress instances, Python applications, etc.), we are going to create Docker images to quickly start running (on-demand) Memcached containers which can be operated individually. These containers, kept and secured within their own environments, will work with the hosted applications to help them perform better and faster…”
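A minimal sketch of launching one such per-application container (the image name, memory limit and port are assumptions for illustration):

```shell
# Start a Memcached container for one application; Docker NATs a host port
# to the container's 11211.
CID=$(docker run -d -p 11211 myrepo/memcached memcached -u daemon -m 64)

# Ask Docker which host port was mapped to the container's 11211,
# so the application can be pointed at it.
docker port $CID 11211
```

Running one container per application keeps each Memcached instance isolated, so one site cannot evict another site's cache entries.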
“If you’re just starting out with Docker, it’s super easy to follow the examples, get started and run a few things. However, moving to the next step, making your own Dockerfiles, can be a bit confusing. One of the more common points of confusion seems to be:
Where are my Docker images stored?
I know this certainly left me scratching my head a bit. Even worse, as a n00b, the last thing you want to do is publish your tinkering on the public Docker Index…”
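The short answer is easy to check on the host (the path below is the Docker daemon's default; the exact layout underneath it varies with the storage driver):

```shell
# By default the Docker daemon keeps its state, including image layers,
# under /var/lib/docker on the host (layout depends on the storage driver).
sudo ls /var/lib/docker

# List the images the local daemon knows about, by repository and tag.
docker images
```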
“At work, we are evaluating Docker as part of our “epic next generation deployment platform”. One of the requirements that our operations team has given us is that our containers have “identity” by virtue of their IP address. In this post, I will describe how we achieved this.
But first, let me explain that a little. Docker (as of version 0.6.3) has two networking modes: one which goes slightly further than what full virtualisation platforms would typically call “host only”, and no networking at all (if you can call that a mode!). In “host only” mode, the “host” (that is, the server running the container) can communicate with the software inside the container very easily. However, accessing the container from beyond the host (say, from a client – shock! horror!) isn’t possible.
As mentioned, Docker goes a little bit further by providing “port forwarding” via iptables/NAT on the host. It selects a “random” port, say 49153 and adds iptables rules such that if you attempt to access this port on the host’s IP address, you will actually reach the container. This is fine, until you stop the container and restart it. In this case, your container will get a new port. Or, if you restart it on a different host, it will get a new port AND IP address.
One way to address this is via a service discovery mechanism, whereby when the service comes up it registers itself with some kind of well-known directory and clients can discover the service’s location by looking it up in the directory. This has its own problems – not least of which is that the service inside the container has no way of knowing what the IP address of the host it’s running on is, and no way of knowing which port Docker has selected to forward to it!
So, back to the problem in hand. Our ops guys want to treat each container as a well-known service in its own right and give it a proper, routable IP address. This is very much like what a virtualisation platform would call “bridged mode” networking. In my opinion, this is a very sensible choice…”
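A rough sketch of what such a “bridged mode” setup involves by hand (the bridge name, interface names, addresses and the $PID variable are all illustrative; run as root, with $PID set to the container's init process ID):

```shell
# Make the container's network namespace visible to iproute2.
mkdir -p /var/run/netns
ln -s /proc/$PID/ns/net /var/run/netns/$PID

# Create a veth pair: one end stays on the host bridge,
# the other is moved inside the container.
ip link add veth-host type veth peer name veth-guest
brctl addif br0 veth-host
ip link set veth-host up
ip link set veth-guest netns $PID

# Inside the container, rename the interface and give it a routable address.
ip netns exec $PID ip link set veth-guest name eth1
ip netns exec $PID ip addr add 10.1.1.10/24 dev eth1
ip netns exec $PID ip link set eth1 up
```

Jérôme Petazzoni’s pipework script automates essentially this sequence, which is why it became the go-to tool for giving containers real, routable addresses in this era.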
“This post shows how to configure Docker and Open vSwitch to provide container isolation, using VLANs between two hosts running Open vSwitch. This lab was done to support testing of the configuration described here, but in a multi-cloud environment.
The goal is to set up two VLANs linking containers on the two hosts, with a GRE tunnel between the two switches. Here is a summary of the overall configuration…”
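That configuration can be sketched with ovs-vsctl, run on each host (the remote IP, port names and VLAN tags are illustrative):

```shell
# Create the Open vSwitch bridge on this host.
ovs-vsctl add-br br0

# Add a GRE tunnel port pointing at the other host's switch.
ovs-vsctl add-port br0 gre0 -- set interface gre0 type=gre \
    options:remote_ip=192.168.0.2

# Attach each container's host-side interface to the bridge with a VLAN tag:
# containers tagged 10 can only reach other tag-10 containers, even across
# the GRE tunnel, giving isolation between the two VLANs.
ovs-vsctl add-port br0 veth-c1 tag=10
ovs-vsctl add-port br0 veth-c2 tag=20
```

The mirror-image commands on the second host (with remote_ip pointing back at the first) complete the tunnel.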