The cautious Linux user doesn't like surprises. To limit the surprises an application can spring on you, the simplest approach is to isolate it.
Without going as far as Bash (ahem…), not wanting Skype, Chrome, Steam, Minecraft, … and even Firefox to see the rest of the system is more than legitimate.
The most traditional solution is the classic chroot: easy to set up, but with a controversial reputation (especially among people who use BSD jails).
AppArmor offers its own solution too, but without inspiring much enthusiasm.
Buzzword-wise, these technologies are lagging behind anyway, so why not simply use Docker?
Docker is simply a tool designed to launch an application in a confined context (RAM, CPU, disk, network), in a simple and economical way. Nobody forces you to stick the word Cloud onto it, a word you already have so much trouble explaining.
Isolating an application means giving it access only to the services it needs, and doing so explicitly.
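That "explicit access only" idea maps directly onto Docker's run-time flags. A minimal sketch (flag names from the modern Docker CLI; the image and command are placeholders):

```shell
# Run a throwaway shell in a container, granting only an explicit
# slice of each resource: a RAM cap, a CPU cap, no network at all,
# and a read-only root filesystem.
docker run --rm -it \
  --memory 256m \
  --cpus 0.5 \
  --network none \
  --read-only \
  ubuntu /bin/bash
```

Everything not explicitly granted here (host filesystem, network, extra devices) simply does not exist from the container's point of view.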
You just built a container which contains a minimal operating system, and which only runs your app. But the operating system inside the container is not configured correctly. A proper Unix system should run all kinds of important system services. You're not running them, you're only running your app. "What do you mean? I'm just using Ubuntu in Docker. Doesn't the OS inside the container take care of everything automatically?"
Meet Ansible, a system orchestration tool. It has no dependencies other than Python and SSH. It doesn't require any agents to be set up on the remote hosts, and it doesn't leave any traces after it runs either. What's more, it comes with an extensive, built-in library of modules for controlling everything from package managers to cloud providers, to databases and everything else in between. If you've spent more time writing cookbooks than using them, Ansible will be your cure.
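To make that concrete, here is a hypothetical minimal playbook; the host group and package names are placeholders, not taken from the article:

```yaml
# site.yml -- install and start memcached on a group of hosts.
- hosts: webservers
  become: yes
  tasks:
    - name: Install memcached
      apt:
        name: memcached
        state: present
    - name: Ensure memcached is running
      service:
        name: memcached
        state: started
```

You would run it with something like `ansible-playbook -i inventory site.yml`: no agent to install, just SSH access to the hosts in the inventory.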
Meet Docker, a utility for creating virtualized Linux containers for shipping self-contained applications. As opposed to a traditional VM, which runs a full-blown operating system on top of the host, Docker leverages Linux Containers (LXC), which run in the same kernel with no hypervisor overhead. This results in more efficient usage of system resources, by trading away some of the isolation specific to hypervisors.
What we're going to do now is start from our debootstrap image and automate the creation of a system which will run ssh. The resulting image can be further extended in the future to run additional services - and we'll demonstrate that by adding memcached to it, giving an image with both services running.
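A hypothetical sketch of that idea as a Dockerfile (the base image name is a placeholder, not the one from the article):

```dockerfile
# Start from a debootstrapped base image and layer sshd, then
# memcached, on top of it.
FROM local/debian-wheezy
RUN apt-get update && apt-get install -y openssh-server memcached
# A real image would add a small supervisor (runit, supervisord, ...)
# to keep both services running inside the single container.
```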
It is often said that Linux Containers (LXC) are not secure. That was definitely true three years ago, but they have gotten much better. Here is a quick overview of the current challenges, as well as ways to improve ...
This article is a brief introduction to Docker, a utility for manipulating lightweight containers, allowing you to quickly spin up multiple isolated environments on a single host machine.
To get started you need to ensure that you have a "modern kernel". Happily the kernel which is available in the Wheezy release of Debian GNU/Linux is recent enough to contain all the required features.
Today's announced collaboration between Red Hat and dotCloud, the company behind Docker, is exciting for a lot of reasons. As the release notes: "Docker and OpenShift currently leverage the same building blocks to implement containers, such as Linux kernel namespaces and resource management with Control Groups (cGroups). Red Hat Enterprise Linux Gears in OpenShift use Security-Enhanced Linux (SELinux) access control policies to provide secure multi-tenancy and reduce the risk of malicious applications or kernel exploits."
If you're just starting out with Docker, it's super easy to follow the examples, get started and run a few things. However, moving to the next step, making your own Dockerfiles, can be a bit confusing. One of the more common points of confusion seems to be:
Where are my Docker images stored?
Interactive commandline tutorial
The best way to learn how Docker works is to use it!
This hands-on tutorial is 100% online, so you don't need to install a thing. In about 10-15 minutes you'll be familiar with the basic Docker commands.
(Very well done)
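For a taste of what the tutorial covers, these are the kind of basic commands it walks through (illustrative, using the stock ubuntu image):

```shell
docker pull ubuntu                 # fetch an image from the registry
docker run ubuntu echo hello       # run a one-off command in a container
docker ps -a                       # list containers, including stopped ones
docker images                      # list images stored locally
```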
This project revolves around a Vagrantfile (and VirtualBox) for testing out the deployment of a multi-node SaltStack-based cluster. Node.js and Docker (with a salt-minion nodejs ubuntu image) are all included and built during the "vagrant up" command.
The Vagrantfile deploys a Salt master and three Salt minions. All four virtual machines install Docker with a salt-minion nodejs Docker image. Whenever this image is used to create containers, it automatically communicates with the master and exchanges keys. This means all spawned Linux containers run salt-minion, which opens up endless possibilities for rapid deployment of Linux environments.