Docker alternatives?

I've been using Docker on my server, mainly for two reasons: isolation and ease of configuration.

Originally I was just going to run a VM for each service, but then somebody told me about Docker and I figured it's more resource efficient (the CPU on my server isn't great). Docker has been working okay, but in practice fucking around with Dockerfiles is very hard, and I'm basically dependent on what other people make. It's really not practical to just write a Dockerfile from scratch like their docs say; it gets complicated because of how Docker works. Managing the container ports and tracking resource usage is a bit of a headache as well.

Now I found out about www.boycottdocker.org and I'm wondering if I can do better. What are my alternatives in this case? If I just install KVM and make a bunch of VMs with various linuxes in them, will that be as efficient as Docker?

since docker is yet another attempt by Linux to NIH freebsd technology you can just use jails
since each jail is a full base system with its own init and cron and whatnot you will just end up managing multiple systems

I have no issue with freebsd but I like Linux and would like to keep using it

KVM would be real virtualization. Not nearly as efficient as containers. Luckily Docker is not the only container solution.


systemd's nspawn is a bit like jails, I think. I've heard both of them described as "chroot on steroids".

You put a system directory tree somewhere (preferably /var/lib/machines, but anywhere will do), like you'd use for a chroot. You can build one with debootstrap for Debianoids, yum or dnf for Red Hat-likes, or pacstrap for Arch; it's detailed in the systemd-nspawn man page.
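For example, a minimal sketch of the debootstrap route (the target directory, suite and mirror here are just placeholders, and debootstrap has to be installed on the host):

debootstrap --include=systemd,dbus stable /var/lib/machines/mydebian http://deb.debian.org/debian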

Then you launch a container like this:

systemd-nspawn -D /path/to/your/container   # to get a shell in the container
systemd-nspawn -M container                 # for a container at /var/lib/machines/container
systemd-nspawn -M container /sbin/init      # to "boot" the container's own init system
systemd-nspawn -M container -a apt upgrade  # -a puts the command in PID 2, so it doesn't have to function like an init system; always use this when you're not running an init system or shell as the command
There are lots of other options. For example, to make the container's default SSH port 22 available as port 23 on the host and restrict all other networking in the container, you'd add "--private-network -p tcp:23:22".

nspawn creates a whole new process tree, sets up /etc/resolv.conf to follow the host, takes care of the hostname for you, and just makes it into a nice container in general.

Any file changes in the container filesystem happen in the real filesystem too, just like a chroot. You don't have to fuck around with instances of container images, the persistence is all there in the tree. Note that multiple containers using the same file tree won't be aware of each other, except through changes they make to the filesystem.

enjoy your 20 copies of /bin/ls faggot

Run docker in a VM.
Cluster common services in single VMs.
It's nowhere near as secure as separate VMs, but if you're just running shit on a small LAN, it's fine.
Better yet, ignore VMs entirely if security isn't an issue.

It can solve exactly that problem with copy-on-write containers in btrfs subvolumes, actually. Check out --template in systemd-nspawn(1).
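A minimal sketch of what that looks like, assuming /var/lib/machines sits on btrfs and you've already built a pristine tree to clone from (the container and template names here are made up):

systemd-nspawn -M web1 --template=/var/lib/machines/debian-base -b

The first run snapshots debian-base into /var/lib/machines/web1 as a copy-on-write subvolume, so a pile of containers can share one set of binaries on disk.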

Why run docker at all if you have a VM?

Some fucking developers only release docker images.

Sure it can... if you're willing to trust a filesystem whose devs seem to think that botching things is OK, because with enough people and time it will eventually just werk™

FreeBSD jails, Solaris Zones, or KVM for the OS of your choice. I use them for what you said, isolation and ease of configuration. I can spin up a jail from a template that I've made in seconds and inject whatever packages and files I want, from a separate template or some kind of shared location if I want. The fact that they're their own system makes config nice: you don't have to worry about conflicting with anything else or breaking something, you just configure it the way it needs to be and you're done. The flexibility is too nice.

While on the subject, other than providing downloadable containers, what does Docker do that BSD jails don't?

I've been pretty surprised by the rise of Docker, given it seems to be a BSD jails / Solaris zones wannabe 15 years too late.

hype

Your problem is that you're an incompetent who doesn't know how development like this works in the kernel.
Just because it's merged, doesn't mean it's finished.

So how do KVM vs. Qemu vs. VirtualBox compare? This is specifically for running 10-20 VMs on a Linux host (it's a Sempron so not very beefy), each one running some online service on some minimal server Linux. I'm concerned with CPU overhead first, memory overhead second.


I'm leaning towards either KVM or Qemu right now. A lot of the things I want to install I don't really know how they work, so I learn as I go along. The instructions are always for setting things up on a normal Linux OS, so I figure if I just get a shell into the VM I can pretend the host OS doesn't exist and install everything normally, and then the VM will isolate things automatically without me doing anything special.

Having looked up jails on Linux, it seems like people use chroots, but it sounds like I'd need to understand how a program works (i.e. what files it creates and where) to chroot it properly. Also, does chroot isolate them in memory?

But how long would it take for me to make that template?


Run on Linux?

I've used Docker a lot but not jails. From what I know they're not really very different, just two ways of doing the same thing. Docker is pretty easy to use; most things I want already have a Dockerfile somewhere I can just use as is. Since those typically handle most configuration, setting up a service with Docker is actually less work than setting it up on the host OS without any virtualization. The syntax is simple enough that I can sort of skim it and get a rough idea of what it does, without spending any time learning it.

Like the other anon says, if you look at what people on the internet praise Docker for, it's mostly hype and bullshit. But I think there are some real advantages (mostly for the novice, casual user rather than the expert full-time sysadmin) that people don't really talk about, because "you can be dumb and still understand our software" isn't a sexy reason to recommend it to people. But hey, I like not having to think too hard.

Not very.
freebsd.org/doc/handbook/jails.html

It boils down to dropping the base system into a directory, picking which directories you want to be writable in the jail, creating a spot for the writable data to live, linking them, then adding the jail to the config.

That's the manual way though; you could probably automate it with ezjail but I've never used it, I wrote my own scripts for jail management. "jail make.sh" without error checking boils down to this:

mkdir /usr/home/Jails/$Name
cpdup /usr/home/Jails/skeleton /usr/home/JailSpace/$Name
mount -t nullfs -o ro /usr/home/Jails/mroot /usr/home/Jails/$Name
mount -t nullfs -o rw /usr/home/JailSpace/$Name /usr/home/Jails/$Name/symlinks
service jail start $Name

My Jails live in /usr/home/Jails/*Jailname*
Their writable data lives in /usr/home/JailSpace/*Jailname*
mroot is a read-only FreeBSD base with the writable directories removed; the removed directories are turned into symlinks which point to /symlinks, which will be writable to the jail. skeleton contains the portions of the base which are writable. You can think of mroot as the read-only template and the skeleton as the writable template. This setup was based on an older version of the linked document.

This kind of setup makes it real easy for me to swap out bases, config data, change mountpoints, etc.

This is just a basic example; it's so open-ended I don't know what else I can really say, you can set them up and use them however you want. It's not super complicated, it's essentially just installing an OS into a directory with your own special rules.
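To make the "adding the jail to the config" step concrete, here's a rough sketch of an /etc/jail.conf entry for a jail laid out like that (the name, hostname and IP are made up):

cat >> /etc/jail.conf <<'EOF'
myjail {
    path = "/usr/home/Jails/myjail";
    host.hostname = "myjail";
    ip4.addr = "192.168.1.50";
    exec.start = "/bin/sh /etc/rc";
    exec.stop = "/bin/sh /etc/rc.shutdown";
    mount.devfs;
}
EOF

With that in place, "service jail start myjail" (the last line of the script above) knows what to bring up.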

btrfs.wiki.kernel.org/index.php/Status
we know which btrfs features are stable and which aren't. stop the drama bsd bitch.


kvm > virtualbox.
qemu is a CPU emulator, not a hypervisor. qemu allows you to run a binary compiled for a different cpu architecture. this could be a shell and a whole chroot.
kvm can interact with qemu to run non-native virtual machines.

so, in terms of speed (and inverse security isolation):
nothing > chroot > docker/jails/nspawn/snap/flatpak-type-of-shit > kvm > virtualbox > kvm+qemu >> separate hardware >>> airgapped machines >>>> IPoAC >>>>> don't use computers

Or maybe there's something wrong with how development like this works in the kernel. The similarly complicated ZFS managed to have no data-loss bugs for its entire existence. There's no reason to use the "if we hack on it for long enough, it will eventually work" filesystem when we have a similar fs which was written and designed by clearly more competent people.

sun didn't have masses of idiots to test their shit for free

LXC

No, you're just an incompetent who doesn't know how shit works.
The on-disk format was finalized.
Additional features come after that.

Stop running all your services as root.

That sounds unlikely. Do you mean "its entire public existence"?

Linux development happens out in the open. Many unfinished things are public. I would guess that ZFS did have data-destroying bugs in the stage of its development that btrfs is now in, but that Sun didn't ship it at that stage. But correct me if I'm wrong, I'm not a ZFS expert.

Docker really is the most mature, and has the most plumbing at this point. But that will probably change.

I think you're going about it all wrong. You shouldn't look for an alternative, you should look for something that fulfills your needs and utilizes it.

For example, github.com/dokku/dokku
Uses heroku buildpacks, basically your own infrastructure, pushable via git.

It has a few quirks, especially with shared files from host to container, but it just works.

For example, want to deploy something that already has a dockerfile? Want to use TLS with it? Let's try Gogs.

$ dokku apps:create gogs
$ dokku proxy:ports-add gogs http:80:3000
$ dokku docker-options:add gogs deploy -p 2222:22
$ mkdir -p /var/lib/dokku/data/storage/gogs
$ chown -R dokku:dokku /var/lib/dokku/data/storage/gogs
$ dokku storage:mount gogs /var/lib/dokku/data/storage/gogs:/data
$ dokku postgres:create gogs-database
$ dokku postgres:link gogs-database gogs
$ docker pull gogs/gogs:0.9.71
$ docker tag gogs/gogs:0.9.71 dokku/gogs:0.9.71
$ dokku tags:deploy gogs 0.9.71
$ dokku letsencrypt gogs
$ dokku proxy:ports-add gogs https:443:3000
$ dokku checks:disable gogs
$ dokku proxy:ports-remove gogs http:22:22
$ dokku proxy:ports-remove gogs http:3000:3000

This is probably one of the most complex examples, deploying gogs.
After this, to upgrade, you just `docker pull`, `docker tag` and `dokku tags:deploy`. That's it.
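For instance, bumping to a newer tag would look something like this (the version number here is hypothetical):

docker pull gogs/gogs:0.9.97
docker tag gogs/gogs:0.9.97 dokku/gogs:0.9.97
dokku tags:deploy gogs 0.9.97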

Want to deploy a Ruby, PHP or whatever application, or something that doesn't have a dockerfile? You just push it to dokku via git. Maybe configure various options. It will automatically detect and use heroku buildpacks.
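A rough sketch of that flow, with the app and server names made up:

dokku apps:create myapp                        # on the server
git remote add dokku dokku@your-server:myapp   # in your local checkout
git push dokku master                          # dokku picks a buildpack and deploys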

There are also various infrastructure helpers around docker, dokku is just the most bare bones and easiest to set up. Can get started on a shitty VPS.

Also, if you're multi-host or cluster, you're getting into swarm or deis territory.

And that's another reason to use ZFS over btrfs. They didn't rely on masses of idiots to eventually discover all the bugs; instead they properly designed, verified and bug-tested their software, which is why it hasn't had a data-loss bug, unlike btrfs. It's a nice added value, but relying on this sort of "free testing" is stupid.


This is theoretically possible, yes, but no such bug got into any release.
I'm not sure how well that can be compared. ZFS was done and production ready in 4 years, and even though it's still being worked on and new features are being added, no release has had a data-destroying bug. Meanwhile btrfs has been in a state of perpetual development for 9 years, and even though the on-disk format has been stabilized, it still has serious bugs in it.

Now if btrfs was the only product of its kind, then maybe it could be said that it's good enough, but since we have ZFS, there's really no contest.

So you don't know.
ZFS was an ongoing effort with a team; btrfs is one guy who worked on ReiserFS, plus occasional contributions from others who are interested in it for the future and are using it in production right now, btw.

You don't have a clue what the fuck you are talking about.

Why are *BSDfags like you so insufferable? You just spout horse shit? Are you really this attached to your precious filesystem?

Also, why are you even shitting up this thread? Don't you have a *BSD thread to go spout your superiority in, "muh filesystem was RAWK SOLID STABLE oh god please use my favorite operating system I'm so alone I need others to talk to ;-:"

I'm not even using BSD.

One more reason to prefer ZFS. As I said, if there wasn't a better contender, it may be considered ok, but as you said yourself, ZFS had "ongoing effort with a team" - and the results are apparent.

Right, nice fallacy that a team of software developers obviously produces better software.

Good, maybe you should start your own thread and jerk off to it there instead of derailing this one, you fucking subhuman faggot.

ZFS doesn't originate on BSD nor is it even exclusive to BSD, why would you cast shade on BSD users like that? I'm not that user and I think I've been pretty helpful to the OP while you poke fun at my OS of choice because someone else is talking about their filesystem of choice. That's rude, user.

Read: when OpenSolaris shat its pants and couldn't boot after an unclean shutdown, that was totally the sound card driver's fault and not ours goy

So much for "Go apps can be statically compiled to make sysadmins' jobs trivial".

It's a complex example because it requires persistent folders on disk, versus just a database.
Every time you restart a traditional Docker container, those files would be gone if you didn't use volumes.

And yeah, you could run gogs outside of a container. But the point of these threads is about containerizing applications.

Because it's always BSD fags jerking off to ZFS, even though it's traditionally Solaris. No one cares about his filesystem of choice, it's irrelevant to the thread.

Oh, and the SSH port bit.
Mapping 2222 to the gogs ssh port, because 22 will obviously be in use, is what makes it more complex.

Second the vote for LXC; it's more secure than Docker, especially with a separate user namespace. Don't know where nspawn falls in there. QEMU and KVM go together, you don't pick one or the other. Technically you could run just QEMU, but then everything gets virtualized in software, aka slow af.
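If you want to try LXC, a minimal sketch of an unprivileged container from the download template (the distro/release/arch and container name are just examples, and it assumes your user already has subuid/subgid ranges set up):

lxc-create -t download -n testbox -- -d debian -r bookworm -a amd64
lxc-start -n testbox
lxc-attach -n testbox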

Thanks, that ladder explains it. What I was suspecting too.

Really? Virtualbox is faster? Interesting.

Sounds like I'm good with kvm then. I know perf will take a hit but I'm hoping it's not like 50% drop, since I see kvm folks claim it's almost negligible.

I never had performance issues with Docker as far as overhead goes, but my dockerized ownCloud would push the CPU to 100% by spawning httpd daemons and I couldn't even figure out why, because the container obscured everything. So I found that on paper Docker is faster because of the lower overhead, but in practice the learning curve means you can't manage it as easily.

Anyhow, owncloud got split in half by drama now, and once I switch to full on VMs it should be much easier to get syncthing working, so I guess that problem kinda takes care of itself.

How easy is it for the host OS to backup data from the VMs? Or should I just treat the VMs as separate machines, and use network-based (in this case host OS fake-networking with guest OS) backup solutions?

Isn't LXC like a rougher and less user-friendly Docker? They always say it's built on LXC but I could never figure out what that means.


I don't really mind the BSDfags, even though as I said I don't want to switch OSes right now so I wouldn't go with their suggestions. It's nice to hear a different perspective sometimes, and maybe a year from now I will decide to switch to BSD because of it.


If you wanted persistent files, why wouldn't you make a volume though? I always see Docker critics bitch about this, but it's really easy to make volumes, and most containers come with config and data volumes set up by default.

The officially recommended Docker way is to use container volumes btw, which seems kind of complicated for simple cases.

Dokku storage plugin is using docker volumes to mount a host directory.

docs.docker.com/engine/tutorials/dockervolumes/#/mount-a-host-directory-as-a-data-volume
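For the simple cases it really is one flag either way; a quick sketch (the image, volume name and paths are just examples):

docker volume create gogs-data                    # named volume managed by Docker
docker run -d -v gogs-data:/data gogs/gogs        # use the named volume
docker run -d -v /srv/gogs/data:/data gogs/gogs   # or bind-mount a host directory, like the dokku storage plugin does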

Just use OpenVZ. Most of the power of a VM but saving most of the resources. It hits a pretty good balance and is very fast and stable: en.wikipedia.org/wiki/Openvz
I used to use OpenVZ at work when we were hurting for money. It's a great poor man's virtualization solution when you can't afford to throw tons of money into computing resources.

Never use docker or anything like it for security. Linux from userspace is hopelessly insecure. Use KVM for security.

Fun fact: ZFS was stable on its first release and always has been. You look like a child trying to figure out how to shit when you use btrfs.

Use FreeBSD with jails

btrfs is hopeless. Give up on it already.

It's being run in production at multiple companies.
Subvolumes are already stable.
RAID is not stable, though.
Quit being a faggot and derailing the thread.

So is Gentoo; the existence of a few retarded companies doesn't make it a good idea.
That's a layering violation; LVM from 1999 is a much better solution, as it works on all filesystems.
That's kinda important. And another layering violation. mdadm works on everything and has for 15+ years. Why is it reinventing it poorly?
You can't make me stop sucking dicks.

qemu is emulating a whole new CPU in software (kinda). it gotta be slower than issuing a few hardware-accelerated virtualization instructions between context switches.

I constantly compile the same software natively in x86 and in qemu-arm on the same host. It's like 4 or 5 times slower. I'd bet virtualbox adds 20% of overhead at most, with proper CPU virt support enabled.

THE BSD SHILLS ARE AMONG US

You can't ship incremental snapshots over the wire to a backup server based on a parent. Find me a LVM solution that cleanly replaces zfs send/btrfs send and I'll suck your dick.
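For reference, this is the kind of thing being asked for, sketched with btrfs and zfs (the snapshot names and backup host are made up):

btrfs subvolume snapshot -r /data /data/snap-new
btrfs send -p /data/snap-old /data/snap-new | ssh backuphost btrfs receive /backup/data

zfs snapshot tank/data@new
zfs send -i tank/data@old tank/data@new | ssh backuphost zfs receive backup/data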

kvm+qemu is not regular qemu. It uses KVM, a more standard virtual machine, not qemu's emulation.
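In practice the difference is whether qemu gets hardware-assisted execution through the KVM module or falls back to pure software emulation; a rough sketch, with the disk image name as a placeholder:

qemu-system-x86_64 -enable-kvm -m 512 -drive file=guest.qcow2,format=qcow2   # hardware-assisted via KVM
qemu-system-x86_64 -m 512 -drive file=guest.qcow2,format=qcow2               # pure emulation, much slower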

I don't trust it.

Nigga, I run my containers with single applications, created by Docker compose. There is literally no init system in any of them.

nice me me

Nobody uses "www" anymore. There's no point.

That's besides the point. The point is, there's nothing shady about following the standard.

wow

Thanks to Sun. Though "stable" doesn't mean it's bugless even on FreeBSD, and on Linux it's a giant clusterfuck compared to btrfs.