DockerCon

Anyone watching?

2016.dockercon.com/

botnet

wut

botnet shill

Shipping half an OS for every program was a mistake.

I guess very few people here have worked on projects complex enough to understand the utility of a universal deployment tool.

designated shitting street

Trips don't lie

...

these neets?

I'm slightly disappointed. I expected an E3 vibe where we'd shitpost together and laugh at all the diversity talk and buzzwords.


The code of conduct made me glad I didn't attend, or else I might have been kicked out.

Anyway, the new swarm mode in 1.12 looks promising. I wonder what will happen to the Consul integration after this.
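For anyone who hasn't tried it: swarm mode in 1.12 is baked into the engine itself, no external KV store required. Roughly, on a stock 1.12 install:

  docker swarm init
  docker service create --name web --replicas 3 -p 80:80 nginx
  docker service ls

Workers join with the token that swarm init prints out.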

You're not alone. I simply don't have time to fuck with every webserver and service out there and their special-snowflake config syntax just to get stuff working around my entire company's hilariously outdated system. For me, it's a way to test out services without junking up the host system with packages I'd only have to remove later.


Space is only going to get cheaper. Sure, it could be optimized, but the simplest approach gets shit done.

no one here codes or is employed

If you're going to embed everything needed for a program to run, why not simply statically compile everything or use a functional package manager, like GNU Guix?
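A static binary genuinely sidesteps most of the bloat. With Go, for instance (project layout is made up, the flags are real):

  CGO_ENABLED=0 go build -o hello .
  ldd hello    # "not a dynamic executable"

One file that runs on any Linux box, no half-an-OS shipped along with it.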

I don't even know or give a fuck what Docker is.

You have a weird definition of simple.

Because you normally don't get to choose your distro; it's either RHEL, Debian or Windows. Even then, you don't get much control over what you can/cannot install.

And you should know that gathering and removing build dependencies can be a painful experience. Surprisingly, Docker is also good for making consistent build environments.
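A throwaway build container, for example, assuming the official gcc image and a Makefile in the current directory:

  docker run --rm -v "$PWD":/src -w /src gcc:6 make

Everyone compiles with the same toolchain, and nothing is left installed on the host afterwards.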

*raises hand*

It's simple in the problem that it addresses. Sure, you could set up a Debian server to run a bunch of services: tailor the ones with conflicting dependencies to coexist, build from source the ones that don't offer deb packages, keep a backup plan for each one in case of updates, failures, or outright replacement, and hope the next system upgrade doesn't nuke the whole thing or introduce a new dependency conflict.
Or run one physical/virtual server per service, link them all together behind one webserver handling the redirects, and ignore the power bill and wasted resources.
Or run them all on the same system in chroot jails with meticulously provisioned resources, filesystem links, and other tricks, and still have to untangle dependencies when a host update breaks them.
Or just use Docker. It's not trivial to use, but it's a hell of a lot simpler than the alternatives for the problem it addresses: how do I get one system to run a lot of things from varying sources without introducing either unmaintainable complexity or excessive resource use?
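The conflicting-dependencies case in concrete terms, using the official php images and made-up app paths:

  docker run -d --name legacy-app -p 8080:80 -v "$PWD/legacy":/var/www/html php:5.6-apache
  docker run -d --name new-app -p 8081:80 -v "$PWD/new":/var/www/html php:7.0-apache

Two incompatible runtimes on one host, and neither can touch the other's libraries.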

Get a real OS?
t. BSD sysadmin

...

Defaulting to stateless was a good idea, but the way they bolted state on afterwards is really goddamn haphazard.

Docker volumes are a cunt, and you want to make sure they end up in the right place. I'm shocked that DB images aren't stateful by default; I bet a lot of people have lost data over that oversight.

I still like it though. I'm trying to deploy Guacamole with Docker, and I'm learning docker-compose right now to see if I can make the database hookup less painful.

For now I just compile the bugger, because working with databases in Docker is that much more annoying.
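The compose file I'm poking at looks roughly like this; it assumes the guacamole/guacd and guacamole/guacamole images, and the env var names are from their docs as I remember them, so double-check:

  version: "2"
  services:
    guacd:
      image: guacamole/guacd
    db:
      image: postgres:9.5
      environment:
        POSTGRES_DB: guacamole
        POSTGRES_PASSWORD: secret
      volumes:
        - ./pgdata:/var/lib/postgresql/data
    guacamole:
      image: guacamole/guacamole
      ports:
        - "8080:8080"
      environment:
        GUACD_HOSTNAME: guacd
        POSTGRES_HOSTNAME: db
        POSTGRES_DATABASE: guacamole
        POSTGRES_USER: postgres
        POSTGRES_PASSWORD: secret

The ./pgdata bind mount is what keeps the database stateful across rebuilds; you still have to load Guacamole's schema into postgres by hand first, IIRC.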

The problem I have with that is that, ultimately, you're just papering over issues that won't get fixed this way.

Just mount that shit on your filesystem. Besides volumes and ports, what's so hard about databases?
Postgres volume: /var/lib/postgresql/data
Cassandra volume: /var/lib/cassandra
Then publish the appropriate ports and you're set.
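I.e. something like this, with whatever host paths you fancy:

  docker run -d --name pg -p 5432:5432 -v /srv/postgres:/var/lib/postgresql/data postgres:9.5
  docker run -d --name cass -p 9042:9042 -v /srv/cassandra:/var/lib/cassandra cassandra:3

The data outlives the container because it lives on the host, not in the container's layers.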


And won't get fixed any other way either.

No wonder the whole concept of Docker is braindead shit no one asked for.

pets confirmed