Cluster/Server/Cloud/Parallel Computing

How the fuck do people write programs that run across multiple machines at once? Isn't there only one main loop? Do they just show up to the program as moar cores? What if there are multiple servers across different regions? Any good books on this topic?

It depends on what you want to parallelize. A common problem is calculating a matrix for which you later want to get the eigenvalues.
Usually the matrix elements are independent of one another, so you can subdivide your matrix and have those segments calculated on different cores/servers. So you'd need a master program that assigns these segments to slave programs and then aggregates the results.
Of course there are libraries that abstract this process to varying degrees, but one language that was actually designed with this in mind would be Julia:
docs.julialang.org/en/stable/manual/parallel-computing
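To make the shape of that concrete, here's a minimal sketch in Python (single-machine multiprocessing standing in for a real cluster, and the sin/cos element function is just a placeholder; with Julia's Distributed or MPI the Pool becomes remote workers, but the master/aggregate structure is the same):

[code]
import numpy as np
from multiprocessing import Pool

N = 400  # matrix dimension

def compute_rows(block):
    # placeholder element function; in practice this is the expensive part
    lo, hi = block
    return lo, np.array([[np.sin(i) * np.cos(j) for j in range(N)]
                         for i in range(lo, hi)])

if __name__ == "__main__":
    # the "master": split the matrix into row blocks...
    blocks = [(i, min(i + 50, N)) for i in range(0, N, 50)]
    M = np.empty((N, N))
    with Pool() as pool:
        # ...hand the blocks to workers and aggregate the results
        for lo, rows in pool.map(compute_rows, blocks):
            M[lo:lo + rows.shape[0], :] = rows
    print(np.linalg.eigvals(M)[:5])  # then do the eigenvalue step
[/code]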

Here's a fairly basic intro: zguide.zeromq.org/page:all
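One of the basic patterns in that guide is the PUSH/PULL "ventilator" fan-out. A rough pyzmq sketch (the ports and the squaring "work" are arbitrary, and the sink on 5558 that collects results is left out):

[code]
import zmq

def ventilator():
    # binds and hands tasks out; connected workers get them round-robin
    sock = zmq.Context().socket(zmq.PUSH)
    sock.bind("tcp://*:5557")
    for n in range(100):
        sock.send_json({"task": n})

def worker():
    # pulls tasks, does the "work", pushes results on to a sink
    ctx = zmq.Context()
    tasks = ctx.socket(zmq.PULL)
    tasks.connect("tcp://localhost:5557")
    results = ctx.socket(zmq.PUSH)
    results.connect("tcp://localhost:5558")
    while True:
        n = tasks.recv_json()["task"]
        results.send_json({"result": n * n})
[/code]

Run the ventilator on one box and workers on as many others as you like; adding machines is just starting more workers.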

Install Gentoo on the servers

OP do this


Whois of (((zguide.zeromq.org/)))
>Doron (((Somech)))
>(((Tel Aviv)))
You couldn't make this shit up if you tried.

Network protocols.

Networking. The single-machine code is the same as anything else; then you have a layer on top that handles the communication between machines. MPI used to be popular for that, but I always thought it was kinda shit. The advantage of using it was that industry supported it, so you could get various hardware-accelerated networking devices with MPI support.

a) what's your point?
b) it's called Whois protection. Look it up.

jesus

Plan 9 is better choice tbh.

...

So?

Google PVM (Parallel Virtual Machine) and MPI (Message Passing Interface).
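MPI is the one that stuck around. A toy mpi4py sketch, launched with something like mpiexec -n 4 python sum.py (the partial-sum "work" is made up):

[code]
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # which process am I
size = comm.Get_size()   # how many processes total

# each rank sums a strided slice of 0..999
local = sum(range(rank, 1000, size))

# rank 0 collects and combines the partial results
total = comm.reduce(local, op=MPI.SUM, root=0)
if rank == 0:
    print("total:", total)
[/code]

The same script runs on every machine; mpiexec (plus a hostfile) decides where the processes land, and the rank is the only thing that distinguishes them.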

You can also use a very coarse-grained approach with a server/client model where all the computers share a common server. There's a shared directory on the server with two subdirectories, an outbox and an inbox: clients pull work units from the outbox and drop the processed results into the inbox, and the server handles shuffling files between those directories to divvy up the workload. I've personally done this twice in practice and would not recommend it.
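For the morbidly curious, a toy version of one such client in Python, assuming /mnt/shared is an NFS/SMB mount every machine can see; the rename-as-lock trick is exactly the kind of fragility that makes this approach hard to recommend:

[code]
import time
from pathlib import Path

SHARED = Path("/mnt/shared")   # hypothetical shared mount
OUTBOX = SHARED / "outbox"     # server drops work units here
INBOX = SHARED / "inbox"       # clients drop results here

def client_loop():
    while True:
        for job in OUTBOX.glob("*.task"):
            try:
                # "claim" the job by renaming it; racy on network filesystems
                claimed = job.rename(job.with_suffix(".claimed"))
            except FileNotFoundError:
                continue       # another client grabbed it first
            result = claimed.read_text().upper()   # stand-in for real work
            (INBOX / (job.stem + ".done")).write_text(result)
            claimed.unlink()
        time.sleep(1)

if __name__ == "__main__":
    client_loop()
[/code]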

>How the fuck do people write programs that run across multiple machines at once?
With a lot of effort, for anything that isn't trivial. It gets even worse when you have several machines with different hardware and different accelerators (GPUs, Xeon Phi, etc).
>Do they just show up to the program as moar cores?
You can treat them that way, but you shouldn't be blind to the difference. For instance, the other machine may not have the data, so if you compute there you'll need to transfer it first. There are runtimes that automate this, though.
>What if there are multiple servers across different regions?
That only increases latency; other than that it makes no difference. So depending on the problem it'll be slightly slower, or much slower if you need a lot of synchronization.

GNU Parallel can run several instances of a program across a cluster, e.g. parallel --sshlogin server1,server2 ./process ::: data/* (server names here are placeholders).

install guix

Thank you for all the helpful answers, this is a great start for some googling. Do any anons have any other info or stories they'd like to share about cluster computing?

kek

If you just want to fuck around, it's pretty easy (or at least it was back in 2014) to get LuxRender running on multiple machines. I had eight separate computers of *very* different architectures rendering a single image, three of which were using OpenCL acceleration on top of being clustered. It was a very fun little experiment.

Well fuck.

Just checked and LR is pretty much dead. The website is still there, but there's only one or two people who've touched the forums in years and the download links aren't working. Shame.

I used to work in supercomputing (which was often just cluster computing) over at LANL, and so did my girlfriend. This was maybe '95-'96, back when tech was 1% female and before cam sites, so you'd sometimes run into a legitimately competent woman. One night she didn't come around and had terrible excuses as to why, and I figured she was probably being a cheating whore, but I learned later that her group of friends had planned on watching a movie after work and decided to borrow a DVD player from the lab. DVD players were somewhat new and expensive at the time (the equivalent of several thousand dollars today), so none of them owned one. The lab had a big security team known as proforce (great name) who were looking for chinks smuggling out anything that might store data. So that triggered a full security incident, complete with cliché black vans driving them off the road, attack dogs, snipers, helicopters, and a day of interviews. She was a cheating whore, though. Don't trust women in tech.
I hope I helped you learn a bit about cluster computing!

fuck my life i use django. dotr when

It's all essentially just simultaneous multi-threading.

No, that's a totally different thing. SMT is a hardware feature for running multiple threads on one core's execution units; machines in a cluster don't share memory at all, so everything has to go over the network as explicit messages.

Is KVM a good way to practice with this stuff?

Eh. You could, but you can fake it just as well with multiple processes per host, each pinned to one full core (sharing cores if you need more fake nodes than you have cores). IMO you're gaining nothing but trouble by using VMs for this. See the sketch below.
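A Linux-only sketch of that pretend cluster, one process per core pinned with sched_setaffinity:

[code]
import os
from multiprocessing import Process

def fake_node(cpu):
    # pin this process to a single core so it behaves like its own "machine"
    os.sched_setaffinity(0, {cpu})
    print(f"node {cpu} running on CPUs {os.sched_getaffinity(0)}")
    # a real test workload would go here

if __name__ == "__main__":
    procs = [Process(target=fake_node, args=(c,))
             for c in range(os.cpu_count())]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
[/code]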

>>829388
???