So why the fuck is DBUS actually so fucking complicated? Are all these fucking object paths, an internal object model, full schema validation, XML interface description files, and other stupid shit actually necessary? Why not actually just be a fucking message bus where an application declares itself as an endpoint (and I'm even fine with the reverse domain name notation), and another application can check that endpoint to see if there is something listening to it, and then do a simple request-response where it sends a chunk of bytes and waits to get a chunk of bytes back?

I can't seem to find a real goddamn reason why simple introspection has to be

dbus-send --system --dest=org.bluez --type=method_call --print-reply /org/bluez/hci0 org.freedesktop.DBus.Introspectable.Introspect

instead of

echo '{"method": "introspect"}' | dbus-send --system org.bluez

where you just send a set of arbitrary bytes through dbus to the application, which can handle its own serialization and deserialization (which is guaranteed to be faster, less bug-prone, and easier than trying to decode the fucking DBus type system), and it responds. I really don't get the fucking point of all these object paths and retarded fully qualified domain names for a desktop messagebus. Why does a messagebus need to do validation and have an internal object model (or a type model at all)? Why the fuck not just let it do everything a messagebus should do, like socket communication, making sure messages are delivered whole or not at all, and making sure the client gets a response (and possibly managing some other network-related things, like authentication to the bus at the server's discretion)?
Why the fuck do we have redundant paths, where your service name is always com.foo1.bar and your object paths are always /com/foo1/bar, instead of actually fucking encapsulating them naturally?
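What I'm asking for is roughly this much client code. A sketch, assuming a plain AF_UNIX stream socket and a 4-byte host-order length prefix; the names here are made up and most error handling is elided:

/* Sketch of the "dumb byte pipe" model: send length-prefixed bytes,
 * get length-prefixed bytes back. The endpoint does its own
 * (de)serialization; the bus carries opaque payloads. */
#include <stdint.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

static int bus_request(const char *sock_path,
                       const void *req, uint32_t req_len,
                       void *resp, uint32_t resp_cap)
{
    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    strncpy(addr.sun_path, sock_path, sizeof(addr.sun_path) - 1);

    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        close(fd);
        return -1;
    }

    /* Request: 4-byte length, then the raw payload. */
    write(fd, &req_len, sizeof(req_len));
    write(fd, req, req_len);

    /* Response: 4-byte length, then exactly that many bytes. */
    uint32_t resp_len = 0;
    read(fd, &resp_len, sizeof(resp_len));
    if (resp_len > resp_cap) {
        close(fd);
        return -1;
    }
    for (uint32_t got = 0; got < resp_len; ) {
        ssize_t n = read(fd, (char *)resp + got, resp_len - got);
        if (n <= 0) {
            close(fd);
            return -1;
        }
        got += (uint32_t)n;
    }
    close(fd);
    return (int)resp_len;
}

The echo '{"method": "introspect"}' call above would then just be bus_request() with whatever bytes the endpoint decides to understand.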

God I fucking hate DBus, and I really fucking hate having to use DBus.

Because it's a freedesktop standard and cursed. It's a fucking disaster. I'm a systems guy and tried to use it to replace some of my existing IPC code. I hit a wall when replacing a component that opens a bound socket to a server and receives best-effort diagnostic data, which is sometimes dropped when the channel is full. dbus has no rate limiting, no EAGAIN-style feedback, and no way to handle a sender outpacing its receivers; it just shits the bed. Amazingly, the community recommended using dbus to exchange a socketpair and then using traditional I/O to do the send/receive. What the fuck is the point of dbus, then? A super-complicated replacement for bind()? And the bloat: the thing is at least 500k even when you use the raw libdbus bindings, which no one does because they are AWFUL. Expect 5MiB for the glib ones. I scrapped the plans. It's the XMPP of IPC, burn it in a fire.
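For reference, the best-effort drop-when-full behavior I needed is a handful of lines on a plain non-blocking datagram socket. A rough sketch (socket setup elided):

/* Sketch: best-effort diagnostic send. If the receiver's buffer is
 * full, send() fails with EAGAIN and we simply drop the message,
 * exactly the feedback dbus has no equivalent of. */
#include <errno.h>
#include <stddef.h>
#include <sys/socket.h>

static void diag_send(int fd, const void *msg, size_t len)
{
    if (send(fd, msg, len, MSG_DONTWAIT) < 0 &&
        (errno == EAGAIN || errno == EWOULDBLOCK)) {
        /* Channel full: drop this diagnostic and move on. */
    }
}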

Honestly for IPC I just prefer using a shared address space if I control both the server and client. I just map an area of memory shared by both processes, and use that for data transfer.

Nothing is going to beat it performance-wise, and you don't have to use schizophrenic APIs adhering to useless standards.
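A minimal sketch of that setup, assuming POSIX shared memory, with a made-up name and size; both processes call the same thing, and synchronization is entirely up to you:

/* Sketch: map one region visible to both processes.
 * Link with -lrt on older glibc. */
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

#define REGION_NAME "/my_ipc_region"   /* illustrative name */
#define REGION_SIZE (1 << 20)          /* 1 MiB */

static void *map_region(void)
{
    int fd = shm_open(REGION_NAME, O_CREAT | O_RDWR, 0600);
    if (fd < 0)
        return NULL;
    if (ftruncate(fd, REGION_SIZE) < 0) {
        close(fd);
        return NULL;
    }
    void *p = mmap(NULL, REGION_SIZE, PROT_READ | PROT_WRITE,
                   MAP_SHARED, fd, 0);
    close(fd);                         /* mapping survives the close */
    return p == MAP_FAILED ? NULL : p;
}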

There is some real merit to having messaging across servers though, or having your messagebus agnostic to the actual IPC mechanisms used. ZeroMQ is probably the very best messaging I've ever used in my life. It would be cool if there were a simple messagebus that just leveraged ZeroMQ to replace DBus in a not-retarded way. Basically a nice standard service-discovery layer around ZeroMQ.

I have never used it, but I've heard a lot of good things about it. What specific problems does it solve?

...

It was designed by GNOME people. KDE used DCOP.

How to design DBus:

Step 1: Be fundamentally wrong about IPC
It's important to ignore the fact that practically all communication is done in a server-client model. You need to introduce a new daemon to be the server, so every real participant becomes a client of your broker, effectively enforcing a client-client model, so as to maximize confusion and minimize performance.

Step 2: Nothing must ever be a file
To design a good IPC system, you must ensure it cannot work with common tools such as cat or echo. You need to design it in such a way that nothing ever touches the file system and, by extension, everything must go through your tool. This is Unix. A tool may only serve one purpose and IPC is one thing, so don't let others encroach on your territory.

Step 3: Tools are complicated - users need to feel that
In order to convey just how complicated your introspection tool is, you need to --use-many-long-options and --change-them-frequently. If you don't, people might be able to remember how to use your tool and that's obviously a problem.

Step 4: "Servers" must be difficult to write and require special metacompilers
If you make it easy to write servers, people will actually write servers. This is the opposite of what you want. You need it to be so exceptionally difficult that people would rather produce a full distribution image with their software in it than try to make simple service oriented programs. It's much better to link in all things you need than have things run out of process after all. A good way to achieve this would be to require all calls to be defined ahead of time in a format like XML and use it to generate code, while requiring people to use introspection at runtime to actually do anything.

Step 5: Have multiple instances of the unnecessary facilitator daemon running at all times.
To ensure maximum pain and minimum usefulness, you need to have multiple daemons running. You also need to make connecting to the daemon transparent so that you control which one your users connect to. By doing this you can, seemingly at random, connect them to the wrong one and spam their machines with incomprehensible errors. Punishing users for mistakes they didn't even make is a great way to generate support business.

Step 6: If performance is bad, blame everyone else
Suppose people start complaining about performance; what you need to do is find a scapegoat. The kernel is a good one, since few people actually understand it. It also means you get to commit code to the kernel. If, heaven forbid, someone should protest this, you've actually won. You will have successfully shifted the focus from criticizing your terrible IPC mechanism to criticizing your irrelevant kernel patches. In the future you might have to keep creating new, incompatible codepaths in your system to keep people chasing you, or they might remember that it's your IPC mechanism that's unworkable and not the latest hot-topic item you created.

I never used ZeroMQ, but queues solve a lot of routing problems in distributed systems. Basically, instead of figuring out on your own which worker to send a job to, you send the job to a queue and let workers consume jobs as fast as they can.
Also, if your queue starts growing you can easily add more workers to consume jobs, and remove them just as easily when demand decreases.
Another use for a queue is to put it in front of some scarce resource: instead of bombing the scarce node with requests, you let it consume jobs as fast as it can.
There are probably a lot of other uses, but I think those two cases are the most common.
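In ZeroMQ terms that worker pattern is just PUSH/PULL. A rough sketch of the worker side in C; the endpoint and buffer size are arbitrary:

/* Sketch: a worker pulling jobs from a queue. The PUSH side
 * round-robins jobs among however many workers are connected, so
 * adding workers drains the backlog faster. */
#include <zmq.h>

int main(void)
{
    void *ctx  = zmq_ctx_new();
    void *pull = zmq_socket(ctx, ZMQ_PULL);
    zmq_connect(pull, "tcp://localhost:5557");  /* arbitrary endpoint */

    char job[256];
    for (;;) {
        int n = zmq_recv(pull, job, sizeof(job) - 1, 0);
        if (n < 0)
            break;                              /* context terminated */
        if (n > (int)sizeof(job) - 1)
            n = sizeof(job) - 1;                /* oversize: truncated */
        job[n] = '\0';
        /* ... process the job ... */
    }
    zmq_close(pull);
    zmq_ctx_term(ctx);
    return 0;
}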

it's just complete idiocy and bad design

and we're stuck with it

i hate software

ZeroMQ is a super thin library (hence the "Zero": there's no broker, your application itself contains the message queue). It gives you the fundamental building blocks of some very basic messaging paradigms and expects you to build the bigger framework on them.

The main goal of ZeroMQ is to be the lightest possible layer on top of IPC: it lets you stop worrying about socket communication, discovery, disconnects, and framing, and just pass whole messages between applications. On top of that it gives you some extra cool things like message queueing, automatic determination of what kind of IPC is needed, and specific communication models including pub-sub, multicast, broadcast, etc., all with completely optional and configurable parameters.

Another nice thing about it is that it intelligently determines the transport layer, automatically using in-process communication, shared memory, TCP, UDP, or whatever else is most efficient for the routing you request.

It's about as nice, simple, and fast as you possibly can get for any messaging framework, and it's all in a light C library, with bindings for every language you could possibly want.
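To give a feel for the API, here is about the smallest complete thing you can write with it: a request-reply echo server. A sketch; the ipc endpoint is arbitrary:

/* Sketch: ZeroMQ REQ/REP echo server. ZeroMQ handles framing,
 * connects/reconnects, and queueing; you only ever see whole messages. */
#include <zmq.h>

int main(void)
{
    void *ctx = zmq_ctx_new();
    void *rep = zmq_socket(ctx, ZMQ_REP);
    zmq_bind(rep, "ipc:///tmp/echo.sock");   /* arbitrary endpoint */

    char buf[256];
    for (;;) {
        int n = zmq_recv(rep, buf, sizeof(buf), 0);
        if (n < 0)
            break;
        if (n > (int)sizeof(buf))
            n = sizeof(buf);                  /* message was truncated */
        zmq_send(rep, buf, n, 0);             /* echo it back */
    }
    zmq_close(rep);
    zmq_ctx_term(ctx);
    return 0;
}

The client side is the mirror image with ZMQ_REQ and zmq_connect().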

KDE4 uses D-Bus.

This is good writing. It's very clever and funny, while being poignant and informative of the actual problems with DBus. If you just wrote this up, you should be proud.

This is pretty dumb, as you'll eventually wind up replicating sockets. Need to wake the receiver? Oh, we'll just exchange an eventfd and... oh, now we need spooling? Well, just do it like the dark days of winsock and fork into the background while waiting for acknowledgement and... oh, it's a large amount of data or an undefined length? Well, let's just add some serialization/fragmentation code and...
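That eventfd dance alone looks roughly like this before you have touched spooling or framing (a sketch):

/* Sketch: eventfd as a doorbell for a shared-memory channel.
 * The fd itself still has to be exchanged out of band first. */
#include <stdint.h>
#include <sys/eventfd.h>
#include <unistd.h>

int make_doorbell(void)                /* create once, share the fd */
{
    return eventfd(0, 0);
}

void ring(int efd)                     /* producer: data is ready */
{
    uint64_t one = 1;
    write(efd, &one, sizeof(one));
}

uint64_t wait_for_ring(int efd)        /* consumer: blocks until rung */
{
    uint64_t count = 0;
    read(efd, &count, sizeof(count));  /* returns and resets counter */
    return count;
}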

When I evaluated ZeroMQ some years back it struck me as the PHP of IPC. It's garbage.

What the fuck does this even mean? ZeroMQ isn't bloated, it's not inefficient, it's incredibly stable, and it is what it was designed to be.

That's a claim you really have to qualify.

You know how PHP/MySQL devs used to not really care that errors weren't caught properly and data wasn't ACID, and insisted these things were not issues, until 5+ years later everyone suddenly woke up and asked "what the hell have we done"? Same for ZeroMQ users and reliability/durability. ZeroMQ leaves almost everything to do with them to the user to implement. And you're about to insist these things are not issues.

So I'm not that guy, but first of all it's C++, not C, and second it looks god damn huge. 35k lines of C++ to send messages around is obscene. The plan 9 kernel core is 75k lines, with 54k being fun stuff and 21k being the entire TCP/IP stack. I wrote a 9p server lib in 3.7k lines, and with that all programs can communicate with me through the file system, using just regular syscalls.
I don't know what problem zeromq is intended to solve, but if it's IPC then it's an order of magnitude larger than it should be.

But it's literally the opposite.

ZeroMQ treats durability and reliability as being of the utmost importance.


I was wrong about that; it's a C++ library with a C API. And 35k lines isn't obscene when you consider all the ACID requirements of proper message passing and queuing, and possibly swapping in high-water-mark scenarios. Find me a reasonable alternative: a reliable, transactional messaging library that doesn't require a broker server, offers a C API, and comes in at fewer lines of code. I'm not seeing anything that fills the use case of ZeroMQ in a better way.
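On the high-water-mark point: current ZeroMQ caps the queue per socket and either blocks or drops at the limit depending on socket type (the swap-to-disk option, ZMQ_SWAP, existed in 0MQ 2.x and was removed later). A sketch:

/* Sketch: cap a socket's send queue at 1000 messages. At the HWM,
 * PUSH/REQ sends block while PUB sends drop. */
#include <zmq.h>

void cap_queue(void *sock)
{
    int hwm = 1000;
    zmq_setsockopt(sock, ZMQ_SNDHWM, &hwm, sizeof(hwm));
}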

plan9.bell-labs.com/magic/man2html/5/0intro

here

I found out that nanomsg is probably better than ZeroMQ, and is all ANSI C (as well as being more compatible with the BSD sockets API), though it is still 18k lines. I'll have to dig into both of them more to see the merits and failures of each. ZeroMQ is likely more stable and mature, and appears to run faster when a consumer is slower than a producer, but I don't know what tricks it does to achieve that.

Either way, transactional, durable, transport-agnostic, portable messaging is hard, and does take a lot of code.
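For comparison, nanomsg's C API really does mirror the BSD sockets style. A sketch of a reply endpoint (the endpoint string is arbitrary):

/* Sketch: nanomsg REP socket echoing requests back.
 * NN_MSG tells nn_recv to allocate the buffer for us. */
#include <nanomsg/nn.h>
#include <nanomsg/reqrep.h>

int main(void)
{
    int s = nn_socket(AF_SP, NN_REP);
    nn_bind(s, "ipc:///tmp/echo.ipc");         /* arbitrary endpoint */

    for (;;) {
        void *buf = NULL;
        int n = nn_recv(s, &buf, NN_MSG, 0);   /* library-allocated */
        if (n < 0)
            break;
        nn_send(s, buf, n, 0);                 /* copies the payload */
        nn_freemsg(buf);
    }
    nn_close(s);
    return 0;
}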


Yes, plan 9 is fantastic. Call me when it has a real degree of software and hardware support.

9p has been supported in the linux kernel since like 2005.

"plan9 and linux" is not the same thing as portable.

Nix that, nanomsg is pretty fucked up at the moment, with internal drama.

There are multiple FUSE implementations. You can mount 9p on Windows, OpenBSD, etc., or, if you're so inclined, piss around with a client directly on an MCU running Node.js.

How's the performance? How's the atomicity? Does it handle common routing cases with thousands of servers and clients? Does it handle backlogging, queueing, and swapping to disk if the client becomes temporarily unavailable or operates too slowly? Does it work directly without blocking?

I don't see how a simple network protocol could preclude the need for proper messaging.

It takes a PHP-tier dev to not know his message queue lacks fully integrated durability and persistence and was specifically designed for performance instead of these things.
And thus it and its community remind me of PHP.

It was the same CoC bullshit. Goddamn.

news.ycombinator.com/item?id=10853219
freelists.org/post/nanomsg/Adopting-a-CoC
freelists.org/post/nanomsg/Dead,5

Fucking social politics ruining my technology.

> How's the performance?
Quite good
> How's the atomicity?
That's up to your server
> Does it handle common routing cases with thousands of servers and clients?
IP does that. The kernel does IP
> Does it handle backlogging and queueing?
That's up to your server if you want
> Does it handle swapping to disk if the client becomes temporarily unavailable or operates too slowly?
That's up to your server if you want
> Does it work directly without blocking?
Your OS does that
I mean, if you want to punish yourself with non-blocking IO go ahead. It's just a protocol that's really easy to implement.
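Really easy as in: every 9p message, both directions, is the same 7-byte frame, size[4] type[1] tag[2], little-endian, followed by type-specific fields (see the intro(5) page linked above). A sketch of parsing that header:

/* Sketch: the header shared by every 9P message.
 * size is the total message length including these 7 bytes. */
#include <stdint.h>

struct p9_hdr {
    uint32_t size;   /* whole message, header included */
    uint8_t  type;   /* Tversion, Rversion, Twalk, Rread, ... */
    uint16_t tag;    /* pairs a reply with its request */
};

/* Decode from raw bytes to sidestep struct padding. */
static struct p9_hdr p9_parse_hdr(const uint8_t *b)
{
    struct p9_hdr h;
    h.size = (uint32_t)b[0] | (uint32_t)b[1] << 8 |
             (uint32_t)b[2] << 16 | (uint32_t)b[3] << 24;
    h.type = b[4];
    h.tag  = (uint16_t)(b[5] | b[6] << 8);
    return h;
}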

Plan 9 was a sandbox from which almost nothing of value emerged, but it suddenly became memetech long after it was relevant, thanks to slashdot geeks in the '90s needing to be the most unique snowflakes once FreeBSD was no longer exclusive enough. The same kind of idiot who praises the "genius" of Terry Davis today was pushing the genius of plan 9 back then. It's safe to ignore everything about it unless you've run out of things to laugh at in old copies of xfree86.

So it doesn't do anything beyond what I could already largely do with length-prefixed raw socket communication.

And we saw how many years that took to recover from the technical debt.

yeah neat whatever

wake me up when I can dump nfsd/sshfs for it, or do an actual plan9 thing with it

Definitely going to try it out next time I need IPC, thanks

Yeah. Files are overrated. Better to have special tools and custom protocols for everything.

Yes, it is better to have honed special tools and custom protocols that already do the things you need than to implement the same things yourself, either for one use in one project or as your own library that just duplicates the library you avoided using.

Is the idea of a programming library out of the realm of your understanding?

...

So, what method does Holla Forums prefer for local IPC?

What's wrong with mkfifo?
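Seriously, the writer side is all of this; a sketch with an illustrative path (the reader just open()s the same path and read()s):

/* Sketch: one-way local IPC through a named pipe. open(O_WRONLY)
 * blocks until a reader appears; writes up to PIPE_BUF are atomic. */
#include <fcntl.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    const char *path = "/tmp/chan.fifo";   /* illustrative path */
    mkfifo(path, 0600);                    /* EEXIST is fine here */

    int fd = open(path, O_WRONLY);
    if (fd < 0)
        return 1;
    write(fd, "hello\n", 6);
    close(fd);
    return 0;
}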

Hehe, yeah, Dennis Ritchie and Ken Thompson are idiots. Never improve.


This brilliant attitude is the same thing that drives the GNU community. Fuck having standards; my code is the standard, bugs included.
If you can't even write your own implementation in a reasonable time frame or, worse yet, have no defined protocol, then what you're doing is completely unworkable. I've written transactional databases as 9p servers that, all libraries included, are over 3 times smaller than zeromq.


Your question is wrong.

Were.

Why, because everything should be single-threaded? Because everything should be futureproofed to be able to be used in cluster computing? Because you are a hipster?

Because if you differentiate between local and remote you're doing it wrong. Same as people who differentiate between builds targeting the host machine and builds targeting other machines. You create only pain.

I use bound AF_UNIX sockets and do it myself. It's so similar to network programming and NETLINK programming that it feels natural and shares a lot of code and design within a single program.

You're a moron. Local IPC is used for things like exchanging file handles, shm segments, credentials, and other local resources. None of that shit maps to the network. You're looking at it from the perspective of a webdev who thinks IPC is just grandpa's message queue.
I take it you've never cross-compiled anything.
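For the record, "exchanging file handles" means SCM_RIGHTS ancillary data over an AF_UNIX socket, and there is genuinely no network equivalent. A sketch of the sending side:

/* Sketch: pass an open file descriptor to another process.
 * The receiver gets its own fd referring to the same open file. */
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

static int send_fd(int sock, int fd_to_pass)
{
    char byte = 0;                        /* must carry >= 1 data byte */
    struct iovec iov = { .iov_base = &byte, .iov_len = 1 };

    union {                               /* correctly aligned buffer */
        struct cmsghdr align;
        char buf[CMSG_SPACE(sizeof(int))];
    } u;
    memset(&u, 0, sizeof(u));

    struct msghdr msg = {
        .msg_iov = &iov, .msg_iovlen = 1,
        .msg_control = u.buf, .msg_controllen = sizeof(u.buf),
    };

    struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
    cmsg->cmsg_level = SOL_SOCKET;
    cmsg->cmsg_type  = SCM_RIGHTS;
    cmsg->cmsg_len   = CMSG_LEN(sizeof(int));
    memcpy(CMSG_DATA(cmsg), &fd_to_pass, sizeof(int));

    return sendmsg(sock, &msg, 0) < 0 ? -1 : 0;
}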