I Have No Efficiency And I Must Scream

Imagine, for a moment, that the web was not how we know it right now.

Mr. Browser tries to get a resource, web.net/resource. Mr. Server gently replies with the requested document, along with a document version identification number. "Keep it safe," says Mr. Server, as Mr. Browser goes away to do some other things.

Mr. Browser finally opens the envelope and finds a website that is probably outdated. He realizes the latest news is from yesterday, which is why he visits Mr. Server again for an updated version of web.net/resource. During the meeting, Mr. Browser shows Mr. Server the envelope and says, "You gave this to me yesterday, but I want a new version." Mr. Server, always willing to help, reads the envelope. It is yesterday's version, already outdated, so he looks up the newest version and gives Mr. Browser a small excerpt from it. It is a smaller envelope, incapable of holding all the information of the old document, but Mr. Server says, "Put this excerpt between these two lines and all will be good."

Mr. Browser complies. He reads the document as Mr. Server instructed, and it all makes sense now. He didn't have to carry two big envelopes this time, just a big one and a small one, which helps the world avoid wasting paper. This makes Mr. Browser happy.

The next day, Mr. Browser takes a look at a small excerpt of another document he has. He knows that exact excerpt is outdated, since he has read something different in another document, so he decides to meet Mr. Server and ask for a newer version of that excerpt.

"Hello, Mr. Browser. How may I help you today?", said good Mr. Server. "Hello, Mr. Server. It's this document here, web.net/document2. Section a/b/c is wrong. Could you kindly pass me the new version?", replied Mr. Browser. Of course, Mr. Server could do that, so he gave him a small envelope containing a/b/c, as requested by Mr. Browser.

Now, we have git and other VCS software (the most basic way to implement this), and we know websites only change at certain points, so why the fuck do we keep resending the whole fucking document? Couldn't we save the planet by wasting drastically less bandwidth?
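To put a rough number on it, here's a back-of-the-envelope sketch using nothing but Python's standard library. The page contents are made up, but the size gap is the whole argument:

```python
# Back-of-the-envelope: how much smaller a diff is than resending the page.
# Standard library only; the two page versions are made-up examples.
import difflib

old_page = "<html><body>\n" + "\n".join(f"<p>post {i}</p>" for i in range(200)) + "\n</body></html>"
new_page = old_page.replace("</body>", "<p>post 200</p>\n</body>")  # one new post

diff = "".join(
    difflib.unified_diff(
        old_page.splitlines(keepends=True),
        new_page.splitlines(keepends=True),
        fromfile="v1",
        tofile="v2",
    )
)

print(f"full page: {len(new_page)} bytes, diff: {len(diff)} bytes")
# The diff only carries the changed hunk plus a little context,
# which is the whole point of the complaint above.
```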

Yes, it's called iframes.

This is the dumbest shit I've heard today.

Now, why haven't we implemented bulk HTTP requests?

The web is shit, don't make up excuses for it.

We have, it's called HTTP/2.

If you are talking about HTTP/2 multiplexing, as far as I know it's just a slightly improved version of HTTP/1.1 pipelining, which doesn't really allow compressing several requests into one. Each reply must be compressed on its own.
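For what it's worth, here's roughly what multiplexing looks like in practice, sketched with the third-party httpx library (the http2 extra has to be installed, and example.com is just a placeholder host):

```python
# Sketch of HTTP/2 multiplexing with the third-party httpx library
# (pip install "httpx[http2]"). example.com is just a placeholder host.
import asyncio
import httpx

async def main() -> None:
    urls = [f"https://example.com/resource/{i}" for i in range(5)]
    # One client, one TCP+TLS connection; the concurrent requests travel as
    # independent streams instead of being queued like HTTP/1.1 pipelining.
    async with httpx.AsyncClient(http2=True) as client:
        responses = await asyncio.gather(*(client.get(u) for u in urls))
    for resp in responses:
        print(resp.http_version, resp.status_code, len(resp.content))
    # Each response body is still compressed on its own (gzip per response);
    # only the headers share compression state across requests (HPACK).

asyncio.run(main())
```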

You'd need to have a huge cache on your disk for the local copies used to compare the document. Why?

I don't think you've thought your cunning plan through very well.

For one, it does keep cross-request compression dictionary state for headers. That's like 70% of the efficiency of HTTP/2 right there. If you start running a single compression stream across every response body, you've just broken parallel multiplexing and are wasting shitloads of CPU recompressing static files. Also, you've now exposed your clients to BREACH and your server to Slowloris. gg
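To make the shared-dictionary point concrete: HPACK is its own algorithm, but zlib's preset dictionary gives a crude feel for why cross-request state pays off on near-identical headers:

```python
# Crude illustration of why a shared compression dictionary across similar
# messages pays off. HPACK is its own algorithm; zlib's preset dictionary
# (zdict) is only a stand-in for the idea.
import zlib

headers_1 = b"GET /thread/123 HTTP/1.1\r\nHost: web.net\r\nAccept: text/html\r\nCookie: session=abcdef\r\n"
headers_2 = b"GET /thread/124 HTTP/1.1\r\nHost: web.net\r\nAccept: text/html\r\nCookie: session=abcdef\r\n"

# Compressed independently: the second request pays full price again.
independent = len(zlib.compress(headers_2))

# Compressed with the first request as a preset dictionary: the repeated
# header names and values become cheap back-references.
comp = zlib.compressobj(zdict=headers_1)
shared = len(comp.compress(headers_2) + comp.flush())

print(f"independent: {independent} bytes, with shared dictionary: {shared} bytes")
```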

Not only that, but every webserver would have to cache every version of every webpage it has served so that it can properly give the browser the diff, unless you want the browser to include its whole document in every request, which would make everything worse.

It could just hold the N most recent versions.

If your pages didn't have 3 megs of javascript each, then you wouldn't need to compress anything.
Fuck the web, Gopher is the future now.

No, just the latest version and N diff versions in history. Just like Git does.
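A toy version of "latest plus the last N diffs" could look something like this; all the names are made up, standard library only:

```python
# Toy "latest version plus the last N diffs" store, standard library only.
# All names here are made up for illustration.
import difflib
from collections import deque

class PageHistory:
    """Keep the full latest version plus only the last N diffs."""

    def __init__(self, first_version: str, max_diffs: int = 10):
        self.latest = first_version
        self.diffs = deque(maxlen=max_diffs)  # oldest diffs silently fall off

    def update(self, new_version: str) -> str:
        """Store the new version and return the diff a client would download."""
        diff = "".join(
            difflib.unified_diff(
                self.latest.splitlines(keepends=True),
                new_version.splitlines(keepends=True),
                fromfile="old",
                tofile="new",
            )
        )
        self.diffs.append(diff)
        self.latest = new_version
        return diff

history = PageHistory("<html>\n" + "<p>old post</p>\n" * 500 + "</html>\n")
new_page = history.latest.replace("</html>", "<p>new post</p>\n</html>")
patch = history.update(new_page)
print(f"client downloads {len(patch)} bytes instead of {len(history.latest)}")
```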

Slowloris is an attack on the server's connection pool. Making one bigger request would help mitigate this. You open yourself up to traditional DoS, though.

Because the web is how we know it right now, and the bandwidth of passing an HTML document back and forth is DWARFED by the bandwidth of passing the 45 MB of JavaScript linked from that document so you can load a broken slideshow in the middle of the page.

Although, for example, Holla Forums's JS file is monstrously larger than this page, the JS file only gets transferred once, whereas the thread gets transferred whole once every 10 seconds. At roughly 10 KB per refresh, that's around 3.6 MB per hour per reader, and it never stops. In the end, the HTML consumes a shitload more network resources.

I guess you could solve this with even more JS, but do we want that?

Consider the additional fact that most websites DON'T use an auto-updating scheme like image boards do. There's no standard feature built into the HTTP protocol to do this with maximum efficiency, because HTTP is stateless by design.

It's the same as if you sat there pressing F5 every 10 seconds.
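To be fair, plain HTTP does give you something for the F5-every-10-seconds case: a conditional request comes back as a tiny 304 when nothing changed. A rough sketch with the third-party requests library (the URL is a placeholder), though it still can't send a diff when something did change:

```python
# Sketch of a conditional GET: unchanged pages cost a body-less 304 instead
# of a full re-download. Uses the third-party "requests" library; the URL
# is a placeholder.
import requests

URL = "https://web.net/thread/123"

# First load: full page plus a version tag (ETag), if the server sets one.
first = requests.get(URL)
etag = first.headers.get("ETag")

# Every "F5" after that: ask "has it changed since this tag?".
headers = {"If-None-Match": etag} if etag else {}
again = requests.get(URL, headers=headers)

if again.status_code == 304:
    print("not modified, nothing re-downloaded")
else:
    print(f"changed, full page again: {len(again.content)} bytes")
```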

Many big websites do use auto-updating schemes (off the top of my head, pretty much every social media site uses something similar), and of those that don't, most still go through several changes in a small window of time; think phpBB forums or the like. HTTP doesn't incorporate something like this because it's old as shit and was designed in an era when pages were expected to be updated once a day at best.

More like a sideshow, since the web is now a circus.

I'm not using js, so the thread only gets transferred when I visit it again or hit reload. Image thumbnails are cached too.
Adding more js or other complications isn't the solution. Making things simpler and less bloated is better.

While fragment loading isn't simpler, it's definitely less bloated. You could be reducing the ~10 KB this page weighs to 1 KB or less. For the client side, for this page alone, maybe it's not that much, but for the server side, or for the client side as a whole, it's a lot.

Sure, you can do this with JS, JSON and other tricks, but the idea would be to not rely on JS for everything. JS is pretty much the definition of bloated and anything but simple.

HTTP was never intended to serve dynamic content. Today's web is like fitting a square peg with its corners cut off into a round hole. Using JavaScript to make it better is like trying to clean the shit on the floor while you're taking a dump at the same time; it's using one error to correct another error. If you want good performance in the web environment, just try to keep pages as simple as possible: small amounts of CSS to make them easier to read, and trivial JS while keeping your site useful without it. There's no way around it, everything is too broken to be fixed. That doesn't mean you can't use it, just that you can't improve it without breaking it even more.

This physically hurts to read.

Looks like nobody in this thread has heard of the WebSocket protocol.
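Push instead of polling, in other words. A minimal sketch with the third-party websockets package, assuming a made-up live endpoint that pushes a message whenever the thread changes:

```python
# Sketch of server push instead of polling every 10 seconds, using the
# third-party "websockets" package. The endpoint URL is made up.
import asyncio
import websockets

async def watch_thread() -> None:
    async with websockets.connect("wss://web.net/threads/123/live") as ws:
        # The server pushes a message only when the thread actually changes,
        # instead of the client re-downloading the whole page on a timer.
        async for message in ws:
            print("thread update:", message)

asyncio.run(watch_thread())
```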

Just use curl.

What does that have to do with anything?

Thanks for assuming I, the reader, am an idiot.

That being said, the concept you so clumsily tried to convey is somewhat similar to how some distributed file networks are built.

Using ZeroNet, for example, I get packages from peers (we don't need servers where we're going) and keep my copy.
When I "refresh" my copy, it gets compared, and every change and addition is applied on top of my copy.

Think of the web like a 24-hour dildo shop.
Mr. Shop receives a huge order of dildos. They fetch the boxes of dildos from stock, but each strapping young deliveryman can only carry one box, so they all carry their boxes across the city to Mr. Slave. First they hand him an invoice, and if that's OK, they start re-stacking the boxes.
If Mr. Slave is satisfied with his dildos, he can post a review to the shop: 10/10, would spank again.

And that's pretty much nothing like how the web works.

I can't believe I read all that shit

FUCKING SEXIST PIG

post wasn't talking about the web

The web must die already.

...

post wasn't talking about OP

Look into the IPFS and IPLD projects. You only fetch the blocks of a DAG structure you need, from whoever has them, even locally, much like how Git works. If you already have the content you don't fetch it again; you just fetch what you need, which should be only the blocks that changed.
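The content-addressed part is easy to show in miniature; hashes stand in for CIDs here, and splitting by line is purely illustrative, since IPFS chunks and links blocks very differently:

```python
# Miniature content-addressed store: blocks are keyed by their hash, so an
# unchanged block is never fetched or stored twice. Splitting by line is
# purely illustrative; IPFS chunks and links blocks very differently.
import hashlib

store: dict[str, bytes] = {}

def put_blocks(document: bytes) -> list[str]:
    """Split a document into blocks and return the list of block hashes."""
    hashes = []
    for block in document.splitlines(keepends=True):
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)  # already-known blocks cost nothing
        hashes.append(digest)
    return hashes

v1 = put_blocks(b"<p>post 1</p>\n<p>post 2</p>\n")
before = len(store)
v2 = put_blocks(b"<p>post 1</p>\n<p>post 2</p>\n<p>post 3</p>\n")
print(f"new blocks needed for v2: {len(store) - before}")  # only the added block
```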

The text is insignificant even for slow connections. It's the images and other multimedia, for which diffs aren't viable, that make up the bulk of the load times.

Also calculating diffs isn't free at all.
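True, and it's easy to get a feel for that cost; the page below is synthetic and the numbers will vary a lot by machine and by how different the versions are:

```python
# Quick feel for the CPU cost of diffing; the page is synthetic and the
# timings will vary a lot by machine.
import difflib
import timeit

old_page = "\n".join(f"<p>post number {i}</p>" for i in range(5000))
new_page = old_page + "\n<p>one new post</p>"

def make_diff():
    return list(difflib.unified_diff(old_page.splitlines(), new_page.splitlines()))

seconds = timeit.timeit(make_diff, number=10) / 10
print(f"one diff of a ~{len(old_page) // 1024} KB page: {seconds * 1000:.1f} ms")
```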

You want us to keep a copy of the internet on our own computers?

Yes, true, it *may* be that the cost of CPU+HDD power vs. the cost of bandwidth comes out to an equal number of dollars. But perhaps bandwidth is cheaper, especially as clients become weaker in CPU and more limited in power (mobile tech revolution).

Please kill yourself before you accidentally breed

the iphone 7 is literally as powerful as a quad core 2.7GHz Xeon :^)

Bumping this from the dead. When I get home, I will show you an example of something similar to this that I have worked on.

Bandwidth is MUCH more expensive than processing power and storage.

have a bump good sirs

Stopped reading as soon as the kindergarten talk started.