./netrunner

Netrunner dev thread.

netrunner.cc/
gitgud.io/odilitime/netrunner/

...

not an argument

still here

if you're going to start from scratch, why start from scratch with bloat

fuck off

/g/ user detected.

I am really happy someone is doing this. Thank you for all your hard work. Don't let negativity demotivate you. Whether or not the end result can compete with mainstream browsers, it is still a way to escape the WebKit/Blink/Gecko hellhole.

I am not a developer, but if I were making my own web browser I would only build a basic rendering engine + an ES5 JS engine. Everything else, like the bookmark manager, password manager, history, and downloads, would be implemented through some kind of plugin API (maybe Lua?).
The whole GUI would be minimalist, maybe just tabs; everything else would be accessible from an integrated "drop-up" terminal like in qutebrowser. Mostly because something this small could actually be finished at some point and maintained without getting bloated and abandoned.
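
A minimal sketch of what that Lua plugin hook could look like, assuming the browser embeds a Lua state and exposes one function to plugins; add_bookmark and the one-line "plugin" are invented for illustration, not anything netrunner actually has:

#include <lua.hpp>

// Hypothetical hook: let Lua plugins register bookmarks with the browser.
static int l_add_bookmark(lua_State* L) {
    const char* url = luaL_checkstring(L, 1);
    (void)url;  // a real browser would append url to its bookmark store here
    return 0;   // no values returned to Lua
}

int main() {
    lua_State* L = luaL_newstate();
    luaL_openlibs(L);
    lua_register(L, "add_bookmark", l_add_bookmark);

    // a "plugin" one line long; real plugins would be loaded from files
    luaL_dostring(L, "add_bookmark('https://netrunner.cc/')");
    lua_close(L);
}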

...

There's already a Linux distribution and a TCG called Netrunner.

I actually really like Servo. The project accomplishes what I set out to do when making netrunner, except it is a full browser engine. That is why I stopped developing netrunner.

Not an argument.

back to /g/

kys

>>>/out/

Dead on arrival. The web has become so bloated that it's impossible to create a rendering engine without massive funding. You're better off taking an existing rendering engine and building your browser around that. Have a layer between browser and engine so you can swap out the engine. If the browser catches on you could then work on your own engine and eventually swap it in.
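
A rough sketch of that swappable layer, using a hypothetical RenderEngine interface (all names here are invented, none of this is netrunner code):

#include <memory>
#include <string>

// The browser UI only talks to this interface, so the engine behind it
// can be swapped out without touching the rest of the browser.
struct RenderEngine {
    virtual ~RenderEngine() = default;
    virtual void loadURL(const std::string& url) = 0;
    virtual void resize(int width, int height) = 0;
    virtual void paint() = 0;
};

// One backend wrapping an existing engine; a home-grown engine would
// just be another subclass dropped in behind the same interface.
struct WebKitEngine : RenderEngine {
    void loadURL(const std::string&) override { /* hand off to WebKit */ }
    void resize(int, int) override { /* relayout */ }
    void paint() override { /* composite into the browser window */ }
};

std::unique_ptr<RenderEngine> makeEngine() {
    return std::make_unique<WebKitEngine>();
}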

Why not make a new web? We need a new protocol, a new document standard (something that isn't HTML), and a new browser for all of it. We need to cut ourselves off from the bloated WWW; only by building something new can we accomplish that.

you could potentially embed a new protocol in netrunner.

It's literally plaintext
Anyway, have fun on ReddiNet with the 6 other users

...

Saged and reported.

Gopher is closest to plain text. Even HTML 1.0 doesn't come close. And recent versions are unreadable when you add in all the obfuscated javashit that's all over the place.

To the trash it goes then.
Text will never be an efficient format.
Should have used protobuf or something similar.

It's all going to get compressed anyway. The "inefficiency" is mostly theoretical: after JSON is compressed, for example, it is about the same size. Parsing still wastes a bit of time, but most binary protocols also have to be parsed. There are some newer "no serialization / deserialization" protocols; these mostly just defer the conversion to access time, via a function you call to access an element.
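
Roughly what those "no serialization" formats do, as a hand-rolled sketch; the [u32 width][u32 height][pixels...] layout is made up here, while real formats like FlatBuffers or Cap'n Proto define this properly:

#include <cstdint>
#include <cstring>

// Nothing is "deserialized" up front; each accessor reads straight
// out of the received buffer when you actually ask for the field.
struct FrameView {
    const uint8_t* buf;

    uint32_t width() const  { uint32_t v; std::memcpy(&v, buf + 0, 4); return v; }
    uint32_t height() const { uint32_t v; std::memcpy(&v, buf + 4, 4); return v; }
    const uint8_t* pixels() const { return buf + 8; }
};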

"parsing" a good length-prefixed binary format is a completely different thing than parsing a fucking JSON.
and since this is what will be used to communicate between local components, it could easily become the bottleneck.
imagine what will happen if you start using JSON to send graphic bitmaps back and forth
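
For contrast, pulling one length-prefixed frame off a file descriptor needs no tokenizer at all. A minimal sketch, with the [u32 length][payload] framing assumed rather than taken from any spec (short reads ignored for brevity):

#include <unistd.h>
#include <cstdint>
#include <vector>

// One fixed-size read for the length, one bulk read for the payload.
// No scanning for quotes, braces, or escape sequences.
bool readFrame(int fd, std::vector<uint8_t>& payload) {
    uint32_t len = 0;
    if (read(fd, &len, sizeof len) != static_cast<ssize_t>(sizeof len)) return false;
    payload.resize(len);
    return read(fd, payload.data(), len) == static_cast<ssize_t>(len);
}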

The overhead of sending base64-encoded images inside JSON would not be that high in a compressed context. Serialization is almost never the bottleneck.
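
Rough numbers behind that claim, assuming a raw, uncompressed bitmap: base64 maps every 3 input bytes to 4 output characters, so a 300 KB image becomes about 400 KB of text, a 33% inflation. But each base64 character carries only 6 of its 8 bits as entropy, so a deflate-class compressor over the JSON claws most of that 33% back (plus whatever the bitmap contents themselves compress). The extra encode/decode and compression passes still cost CPU, though.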

auth0.com/blog/beating-json-performance-with-protobuf/

In many benchmarks, the performance is pretty similar once compression is enabled. And you are going to be compressing those binary protocols all the same, otherwise you are a retard wasting space with them too.

...

BRING BACK THE MAC TONIGHT LOGO!

...

factually incorrect

Really? Then why don't you post the benchmark code?

It's not intended to be publicly available as-is, and I don't think the reward of extracting it is worth the effort.

Compressing already compressed data usually doesn't offer much of an improvement.

A bitmap is not compressed. Binary serialization is also not a compression algorithm.

Well, I linked you to some benchmarks demonstrating that there is no in-practice difference between them.

are you retarded?

I literally linked to benchmarks showing exactly this

Deserialization is almost never the bottleneck; network or disk speed is. Even then, the benchmarks I linked show that JSON deserialization is almost as fast, except for a couple of types like double-precision floats.

>auth0.com/blog/beating-json-performance-with-protobuf/
You BTFO yourself, you retard. This blog post shows that Protobuf is way faster and smaller than JSON. Only when you pipe it through a shitty compression algorithm can JSON catch up.

There exist better compression algorithms than those webshitters are currently using. Stop being a retard.
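
For instance zstd. A minimal sketch compressing a made-up repetitive JSON payload with the real ZSTD_compress API (build with -lzstd):

#include <zstd.h>
#include <cstdio>
#include <string>
#include <vector>

int main() {
    // made-up repetitive JSON, the kind browser components might pass around
    std::string json;
    for (int i = 0; i < 1000; ++i)
        json += R"({"type":"paint","x":0,"y":0,"w":800,"h":600},)";

    std::vector<char> out(ZSTD_compressBound(json.size()));
    size_t n = ZSTD_compress(out.data(), out.size(), json.data(), json.size(), 3);
    if (ZSTD_isError(n)) return 1;

    std::printf("%zu bytes -> %zu bytes\n", json.size(), n);  // repetition compresses away
    return 0;
}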

Well, let's see what context that was linked in.

Wrong. If you are not using compression, you are wasting a shit ton of space, binary or not.

Huh, so it was in the context of "these things are all the same size when compressed". Wonder what "Only when you pipe it through a shitty compression algorithm can JSON catch up." has to do with it.

Now on to your next comment:
Other compression algos exist, but they bottleneck before the network does, so they are useless and decrease the performance of the entire system.

They either come pre-compressed (FLIF, etc.), which is better than any blind general-purpose compression, or they don't need to be compressed to be transferred locally.
Are you unironically saying that interprocess communication inside one machine needs compression? This is not even funny anymore.

The discussed project aims to use that shit for local communication between components, which pretty much excludes network and disk, unless they are even more retarded than they look.

Fucking retard. Your link shows clearly that Protobuf BTFOs JSON. Even when compressed.

If by "local" you also mean on a filesystem, then compression is still needed and will improve performance.

Only a trivial amount.

Why are you serializing during IPC at all? Just send a struct directly. If it's IPC then it's always the same architecture; if you need to save it and use it on a different computer, you might as well use JSON.
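
What "just send a struct" looks like over a pipe; PaintMsg is invented for illustration, and both ends are assumed to be the same binary on the same machine (so layout and endianness match):

#include <unistd.h>
#include <cstdint>

// Hypothetical IPC message: plain-old-data, fixed layout, no serializer.
struct PaintMsg {
    uint32_t width;
    uint32_t height;
    uint32_t stride;
};

int main() {
    int fds[2];
    if (pipe(fds) != 0) return 1;

    PaintMsg out{800, 600, 800 * 4};
    if (write(fds[1], &out, sizeof out) != static_cast<ssize_t>(sizeof out)) return 1;  // raw copy, shared ABI

    PaintMsg in{};
    if (read(fds[0], &in, sizeof in) != static_cast<ssize_t>(sizeof in)) return 1;      // no parse step at all
    return in.width == out.width ? 0 : 1;
}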

holy shit dude. just fucking end yourself

Who are you even quoting here? Yourself?

Your fucking blog post, the one you posted to prove that text-based formats are as fast as binary formats, you retard.

You know that statement means "to be used in a web browser", right? Not that protobuf is JSON inside.

The shit you posted shows clearly that Protobuf is vastly superior to JSON. Stop being a retard and admit that text formats are shit.

No, it shows that Protobuf is only slightly faster/smaller than JSON in real-world scenarios. Not at all worth having to add the whole Protobuf toolchain vs. literally every language having JSON support built in.

so... will it be as good as qutebrowser?

Have you not read the fucking blog post you posted??? The difference is not small. It is fucking massive.

Not in real-world scenarios

...

uhhhh nooooooo: loading from disk, loading from network, using compression. Who the fuck uses something like protobuf for IPC? That is a retarded use case.

At least Protobuf has use cases. JSON is webdev cancer.

JSON has the use case of being a common exchange format for literally every programming language, and it doesn't require a protocol compiler.

$ curl netrunner.cc/ | grep -c "/g/"
4
>>>/g/

But JSON is slow. Also there are other binary formats.

Nah, it's not slow; the benchmarks linked earlier show that in the real world (disk, network, compression) its performance is pretty similar.

A binary format is going to be much more annoying to work with across many different languages at the same time. If everything you are using supports a particular binary protocol, go for it, but if not it's a PITA.

...

Not the bottleneck, and a waste of time to set up.