Redoing the internet

Could the internet be improved if websites were served as machine code or something else instead of a text document that the browser has to read and interpret?

I mean, imagine if the server could treat the website like an MMO of sorts, so when someone requests a page, it just sends the information as-is directly to the user's browser without having to read files or parse shit. It would essentially already be in a machine-usable state when it arrives.

If we were to start the internet from scratch, what kind of things could be done differently to improve it overall?

Other urls found in this thread:

en.wikipedia.org/wiki/WebAssembly
loper-os.org/?p=1390
philipwalton.github.io/solved-by-flexbox/

No women allowed.
No niggers either.

Count me in

That's an absolute security nightmare since it's impossible to generally decide whether code is "safe".

One class of things I'd like to see gone is non-normative standards. Two implementations are either compatible or not. What's the point of a standard if I can't rely on it?

I'm torn on whether to keep a scripting language for the web. On the one hand I'd like to see its use simmer down, but to nix the entire thing seems excessive.

the web would be improved by simplifying everything. right now it's too complex.

machine code isn't a good idea. javascript is terrible too. something basic like lisp would be better.

People who are developing things for mobile phones took an approach similar to what you are suggesting by creating mobile apps instead of websites. They do it because mobile phones have very limited resources and loading a webpage that uses a lot of js would take too much time. You would probably get similar results if you did it on desktops, speeding up things that are hitting hardware limits while running in browsers.
On the other hand you will create a lot of problems for the people developing things. First, you cannot assume that the client has the library or program your application needs, so you will have to distribute it in some sort of package that contains everything you need. Second, web browsers act as compatibility layers, allowing you to write a website once and run it on every OS. It is possible to solve this with Java (or a similar language with a VM that provides compatibility across platforms), but then you are losing the performance you gained by removing the browser. Someone might suggest distributing source code instead of compiled code, but compilation would take too much time. Third, if web developers moved to your model they would probably create layer upon layer of needless abstractions like they did with JS and re-create the mess that JS is currently in. Fourth, and possibly most important, you would lose the ability to link stuff and create a lot of walled gardens for your users.
In my opinion there are too many cons and too few pros to your system for it to be a replacement for the web we currently have.

This was supposed to be the final destination for Java but it never happened (mainly due to bizarre elitism from Sun). Microsoft tried it with .NET via Silverlight and failed, too.

The web is already nearing death though - you've probably not noticed. Ask a normalfag what "sites" they visit and how they do it and you'll see they very frequently interact with them via native apps, now. The web today increasingly only exists for legacy reasons and PCs. This is why everyone's hot to find a way to get phone apps running on your desktop.

Build the Net from the ground up with security in mind, instead of hastily tacking on solutions decades after the fact. It's the cause of a whole lot of problems for everyone. A security-minded electronic world is a better one for every person and business, except the ones whose business it is to spy on other people's business, and I'm quite okay with them being left out.

Good news!
en.wikipedia.org/wiki/WebAssembly

aka "no more adblocking", "Muh DRM" etc

I saw a user a long time ago who posted an idea I agree with.

S-expressions for the web. Every browser would basically be a Scheme interpreter and one mega s-expression would be received from the server and handle everything HTML/CSS/JavaScript does.

No having to learn a bunch of different languages and standards. No shitty JavaScript.

But since Scheme is an inherently simple language, you could easily write compilers from Ruby/Python/JavaScript/etc. to Scheme, which then get turned into s-expressions, and bam, you have an entire website.
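
To make it concrete, here's a rough sketch of what the browser-side walker over one page-sized s-expression could look like, with nested JS arrays standing in for s-expressions (sketched in JS ironically, since that's what everyone can paste into a console; all the node names are made up):

// toy sketch: the whole page arrives as one s-expression,
// modelled here as nested arrays: [head, ...children]
const page =
  ["page",
    ["title", "Hello"],
    ["style", ["color", "body", "#eee"]],
    ["body",
      ["p", "One language for structure, style and behaviour."],
      ["button", "Click me", ["on-click", ["alert", "hi"]]]]];

// the "browser" just dispatches on the head symbol of each node;
// style/script nodes would get handed to their own engines instead
function render(node) {
  if (!Array.isArray(node)) return String(node);   // atoms are plain text
  const [head, ...rest] = node;
  switch (head) {
    case "page":
    case "body":  return rest.map(render).join("\n");
    case "title": return "== " + rest[0] + " ==";
    case "p":     return rest.map(render).join(" ");
    default:      return "(" + head + " ...)";      // unknown nodes degrade gracefully
  }
}

console.log(render(page));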

It's important to have separation of concerns. That's why each concern has its own domain language. HTML for structure, CSS for style, and JS can go fuck itself.

In response to OP, I'd like to just cut the internet off at HTTP, and start over from there. Mainly, everything needs to be designed with scalability in mind from the start. Whatever markup language is used for hypertext needs to enforce standards better than HTTP. Client-side scripting needs to be easier to extend laterally instead of vertically. What I mean is, instead of stacking abstractions on top of JS, just add a new dialect or an entire language. That could be done by providing a standard JIT compiler across all browsers that can be configured with different parsers that are downloaded alongside the page if it hasn't been cached yet.

I meant HTML

Yet neither are programming languages. Using a minimal extremely extensible language that can make a DSL for whatever you want is different.

Website devs could have their own custom language for whatever domain they want, the sky is the limit. It won't matter when it all gets converted into the s-expressions the browser interpreter uses.

What a load of bullshit. The push for native applications happened because the retards in charge bloated up their sites so much they became unusable on phones. Nothing's dying, sadly.

The internet's current problems don't stem from the code or language used in web pages, but from the politics and centralization of hardware.

A robust, strong, and healthy internet is a distributed internet. Many servers in the hands of many individuals spread around the planet. What we're moving towards is the opposite. Everybody pays centralized servers for "cloud" storage, "cloud" applications, "cloud" everything. The NSA just walks up to server businesses and slaps them with $5000 per day fines if they don't install spying boxes, and whether they do or don't they also just walk around back and tap it in secret anyways.

The internet needs to be developed from the ground-up to be more like bittorrent, peer to peer. Distribution everywhere, tracking nowhere because by nature it's impossible to know who is really who.

Those are the opposite of what I imagine when I hear "distributed internet". That would fundamentally affect how our own computers work.

People would constantly be accessing data from your computer and using your computer resources to look at epic frog memes, and you would have to store vast amounts of "internet files" on your hard drive in order to sustain the internet.

Imagine something like youtube, even a decentralized version of it would cause data storage issues because the amount of video it stores and distributes is fucking absurd. Ask yourself how many gigabytes of random youtube shit would you be comfortable storing in your personal hard drive. How many people would want to store some obscure doorknob installation videos.

And even if you distributed all of youtube equally between all internet users, probably 80% of it would be unavailable most of the time because people turn off their computers or their internet connection is shit or they clear their hard drive causing files to completely disappear from existence at random.

I'd much rather take a "single point of failure" over "randomly inconsistent data availability".

What is Javascript mostly used for? Take its most common use cases and if they're not harmful stick them in HTML or CSS. Want to add Flash content? whatever.swf. Want a drop down menu? . That way there's only two systems to deal with.
From there, all you need from a web server is a programming language with an OnAccess{} function, which is run whenever someone accesses the server and outputs to them. I think that's how HTML servers already work though.
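
In Node terms the OnAccess idea is roughly this; minimal sketch, the paths and responses are invented:

const http = require("http");

// the callback is the "OnAccess{}": it runs on every request,
// and whatever it writes back is what the client receives
const server = http.createServer((req, res) => {
  if (req.url === "/") {
    res.writeHead(200, { "Content-Type": "text/html" });
    res.end("<h1>hello</h1>");
  } else {
    res.writeHead(404);
    res.end("not found");
  }
});

server.listen(8080);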

Your eyes are so firmly closed. Do you see users moving towards greater use of mobile browsers or native apps? The web is dying.

The ability to change the CSS of a specific element when you click on something else. That would replace probably 50% of the JS I write at work. It's commonly used for dropdown boxes and opening extra info things and contact forms. You'd need to be able to somehow "reset" the CSS on a variable number of elements though (e.g. all elements within a container).

Another extremely common JS function I use is smooth page scrolling. Single-page websites are very common for small businesses and they all require a menu that scrolls the page when you click the links. The #id link trick is very jarring and cheap looking in comparison.

Next is a gallery that enlarges the image when clicked on. That's impossible without the ability to load images dynamically. You could kind of solve that with something like loadifhidden="false" which would prevent images/videos from loading if the parent element is set to display:none or the image is not on screen. Then you just need the 'click to change css' thing to make the image visible.

Lastly, the ability to detect page scroll position with CSS. There needs to be a way to play CSS animations and shit when you scroll to a certain position relative to another element. Often I need to change the menu bar (e.g. make it visible/invisible) when you leave the top of the page, or play some animation when you scroll to the bottom, but those all require javascript.

Losing the auto-update and quick reply and post hiding/filtering on imageboards would suck cock though, so I wouldn't actually want to remove javascript.
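
For reference, this is roughly the JS I'm talking about; sketch only, all the element IDs are invented:

// dropdown / extra-info box: flip a class when something else is clicked
document.getElementById("menu-button").addEventListener("click", () => {
  document.getElementById("dropdown").classList.toggle("open");
});

// smooth scrolling for single-page menus instead of the jarring #id jump
document.getElementById("contact-link").addEventListener("click", (e) => {
  e.preventDefault();
  document.getElementById("contact").scrollIntoView({ behavior: "smooth" });
});

// react to scroll position relative to another element (e.g. hide the menu bar)
new IntersectionObserver(([entry]) => {
  document.getElementById("menubar").classList.toggle("hidden", !entry.isIntersecting);
}).observe(document.getElementById("top-of-page"));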

regarding programming functionality on the web, true machine code as opposed to a bytecode would force every web user to a single CPU ISA, or web developers would have to serve binary pages for different targets.

as for a binary data format I don't think it would make much of a difference. GZIPed HTML is already a thing supported by all browsers. Some space and coding time could be saved if we got rid of those ultra-verbose tags and replaced them with something simpler like s-expressions or meaningful indentation. Tags have their strengths though
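
if you want to see how much of the verbose-tag overhead compression already eats, here's a quick Node sketch (exact numbers will vary with the input):

const zlib = require("zlib");

// a deliberately repetitive chunk of markup
const html = "<ul>" + '<li class="item">hello world</li>'.repeat(1000) + "</ul>";
const gzipped = zlib.gzipSync(html);

console.log("plain:", Buffer.byteLength(html), "bytes");
console.log("gzip :", gzipped.length, "bytes");   // the tag verbosity mostly disappears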


no youtube celebs allowed
kill the facebook


javascript was originally supposed to be scheme. what really sucks about javascript is the type semantics. scheme, and lisp in general, doesn't have exceptionally good typing, but at least it's less braindead than javascript.

that's not native machine code. those things are interpreted by software

you can have separation of concerns and a single language.

remember the old lisp adage: code is data!
you could write your content data in lisp and save it in one place, write the presentation data in lisp and save it in another place, etc.

imagine if HTML and CSS looked like JSON or something. you'd only need to learn one language: Javascript

by the way...
THE WEB IS NOT THE INTERNET
fucking plebs, learn the difference if you wanna stay at Holla Forums for a long time

Actual machine code, as in binaries, would be a fucking terrible idea. I can't imagine how fucking unsafe it would be to let browsers implement their own binary sandboxes to avoid direct syscalls or linking to unwanted libraries (this usually requires platform-specific stuff, sometimes drivers/kernel module fuckery), not to mention x86 processors sometimes have some weird security bugs that are hard to fix. It would also make websites platform-specific, which means you couldn't open a website designed for phones on your desktop, for example.

Now, machine code for a virtual machine or bytecode would be more like it, but that wouldn't solve much. The problem isn't that JavaScript is shit, it's that everything about the web (except HTTP, which is acceptable) is shit.

While it's true it would be hard to make web development easy if you had to rely on a single language, the good thing about Lisp is that it's more or less like coding with XML, but without all the boilerplate. Although it is considered a bad idea because of how horrible it looks and because there are much better alternatives, there are some languages that allow actual programming (as in, defining algorithms and not just data) that use XML, because tree/list structures can be used to represent programs. That's pretty much the whole basis behind Lisp, and it's also why it is so powerful.

Lisp is basically S-expressions where the first element of a node defines the function it executes. S-expressions, as you know, can also be used to represent data if you simply define the first element of a node to be the "name" or type of the node instead of the algorithm that should be executed. This means "(sum 1 2 3)" could either be the sum of 1 + 2 + 3 in Lisp programming or a node called sum with the children 1, 2 and 3 in S-expressions, and they both use the same syntax and therefore the same parser. This means you can build the AST for both data definition and code definition, and the only difference between what they do is which execution engine you feed them to.
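
A toy illustration of that same-parser-different-engine point, sketched with nested arrays as the tree (the reader below only handles this one example):

// minimal s-expression reader: "(sum 1 2 3)" -> ["sum", 1, 2, 3]
function read(src) {
  const tokens = src.replace(/\(/g, " ( ").replace(/\)/g, " ) ").trim().split(/\s+/);
  function parse() {
    const tok = tokens.shift();
    if (tok === "(") {
      const list = [];
      while (tokens[0] !== ")") list.push(parse());
      tokens.shift();                     // drop the closing ")"
      return list;
    }
    return isNaN(tok) ? tok : Number(tok);
  }
  return parse();
}

const ast = read("(sum 1 2 3)");

// engine 1: treat the tree as code
const env = { sum: (...xs) => xs.reduce((a, b) => a + b, 0) };
const evaluate = (n) => Array.isArray(n) ? env[n[0]](...n.slice(1).map(evaluate)) : n;
console.log(evaluate(ast));               // 6

// engine 2: treat the exact same tree as data, like a markup node
console.log({ name: ast[0], children: ast.slice(1) });   // { name: 'sum', children: [1, 2, 3] }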

This would ideally be the case, but HTML is no longer used for structure. Well, it is, but at the same time it isn't, so it is a fucking mess. Most modern websites just use HTML for data, and CSS for layout and style. If you have worked a bit with CSS, you will know why it is so fucking horrible, and why not many people dare to write a CSS from scratch.

If you haven't, a good analogy would be comparing CSS for layouts to using strings (CSS positioning properties) to attach the "data boxes" (divs) and then pulling each string until everything is in place, sometimes having to fight against strings that have accidentally tangled, or trying to compensate for strings that are pulling in another direction. To make things worse, all strings are "loose", in the sense that they are not easily grouped; to know what strings are pulling from each data box, you need to look at said data box and all the data boxes containing/surrounding it to know what's causing said data box to have a seizure whenever you pull a certain string a few centimeters. CSS is acceptable for colors and the likes, but ever since Web 2.0 design/HTML5 became the standard, we have been doing more complicated things with CSS, and now we have some Frankenstein monster so complex that it is fucking Turing complete.

Most sane GUI languages use layouts. Qt, CSS, AWT/JavaFX... they all have one layout where the programmer can put objects inside "boxes". There are no floating boxes, and nothing can get out of their containers. But CSS doesn't, because it is hip and cool to make outrageously flashy websites with animations and therefore we have to use the most complicated an unintuitive piece of shit technology ever designed for UI definition just so we can support their animations and crazy moving divs. Fuck, we used to use tables and iframes for layouts, which were piss easy, but for some reason that isn't okay for the Apple SanFran retard hipster webdevs that kickstarted HTML5 so now it is considered a bad practice.

If it were me, I would separate data from style (colors) from layout. The layout file would define the structure of the website with a tree syntax, probably by dividing each node in a similar way to how i3 and other tiling WMs handle divisions, then it would pull data from the data file to display inside each box. The optional style file (everything should be so isomorphic that clients could reuse the same style sheets for several websites to make everything look "more native") would define the color scheme and other optional things that don't really change the structure of the page.
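
No idea if it would hold up in practice, but the three-file split could look something like this; everything below is invented, it's just to show the shape:

// layout: structure only, i3-style splits; leaves reference data by key
const layout = {
  split: "vertical",
  children: [
    { box: "header", data: "site-title" },
    {
      split: "horizontal",
      children: [
        { box: "nav", data: "menu" },
        { box: "content", data: "article-42" },
      ],
    },
  ],
};

// data: just content, no presentation
const data = {
  "site-title": "My homepage",
  "menu": ["home", "blog", "contact"],
  "article-42": "Lorem ipsum...",
};

// style: optional; the client could swap this for its own theme
const style = { background: "#1d1f21", foreground: "#c5c8c6", font: "sans-serif" };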

That's a terrible idea unless you can find a way to make an extremely generic parser generator that allows the client to generate the parser and feed it into the virtual machine or AST interpreter. I don't want to download executable code just because some guy felt like using a special snowflake language.

Our best bet would probably be defining a virtual machine or a language that is easy to transpile to, but JavaScript isn't just bad for being what it is (which is also a valid argument, because JS is bad in itself), it is also ugly because the DOM is braindead and no language can make operating on it look good.

HW don't be a fag

If it were me, I would suggest adding a script interpreter to the standard, but only exposing it to websites later if that turns out to be necessary. Instead, it would be used to handle some internal web browser stuff, and also to let web developers use libraries inside their data/layout for, for example, validating input before it's sent, infinite scrolling/autorefresh/delta updates, and perhaps some simple triggers like the ones described above (although fuck smooth scrolling and floating menu bars, that shit is unnecessary).

Also, standards should be enforced strictly. A browser that doesn't do everything the way the standard defines it should be considered non-compliant, but any browser that adds custom functionality should also be considered non-compliant. This is to avoid shit like custom engine attributes (-webkit-* and -moz-* shit) and another IE6.

That would be fucking cancer.

No kidding, no one would be stupid enough to do ActiveX again, especially in such a diverse environment as we have now.

So... what we have today with obfuscated mandatory Javascript on almost every site?

Except you can browse with javascript turned off. What OP suggested would turn the entire website into a proprietary blob and render any browser addon that manipulates the content on the web page (adblockers, noscript, uMatrix, etc.) worthless.

At least you can dissect that. We are forgetting that this is already happening. It's called apps and they can suck an asshole.

WebAssembly isn't machine code, but it's close. It's about the same level of abstraction as LLVM IR. Security comes from the fact that it lives in the JS sandbox and has the same capabilities as regular JS. So to guarantee security, you only have to guarantee that WebAssembly code executes the same way the corresponding JS code would have.
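
You can see that in the embedding API: a module gets no ambient capabilities, only whatever the JS side hands it in the import object. Rough sketch, the module URL and import names are made up:

// the import object is the module's entire view of the outside world
const imports = {
  env: {
    log: (x) => console.log("from wasm:", x),   // the only "syscall" this module gets
  },
};

WebAssembly.instantiateStreaming(fetch("page.wasm"), imports)
  .then(({ instance }) => {
    // exports are plain functions; no DOM, no network unless passed in via imports
    instance.exports.main();
  });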

Google had a thing called NaCl which was true machine code with some restrictions for security. They replaced it with PNaCl which is basically LLVM IR.

It's just another respin on Java bytecode or .NET bytecode for the web. Java bytecode failed because Sun was.. Sun. .NET bytecode failed because MS was way late to the party and tried to tie it all to Windows. So webassembly is nothing new, it's the future of 20 years ago we'd have had were it not for gross incompetence and greed.

You're glossing over the technical difference that WebAssembly doesn't expose any API that isn't already available in JS. This is important because it means that security is fundamentally the same as for ordinary JS. It is also easier to standardize since there are no new APIs.

AFAIK security and API standardization were big issues with all these other systems you're referring to.

How often do you try this? Many of the sites I visit don't work without it anymore.

I can dissect bytecode just about as easily as I can dissect obfuscated js.

I'd say at least 90% of my browsing is done with scripts turned off, cookies too. I've been doing it that way since 2008 or so.

You might as well just kill yourself right now.

Was the other way around, initially. Javascript was a security disaster with a bad API and Java was supposed to provide real sandboxing in a common way rather than have to spend 20 years trying to make yet another domain specific language secure across a dozen implementations and extend it to be more useful. Yet here we are.. Anyway, that's all ancient history by now and Javascript is now that monstrous re-invention of the wheel, we just need bytecode access still and that's really all webassembly seeks to provide. Nothing new, just the future of yesterday.

How to save the internet?
Well we just have to revert to the old "sane and stable" one.
Current web is like firefox 40 featuring australis and moz services / google safebrowse botnet.
The sane web is like firefox ~20 which does not include all the bloat.

That would be the worst case scenario.
Hackers would definitely love that stupid idea.

Remember 2005? Ten years ago the internet wasn't this shit and the internet before used to be fuck slow.
You don't need to optimize the whole web structure to work with muh GHz. That's a stupid idea. Back when WML was the popular shit on mobile and JS was only used for redirection, the web was insanely fast.. and mobile devices were still on shitty 100-200MHz processors @ 2MB RAM.
Now every website out there right now would always redirect you to a botnet playstore app (by js redirection to the market protocol). Now the jews are much more greedy this time, they want the "botnet installed on your device at all times, anytime anywhere."

What the fuck happened?
Ads back then used to be just cool GIF shit. Ads just went viral because muh "rich content web 3.0 shit"
The truth is we have to give up all this shit and go back to vanilla html4.0 (html5 is XML microkike shit) + jpg/gif/png/midi/ogg...

We just need to turn off all this tracking and adshit, but even the major browsers today are making it easier for ads to sneak in, and the tracking shit (say, google safebrowse) is integrated inside both of the big two.

We don't need more code and shit. What we need are designs and protocols to follow. Good code with bad design is always bad code. We need information architecture and engineering.

Cool, you're hired user.
-t.NSA

This

meme magic is truly a mysterious power

Check out Urbit

Check out loper-os.org/?p=1390

OK. I'm on board with s-expressions. How can we isolate expressions to determine what each one does, and choose whether to run it or not?

why not do it like ssh but with vector graphics and shiet?

It's simple: the browser interprets a file as it is told. It sounds stupid, but it's truly the best way.

Say, a standard page includes the following files:
Each file has a header (shebang style) or file extension that tells the browser how it should be interpreted. After reading the header or the file extension, the browser will interpret it one way or another.

No problem. Instead of headers, you can make level 1 nodes that tell the browser how it should be interpreted (say, the root tag would tell the browser how to interpret its children instead of being a useless thing put there just to satisfy the need for a root node).
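
So the browser-side dispatch would be something like this, reusing the nested-array stand-in for s-expressions from earlier in the thread (the interpreter names are invented):

// pick an interpreter based on the root node's head symbol
const interpreters = {
  layout: (children) => console.log("laying out", children.length, "boxes"),
  data:   (children) => console.log("registering", children.length, "data nodes"),
  script: (children) => console.log("handing", children.length, "forms to the sandboxed interpreter"),
};

function interpret(root) {
  const [kind, ...children] = root;
  const engine = interpreters[kind];
  if (!engine) throw new Error("unknown root node: " + kind);   // strict: no guessing
  engine(children);
}

interpret(["layout", ["box", "header"], ["box", "content"]]);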

This is not a good idea since it opens the browser to vulnerabilities such as XSS and other tag injection cracks, and everything that can be done with this can be done without it. Still, if you are sure you want to do stuff like calling scripts from the script file inside the data or layout files as some sort of "expand here" macro, or associating elements with some script and everything that entails, you could specify script names by opening the node with a special character (like "($muhfunction arg1 arg2)").

Still, it is not recommended. Decades of experience with the actual web and we STILL suffer from the same vulnerabilities because escaping everything is not easy (look at OWASP; escaping HTML is easy as long as you don't want to escape Unicode, but dig any further and you will start to lose your sanity), and it provides no advantage other than syntactic sugar.

Some considerations:

Other considerations not directly related to S-expressions:

You casuals still use the internet? I've been on an intranet connection with my friends for years. LAN only, libre boot and NO jewish menace or NSA.

Another fellow Web 1.0 traveler. I too miss the old days, user.

More like:

Why the shizzle is html and js and css sent in ascii instead of minimized binary equivalents?

I mean why the fug would you use xml to send data over a network.

Yeah there is gzip but meh, it really should be compressed from the start.

You wouldn't want botnet to be executed directly via your browser, would you? Imagine the increased privacy nightmare if what you said was true. At least now with plain text it is easier to audit whatever is being sent to us.

I genuinely want to work on this now. Let's start working on a logo.

Shit man, kicking it back to 2006.

What about forbidding third-party resources? It would promote redundancy and make tracking more difficult.

bump

op ilu

...

fucking art

this is the most important issue


yes, but are they?

So like, that sounds very, very exploitable.

xml makes java the "acceptable" lisp

Yours is a great idea, but I have no NSA-paranoid friends, so it's impossible to try to convince them.

Ban all autists so Holla Forums could be great again and infinity NEVER EVER would not have had the chance to waste $12k.

The reason not to use tables for layout is that it abuses the semantics and makes the website shit on screen readers for blind people.

The solution for the longest time has been awful hacks which required pointless page elements and hardcoded sizes. But with Flexbox it is now pretty easy to make a responsive layout that's actually flexible and does not require any hacks.
philipwalton.github.io/solved-by-flexbox/

But I still agree that using CSS for layout was a horrible idea. It's called Cascading Style Sheets for a reason. Seriously, for styling CSS is pretty nice, it's all the other things that have been bolted on that are cancer. There really needs to be a separate layout language.

Why stop there? The internet should be for men only, whites only, and straights only. Keep fags and trannies off my internet.

You would disallow based milo yiannopoulos?

Gays actually had a big part in creating the Internet. McKusick, Allman, and heaps of others.

A very old flame war was threatening to create "Jesuix" -- a WASP-only Unix system with no homos allowed. As the Holla Forums-level tards suggesting this were all Fidonet dropkicks and didn't have college degrees or experience, it's been 30+ years coming.

...

...

...

And weebs. holy shit have they turned the net into a cesspool.

Wish I could revive the BBS glory days somehow, but it is too far gone, along with the tech.

This would make creating websites considerably more difficult.
You very well COULD do this if you wanted to—create your own "browser" that doesn't parse HTML, and instead renders what the web-server tells it to.

But on top of this probably being slower to start, and harder to build sites for, it would also require more processing power for every individual user, and it wouldn't just be when the user makes a request: the server would need to maintain a connection and handle all events that occur until the user leaves the page.
And of course, there's the whole issue about security of what is being sent to you.

A parseable mark-up language is really the only reliable thing you could do.

I remember the first days of the internet. When computers weren't easy enough for women and niggers.

I remember that most people outside of academia preferred anonymity over namefagging, and that this was the status quo. Most governments had not yet caught on in many ways, as they were made up of people who had never touched computers for various reasons.

I took a certain pride in being part of the internet; mailing lists, IRC, BBSes and newsgroups were part of my daily habit of checking up on things.

The internet as we have it now actually serves the demands of most users. And most users want maximum convenience in accessing what they need. The kind of solution you propose has been tried many times before; look up Videotex and the Minitel machine, for example. It failed, even with government support and subsidy, as those systems were too static to grow with the ever-growing expansiveness of the net, its protocols and standards.

The thing that makes for successful systems has always been extensibility of the said systems. Either through hardware or software.

add a faggot stick man with a hat and make the quote more pretentious.

s/(.*)/\1, m'lady/

The latter is annoying, the former (the one we have now) is a political and societal nightmare.