Current processor situation and building a PC to last

I'm sure a lot of people here have heard about Intel's IME that's been in all new Intel motherboards since ~2009, AMD's PSP that's in all their post-Piledriver processors (including the new Zen), and the problems with them.

I've been getting a bit worried lately by how much of an improvement Zen is supposed to be over AMD's Piledriver and earlier processors, with claims of a 60%-100% improvement over the 8-core FX line. If that's true, then even a low-end 3 GHz Zen processor will likely outperform AMD's full retard FX-9xxx line in multi-core performance. If the current trend of unoptimized, bloated shit continues, this could pose a major problem for anyone using one of these older processors: modern JS-heavy web pages can already make a computer with a 7-8 year old higher-end consumer processor hang, and newer web browsers have been moving to multiple threads, which will let web devs make sites even more bloated.

How possible is it currently to build a computer that matches those performance claims without using an IME/PSP-enabled processor or another manufacturer's equivalent? Furthermore, is it possible to do so cost effectively? The only options I'm really seeing while staying with x86 are multi-CPU builds with Piledriver-based AMD Opterons, but I'm not sure how well multi-CPU performance scales, and I've heard claims that used server processors don't last.

fuck x86, powerpc will be back again, more powerful than ever

Did a bit more research on how multi-processor performance scales and still haven't found a good answer. Looking at benchmarks, the gain is anywhere between 50% and 100% for adding a second processor (or for going from 2 processors to 4), though the dual-processor systems usually have about 2x the RAM (sometimes more) as well.
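Not from any of those benchmarks, but one way to think about why the numbers are all over the place is Amdahl's law: the parallel part of a workload speeds up with more hardware, the serial part doesn't. Rough Python sketch, with made-up parallel fractions just for illustration:

```python
# Amdahl's law sketch: speedup = 1 / ((1 - p) + p / n)
# p = fraction of the work that can run in parallel (hypothetical values below)
# n = how many times more parallel hardware you throw at it
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# Doubling the hardware (n = 2):
for p in (0.5, 0.8, 0.95, 1.0):
    print(f"p = {p:.2f}: 2x hardware gives {amdahl_speedup(p, 2):.2f}x speedup")
# p=0.50 -> 1.33x, p=0.80 -> 1.67x, p=0.95 -> 1.90x, p=1.00 -> 2.00x
```

Doesn't account for NUMA or memory bandwidth, which is probably part of why real dual-socket numbers land closer to the low end of that range.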

Are you retarded? It depends on the application. OSes themselves have good SMP performance nowadays.

I figured it would be similar so long as the applications could take advantage of multiple cores.

Just use an old fucking CPU you retard, and run Gentoo. In fact, run any Linux, it doesn't matter

People lived with 500MHz in the past, you can live on 2.6GHz

running gentoo on 500MHz

Bring back MC68060
Make Amiga Great Again

Currently using a Phenom quad core clocked at 2.2 GHz. My ISP's website needs so much bloated JS just to view it that it makes my browser hang for several seconds; even with a browser that supports multiple threads, I would only be able to open 4 pages like that at a time before sitting and waiting. More and more websites are becoming dependent on JS for formatting, or even for being able to look at the site at all nowadays. I'm worried that simply sticking with older processors will stop being viable for using large parts of the web (mainly worried about news).

No worries, my overclocked high-end processor is also choking on news sites :^).

Shits yeh, brah.

Waiting for China to bring a competing CPU is also probably useless. Even if they make one, that thing will have more open backdoors than a bus full of catholic schoolgirls.

Get yourself some ublock and noscript, son

I've been blocking all JS that isn't necessary for what I want to do for probably 7 years now. The example with my ISP's website is when I'm only allowing the JS required to view the site.

...

This may also help.

cpubenchmark.net lists dual-CPU builds and shows most of the dual-CPU AMD Opteron builds benchmarking about 50% higher than single-CPU builds with the same model. It also shows the most powerful Opteron being comparable to the FX-9590. I would say that's way past the point of diminishing returns currently; maybe if/when used Opteron 6380s come down in price, but I wouldn't pay more than twice the price of a new FX-9590 for used processors that only give a 50% performance improvement.
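To make that "not worth more than twice the price" reasoning explicit, here's a trivial perf-per-dollar sketch. The scores and prices are placeholder variables, not real figures; plug in actual benchmark numbers and current listings:

```python
# Crude performance-per-dollar comparison with placeholder numbers.
fx9590_score, fx9590_price = 1.00, 200.0   # normalize the FX-9590 to 1.0 (price is hypothetical)
dual_opteron_score = 1.50                  # ~50% faster, per the dual-CPU benchmarks above
dual_opteron_price = 2 * 250.0             # two used CPUs (hypothetical price each)

print("FX-9590 perf/$:     ", fx9590_score / fx9590_price)
print("Dual Opteron perf/$:", dual_opteron_score / dual_opteron_price)
```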

Forget about it

I gave up on "future proof" a long time ago, saved money and still had a good experience overall

You don't know what you're talking about. It totally depends on the task.
For example, if you do some work with GNU parallel (e.g. parallel + imagemagick) or x264/x265 (don't use x265 without AVX2, you'll suffer), it can keep scaling until the RAM bandwidth is saturated.
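To show the "depends on the task" point with something in the same spirit as parallel + imagemagick, here's a rough Python multiprocessing sketch of an embarrassingly parallel, CPU-bound job. The work function is a stand-in for a real convert/encode job, not imagemagick itself:

```python
# Rough sketch: embarrassingly parallel CPU-bound work split across a process pool.
import time
from multiprocessing import Pool

def fake_image_job(seed):
    # Placeholder CPU-bound work; a real job would shell out to imagemagick/x264.
    total = 0
    for i in range(2_000_000):
        total += (seed * i) % 97
    return total

if __name__ == "__main__":
    jobs = list(range(32))
    for workers in (1, 2, 4, 8):
        start = time.time()
        with Pool(workers) as pool:
            pool.map(fake_image_job, jobs)
        print(f"{workers} workers: {time.time() - start:.2f}s")
```

Something like this scales close to linearly up to the physical core count; once the job becomes memory-bandwidth bound (big images, video encoding), the extra cores stop helping, which is the point above.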

Actually it's pretty easy. Plain text is the ultimate format: text consoles, ASCII files, the basic Unix commands that manipulate them, text protocols, text-mode browsers, Gopher, Usenet, mail, IRC, etc.
Once you deviate from that, you'll forever be a slave to the upgrade jew.
But if you stick with plain text, even an '80s or '90s computer is usable today.
Pic is a Unix-like shell for 8-bit CP/M computers.

The only problems with that are that sites requiring JS to even work are becoming more common, and some sites are unusable without CSS (sites with posts sorted as trees, reddit being a common example). Also, for really old machines, you're going to want a 32-bit processor if you want any encrypted connections, and a few MB of RAM if you want to deal with multiple large web pages (the HTML alone for a 4chan thread at 300 posts is 405 KB, for example). You aren't going to be doing any practical web browsing on an 8-bit machine.

Perhaps it's not the processor but bufferbloat on the ISP side.

It's my processor. When I enable the necessary JS and refresh the page, I can see one of my cores hit 100% in htop when I open the site, and it stays that way until my browser starts responding again. I'm using Iceweasel, by the way.

The website is charter.net if you want to try it.

I WANT THAT FUCKING DESK

For encryption, you can use a proxy on a cheap-ass ARM board. In the case of an 8-bit computer, you'd probably need to use that board as a Unix shell account over a serial link.
But Amiga and Atari computers should be able to run Lynx or an equivalent natively. They can have a decent amount of RAM, and even the low-end M68000 chip is 32-bit internally.
The CSS and JavaScript stuff is a problem though. Some sites have an API you can use instead of being stuck with only the web interface. For the ones that don't, maybe an equivalent is possible, like with this framework:
weboob.org/
Ultimately, all the information on the servers is text stored in a DB. It's totally stupid to keep upgrading computers just because they keep making more bloated web stuff every year. If they don't want to give us the source text directly, we have to build our own tools to get it back, or something close to it; a rough sketch of the idea is below.
Pic is UNIX System V running on a PC 7300 with a 16 MHz M68000, 2 MB RAM, and a 36 MB disk.
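Here's a minimal sketch of "getting the text back" using only the Python standard library (not weboob, and no JS execution, so it won't help with pages that render everything client-side). The URL is a placeholder; point it at whatever page you actually want:

```python
# Minimal sketch: fetch a page and strip it down to its visible text, stdlib only.
from html.parser import HTMLParser
from urllib.request import urlopen

class TextOnly(HTMLParser):
    def __init__(self):
        super().__init__()
        self.skip = 0        # nesting depth inside <script>/<style>
        self.chunks = []     # collected text fragments

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self.skip += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self.skip:
            self.skip -= 1

    def handle_data(self, data):
        if not self.skip and data.strip():
            self.chunks.append(data.strip())

url = "http://example.com/"  # placeholder URL
raw = urlopen(url).read()
parser = TextOnly()
parser.feed(raw.decode("utf-8", errors="replace"))
text = "\n".join(parser.chunks)
print(f"{len(raw)} bytes of HTML -> {len(text)} bytes of text")
print(text)
```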

That setup looks comfy as hell. Polite sage for off topic.

holy shit that website really is bloated

Only enabled one of the CloudFront JavaScripts; it used 25% CPU and stuttered my browser window while I was moving it. Processor is an i5-680 at 3.6 GHz.

There were another 10 JavaScripts yet to be enabled; I won't dare enable them because there's probably malware.

Running Debian with LXDE on a 1.35 GHz AMD E1 APU. It works faster than Windows 10 on an FX-8350 at 4.6 GHz. The only downfall is fucking Facebook; it's too big-fat-ass for my APU, but it still works (slow as hell, though).

Calling BS, my Openbox is using 29MB

That infographic is old. dwm is light but uses more than a meg

Yeah, i3 by itself uses about 9 MB, and with the i3 status bar it uses 19-22 MB on my machines.
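For what it's worth, those MB figures depend on how you measure. Here's a rough Linux-only sketch that reads resident memory (VmRSS, what htop shows as RES) straight out of /proc; swap the process name for whatever WM you're checking:

```python
# Rough sketch: report resident memory for processes matching a name, via /proc.
import os

def rss_kb(pid):
    # VmRSS from /proc/<pid>/status is reported in kB.
    try:
        with open(f"/proc/{pid}/status") as f:
            for line in f:
                if line.startswith("VmRSS:"):
                    return int(line.split()[1])
    except OSError:
        pass
    return None

name = "i3"  # or "openbox", "dwm", ...
for pid in filter(str.isdigit, os.listdir("/proc")):
    try:
        with open(f"/proc/{pid}/comm") as f:
            comm = f.read().strip()
    except OSError:
        continue
    if comm == name:
        kb = rss_kb(pid)
        if kb is not None:
            print(f"{name} (pid {pid}): {kb / 1024:.1f} MB resident")
```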

i'm living on 800MHz

underrated post tbh.

only living on 4.6GHz * 8 cores
hard knock life famalam