Circuits are inherently parallel, and there is quite a lot of parallelism involved even in a single core, all to keep up the illusion of sequential steps. You as the programmer express intent, but you are not concerned with how the instructions are actually evaluated, only that they produce a correct result within a certain time. I imagine future designs will rely less on the programmer to decide what should be executed in parallel and more on an interplay between the compiler and the CPU. If that's not an option, we're going to have to turn everything into a CUDA-like style.
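A rough sketch of what that split looks like today, in Rust with only the standard library. The function name, the chunking, and the sizes are just illustration: the programmer states which pieces of work are independent, and the OS scheduler plus the core's own out-of-order machinery decide how they actually run.

use std::thread;

// The caller only promises that the chunks are independent; how the threads
// are scheduled and how each core overlaps the individual instructions is
// left to the OS and the hardware.
fn parallel_square(data: &mut [u64]) {
    let cores = thread::available_parallelism().map(|n| n.get()).unwrap_or(1);
    let chunk = ((data.len() + cores - 1) / cores).max(1);
    thread::scope(|s| {
        for part in data.chunks_mut(chunk) {
            s.spawn(move || {
                for x in part {
                    *x *= *x; // chunks never overlap, so this is safe to run concurrently
                }
            });
        }
    }); // the scope joins every worker before returning
}

fn main() {
    let mut v: Vec<u64> = (0..1_000_000).collect();
    parallel_square(&mut v);
    println!("{}", v[999_999]);
}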
7nm 5nm 4nm the end
How does it feel being brain damaged?
Besides Rust, name one.
Oh, you're a Rust fanatic. That explains the brain damage; probably lacking in the frontal lobe department. But anyway: Fortran, Pascal, Ada, Lisp, Scheme, Forth, Haskell, Prolog, ParaSail... I could go on, since at a minimum every language that predates C counts, but you get the idea.
Oh good. Now name one that actually accounts for parallelization at the assembly level, instead of just reaching for it as an excuse. Or a processor that allows for that in the first place, and where the heat doesn't destroy the performance or energy efficiency of said parallelization. Hint: it's not x86.
How about every language I just mentioned? Although ParaSail tops the list where parallelism is concerned, having implicit parallelism absolutely everywhere. Your brain damage is preventing you from understanding that as long as a language has some sort of parallel semantics, it can be compiled to involve parallelism at the assembly level, should the architecture allow it.
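Minimal example of what parallelism at the assembly level can mean even without threads, sketched in Rust; the vectorization claim describes the typical behaviour of an optimizing build, not a guarantee. A pure integer reduction has no aliasing, no side effects, and an associative operator, so the compiler is free to regroup it and will usually lower it to SIMD instructions. You can check the generated assembly with cargo rustc --release -- --emit asm or the Compiler Explorer.

// Widening dot product over integers. The semantics leave the optimizer free
// to batch the multiplies and adds, so an optimizing build is free to emit
// SIMD code for this loop on x86-64 or AArch64.
fn dot(a: &[i32], b: &[i32]) -> i64 {
    a.iter()
        .zip(b.iter())
        .map(|(&x, &y)| x as i64 * y as i64)
        .sum()
}

fn main() {
    let a = vec![2_i32; 1024];
    let b = vec![3_i32; 1024];
    println!("{}", dot(&a, &b)); // 6144
}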
And you know nothing of CPU design either. What are pipelines? What is multicore? What about hyperthreading? What is out-of-order execution? What are multiple ALUs even used for, anyway?
Slow instructions executed in serial that then
Have to be put into order by
A slow scheduler that takes up die space, generating large amounts of heat, as it slows down the execution of
This. Those logic units are general-purpose, not dedicated to a task, taking up further space on the die and creating more heat than necessary. No one needs ancient, slow assembly calling an ALU that is only on the die for backwards compatibility.
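For what it's worth, here's a small sketch (Rust, standard library only, numbers made up) of what those extra ALUs actually buy you. The first loop is one long dependency chain, so only one add can complete at a time; the second carries four independent accumulators, so a superscalar core can keep several ALUs busy in the same cycle. A modern compiler may already turn the first into something like the second, which is exactly the compiler/hardware interplay being argued about here.

// One dependency chain: every add waits for the previous one.
fn sum_serial(data: &[u64]) -> u64 {
    let mut acc = 0u64;
    for &x in data {
        acc = acc.wrapping_add(x);
    }
    acc
}

// Four independent accumulators: the adds inside one iteration do not depend
// on each other, so several of them can issue in the same cycle on a
// superscalar core.
fn sum_unrolled(data: &[u64]) -> u64 {
    let mut acc = [0u64; 4];
    let chunks = data.chunks_exact(4);
    let rest = chunks.remainder();
    for c in chunks {
        acc[0] = acc[0].wrapping_add(c[0]);
        acc[1] = acc[1].wrapping_add(c[1]);
        acc[2] = acc[2].wrapping_add(c[2]);
        acc[3] = acc[3].wrapping_add(c[3]);
    }
    let mut total = acc[0].wrapping_add(acc[1]).wrapping_add(acc[2]).wrapping_add(acc[3]);
    for &x in rest {
        total = total.wrapping_add(x);
    }
    total
}

fn main() {
    let data: Vec<u64> = (0..1_000_000).collect();
    assert_eq!(sum_serial(&data), sum_unrolled(&data));
    println!("{}", sum_unrolled(&data));
}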
Right, but most of them, x86, x86-64, Itanium, MIPS, and PowerPC, don't, and what you get instead is out-of-order execution that eats up CPU cycles that could be used for something else. ARM kind of does, but it's ruined by backwards compatibility, fixed-function blocks like H.264 decoders on the die, and TrustZone. RISC-V is the future precisely to compensate for all of the above. But that means a language that accounts for parallelization of instructions at the assembly level, not a dedicated out-of-order scheduler on the CPU die.
Like bees against the honey, right?
You Americans are stoopid; having only two parties in a country really warps reality.
>ARM TrustZone being embedded on each chip, further reducing security and energy efficiency
You have no idea what you are talking about. Come back when they start teaching you hardware architectures and virtualization at your college.
So what, a 16-bit microcontroller with a permissive licence in desktop form? R-right.
That's called JavaScript.
10/10 bait
This is bullshit. I can do 10 billion SHA-256 calculations per second on my desktop right now, and that's still scaling out of control. Just because someone can't get their Visual Studio project going any faster doesn't mean other people aren't making it happen every day with different methods.
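For scale, here's a rough single-core measurement sketch in Rust. The sha2 crate (sha2 = "0.10" in Cargo.toml), the iteration count, and the expected numbers are my own assumptions, not something from the thread: a plain CPU core lands in the millions of hashes per second, and ten billion per second is GPU or ASIC territory.

use sha2::{Digest, Sha256};
use std::time::Instant;

fn main() {
    let block = [0u8; 64]; // one 64-byte message per hash
    let iterations = 5_000_000u64;

    let start = Instant::now();
    let mut sink = 0u8;
    for i in 0..iterations {
        let mut hasher = Sha256::new();
        hasher.update(block);
        hasher.update(i.to_le_bytes()); // vary the input so the work can't be folded away
        sink ^= hasher.finalize()[0];
    }
    let secs = start.elapsed().as_secs_f64();

    // Expect single-digit to low double-digit Mhash/s per core in pure software.
    println!("{:.1} Mhash/s (sink={})", iterations as f64 / secs / 1e6, sink);
}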
It's time to learn how to write code that scales. That is what will set you apart from the flood of Hour of Code Pythonistas and pink-haired tech evangelists. The Cray X1 was one of the fastest supercomputers in the world in 2004 at 5.9 teraflops, which is midrange for goddamn VIDEO GAMES today. Scaling is still happening; ignore the Intel marketing that wants you to accept the engineering process they stalled.