So I had a few questions about CPU design and production, and I was wondering if there are any actual semiconductor engineers lurking here who could really explain things:
Why not just increase the die size? I know that freaky quantum shit starts happening if you make the transistors too small, but why not just increase the surface area of the silicon they're burned into?
If size is really an issue, why not just have multiple dies intercommunicate on a single chip?
Is 3D architecture possible? Can you layer transistors on top of one another?
There's research on 3D but nothing you can buy yet. It has plenty of problems too: the active volume (and the power it burns) grows as the cube of the scale factor, while the surface area available for heat dissipation only grows as the square. I guess the problem with increasing size is the same: you'd waste more power, which means more heat.
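Quick toy sketch of that cube-square argument in C (pure geometry, no real process numbers):

```c
#include <stdio.h>

/* Cube-square intuition: scale a block of logic by k in each
   dimension - the active volume (roughly proportional to power)
   grows as k^3, but the surface you can pull heat out through
   only grows as k^2, so heat flux per unit of surface grows
   linearly with k. */
int main(void) {
    for (int k = 1; k <= 4; k++)
        printf("k=%d  volume x%2d  surface x%2d  heat flux x%d\n",
               k, k * k * k, k * k, k);
    return 0;
}
```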
Jose Cooper
...
Dylan Morgan
Because we have been shrinking them: smaller size means less current needed and less heat expelled.
What exactly would that solve? We already have multi-core CPUs.
It is three-dimensional; everything is.
Whether you can layer transistors I don't know; most CPU manufacturing is proprietary.
Heat is the major barrier to faster CPUs.
Adrian Sanders
I'd be willing to pay extra
Gabriel Cruz
...
Justin Martinez
For what though?
Adam Cruz
Better performance? More instructions executed per second? 128 bit architecture?
Benjamin Bell
Then IBM POWER8 chips are for you. They have gigantic dies (659 mm²), up to 12 cores with 8 threads each, and cost a fortune. The solid copper heat sink they need costs several hundred dollars; the CPU itself used to be the size of a beer coaster, but newer versions are smaller.
Levi Hill
That's a big CPU...... 4me
Angel Cox
This. IBM still makes the most technologically advanced hardware. I used to work with POWER8 hardware. Hands down the most powerful computers I've used.
Anthony Gutierrez
One off m8
And the 10+ core Xeons completely destroy the Power8
Jose Thomas
I am aroused
Noah Hall
They are also the only CPUs which support NVLink
Ethan Nguyen
Depending on the workload, the Xeons are competitive. Database-wise, though, the POWER8 mops the floor with them.
Josiah Howard
...
Luke Green
You can't communicate data across a big chip fast enough. Assuming the speed of light and a 4 GHz clock, a perfect signal could travel through air about 75 mm per cycle. In reality, to send a signal on a chip, a transistor has to switch on and sink or source enough electrons down a tiny copper filament - which has resistance and capacitance per unit length - to produce a large enough voltage shift in all the connected transistor gates (capacitive inputs), well before the next rising edge of the clock. Say that voltage swing is between 0.9 V and 0.1 V. It just takes time (rough numbers in the sketch below).
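To put rough numbers on that - a quick C sketch; the RC values are made-up ballpark figures, not from any real process:

```c
#include <stdio.h>

/* Back-of-the-envelope numbers for the paragraph above. The RC
   values below are assumed, order-of-magnitude only. */
int main(void) {
    double c_light = 3.0e8;   /* speed of light, m/s */
    double f_clk   = 4.0e9;   /* clock frequency, Hz */
    printf("light per cycle: %.1f mm\n", c_light / f_clk * 1e3); /* ~75 mm */

    /* crude Elmore-style estimate for an unbuffered on-chip wire:
       delay ~ 0.5 * r * c * L^2, with r and c per unit length */
    double r   = 1.0e5;       /* ohm/m   (assumed) */
    double c   = 2.0e-10;     /* farad/m (assumed) */
    double len = 0.01;        /* 10 mm wire */
    double delay = 0.5 * r * c * len * len;
    printf("RC delay over 10 mm: %.2f ns (cycle budget: %.2f ns)\n",
           delay * 1e9, 1.0 / f_clk * 1e9);
    return 0;
}
```

At 4 GHz you only get a 0.25 ns cycle budget, and even with these charitable numbers an unbuffered 10 mm wire eats about 1 ns. That's why long on-chip wires get chopped up with repeaters.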
So to do more work on a bigger chip, the computational elements either have to run at a lower frequency, or you need islands of computing that communicate with neighboring islands at lower frequencies - and at some point you might as well just make another chip and solder it down to a PCB.
Besides, the architectures that would do computation in this distributed fashion are nothing like a modern, very complex pipelined, speculative, superscalar, OoO CPU core.
Modern CPUs are kind of an idea from the '40s or '50s, with features invented in the '60s and '70s, carried to their logical extreme with insane fabrication techniques.
The attempt to do more computation faster is trying different architectures - see GPUs and their related, different programming models. To do more with the existing CPU designs is going to be very hard. Even if you have faster transistors, can you move data fast enough, and far enough, between related transistors to do any useful work?
But yeah, gluing multiple dies down on some sort of interposer is a thing, particularly when combining logic and RAM. This is great because dropping memory latency and increasing bandwidth for cache-line fills and spills reduces the time existing designs spend waiting for data to process. But again, this is an optimization of the older design.
Also heat.
3D architectures: see heat.
Alexander Gomez
Your comment about the speed of light is good, because that's one of the areas being investigated for making processors faster. One of the professors mentioned something like a 200x speed increase from switching from electrical to optical transmission, but it's almost certainly going to be less in practice.
And wasn't Intel's Core 2 Quad basically two dual-core dies in a single package?
Nathaniel Powell
Like others said: $. But I suggest downloading a book on semiconductor design for more insight into the manufacturing process. The main issue is yield. Even if there are several consumers willing to pay for it, getting the yield to acceptable levels takes years for any single process; basically, it wouldn't even pay for itself (see the yield sketch below). You might be interested in supercomputer processors - manufacturers make those for prestige, not for money. You also have to take latency into account: the larger you make the die, the longer signals need to travel. Propagation speed is about half the speed of light, and you have to add buffers along the way (otherwise the signal disperses), so it does take a while. You also can't just crank up the voltage, because you'll burn everything. What do you want, more cores? Just buy a server chip. Which reminds me - servers already have bigger dice; they're past 10 cores now.
On multiple dies on one chip: the manufacturing process isn't there yet. I think HP declared they were working on something like that in 2014.
On 3D: again, read about the manufacturing process (I suggest the relevant chapter in Sedra & Smith), but the short answer is no. Transistors aren't "burned" into the silicon, they're grown on it, and the interconnecting layers of metal are grown on top of that. You'd have to reinvent the manufacturing process down to the level of how the transistors themselves are made, so it might take a while. So: neither. You're just lacking some of the relevant knowledge.
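For the yield point, here's the classic first-order Poisson yield model in C; the defect density is an assumed number just to show the trend, and 6.59 cm² is the POWER8 figure from upthread:

```c
#include <math.h>
#include <stdio.h>

/* Poisson yield model: yield = exp(-D0 * A), with D0 the defect
   density and A the die area. D0 below is assumed, illustrative
   only - real numbers are closely guarded by the fabs. */
int main(void) {
    double d0 = 0.1;                          /* defects per cm^2 (assumed) */
    double areas[] = { 1.0, 2.0, 4.0, 6.59 }; /* die areas, cm^2 */
    for (int i = 0; i < 4; i++)
        printf("%.2f cm^2 die -> %.0f%% yield\n",
               areas[i], 100.0 * exp(-d0 * areas[i]));
    return 0;
}
```

So a die the size of a POWER8 loses roughly half its candidates to defects even at a decent defect density, which is exactly why nobody "just increases the die size".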
Jose Jackson
Daily shill reminder that the TALOS WORKSTATION is a real thing by a real company that needs your help by showing interest
Grace Hopper explained the limitations of the speed of light in computing perfectly with a short length of string. We really need to have someone like her around to eloquently describe quantum computing.
These odd POWER workstations aren't anything new. They're always horrendously expensive; for the same amount of money you could get a dual-Xeon system that beats it.
POWER is useful when you have tens of millions to spend and need to crunch numbers 24/7 with a dedicated team that writes special software for it. Not for personal computing.
Ayden Ramirez
Pretty much this. It's not about money; Intel already dumps billions into R&D and it doesn't help.
Jaxson Murphy
What's the point? You'll just end up with even more bloated and inefficient code, and no real progress. So basically just extra wiggle room for botnet to thrive in.
Liam Adams
uh, ok.
Xavier Fisher
I'll spend the extra money
t. liars
Blake Garcia
Quantum computing is equivalent to πfs.
Caleb Gray
OK
From these replies I gather that making smaller transistors, increasing the die size, and 3D architecture are all limited by physics, money, and time. That being said, what can we do to improve communication speed between multiple CPUs on a single motherboard? Also, would asymmetrical opcode execution be a thing - like having the CPUs take turns executing instructions? Would that make things faster, or just more bloated?
Jason Morris
...
Xavier Hall
Are you high, son?
Connor Murphy
How would you even do anything? Register reordering/renaming, pipelining, branch prediction - syncing all that shit across chips is going to be hell. The compromise is how modern CPUs already work: they fetch and distribute operations between multiple units (ALUs, AGUs, FPUs, etc.) after doing the micro-op translation (mostly on CISC), plus the reordering/renaming. They do not execute one instruction at a time; fetching is done in chunks, and all of this happens simultaneously thanks to pipelining - while you decode instruction A you're already fetching B, for example (see the toy pipeline below). It's one of the most important parts of CPU design.
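If it helps, here's a toy 3-stage in-order pipeline in C showing the "decoding A while fetching B" overlap (real pipelines have 14+ stages, forwarding, stalls, etc., so this is just the skeleton):

```c
#include <stdio.h>

/* Toy 3-stage in-order pipeline: at each cycle every stage holds a
   different instruction, so while I(n) is being decoded, I(n+1) is
   already being fetched - throughput approaches 1 instruction per
   cycle even though each instruction has 3 cycles of latency. */
int main(void) {
    const char *stage[] = { "FETCH", "DECODE", "EXEC" };
    const int n_instr = 5, n_stage = 3;
    for (int cyc = 0; cyc < n_instr + n_stage - 1; cyc++) {
        printf("cycle %d:", cyc);
        for (int s = 0; s < n_stage; s++) {
            int i = cyc - s;  /* instruction currently sitting in stage s */
            if (i >= 0 && i < n_instr)
                printf("  %s(I%d)", stage[s], i);
        }
        printf("\n");
    }
    return 0;
}
```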
Grayson Perez
.. they add Mossad snoop experience, useless GPUs and diversity boosters.
Zachary Barnes
Suicide sounds more fun. What I'm trying to understand is why you need bigger processors when you already have server and supercomputer dice. If you want to distribute a big task, that's what farms are for - why do it all on one machine? It's actually less cost-effective. And that's kind of how a single core already works; read about out-of-order machines.
Xavier Richardson
Since ppl here are retarded I had to answer this one.
Die size has to do with how fast signals propagate. Yes, it's said that electricity travels at the speed of light, but real wires have resistance and impurities, and even across the small area of current dies that makes a difference.
Also, wafers are made out of silicon and it's hard to produce bigger pure wafers, so there are obviously cost reasons involved too (rough numbers in the sketch below).
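On the cost side, the standard dies-per-wafer estimate in C, assuming a 300 mm wafer (die areas picked to echo the thread):

```c
#include <math.h>
#include <stdio.h>

/* Standard dies-per-wafer estimate:
   DPW ~ pi*(d/2)^2 / A  -  pi*d / sqrt(2*A)
   where the second term accounts for partial dies lost at the
   wafer edge. 300 mm wafer assumed. */
int main(void) {
    const double PI = 3.14159265358979;
    double d = 300.0;                         /* wafer diameter, mm */
    double areas[] = { 100.0, 300.0, 659.0 }; /* die areas, mm^2 */
    for (int i = 0; i < 3; i++) {
        double a = areas[i];
        double dpw = PI * (d / 2) * (d / 2) / a - PI * d / sqrt(2 * a);
        printf("%.0f mm^2 die -> ~%.0f dies per wafer\n", a, dpw);
    }
    return 0;
}
```

Going from a 100 mm² die to a 659 mm² one drops you from roughly 640 candidate dies per wafer to about 81, before yield losses even enter the picture.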
The closest architecture we have today comes from the mobile market. It's called a System on Chip (SoC): multiple separate blocks and subsystems in a single package - high-performance CPU, low-energy CPU, RAM, GPU, etc. - all interconnected.
The High Bandwidth Memory (HBM) featured in the newest GPUs is an application of this approach.
No. ASML is the company making photolithography machines for Intel. Simply put, those machines are like copy machines for CPUs. At very small feature sizes it becomes very hard to avoid all the quantum effects and even to physically produce such chips at scale. en.wikipedia.org/wiki/ASML_Holding
There's an ongoing graphene-chip meme that's supposed to overcome the limits of silicon wafers. However, it's still far from delivering even current-level CPUs.
You are wrong, CPUs don't wiggle, they are solid-state devices.
Grayson Johnson
No. There are different kinds of quantum computers, and each of them is good for something else. Most of those applications don't fit the needs of casual desktop use.
That's exactly what we did in the nascent days of consumer multi-core computing, with the likes of the Pentium D and Core 2 Quad; it still continues in the form of MCMs, most commonly used by IBM.
It's not at all optimal.
Then you're just increasing die size vertically, hence adding more heat and decreasing yields.
Intel has nothing to gain from purposely gimping Moore's law and consumer demand for new products.
Jose Gonzalez
At least D-Wave uses Linux.
Ethan Diaz
WAT.
The last time I checked, IBM sold those CPUs pretty cheap, on the order of $1k for 8-core models (less than half the price of comparable Intel gear). My guess is you're either talking about some limited-production ultra-high-end models, or have seriously outdated knowledge.
According to the benchmarks I've seen, it takes (on average) a 14-core Xeon to compete with an 8-core POWER8. The only workloads where the Xeons have the upper hand are single-thread-bottlenecked ones (a rarity in the server world).
Leo Perez
When you enter an alternate universe where quantum computing isn't utter bullshit
Connor Fisher
128-bit would be useless. We already have SIMD up to 512 bits and soon 1024; it's only useful if your algorithms can be vectorized (see the sketch below).
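To illustrate the "only if it vectorizes" part - two C loops; the function names are just made up for the example:

```c
#include <stddef.h>

/* saxpy has independent iterations, so a compiler can map it onto
   AVX/AVX-512 lanes (e.g. gcc -O3 -march=native). */
void saxpy(float *y, const float *x, float a, size_t n) {
    for (size_t i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];      /* vectorizable */
}

/* This one carries a dependency from each iteration to the next,
   so wider SIMD registers buy it nothing as written. */
float leaky_sum(const float *x, size_t n) {
    float acc = 0.0f;
    for (size_t i = 0; i < n; i++)
        acc = 0.5f * acc + x[i];     /* loop-carried dependency */
    return acc;
}
```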
Colton Cook
Hardware support for quadruple precision would be nice in some contexts.
Landon Diaz
128 bits really means 128-bit pointers, i.e. the ability to address 2^128 bytes of RAM. They would not make anything faster for normal use - in fact they'd make things slower, because they'd take up more RAM themselves when stored, and the logic to deal with them would be slower.
Once you need to access a larger address space, that's the time to make your pointers longer; otherwise you have to jump through messy hoops in software.
RISC-V has planned for 128-bit registers and pointers. They did this because they argue next-gen massive supercomputers may be getting close to needing a larger address space. There's also something called tagged pointers, which use extra bits above the implemented address space to carry extra information about the pointer; to do that you need spare bits (toy example below).
But yeah, for normal use, 64-bit pointers plus SIMD registers for computation are going to last a long damn time.
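A toy tagged-pointer sketch in C, assuming a machine that only implements the low 48 address bits (true of current x86-64 user space) - exactly the kind of trick that stops working once you grow the real address space:

```c
#include <stdint.h>
#include <stdio.h>

/* Toy tagged pointer: the unimplemented top 16 bits of a 48-bit
   user-space address carry a small tag. Assumes user-space
   pointers with zero upper bits; illustration only. */
#define TAG_SHIFT 48
#define ADDR_MASK ((1ULL << TAG_SHIFT) - 1)

static uint64_t tag_ptr(void *p, uint64_t tag) {
    return ((uint64_t)(uintptr_t)p & ADDR_MASK) | (tag << TAG_SHIFT);
}

static void *untag_ptr(uint64_t t) {
    return (void *)(uintptr_t)(t & ADDR_MASK);
}

int main(void) {
    int x = 42;
    uint64_t t = tag_ptr(&x, 7);   /* stash e.g. a type id in the tag */
    printf("tag=%llu value=%d\n",
           (unsigned long long)(t >> TAG_SHIFT), *(int *)untag_ptr(t));
    return 0;
}
```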
Lincoln Cooper
Underrated.
Levi Fisher
He's probably talking about MCMs with 2-4 CPU dies each, like they have in Watson.
Camden King
GPUs.
Andrew Rivera
but OpenCL looks fucking ridiculous user
Dylan Rivera
bump
Evan Russell
sage
Aiden Price
fuck u
Aiden Murphy
no u
Joseph Fisher
This is how you spot a tech casual. The thing about quantum computers is that they're non-deterministic, and thus not useful for 99% of algorithms.
Parker Lewis
Wafer size is optimized for production; die size is optimized to waste the least wafer space. In short, what another user said: $.
The interconnections would add so much complexity (money poured into metal) and so much signal propagation delay that it isn't feasible.
Yes, of course - physics allows it if you keep the quantum animals locked in the zoo, but the transistor production costs for anything within our current technological capabilities would be too high. Only the military gets to afford it in limited quantities (I have 'heard' they have chips that are multilayer).
Cutting-edge technology is always held back from the market to maximize profits through the lag effect, and Intel is hardly the only company doing this. Maybe not direct proof of it, but their tick-tock release cadence for architectures kind of leads one toward natural cynicism. Ask yourself this: would you milk the idiots of their money, or just give it all to them at once, if you had the opportunity?