How do computers work? I would ask /tech/ but i'm afraid they'll laugh at me.
I think I know what you want. But it's not simple, and every time you ask an expert he'll answer with something like "it's all about ones and zeros."
I'm studying physics and I should be able to explain it, but I can't (like most of my colleagues).
In principle, a computer is a machine that exploits physical laws to alter information encoded in bits.
I'm able to program, but that doesn't help much. I don't understand how the computer processes my code. I would need to go deeper and learn assembly language …
From there you should be able to grasp some connections to the smallest computer parts: logic gates.
We loosely learned how to build gates from transistors, but this was extremely difficult for me. I'm not able to explain how they work in any way.
But this is the starting point if you want to know how computers really work.
A processor, as far as I'm aware, is built out of logic gates, which, when powered, either block the supplied current or let it run through.
Which might not be that interesting on a small scale, but remember: before the microchip, computers were the size of a small office… they managed to shrink all that down to a small chip.
I do IT at my local college, one of the best teachers in my life told me this:
"Don't be afraid to ask questions, if you never ask, you will never learn."
A processor, like I said, is made out of logic gates.
Logic gates are themselves made up of ==transistors==. Transistors are electrical components that let current pass when the voltage applied to their control terminal exceeds a specific threshold, and block it otherwise. As a result, they can switch between two states – "off" and "on" – depending on whether or not they allow current to pass. On all computer architectures in use today, transistors only have two states. These states are where all that stuff you see about binary comes from. The transistor is either off (a logical 0) or on (a logical 1). There is no in between; no "somewhat on" or "kinda off" transistor.
==Logic gates== are circuits that perform basic logical operations, such as producing an output if any of several inputs is true, producing an output only if all inputs are true, not producing an output if any input is true, and so on. A gate does this by arranging transistors in such a way that the final transistor in the gate is only "on" (only receives the voltage required to let it pass electricity onward) if the desired input condition is true.
Examples of logic gates include:
==AND gate== (only passes current if all inputs are true)
==OR gate== (passes current if either input A OR input B (or both) is true)
==XOR gate== (eXclusive OR gate; passes current only if input A or input B are true – not if both are true)
==NOT gate== (has a single input; passes current if the input is false and blocks it if the input is true – an inverter)
==NAND gate== (Not AND gate; passes current so long as not all of the inputs are true; the opposite of an AND gate)
==NOR gate== (Not OR gate; passes current so long as none of the inputs are true; the opposite of an OR gate)
==XNOR gate== (eXclusive Not OR gate; passes current if none of the inputs are true or if all of the inputs are true, but not if only some of the inputs are true; the opposite of a XOR gate)
The logical operations performed by logic gates are often summarized through what is called a "truth table," which is a table of which input values correspond to which output values, with "0" representing a logical "off" and "1" representing a logical "on." An example truth table for an OR gate would be:
INPUT A | INPUT B | OUTPUT
0 | 0 | 0
1 | 0 | 1
0 | 1 | 1
1 | 1 | 1
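These gates are easy to model in software. A quick Python sketch (just the boolean logic, nothing processor-specific) that reproduces the OR truth table above:

```python
# Model each gate as a function on 0/1 inputs.
def AND(a, b):  return a & b
def OR(a, b):   return a | b
def XOR(a, b):  return a ^ b
def NOT(a):     return 1 - a          # single input: inverts 0 <-> 1
def NAND(a, b): return NOT(AND(a, b))
def NOR(a, b):  return NOT(OR(a, b))
def XNOR(a, b): return NOT(XOR(a, b))

# Print the OR truth table row by row.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, OR(a, b))
```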
Chaining these logic gates together produces a processor, which is essentially a complicated circuit capable of taking inputs, performing some logical operations on them which in aggregate lead to some mathematical function being performed, and producing an output, which may then be stored in memory for use as a future input or sent to some long-term storage location, such as a hard drive or USB drive.
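As a tiny illustration of "chaining gates to get arithmetic": a half adder adds two one-bit numbers using just an XOR gate (the sum bit) and an AND gate (the carry bit), and chaining half adders gives a full adder. A sketch in Python, with the gates written as boolean operators:

```python
def half_adder(a, b):
    """Add two 1-bit values: XOR gives the sum bit, AND gives the carry."""
    return a ^ b, a & b

def full_adder(a, b, carry_in):
    """Chain two half adders (plus an OR) to also accept a carry input."""
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, carry_in)
    return s2, c1 | c2

# 1 + 1 = binary 10: sum bit 0, carry bit 1
print(half_adder(1, 1))
```

Stringing full adders together, one per bit, is how a circuit adds whole multi-bit numbers.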
You can imagine a processor as being like any other circuit, for example like a simple circuit that transmits power from a battery to a light, with the exception that the processor has variable inputs and variable outputs. A circuit between a battery and a light has one input and one output – the battery is connected and the light turns on. A processor can have many different inputs applied, and its output will vary as the inputs are changed.
A modern processor has two main components – registers and logic gates. The registers are essentially storage regions. Values are stored in a register before being sent through the processor. They are the inputs. After being input, they are sent through a series of logic gates and produce an output, which may be itself stored in a register, stored in memory, stored in long-term storage, or discarded.
The input stored in the register may be sent through different sets of logic gates depending on the exact ==instruction== being carried out. Instructions tell the hardware which set of logic gates to send the register's contents through. The internal implementation of these instructions – which gates they map to – is proprietary; only the processor manufacturer knows it. They represent a link between the software programs that the /tech/ shitposter doesn't know how to write and the actual operations carried out on the processor. This is required because any modern processor is far too complicated for a programmer to specify which logic gates to send an input through, so instead the programmer writes an instruction (in assembly, which I will go over next) such as:
add eax, ebx
Which adds two numbers that are stored in two registers (eax and ebx). You don't need to specify which logic gates to send the numbers stored in eax and ebx through; the processor can translate your instruction ("add") into a hardware instruction which it already knows how to handle.
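To make that concrete, here's a toy Python model of a register file and a one-instruction "decoder". The register names match x86, but the decoder and its string format are purely illustrative – real decoding happens in hardware:

```python
# Registers are just named storage; an instruction selects which
# operation (which "set of gates") to apply to their contents.
registers = {"eax": 5, "ebx": 7}

def execute(instruction):
    op, dst, src = instruction.replace(",", "").split()
    if op == "add":                       # "add" routes both inputs
        registers[dst] += registers[src]  # through the adder circuit
    else:
        raise ValueError("unknown opcode: " + op)

execute("add eax, ebx")
print(registers["eax"])   # 5 + 7 = 12
```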
The set of instructions a processor can execute is called an ==instruction set architecture==, or ISA. There are several different instruction set architectures in use today, such as:
==x86-64== (the ISA you use to shitpost on Holla Forums on your desktop/laptop, no matter if you have an Intel or AMD chip – it's still x86-64)
==x86== (the ISA you used to shitpost on halfchan when you were in high school)
==ARM== (the ISA you use to shitpost on Holla Forums using your smartphone when mommy takes you out to get your tendies)
==PowerPC== (the ISA you used to watch your faggot porn if you owned an Apple prior to 2006 – if you watch faggot porn on an Apple produced after then, you're using x86-64)
==MIPS== (the ISA you might use on a specialized server or router – most servers just use x86-64, though)
That's all the hardware stuff. I have to go now, but if you'd like, I can get into the software details later.
Not sure if trolling or just extremely stupid and in over his head.
The other faggot seems to have posted the first day or two of a cmpsci intro course. You happy OP?
If not, just go fucking buy an introduction to logic systems textbook. It's not that fucking hard.
Shit, why the fuck did they make me take that shit in college if everything's x86 these days anyway? That was one tedious motherfucking class.
When your computer is switched on, part of the motherboard sends a signal to the processor telling it to wake up and get to work. The processor then starts executing instructions from the BIOS, which let it make use of your motherboard's other capabilities and start loading your OS. Your OS contains more instructions that tell the processor to do everything else.
i'd like to hear about software
First replier here.
Well, I agree that I am stupid (not extremely though) and I'm in over my head when it comes to explaining the inner workings of a computer. A LOT of people are better than me. Just look at the one quality post above.
The problem is that even knowing an electrical engineering textbook by heart won't let you see the exact line between software and hardware. I guessed that would interest OP the most. That's why I decided to contribute before someone delivers the standard answers.
My point that it all comes down to assembly language is still valid.
How does the computer keep track of its instructions or how does it differentiate between a char and a double in assembly?
What's the process of a multiplication in assembly?
Can you call it a "computation" when a char or a video-pixel-value is read from the disk?
I think it's fucking complicated, and I've never seen anyone try to explain this in layman's terms. You would have to go way beyond an introduction to logic systems.
I used to ask the same question when I was little. I think it was completely fucking crazy that you could press a button, it turns on, and you can control the mouse and click shit on a screen.
If you ask how a computer works – in the sense of how it turns on, how all the hardware functions, how the software starts and runs – it becomes a huge puzzle that requires incredibly deep knowledge.
It's best to start at a low level, in the brains of the computer: logic gates with current running through them can be used for basic operations. It's very complex because there are billions of transistors used for different things, but if you use different logic gates in tandem, or in different combinations, you can do things like add, multiply, subtract, divide, execute logical statements, or compare two values. Understanding the specifics probably requires a textbook. In a microcontroller there are memory compartments where some of these instructions can be executed. Going deeper than this – how different logic gates put in tandem perform the operations a modern computer needs – would require significant knowledge of CPU design. GPUs are similar to CPUs in how they use transistors for processing.
At a medium level, there's the software that's close to the hardware, essentially the firmware. Firmware knows how to talk to the CPU, or the GPU, or the hard drive. At a high level, you have code, which is the most abstract part. C, for example, is one of the lowest-level programming languages, because you can manipulate memory directly and it's close to the hardware. Computers don't understand code, though, so the compiler translates it into 1s and 0s; those 1s and 0s are processed and their instructions are carried out.
In terms of how the fuck the operating system works, it's complicated as fuck. I have little knowledge of operating systems.
Who is Kennedi Cotarelo?
it's like a board game.
the memory is like the tiles of the board, each with a different rule.
each tile has a rule that affects the state of the game.
on most tiles you proceed to the next one, or you can have a choice:
go back five tiles,
go forward five if you roll an eight.
you can express a flow chart (an algorithm) as a board game, which can be expressed using a linear memory representation (an array of numbers).
How does the computer keep track of its instructions
Among its registers the CPU has a "program counter" or "PC" for short, that contains the memory address of the next instruction. The CPU reads from this address until it has a complete instruction (a three-byte instruction, for example), then adds three to the PC register so it points to the next instruction. To jump, load a new address into the PC register, and if you want to return later, save the old PC address somewhere else.
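A minimal fetch–execute loop sketched in Python. The three-byte instruction format and the opcodes here are made up for illustration; the point is just the PC mechanics described above:

```python
# Toy memory: each instruction is 3 bytes: opcode, operand, unused.
# Opcode 1 = add operand to accumulator, 2 = jump to operand, 0 = halt.
memory = [1, 10, 0,    # add 10
          1, 32, 0,    # add 32
          0,  0, 0]    # halt

pc = 0    # program counter: address of the next instruction
acc = 0   # accumulator register
while True:
    opcode, operand = memory[pc], memory[pc + 1]
    pc += 3                  # point at the next instruction...
    if opcode == 0:
        break                # halt
    elif opcode == 1:
        acc += operand
    elif opcode == 2:
        pc = operand         # ...unless we jump instead

print(acc)   # 42
```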
how does it differentiate between a char and a double in assembly?
There are different opcodes for adding 8-, 16-, 32-, or 64-bit numbers, and for performing similar operations on floating-point numbers. This makes hundreds of possible opcodes, but they follow regular patterns that make them easier to remember.
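You can see "the bits themselves have no type" directly with Python's struct module: the same 8 bytes mean completely different things depending on whether you decode them as a double or as a 64-bit integer. (The byte values shown are just the IEEE 754 encoding of 1.0.)

```python
import struct

raw = struct.pack("<d", 1.0)          # the double 1.0 as 8 little-endian bytes
as_int = struct.unpack("<q", raw)[0]  # reinterpret the same bytes as a signed 64-bit int

print(raw.hex())   # 000000000000f03f
print(as_int)      # 4607182418800017408
```

The hardware is in the same position: it's the opcode the programmer chose, not the bytes, that decides which interpretation is used.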
What's the process of a multiplication in assembly?
Look up "Russian peasant multiplication". If you write the numbers down in binary, you'll see that it's actually a very simple algorithm. You only need this if you're programming something tiny like a Z80 microcontroller – modern CPUs have a "multiply" opcode that does the job for you.
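For reference, Russian peasant multiplication in Python. In binary it's just shift-and-add, which is essentially what a software multiply routine does on a chip without a multiply opcode:

```python
def multiply(a, b):
    """Shift-and-add multiplication (assumes non-negative integers)."""
    result = 0
    while b > 0:
        if b & 1:        # lowest bit of b is set: add the current a
            result += a
        a <<= 1          # double a (shift left)
        b >>= 1          # halve b (shift right, dropping the low bit)
    return result

print(multiply(13, 11))  # 143
```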
Can you call it a "computation" when a char or a video-pixel-value is read from the disk?
A computer isn't good for much if it can't move bytes from one place to another, so I'd say input/output is a vital part of computation.
what are you, Asian?
If you are serious i would recommend Code by Charles Petzold. He talks about Morse code and the telegraph and positive and negative switches in circuits and how all this stuff evolved into computers. Its really interesting if you want to get your autism on.
How do quantum computers work?
Particles can be in a superposition of two states at the same time, so a single particle can take part in two calculations at once. Measuring it gives the answer. Use more and more particles and your processing power grows exponentially.
it's just a thing that switches transistors between 0 and 1, and that's pretty much it. The RAM over there is like consciousness; the HDD is like the unconscious. The CPU is the part that does math quite fast. The GPU is just an extension card for graphical stuff, because the CPU can be quite shit at it.
That sounds more like a finite state automaton.