Assembly

If you really want to have 100% control over your computer, you should use fuckin' ASSEMBLY and you FUCKIN' KNOW IT! NO MORE AND NO LESS MACHINE INSTRUCTIONS! FUCKING POSERS!

Other urls found in this thread:

github.com/jmechner/Prince-of-Persia-Apple-II
hackaday.io/project/18491-worlds-first-32-bit-homebrew-cpu
homebrewcpu.com/
youtube.com/watch?v=MWdG413nNkI
twitter.com/SFWRedditImages

k, thanks for the blog update

Totally not a waste of my time. Before this, my day was just a waste. Now it has a purpose. Thanks op!

you are right. and we should use 16bit processors also.

we should write a book, "build your own fucking computer." it would cover the assembly (heh) of the actual computer (from very basic, generic, unbackdoored components). then it would walk you through, step by step, writing a simple operating system, basic device drivers, basic utilities, a simple text editor, and a small, very stripped-down c compiler (for when you need to script something).

the whole book and all the code and instructions included within would be peer-reviewed and audited by the best minds in the field.

only then can we truly begin to move away from the pozzed shit we are drowning in.

i've been learning assembly (AT&T syntax) and i can't help but feel that learning it is ultimately pointless outside of debugging compiled languages. to write better assembly than a compiler does, you need a set of rarely taught skills which are a subset of an already rarely taught skill: writing assembly.

do employers actually hire people to write assembly, specifically for x86 family machines? the best i think i can hope for is someone from HR thinking that writing machine code sounds difficult and mysterious

or use C...

I don't want 100% control over my computer.

It's not so bad writing in assembly. You use libraries and functions too. Once you've written a routine you re-use it and the same with data structures. OP is fag though.

Why do people do this.. oh wait, GNUUUUUU AS

why are you using the image i screencapped, OP?

Real men use Holy C. Get in touch with your GOD you fuckkin CIA niggers.

Why would you willingly subject yourself to that
Start writing replacements for the software on your system in assembly

Microcode tho

Enjoy never writing any software more complex than DOS.

Well aren't you 1337. Thanks for the blog post, you turd.

there is very little open-source assembly software, but...
you can make mods for some old games github.com/jmechner/Prince-of-Persia-Apple-II

Debugging compiled shit is a highly useful skill. Also, knowing what the compiler will do with your high-level code is very useful when it's Ramadan and you need to make your code fast.

Memes aside, this is actually the correct answer. There is no greater bare-metal OS than TempleOS. If you really want to work with the hardware with the least amount of abstraction, if practicality takes a backseat to computational autonomy, this is your best bet.

...

Already being done friendo. This guy is building a RISC-V out of off the shelf ICs:
hackaday.io/project/18491-worlds-first-32-bit-homebrew-cpu

You guys do know that a computer built with discrete chips would be shit for most uses. While you'd be able to throw as much RAM and any other modern shit at it as you want, you're still going to be massively limited in the speed you can get the parts to run at due to its size. Here's a 16-bit computer that someone designed and built themselves; it only runs at 4 MHz (originally 3 MHz):
homebrewcpu.com/

I grew up on 8-bit computers running at 4 MHz, so that's fine. Those machines were a lot more fun than any modern shit.

You didn't need to deal with encryption for network connections on those 8-bit computers, which is necessary today. Also keep in mind that you need autistic demo-tier optimization to display this level of video with a processor running at ~4 MHz:
youtube.com/watch?v=MWdG413nNkI

But hey, "It's Biblical! See Luke 5:37."


This.

My shit-tier 8 year old netbook can do chacha20 at roughly half a gigabit, you could divide that by 2000 and it's still more than enough for usable SSH.

And it's not really accurate to compare this to IBM's overpriced underpowered 8-bit trash. Try an equivalent 32-bit RISC computer like Acorn, Amiga, Commodore, etc.

You don't need encryption on an 8-bit machine. At most you're going to connect it via serial cable to a local ARM SBC running Unix, so you can log in to a shell account and run Lynx from there. There's not enough memory on an 8-bit computer to run Lynx.
As far as videos go, I just don't give a damn. I already block youtube in DNS, along with facebook, google, and twitter. I'm down for some old school animation and pixel art though.

So you plan on building a processor from discrete components that has comparable performance per clock cycle to an Intel Atom? If not then you're going to be dividing that by a lot more than 2000.

The 8088 is 16 bit.


Then your simple computer is nothing more than a glorified keyboard and monitor, completely defeating the purpose of building your own computer for general use like what was originally talked about here: .

mit nigger

Why would you want to use C? I understand the need for a non-assembly language, but C is disgusting. I don't want any null-terminated strings or retarded "do { } while(0)" macros on my computer.

Just because I use assembly doesn't mean I want bugs, retarded "strings", undefined behavior, and security vulnerabilities.

It's not hard to implement your own kind of strings in C. The thing is, 0-terminated arrays of values are extremely easy to program with, especially in assembly and C. You're probably going to be doing that anyway.
Since you're going to be working in assembly most of the time, the only programming language that really interops with it properly is C.

And remove every chip and bus that has DMA

If you ever wrote any assembly or used a debugger, you would know they're retarded. There are so many pointless loops looking for the end of a string. It also means there's a character that can't be stored in strings, which is just stupid.

You're looking at it from the C perspective. Assembly can interop with any language. C is not easier for assembly interop than Pascal or Fortran or other languages. Some languages are harder to call from assembly because they only document the C FFIs. Assembly can use the native (no FFI) calling conventions.

A lot of noobs think C is a foundation or close to the machine, but you really don't need it at all. I especially don't want it for "scripting" where I'll be doing string and array processing (which C is worse at than using assembly).

...

The only reason to have data in arrays is to loop through them. When you're doing that, you have only two methods of knowing when it ends:
- Terminate it
- Know the length beforehand
I'm not going to argue about one being more "efficient" than the other, because there's pretty much no difference. It's just a tradeoff between having to loop through it to find out its length, or having to allocate memory to store the length and a register to keep track of the remaining length. Besides, you don't often need strlen for much, since most of the time you'll be looping through the string anyway.
To me, it's much more convenient to terminate them, but I guess it's a matter of taste. Why would I care about being able to store a useless character in a string or not?
If you want your fancy strings, since you're going to be implementing the compiler anyway, you can easily implement them yourself.

Sure, you can. The thing is that C is pretty much a direct abstraction of commonly used constructs in assembly. There's really nothing else to it. You can even write assembly directly in a .c file, and bind variables in C to registers and all that jazz. How much closer to assembly can you get?

KILL YOURSELF

kys dumb dashchan poster.

What if you create an N-length fixed size string with remaining capacity at the end of it? Say N=32, then:
HELLO WORLD!\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"19"
This way you can fit a maximum of 31 chars, and since the 32nd byte stores the remaining space it becomes a null-terminator when the string is full:
ABCDEFGHIJKLMNOPQRSTUVWXYZ12345"0"
That "0" at the end is the remaining space indicator. It automatically becomes a null-terminator. I think I heard a talk where they said Facebook uses something like this in their fbstring implementation but not too sure.

I don't think that fixes any problems, it has the worst of both worlds: You have to loop to find out the length and you have to allocate extra space to store the "remaining" length.
Besides, this method, as you specify it, is literally impossible to implement.
Let's look at it, shall we?

So, when you encounter a 0, you have to look at the next byte.
If that's zero as well, we assume the "remaining" length is the first non-zero byte.
If that isn't zero, we assume the string ends here, and the remaining size is zero.
Now, with the above two rules, consider this string:
ASDF\0"1"
Can you spot what would go wrong?

Okay, nevermind, I can't think straight right now. Abandon thread.

what if the length of the string was at the beginning? for most applications 2 bytes would be able to store a large enough unsigned integer

Just before I leave:
While you can make the exception of checking if the byte behind the 0 is 1, you can't guarantee that the byte behind the 0 in a string with 0 length remaining isn't 0. If it just happens to be 0, your whole idea is fucked. To mitigate this, you'd have to further terminate the string with something that isn't 0 or 1. Implementing all of this for simple strings is just overcomplicated which makes your idea retarded.


That's pretty much what I was talking about with "Know the length beforehand" in .
It's the most convenient and easy to implement method of doing this. The only limitation would be that "most applications" is usually a bad assumption. To cover all use-cases you'd have to use a size_t, which means 8 bytes on 64 bit.
Hell, it's how most "string" libraries in C do it, and it should be easy to implement in a custom compiler and C library.

How did you know?

It puts a space after your meme arrows by default

Easy, N is constant. You define it at the start of the program, and use that to check the string length. i.e.
#define SIZEDSTR_LEN 32
...
char sizedstr[SIZEDSTR_LEN];
memset(sizedstr, 0, SIZEDSTR_LEN);
sizedstr[SIZEDSTR_LEN-1] = SIZEDSTR_LEN-1;
if (sizedstr[SIZEDSTR_LEN-1] < bytes_we_need) {
    /* upgrade to nul-terminated string... */
} else {
    strncpy(sizedstr + SIZEDSTR_LEN-1 - sizedstr[SIZEDSTR_LEN-1],
            your_string, bytes_we_need);
    sizedstr[SIZEDSTR_LEN-1] -= bytes_we_need;
}
Or something.

shit

Of course there's more code that adds things to the string before the if. It's 12.30 AM eurocuck time, don't blame me.

That's if you use them like lists (array lists), but they're meant for indexing too. If you don't have the length, you have to use them like lists, and you might have to loop through them twice just to get the length.

In the world of C, strlen can look through gigabytes of memory for a null character. If reading gigabytes is more efficient than reading 4 bytes on your computer, go ahead and use a null terminator.

Because it supports all the characters. It can be strings of anything, not just ASCII. If you happen to work with data that has a 0 byte, you won't be able to use 0-terminated strings.

Another reason is because CPUs have string instructions. A lot of CPUs like the x86, the Z80, and a lot of mainframes have string move instructions. Every single one of them uses a length because it's easier and faster: the CPU can move as many words at a time as it wants because you gave it the length ahead of time.

In their own calling convention without an FFI. Assembly can call Haskell with the GHC calling convention, Go with the Go calling convention, Ada with the GNAT calling convention, OCaml with the OCaml calling convention, etc. If you know anything about these languages, you know they don't use the C calling conventions. You can do all that in assembly. You can only call them from C because they added C FFIs.

Nope. Only people who don't know any assembly think that because the OSes they know use C APIs. Assembly APIs are like BIOS, MS-DOS and OS/360. They take arguments in registers because that's more convenient in assembly.

The C language itself is not like assembly at all. The stack is in x86 to support block-structured languages, not the other way around. x86 has ENTER and LEAVE for nested functions like Pascal. That doesn't make Pascal a direct abstraction of x86, but it does mean C isn't the intended language for x86. There isn't anything in C that is similar to assembly, except pointer arithmetic, but C is not the only language that has it. There are a lot of things in assembly that are missing from C and single instructions that do more than a complicated C function, loop, or expression.

You can do that in C, but you can do it in Basic, Pascal, Ada, and Rust too. That's not a special C feature.

With assembly, not C. You start with the C calling convention, then you adopt null-terminated strings and C APIs and it's no longer an assembly OS. C is the first-class citizen on Linux and assembly is second-class. UEFI was designed to be called from C and it's terrible to use from assembly.

I'm posting through a breadboard so big that would make you shit yourself