POSIX filename madness

As a fairly experienced POSIX sh user (kill me), I'm trying to understand why POSIX allowed (and still allows) newlines in filenames without standardizing the GNU extensions that let you use \0 as a separator for all of coreutils/findutils/sed/etc...
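To make the failure mode concrete, a quick throwaway demo (the "evil" name is just an invented example, run in an empty directory):

# one file whose name contains a newline
printf x > "$(printf 'evil\nname')"
# the single name now spans two lines, so line-oriented counts and while-read loops miscount
find . -type f | wc -l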

I mean, what am I supposed to do: shit the bed with such files, or use GNU-specific stuff (also, fuck most *BSDs for not following along)?

Anyone found a better solution to this shit?

You put filenames in an array. POSIX sh has arrays. Well, one array. It's "$@".
Alternatively, you could maybe encode all filenames in base 16 or base 64.
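To spell that out, a minimal sketch of the pure-POSIX options; nothing here assumes GNU, and the inner sh -c trick and set -- are just the usual ways of keeping names in argument lists instead of in text streams:

# let find hand names to a shell as arguments, so they never travel through a pipe
find . -type f -exec sh -c '
  for f do
    printf "got: %s\n" "$f"    # names arrive as arguments, newlines intact
  done
' sh {} +

# or load the one array POSIX sh has, the positional parameters, via a glob
set -- ./*    # if nothing matches, the pattern stays literal; this is a sketch, not production code
for f do
  mv -- "$f" "$f.bak"
done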

The real answer is that traditional Unix people aren't interested in writing correct software. If it works most of the time that's enough. When it crashes, you holler down the hall "hey, reboot it". Reliability is for other people.
jwz.org/doc/worse-is-better.html

Unix is a disease.

I know of that trick, but it's simply too cumbersome.
1) POSIX isn't UNIX; they standardized a lot of useful shit.
2) The UNIX philosophy is mostly sound, even if the implementation was a big hack.
3) What do you propose?

In the end, I just use the GNU stuff. At least it works.
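For reference, the GNU-flavoured idiom being alluded to is presumably something like this; -print0, -z and -0 are the extensions in question, not part of the classic POSIX utilities the OP is stuck with:

# NUL-terminated records survive any legal filename, newlines included
find . -type f -print0 | sort -z | xargs -0 -r ls -ld --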

POSIX is closer to being correct than Unix. If something is missing from POSIX, like here, expect it to be missing from Unix as well.
I think it takes minimalism too far. You could make it decent with only minor changes, though.
It should have taken GNU's approach from the start instead of patching over the holes for compatibility's sake. GNU is better than Unix. A Unix designed from the ground up with GNU's approach to correctness would be even better than that.

UNIX can't be made good or decent. The kernel, C, the shell, every command, every library function, every system call, every directory path, the whole philosophy, it's all cancer. And that includes GNU and Plan 9 too. Anything remotely good about UNIX was stolen from Multics.

hmm good point user

Surprised you didn't blame the Jews for the von Neumann architecture.

Wow great argument fagtron you sure convinced me with those hot opinions.

Everything in UNIX is shit. Believe it or not, there are operating systems that aren't made by Bell Labs or based on UNIX or written in C.

I'm not going to waste time explaining why C is shit or why all the commands are so inconsistent or why the shell treated as a "scripting language" is worse than PHP or why GNU "correctness" isn't correct at all. It doesn't take a rocket scientist to see that these things are all crap despite the standardization and over 40 years of "improvements".

Everyone knows that, but can you name one that is a viable alternative to the current crop?

I just love his arguments: there's nothing there besides him thinking it's shit. It would bloody help us understand his view, though.

Well, he's ruled out BSD because it's based on "SHITTY BELL LABS UNIX C". I'm expecting he'll come back and name some obscure realtime OS that is very special-purpose and not viable as an alternative.

fuck POSIX, use windows instead

I used to wonder where this "everything considered harmful" shit came from.
Then I remembered pic related.
That's right it was the self-hating lispers. Parentheses never lie.

(It isn't.)

Lisp was good, and it still is good. You'll never run into the "my filename has a newline character in it so now my software thinks it's two filenames" problem when using Lisp, because Lisp has proper strings, not newline-delimited nonsense.

Unix won because it was cheap: the US Government had forbidden AT&T from selling it, since they had a telephony monopoly. There was no philosophy underpinning it, only "this hardware sucks, maybe we'll write some software that sucks too, since the hardware isn't worth anybody spending real time developing proper software".

Later generations forgot or never learned this, so they think Unix was always hot shit because it was better than DOS. Retards, all of you.

It's a hard pill to swallow, but it's true. Everything went to shit when Lisp lost to Unix. The fall of Lisp was much like the fall of Rome in many respects. We're living in a dark uncultured age of computing.

...

So use another CL implementation. Or even better, use a Scheme dialect.

Scheme is godlike, yes, but again, I'm still waiting for the alternative to Unix that isn't based on it or created in C.

How about a filesystem where you can't name any file con.foo or aux.bar (Windows 10), or one that silently mangles Unicode filenames into a different byte representation (macOS)?

Unix sucked all the air out of the room, suffocated all the competition. Nothing could compete with free and shitty but technically works most of the time.

I mean for fucks sake, just look at how many post-C languages gave themselves terrible C-inspired syntax for no reason other than C was popular and they wanted to go with the flow.

Just trying to imagine what technical justification might exist for such an arbitrary limitation sends shivers down my spine.

C syntax is fucking brilliant.

The file extension doesn't matter; it's just con and aux. And it's because Windows itself doesn't want you naming files after its device names (CON and AUX).

Although I do prefer Go syntax, excluding the braces on one-line if statements

Also, it is possible to force the creation of these files and nothing happens, so it's probably just there for some bullshit legacy or security reasons.

Watch. I'm betting he comes back with Windows.

It's moronic, and the gross inadequacy and ugliness of the C preprocessor proves it. It only becomes even worse when you try to use it in languages with more complex grammars. Just look at C++. Absolute trash.

Why are they trying to prevent that?

Because some other OS they ripped off their FS design from 35 years ago had device files

Linux/Unix don't have device files?

The C++ language has great semantics, and I like using them. The problem is that when I get to that level of C++ programming, it's probably easier and more reliable to use a language that wasn't designed to be backwards compatible with a very old language. My preference is Scheme, but I prefer to use Java over C++.

Linux never stopped me from creating a file named sda1....

You are now aware that undelimited call/cc is an abomination and the functional equivalent of goto. It's also a feature of Scheme.

Linux adopted an already established filesystem structure. Windows has to deal with legacy crap

All schemes worth a shit support delimited continuations. You also talk like a redditor NSA faggot.

I'm now predicting that he has nothing to offer, other than act like a contrarian attentionwhore.

Use Racket, it's got delimited continuations. They're very nice.

Also:

It's popular to talk smack about lisp and scheme giving the programmer too much power, but in a discussion about Unix/C, that's a laughable angle. C gives you more than enough rope to hang yourself, but that's excused because with that rope comes a lot of power if you use it properly. Continuations (and for that matter, macros) are the same. They give programmers the opportunity to tie themselves in knots but they're also immensely powerful, particularly when the two are combined...

I don't use C. In fact, I hate C. My complaint is more that Scheme (and other functional languages) hold undelimited call/cc up like it's some awesome powerful brilliant control structure when it's dangerous garbage. Not fond of dynamic scoping in Common Lisp either.

Don't use it then, no one is forcing you to.

The greatness of a programming language comes down to how the programs written in it run on current hardware.
Functional programming languages are generally slower than imperative languages on current hardware, and thus inferior.

It is.
More to the point, it facilitates the creation of your own control structures.
Easy to abuse or misuse, sure. Dangerous? I think you'll survive...
Powerful and brilliant.

Common Lisp has lexical scoping and dynamic scoping. Normal variables in CL are lexically scoped, as in Scheme. "Special" variables are dynamically scoped. Lexical scoping is plainly superior in most cases, though dynamic scoping does have its uses. I greatly prefer the way Racket handles this: all variables are lexically scoped, but there are "parameters" that provide dynamic binding (docs.racket-lang.org/reference/parameters.html). Historically, dynamic binding came first, which is probably most of the reason CL still has it.

Most people like C but are unable to explain why. I found out through intensive introspection lel that the only thing that makes it superior is that it has the PERFECT abstraction level.
That may look unimportant compared to stuff like syntax or UBs, but it is crucial.

The Lisp circlejerking is pretty impressive in here, by the way.

You know you're wrong and retarded.


My thread went a strange way.

I was talking about call/cc, but carrying on this thread with you seems like a waste of time.

Dumb Schemeposter.
okmij.org/ftp/continuations/against-callcc.html

You know nothing. Lisp compilers that produce machine code competitive with C have existed since the 80s and are still competitive today.

The "high level or fast" meme is a false dichotomy perpetuated by fools. "C is low level and fast, Perl is high level and slow, therefore this must be a rule of nature" Absolute shit. The truth of the matter is that C was actually very resistant to optimization; optimizing compilers for C didn't start to get good until the fucking 90s. Before then, if you were writing "fast C code" you were writing 90% assembly language wrapped with C's function syntax. Optimizing compilers existed for better engineered languages decades before C proper got fast. But they got pushed to the wayside mostly for economic reasons and the field of compiler optimization was set back decades.

Fuck, forget Lisp for a moment. Just look at LuaJIT. An implementation of a high level language that will beat the pants off C or C++ in many scenarios unless the opposing developer spends a lot of effort being quite clever. Static compilation vs a tracing JIT... the speed of LuaJIT shouldn't come as any surprise but to many meme-loving fucks it somehow does.

> We argue against call/cc as a core language feature,
Frankly, that's a political matter. If you're not implementing Scheme, it's nothing to whine about. And as another user mentioned above, chances are whatever Scheme implementation you're using comes with more than just undelimited continuations anyway.

LuaJIT is written in C, so in the end it's just a C program beating another, poorly written one.
You are the one who is a cuck. If Lisp was truly competitive, it would show up at the top of benchmarks and would be used for scientific computing, but it doesn't because it's not and you lie!

Dumb Schemeposter is dumb.

Yeah, if you write your C code to compete with LuaJIT by actually writing a JIT compiler and putting all your application-specific logic in a DSL targeted by that JIT compiler, then you'll stand a chance of smacking down LuaJIT...

But that's not what most normal people do when they're writing a program in C, is it? No... Write a non-trivial program in C "the normal way" and then compare it to a "normal" LuaJIT implementation. The meme-lovers would have you believe that involving a high-level language in your program would make everything slower than pure C, but very often they'd be dead wrong.

Whose benchmarks? Who is performing this analysis, and did they bother to even consider CL? Which CL compilers did they check? Which Scheme compilers did they test, if any?
By that measure, C and C++ both suck ass, and Fortran and Python are the only languages worth knowing.
Don't take my word for it, check out SBCL or Chez yourself.


The article is just whining about undelimited continuations when they're hardly relevant in real world scheme programming. It's uninteresting unless you're tasked with implementing scheme yourself.

SBCL can produce extremely tight assembly, competitive with GCC.
The main issue is that in order to achieve it the programmer typically has to provide type declarations and other assurances to the compiler, which can make code rather ugly and isn't really in the spirit of CL.
Benchmarking sites also never go to the trouble of doing this, they just use plain code, even though in a real application it would be perfectly reasonable to optimise the performance critical parts of a programme.

I read an article a while back which illustrates this very well, including providing example disassemblies.
blog.kingcons.io/posts/Going-Faster-with-Lisp.html

To return to the original subject, I've found this interesting page:
dwheeler.com/essays/fixing-unix-linux-filenames.html

Basically, if you want to write really secure shell scripts, you're gonna have to use something like bash/zsh. Coreutils by themselves aren't enough, as line-oriented shell builtins like read still trip over newlines in filenames.
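For what it's worth, the newline-safe loop usually ends up looking something like this in bash; read -d '' is a bashism that pairs with find -print0, and plain POSIX read has no equivalent:

find . -type f -print0 |
while IFS= read -r -d '' file; do
  printf 'found: %s\n' "$file"    # IFS= and -r keep leading blanks and backslashes intact
done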

Lisp was and is absolute garbage. They stuff it into your heads in college, and it prevents you from thinking about computer problems from a computer perspective, which makes you entirely ineffective. We have 60 years of hindsight on that now, and it's clear from nothing of value ever being made with Lisp that it had no value.

Bullshit. Go to >>>808650 and prove otherwise.

That's a funny way to spell execline.

Error and data handling in the shell is hopeless. It's why nobody but first-year CS students still writes shell scripts.

Read the thread, maggot. I'm obviously talking about something POSIX-compatible (not compliant).
Now, I'm not against something different, like this or plan9 rc, but it's gotta be good.
If only there were a simple sh/execline script comparison to get started with.

Unfortunately, the pipeline is far too useful a tool, and sh being a POSIX requirement makes your statement quite absurd. Sh scripts (like any scripts) shouldn't be used for more than "scripting", anyway.