Go vs Python

Pros vs cons of Golang vs Python?

What's your pick?

Other urls found in this thread:

benchmarksgame.alioth.debian.org/u64q/compare.php?lang=go&lang2=python3
opensource.googleblog.com/2017/01/grumpy-go-running-python.html
techempower.com/benchmarks/#section=data-r13&hw=ph&test=plaintext&l=3vtypr&a=2
python.org/dev/peps/pep-3146/
talks.golang.org/2012/splash.article
blog.plan99.net/modern-garbage-collection-911ef4f8bd8e
golang.web.fc2.com/
github.com/InaneBob/sum_bytes
benchmarksgame.alioth.debian.org/
golang.org/ref/spec#Comparison_operators
golang.org/doc/code.html
stackoverflow.com/questions/25734504/different-pointers-are-equal
stackoverflow.com/questions/25749484/equal-pointers-are-different
stackoverflow.com/questions/15955948/how-does-this-lambda-yield-generator-comprehension-work)
thebreakfastpost.com/2015/05/10/sml-and-ocaml-so-why-was-the-ocaml-faster/
docs.python.org/3/tutorial/classes.html
libre.adacore.com/tools/gps/
blog.famzah.net/2016/02/09/cpp-vs-python-vs-perl-vs-php-performance-benchmark-2016/
blog.famzah.net/2016/09/10/cpp-vs-python-vs-php-vs-java-vs-others-performance-benchmark-2016-q3/
raw.githubusercontent.com/famzah/langs-performance/master/primes.py

go is faster
python is more fun
they're both shit

go is made by google
expect it to live 5 years and then be abandoned

Proofs?

benchmarksgame.alioth.debian.org/u64q/compare.php?lang=go&lang2=python3

Now fuck off.

Just learn C++

How are these two comparable in the slightest?

Isn't Go supposed to be a C competitor? Comparing it to Python is stupid, because they're different tools used for different purposes. Go/C is for when you want to get more performance, and Python is for when you want to make something quickly.

I used to really like the idea of Python until I actually started using it. Python is not as easy to learn as many would have you believe; rather, it is forgiving, therefore easy to dive into, but in reality hard to master.

Python:
- richer ecosystem
- better tooling
- bigger community
- better support for GUI programming
- faster prototyping

Go is a unique language: it is a high-abstraction language, but it still compiles to a native executable and also has garbage collection (sub-ms pauses). The language itself is very simple and opinionated, meaning that there is always a preferred way of doing things. There is no Java-like OOP, and there are no high-level functional programming features. Some may say that's bad, but I personally love that about Go. I love reading other people's Go code.

- native executable by default
- faster
- better garbage collector
- easier to master
- sane language design without OOP insanity and functional programming cancer

There you go. Though if you are a beginner, I would advise not learning either of those two. If you're studying CS, study concepts and get to know the basics of the language they teach in. If you're self-taught, then just focus on one platform you want to work in.

Android -> learn Java
iOS -> learn Swift
web frontend -> learn HTML, CSS, JavaScript, jQuery...
web backend -> learn PHP or Python or Ruby or NodeJS
Windows applications -> learn C#
embedded devices -> learn C
game development -> learn C++ (don't actually self-teach game development)

Are you retarded? Go was released in 2009, now it is 2017. 2017-2009 = 8 years (actually more like 7 if you account for months).

Go is not a consumer product, but a developer one. Google discontinues their projects because they can't monetize the specific platform.

They are slowly rewriting all their legacy Python 2.7 codebase to Go.

gophers pls *go*

gophers pls *go*

...

I have said it here before and I'll say it again: Go is perfect for what it was designed for: server programs.


No, it's supposed to replace Java and C++ for server programs.


Not only that.
Google has written a new Python runtime in Go that transpiles Python to Go code:
opensource.googleblog.com/2017/01/grumpy-go-running-python.html

...

I have not seen pedobear in years.

web backend -> learn Rust
fixd

Python:

Very high level
Very well designed
Supports OOP and FP
20-30 times slower than C
Mature, no big changes expected in the foreseeable future
Widely used in industry
Outstanding third-party libraries
Arguably wider scope of application
Community is somewhat fragmented
Main competitors: Perl, Ruby, Lua, JavaScript, PHP
You can actually get paid to write Python code

Go:

Lower level than Python, higher level than C
Lots of design choices are questionable and have caused much controversy
Largely a procedural language
2-3 times slower than C
Go 2.0 will break everything (and either make Go a smashing success or sink the ship)
Niche language used by a handful
Not that many libraries available yet
Arguably suitable only for certain use cases
Not much fragmentation in Go's community
Main competitors: C, C++, D, Rust, Java
Finding a job may be challenging (but will pay better)

...

CHECKMATE ATHEISTS

DAE LIKE GENERICS LE REDDIT ARMIE, pls go

C and Go are the only sanely designed languages.

I wish Scala-tier pajeets would stop ruining programming.

Isn't go explicitly designed as a pajeet-friendly language?

Explicitly designed? I don't think so. Pajeet-friendly? Yeah it is.

Go is the antithesis of what modern programming has become.

In a previous golang thread, someone posted a quote from one of Google's devs explaining that Go was designed to accommodate brainless, thoughtless programming by their interns/outsourced coders. I'm sure the anti-Go folks will be happy to repost it.

Some people would have you believe that Go was made by Google to employ more pajeets. But the reality is that Go is far from being just Google's toy project. There was a recent controversy where Google wanted to add a feature to Go that would make it easier for them to maintain certain projects, but it was a half-baked solution. The community complained and the Go developers immediately dropped the feature.

Go is controversial because it is very good for niche programming fields like servers and cli utilities, but doesn't excel at everything.

t. Rob Pike

This is how Rob Pike sold the idea to Google to get resources. Go is nowhere near as pajeet-friendly as Java or Python, for example.

...

How is that an advantage?

sub-ms garbage collector

Let me reiterate for you:
How is that an advantage?

no u

The point is not freeing you from the hassle of remembering to "free" your memory, but letting you share memory between "threads" without the hassle of maintaining an owner that must free it once all the other users have stopped using it. In fact, the latter part of my last sentence describes a GC, and in that context it is mandatory (in a lot of simple cases you can come up with an ad hoc solution that is probably better than a generic GC, but in the end the GC is fucking convenient and simplifies even those cases).
Even with channels, for performance's sake, you want to share memory; channels keep their second purpose: synchronization.
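To make that concrete, here is a minimal Go sketch (my own illustration; the buffer size and names are arbitrary): two goroutines read the same underlying slice, the channel only synchronizes and collects the results, and the GC reclaims the buffer once nothing references it.

package main

import "fmt"

func main() {
    // Shared buffer: both goroutines read it directly; the GC frees it
    // once nothing references it, so no single "owner" has to free it.
    data := make([]byte, 1<<20)
    for i := range data {
        data[i] = byte(i)
    }

    // The channel is used for synchronization and collecting results,
    // not for copying the data itself.
    results := make(chan uint64)
    sum := func(part []byte) {
        var s uint64
        for _, b := range part {
            s += uint64(b)
        }
        results <- s
    }
    half := len(data) / 2
    go sum(data[:half]) // both goroutines read the same underlying array
    go sum(data[half:])
    fmt.Println(<-results + <-results)
}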

Look up "static void main" from Pike on YouTube. They designed the language to be ergonomic for them (the Unix patriarchs). Incidentally, a well-written language is good even for pajeets, and that's how even I would sell a similar project to management.

Go is practically usable everywhere.
Nobody has ever said anything about 2.0.
Go has lots of things that you would consider "functional", but sometimes it has preferred different compromises (like multi-valued returns and for loops instead of tail recursion). Those choices are not mindless, given how easy it is to read other people's Go code (which is, shockingly, one of the language's best features), and that Turing machines, not lambda calculus, are still the basic model of computation at the hardware level (inb4: derp, von Neumann, lambda calculus cannot).
The rest is less inaccurate.

Too much readability is not good either. Code should have a barrier against those who are too dumb and must not be allowed to touch it.

What constitutes a barrier to changing code that does not belong to you is your own ignorance: specifically of the application domain, not of the code. If someone is stupid enough to change code without knowing why it is there, he won't be stopped simply by not understanding the code itself.
The right solution to the problem is not creating unreadable languages, but setting permissions on repositories...

This is Poe's law tier.

What kind of shit argument is this? Holy shit.

Rust is purely a fad. At least Go is out of beta and will still be around in a couple years.

Rust will suffer from the same problems as C++ because the design principles are nearly the same (from runtime-cost-free abstractions that are not so programmer- or compile-time-friendly, to a preference for always having the right functionality at hand instead of minimalism and orthogonality, to the usual love for simple interfaces even at the cost of complex implementations instead of full-stack simplicity).
It is only updated to the state of the art of what that community (often rightfully) thinks are the best programming practices right now.

I don't know how people stand Rust syntax.

...

I really don't mind, if it becomes as complex as C++, as long as three things eventually are all true:
- the compiler proves that the complex code I've written is safe
- the compiler optimises the abstractions down to C++ performance in almost all use cases
- the compiler compiles at least as fast as modern C++ compilers

Currently only the first one is true. Waiting for Rust to mature. It's a good concept, but all software written right now hinges on the assumption that Rust will eventually become a safe C++.

Rust isn't forced to carry a massive cinderblock in the name of backwards compatibility.
It also won't suffer from the same problems as C++, because the design principles are not the same. Take a look at how Rust implemented Rc/Mutex for dynamic enforcement of aliasing rules.
Without such enforcement, Stroustrup's model for type/resource safety is fucking useless. It all relies on references being statically proven not to alias on function entry.
Protip: you can't do that with all mutable references in modern C++, with shared_ptr for example.

I'm slowly learning Python and enjoying it without having any prior real knowledge of programming or scripting.

Fight me.

Python is a pretty good language; it would've been infinitely better if it used curly brackets and semicolons instead of whitespace, though.

What about various devices used in conjunction for a robot?

python would be a good language with do blocks instead of braces

Doubtful. It was designed to replace C++ in most applications, but using it in place of Python is perfectly acceptable too, considering Go's focus on readability.

Why? You'll end up indenting in any case if you value readability, so why mark code structure twice (in ways that don't necessarily agree with each other) when you could do it once? Seems unnecessarily complex.

The design principles are nearly the same, I repeat.
The features will simply be more polished, more up to date, and better working. A lot of the shit won't be there. But in the end it will suffer from the same problems, namely complexity and, in the long run, deprecated things or multiple ways to do the same thing.
That community is willing to accept those trade-offs, but not all programmers feel the same.

More on the current state of Rust vs Go/C++:
- The compiler is nowhere near as optimising as it should be. Currently some code will be faster than both C and C++, some will be on par, but the majority will be Go-Java speed.
- The compiler is not even multithreaded. The final compilation step, which I assume to be linking and optimisation, can take extremely long and all on a single core.
- Libraries, or, more importantly, high performance libraries, are lacking. As an example, Go's standard library HTTP server is ~16 times faster than Mozilla's Hyper HTTP server ( techempower.com/benchmarks/#section=data-r13&hw=ph&test=plaintext&l=3vtypr&a=2 ), and it even drops some of the requests in the benchmark. I am comparing the go-to fully-featured servers, not some stripped-down special-purpose libraries (a minimal example of Go's stdlib server is sketched after this list). Work is underway to make Hyper use nonblocking I/O and such, but it is not there yet.
- Rust is still changing. Go is a relatively mature language. It's had plenty of time to accumulate backwards compatible improvements, but Rust is only a year old since 1.0. A lot of important features are still in development and many libraries rely on the nightly compiler to even work.
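For reference, the stdlib server compared above needs nothing beyond net/http; a minimal sketch (the port and handler are arbitrary):

package main

import (
    "fmt"
    "log"
    "net/http"
)

func main() {
    // Register a handler and serve with the standard library's built-in HTTP server.
    http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
        fmt.Fprintln(w, "Hello, world")
    })
    log.Fatal(http.ListenAndServe(":8080", nil))
}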

tl;dr: Rust needs another 4-5 years to become usable in mainstream production.

In fact, what I don't understand about Rust (really, about every language) are the zealots.

Brand identity or, in more severe cases - religion. This has existed among humans for millennia.

It will not survive that long. In short order it will be surpassed, abandoned, and forgotten, like Clay.

Generic programming was a mistake.

It will only go under, if Mozilla goes under. But the inverse is also true. If Mozilla does not fix all their memory leaks and data races with Servo, they are pretty much dead.

Maybe I should learn Ada.

No one's really a zealot here, it's just fun to watch you sperg out over simple facts or a bit of sarcasm. So much so retards like you ramble about "marketers" and such.

hurr

Lexing and parsing (and interpretation, in the sense of an interpreter) simplicity.
Crooked indentation can be cured with a tool like gofmt (which works thanks to Go's extremely regular grammar).

If you cannot write a lexer and parser for an indentation-based language, that doesn't mean it is hard.

What's so complicated about parsing indentation? I get that it's slightly harder than parsing curly brackets, but only very slightly, not enough to factor into decisions about syntax. It's not tricky.

Python's fun. Once you get a grasp on it I would suggest looking through all the what's new in python docs to see some shit you probably wouldn't have found on your own and the reason they were put in.

If you can, it doesn't mean that you should.
On a scale of complexity that goes from regular to recursively enumerable, Python's grammar is context-sensitive, mainly because of semantic indentation (to be fair, the scale is not discrete and the step up from context-free is small). Most programming language grammars are at worst context-free.
Go is one of the few that is almost regular. This simplicity makes life easy for anyone who wants to write a tool that does something with the code: formatting it, checking that it respects some constraint, inspecting it, generating it anew, etc. The compiler's good performance is partly due to this property of the language.
This is for the "complexity" part of the problem.

Now comes the controversial part: semantic indentation is prone to introducing subtle bugs that are especially hard to find in large code bases not written by you. Think of a line that should be outside an if block but instead stays at that indentation level for some reason (this kind of problem happens more in these languages). Maybe it causes a bug that only shows up occasionally in production, and you will cry for nothing. And you will need to understand the entire surrounding code to understand the problem.

I said that this point is controversial because your experience may vary (it may depend on how many pajeets you have to put up with at work), but the first point is a clear win for the Go grammar, while the advantages of semantic indentation (fewer parentheses?) are negligible.
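As a small illustration of the tooling point: the Go standard library ships go/parser and go/ast, so a short program can already walk Go source (a sketch; the embedded snippet is made up):

package main

import (
    "fmt"
    "go/ast"
    "go/parser"
    "go/token"
)

func main() {
    // Parse a small Go source string and list its function declarations.
    src := `package demo
func Sum(xs []int) int {
    t := 0
    for _, x := range xs {
        t += x
    }
    return t
}`
    fset := token.NewFileSet()
    file, err := parser.ParseFile(fset, "demo.go", src, 0)
    if err != nil {
        panic(err)
    }
    ast.Inspect(file, func(n ast.Node) bool {
        if fn, ok := n.(*ast.FuncDecl); ok {
            fmt.Println("found function:", fn.Name.Name)
        }
        return true
    })
}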

...

That's retarded. They should have gone Python 3 + Cython if needed.


this is interesting; however, transpiled code usually sucks dicks (not really good for subsequent reading/editing by humans), and compiling it the Cython way is already a tried-and-tested thing.

Parsing Python is really fast in practice.

there is no such thing as improperly indented code in Python. isn't it neat?

Are you trying to say that this won't be obvious?
Like in-your-face obvious.
If a project is being done by such incompetent monkeys, it's doomed to fail regardless of language used.

do you also separate every function into its own file

I'm not sure what you mean. Do you mean that Python doesn't let you use a gorillion layers of if-statements without getting really long lines? If so,
-- Linus Torvalds

Google apparently tried to modify Cython for their needs or do an extension and were rejected, so they're just going to make their legacy code fast this way.

Also, here's what they were trying to do:
python.org/dev/peps/pep-3146/

They tried to modify CPython (which is the standard implementation, while Cython is a Python superset which lets you mix in C), and they were not rejected - instead, they retracted the proposal when their own interest in it dropped for various reasons. Did you read that page?

It's not only a question of parsing, but of creating tools that do static code analysis or modifications/refactoring, working on the code in a simpler way with tools like regexps, easily generating Go code (this last one is really underappreciated)...

Semantic indentation issues are well known, for some programmers that's not a problem, for others it is.

Here is the semi-official reason for not using indentation to delimit blocks:

talks.golang.org/2012/splash.article

"As a simple, self-contained example, consider the representation of program structure. Some observers objected to Go's C-like block structure with braces, preferring the use of spaces for indentation, in the style of Python or Haskell. However, we have had extensive experience tracking down build and test failures caused by cross-language builds where a Python snippet embedded in another language, for instance through a SWIG invocation, is subtly and invisibly broken by a change in the indentation of the surrounding code. Our position is therefore that, although spaces for indentation is nice for small programs, it doesn't scale well, and the bigger and more heterogeneous the code base, the more trouble it can cause. It is better to forgo convenience for safety and dependability, so Go has brace-bounded blocks. "

This is the kind of thinking that got us Rust.

I'm using indentation-bounded syntax from now on.

gophers pls *go*

Are you dumb or what?

golang is surprisingly ergonomic and fun to write in, at least for the things it was designed for. you have to drag the language kicking and screaming if you want to do any "clever hacks" in it, but that's working as intended.

golang is the soldier's programming language.

ps rust sux and python a shit

go is controlled by google, stay away unless you enjoy getting buttfucked

it is also a useless language that never would have seen the light of day without the massive shilling from Google and paid shills on imageboards, HN, Reddit, etc.

lol

no generics

Go is free software. Anybody is free to make a new implementation of the language or otherwise fork Go to do something different.

So? The language is still controlled by Google in practice. It would take something very dramatic to get people to properly fork it.

Neither, I'd use Perl instead since it has served me well over the years.

Everyone who's serious about programming should learn Ada.

oh no, google controls an implementation of their software


have you used it or just memes

Assuming you also know Python, are there things you find much easier/better in Perl than in Python? I'll probably learn Perl at some point, just out of interest, but I already know Python well so I'm wondering if learning it would be useful in a practical way.

you just restated the problem

Dunno, I'm guessing it's probably better at text processing and regex stuff. But it's probably not worth learning just for that if you're already comfortable with another scripting language.
Maybe Perl 6 has more to offer, but I never got into it.

what problem? that's not a problem

it is the sad reality. why would i use go over java? not that i would want to use java. i use rust

that's what interfaces are for. sure, it's not like a drop in replacement, but it's a simpler and more efficient way to get much of the same effect
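A minimal sketch of that idea (the interface and types are invented for illustration): one interface, two unrelated types satisfying it implicitly, and one function reused across both.

package main

import "fmt"

// Shape is satisfied implicitly by any type that has an Area method.
type Shape interface {
    Area() float64
}

type Circle struct{ R float64 }
type Rect struct{ W, H float64 }

func (c Circle) Area() float64 { return 3.14159 * c.R * c.R }
func (r Rect) Area() float64   { return r.W * r.H }

// TotalArea works for any mix of Shapes: code reuse without generics.
func TotalArea(shapes []Shape) float64 {
    var t float64
    for _, s := range shapes {
        t += s.Area()
    }
    return t
}

func main() {
    fmt.Println(TotalArea([]Shape{Circle{1}, Rect{2, 3}}))
}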

When you know Perl well enough, doing text processing with something else feels like punishment. Also Moose makes you wonder why the other languages haven't done OOP this way.
If you are into hacky and autistic stuff, you should learn Perl.
Python is too neurotypical to be seriously considered here.

generics has nothing to do with interfaces. you know java 1.4 has interfaces and no generics and is much more trusted than go

not directly no, but they serve some of the same function of code reuse

kill yourself

I recently saw Francesc Campoy in person and I can confirm that he is indeed
T H I C C

blog.plan99.net/modern-garbage-collection-911ef4f8bd8e

okay thanks for the warning

tldr go gc is a shit compared to jvm gc

Rust isn't going anywhere until they severely improve the borrow checker. And that's a race against the halting problem, so good fucking luck with that.

Go is extremely performant, is extremely easy to learn, 3rd party code is almost always easy to read and there's pretty much no program out there that demands both perfect memory safety and high performance. So Rust is pretty much worthless for anything that isn't writing universal crypto libs/tools and browsers.

Generational GC looked like the best idea 20 years ago; these days, not so much. It's very cache-unfriendly, it wastes a lot of memory, making the generational hypothesis work in practice is a lot of work, and optimizing for throughput is the worst idea unless your program happens to have amazing scaling factors.

The approach taken by Go, with well-designed value types, aggressive escape and liveness analysis, and a concurrent mark-and-sweep approach focused on latency, is much better than the dead end Java is stuck in. That's why Go beats Java in many real-world applications *already*, despite being far younger. There's a reason they finally had to retrofit the JVM with escape analysis in 2010.

PS: I bet there are still people out there who believe JIT compilation gives Java a speed advantage over AOT compilers.

this is your brain on go. not even once.

try sounding out the words next time if your reading comprehension is so poor

I disagree with the "better tooling" part of the Python advantages over Golang.
Golang has a great tooling system.
It's also a lot easier to learn due to its well-designed, simple nature.
Golang also has an actual purpose, unlike Rust.

...

dodged bullet again! top pragmatism

typical gopher

how does it feel to have half a brain? generics aren't some holy grail, and if a language doesn't have them, that doesn't mean it's automatically shit, you mongoloid.

fuck, even you with your half-brain should be able to understand go's type system since it's so simple, if you could stop shitposting for a solid hour

cry more go defense force

...

Depends on how you write it. You can speed things up a lot by using generators instead of lists, caching your data, and profiling different ways of doing the same thing will tell you how to optimize it further.
I've written an expanded Python implementation of some popular C program, and it's only twice as slow while doing almost twice as much, and it can be imported and easily extended as a Python package rather than clumsily parsing the C version's inconsistent stdout.

chill my friend, it sounds like you would be interested in visiting my temple golang.web.fc2.com/

Why would you ever do that?

Problem with Python is the same problem every language has. It's great at what it does so people want to use it for stuff it's bad at.

As for learning the implementation details of a higher-level language: idk, doesn't that defeat the purpose?

consider me triggered

Everything I mentioned is nothing special; it's actually standard. Generators are a basic feature of Python, and simple caching with the help of a dictionary before you do your operations, instead of repeatedly retrieving the same data during them, is just a better algorithm. In the end it's all about knowing what Python actually has by looking at the docs and suggestions.
I think it's actually more the case that people simply don't know the language they're using well, so they just use a few of the most elementary types and statements that don't even scratch the surface of the possibilities it offers, or of what's actually the preferred way of doing something. That's what I see fairly often when reading other people's code.

no

Did you try also using Cython for code where speed matters? How is it?

Not yet. I actually didn't go that deep into speed in the above project; I just did some profiling and rethought some of the logic so that its speed is not too far from the original C version. But if an uglier way of writing something was faster, I still picked the more readable one.
When you need to optimize your Python code you can easily inspect it using "timeit" and "profile", which are part of the standard library. Try this, though:
$ touch empty.py
$ time python empty.py
There are certain speed limits you can't beat because the interpreter itself takes some time to initialize. I agree with the above post that at that point you should probably be using a different language if speed is that crucial.

Go does not have that python 2/3 bullshit

Go would be nothing without paid Google shills. If you have a great product, you should not need to shill it. That is all you need to know about Go. Yes it is SHIT.

That's because it's much newer. No one cared about Python 1.x just as no one cares about Go 1.x

Go will have to make so many changes and include so many additional features to be competitive that Go 2, if it ever becomes a thing, will make the Python 2 to 3 transition look like a bugfix release.

...

Technically Python 3 was a bugfix release. Bytes/str was convenient in Python 2 but it was a huge fuckup in hindsight.

worse is better :^)

It wasn't. Not all flaws and other fuck-ups are bugs.

I challenge you to write the fastest program in Go that would, for example, read a 128MB file with random content like this:
simply computing the sum of all the bytes' values.
This takes 0.09 seconds in Python (with Cython) on my machine, for the 2nd and later runs.

Can Go make it faster?

import mmap
from mmap import ACCESS_READ
from libc.stdint cimport uint64_t, uint8_t

def read_and_sum():
    cdef uint64_t result
    cdef uint8_t * bytes_pointer
    chunk_size = 2 ** 12
    with open('benis', 'rb') as f:
        m = mmap.mmap(f.fileno(), 0, access=ACCESS_READ)
        total_size = m.size()
        remaining_size = total_size
        result = 0
        while True:
            size_to_read = min(remaining_size, chunk_size)
            remaining_size -= size_to_read
            if not size_to_read:
                break
            ch = m.read(size_to_read)
            read_size = len(ch)
            bytes_pointer = ch
            for i in range(read_size):
                result += bytes_pointer[i]
        return result

Python 2 is pretty much unnecessary now. Even PyPy got Python3 support.

Yeah I know. Just saying that Cython may allow you to do even some number crunching without leaving Python land, if you do it right. When the bottleneck is not in your own code, then plain CPython is bretty good.

Am I missing something?

t.Go fanboys

It is statically linked. You can additionally reduce size by using flags.

Also, the binary size will always be bigger than with C and C++, since Go has the GC included; on the other hand, you don't need any runtime, unlike with Java/Python/Ruby...

This means Go doesn't have that many problems with backwards compatibility. With Python you always have to hold yourself back from using the newest features because not all distros have the newest interpreter in their repos. Which is sometimes hard because Python is still adding useful features that are related to pretty common tasks.

that's simply stupid

also, the python interpreter is not that big; if you develop something big, then including it is not a big deal.

So, all who shilled for Go here can't actually write some Go code, right?

i made a naive program in rust.
your cython program (with chunk_size set to 64 * 1024) takes about 0.19 seconds
my rust program takes about 0.13 seconds

use std::fs::{File};
use std::io::{BufReader, BufRead};

fn main() {
    let mut reader = BufReader::with_capacity(64 * 1024, File::open("benis").unwrap());
    let mut i = 0;
    loop {
        let len = {
            let buf = reader.fill_buf().unwrap();
            if buf.is_empty() {
                break;
            }
            for &b in buf {
                i += b as u64;
            }
            buf.len()
        };
        reader.consume(len);
    }
    println!("{}", i);
}

made it 0.02 seconds faster

use std::fs::{File};
use std::io::{BufReader, BufRead};

fn main() {
    let mut reader = BufReader::with_capacity(64 * 1024, File::open("benis").unwrap());
    let mut counts = [0; 8];
    loop {
        let len = {
            let buf = reader.fill_buf().unwrap();
            if buf.is_empty() {
                break;
            }
            let mut i = 0;
            while (i + 8)

Ignoring the only language that actually went through a proper development cycle.
Most of the problems that Go, and Rust for that matter, are trying to solve were discovered in the late 70s and mostly solved in the early 80s, and subsequently refined in 1995, 2005, and 2012.

brought it down to 0.09 seconds

extern crate memmap;
use memmap::{Mmap, Protection};

fn main() {
    let mmap = Mmap::open_path("benis", Protection::Read).unwrap();
    let bytes = unsafe { mmap.as_slice() };
    let mut counts = [0; 8];
    let mut i = 0;
    while (i + 8)

At this point just use the SIMD crate bro.

got it down to 0.082 seconds on my old as fuck machine
i uploaded the source to github + instructions on how to easily bench it yourself
github.com/InaneBob/sum_bytes

now let's see some Go code, faggots. or C

what about original size though?
on my machine increasing size didn't deliver any more speed benefit.

anyways, gj

is it a compile-time thing? like, storing entire content right in the compiled blob?

You mean, Haskell, OCaml, Erlang or what?

Also which C compiler did you use?

I meant Ada, but I don't know the development history of the languages you mentioned.

microbenchmarks are literally retarded for comparisons

this

benchmarksgame.alioth.debian.org/

yet in every rust thread the benchmarks game gets posted. Holla Forums is full of larping autists.
rust is masterrace

This is not a microbenchmark. This is a complete, well-defined task. It measures both I/O and number-crunching capability, both of which were said to be Go's strong areas.

I don't have rust or cython installed so I'm not going to do this, but I will tell you what would happen if I did.

First I would make sure that I write a zero-allocation function, of course.
The second big performance improvement would be rewriting the code to use mmap; I can do this because Go has mmap packages too.

Once we get to this point it's all about vectorization of the sum. I don't know if the compiler is going to be able to vectorize automatically here (almost certainly not); if it can't, I would rewrite the thing in assembly. It's kind of cheating, but you had to write so much weird code to get Rust to vectorize that it would probably still be just as readable.

Now we have two programs that do exactly the same thing and of course have more or less the same performance. What does this prove? Nothing.

Somebody said that Python is slow. I called it bullshit.
If the performance is the same, this disproves that statement (about python being slow).

Cython is not Python. Also, a single benchmark doesn't tell the whole story. Over a more representative set of benchmarks you would see that Python and Cython are slower than Go, and that Rust is faster than Go.

Honestly, what I find disappointing is how slow Rust is compared to C. Rust doesn't have garbage collection and doesn't have to do the stack checking that Go does on every function call, yet it manages to only be slightly faster for most things and 2x slower than C. Sad.

like what?

Cython is the de facto tool for writing Python extensions, and those are Python, of course.

>it manages to only be slightly faster for most things
For example?

generics

Why?

What are you even talking about? Go 2.0 is not actively planned. Go promises compatibility with future versions. There will not be a Go 1 vs Go 2 split like there was with Python (any time soon).

The Go developers are open to the idea of generics, but it needs to be done correctly or it is not going to be done at all.

It is not as simple a matter as adding generics to Java was.

If programming without generics makes your life so inconvenient, then you should probably just stick to Java.

The whole point of Go is to get a rich standard library, fast compilation, good-enough running speed, garbage collection, and a native executable (no runtime) while being as simple as possible.

If you want BLOAT go use Scala.

hello go defense force

HAHAHAHA. it's EXACTLY like adding generics in Java, and the solution, if it's ever done, will be largely indistinguishable aside from intricacies like subtyping, bounds, capture conversion, etc.

every popular language (such as Java 1.4) has this, by definition.
every language that is not C has "fast compilation", including Java 1.4
Literally Java 1.4
Literally Java 1.4
this is meaningless, you can use an ahead of time compiler with Java to get the same effect
There's nothing simple about Go in contrast to current mainstream PLs. Okay to be fair it's probably simpler than Python, but not by a large amount.

So we just proved that Go is a shitty attempt to re-invent an older version of Java.

don't respond to your own post, stupid.

Go is leagues simpler than 95% of PLs, and much better designed too

so you have no point other than "hurr durr go is bad hurr hurr"

Python sucks. Nesting in whitespace is cancer

I don't think you know what a standard library is. C doesn't have a rich standard library, and its standard library is standardized, so no amount of popularity will make it richer.


Can you elaborate? I assume your non-Python code doesn't "nest in whitespace", since that's "cancer", so can you show us what it looks like?

...

How would you implement lambdas in a library?

However boost did it.

>>>/g/


Typical autistic neckbeard reply. Yes, *C* has a small standard library, but the other 50 languages don't.


fuck off retard. you probably don't even understand how the equality operator in Go works (it's much more complex than in any other popular strongly typed language such as Java, C#, ML, or Haskell).

Can you explain how "every popular language has [a rich standard library] by definition"? The "by definition" part completely mystifies me.

something can have complex internals but still be simple mongoloid.

but please, tell me how unassailably complex and esoteric this 100% opaque, uniquely confounding language feature is golang.org/ref/spec#Comparison_operators

your vanga skills need improvement

I suppose this is the best place to ask.

Is it worth it to set up the 'Go environment'? That is: setting $GOPATH, using the directory structure and all this stuff: golang.org/doc/code.html

It seems like a hassle. At the moment I just compile my code with gccgo and disregard the directory structure entirely, but I have only been learning for a few weeks and haven't needed to import a non-stdlib library yet.

no dont. install rust.

congratulations, you fell for meme language
delete this crap and learn some real thing — rust, python 3, erlang. unlike go, each of these three is useful and has real reasons for existence.

Makes it convenient. I have shit in my rc file for golang, perl, rust, node, and php

[[ $- != *i* ]] && return

if [ -f ~/.bash_aliases ]; then
    . ~/.bash_aliases
fi
if [ -f ~/.bash_prompt ]; then
    . ~/.bash_prompt
fi

export EDITOR=vim
export SSH_AUTH_SOCK="$XDG_RUNTIME_DIR/ssh-agent.socket"
export PATH=$PATH:$HOME/bin/:$HOME/.local/bin/:$HOME/scripts/

# Language-specific package manager faggotry
# Golang
export GOPATH=$HOME/golang
export PATH=$PATH:$GOPATH/bin
# Rust
export PATH=$PATH:$HOME/.cargo/bin
# Perl
export PATH=$PATH:/usr/bin/core_perl/:/usr/bin/vendor_perl/
# PHP
export PATH=$PATH:$HOME/.composer/vendor/bin
# Node
export npm_config_prefix=$HOME/.node_modules
export PATH=$PATH:$HOME/.node_modules/bin

etc...

"rich standard library" basically means it has a bunch of shit in the standard library which is "useful and good" according to some asshole's definition of "useful and good". Off the top of my head that would be: Python, Ruby, Java, C#, PLT Scheme, PHP, and I'm not even continuing this argument further because I don't even give a shit what is in the standard lib of a PL. It's basically a way of comforting retards into your language. What is or isn't in the standard library never really matters in real code.

pathetic.
I'm not saying Go has complex internals. I'm saying it's a complicated language for what it does (almost nothing), and might as well be replaced with Java 1.4, or some decades old version of Algol, or maybe Ada or Pascal. I remember these two surprising edge cases about equality:
stackoverflow.com/questions/25734504/different-pointers-are-equal
stackoverflow.com/questions/25749484/equal-pointers-are-different
Not only are they obscure edge cases, but once you learn one of them, you'd assume the other doesn't exist, yet it does. I'm not interested in "high level" languages that can't keep something this basic simple. The only serious mainstream PL that exists is Standard ML. Everything else is script-kiddie junk thrown together by some asshole who has only seriously used one or two other languages and never studied any theory. Laymen programmers go around calling PLs simple because the small part they understand and use day to day is easy to remember, but once they have to deal with other people's code, version compatibility, and malice, all of which are problems that multiply with each other, it becomes a whole different story.
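For anyone curious, the zero-size case behind those links is easy to reproduce; a sketch (note that, per the spec, the result of the comparison is not guaranteed either way):

package main

import "fmt"

func main() {
    // Two distinct zero-size variables: the spec allows their addresses to be
    // equal or unequal, so this comparison is not guaranteed to go either way.
    a := struct{}{}
    b := struct{}{}
    p, q := &a, &b
    fmt.Println(p == q) // may print true or false depending on the compiler and escape analysis
}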

Because everybody likes defining their own type and implementation for stuff like UTF-8 strings?
Did you ever write anything in C, for example?

Also note python has sane defaults:
pip install --local

you're quite queer in your opinions. Why is SML the only srs bsns language? perhaps you lack the real world experience to understand the needs of systems in production. but no, it's neat that you're passionate enough about your CS 203 class to make silly claims on a Philippine Pictogram BBS. have a gold star.

wew. so you're admitting you don't know anything about go?

the programming community at large disagrees with you. everyone from the cubicle code monkey to our lord and savior Ken.

there are legitimate issues with the language, but you've failed to bring up a single one

Never used Go, but I can't stand not having control over my bits in Python. I need pointer math to be happy.

If I can't tell my program to xor a bmp file and then run it as native code, why code at all?

And inline assembly code is like crack.

are you a masochist by chance?

If you need pointer math to be happy you've been so damaged by low-level programming that you can't reason about your programs on an abstract level any more. You need to use some other programming languages to get a better perspective. Doesn't have to include Python, but they have to be languages that don't let you write code the C way.

It's the only mainstream PL with serious thought put into the design. Particularly the type system and execution semantics. It hasn't been made into a piece of shit in order to pander to retards like C# or Java for example. I'm not passionate about SML but it's the only example I can give to explain how to properly design a PL with hopes of someone understanding.
standard fucktard reply. i have more experience than you. I've written 10s of thousands of lines of code in many high-level and low-level languages, from ML to Java to C to Python to Perl to assembly. If you seriously think Go is any more fit for some task than SML, you are seriously retarded. But never mind, you obviously never spent a minute learning SML. Nobody who knows a fuck seriously thinks some bullshit cookie-cutter HTTP libraries or whatever other garbage from the standard lib of some modern mainstream PL is a differentiator of anything. You can keep appealing to the "programming community", but they are a bunch of fucking monkeys who can't even walk 5 steps without introducing a remote code execution vulnerability despite having a memory-safe programming language.
I didn't go to a CS class to learn about SML. I found it after exploring 7 other mainstream PLs.
Well it's not like I actually code Go unironically. 5 days is an exaggeration, but from what I've seen the spec there just changes every time I look at it (says last update date on the page somewhere), do they even pin versions of the spec to specific releases of the compiler?
the programming community at large is retarded. my issue is legitimate, and you haven't provided an argument against it.
well, at least he recognizes that im not in one of those other standard programming cliques. i typically get marked as a retarded pajeet-tier Java coder or an autistic C slinging neckbeard when I argue on the internet

The current C code I'm writing doesn't have a single string in it aside from some temporary prints for quickly debugging the prototype (no, we aren't putting logging in that shit like your stupid post-2008 web app; it would slow it down too much and we don't collect info from our clients anyway). The end-user interface (0.0000001% of the code) will have a few strings, but I sure as fuck don't need built-in language support for that or whatever trendy bullshit you are referring to. You are one of those fucking morons who passes strings around all over the code, so of course you could never handle real programming.

i mean, it's not like I'm saying it's a showstopper that Go has the misfeature of a slightly shittier equality operator. it's just a small issue and a mark of bad PL design. The real problem I have with Go is that it serves no purpose in comparison to Java 1.4 or Algol.

You will never get stable work using meme languages.

I suggest you go read the various Ada rationales. The DoD doesn't muck around when it wants to design a language that detects as many errors as it can at compile time instead of runtime.

oh really? what made you think so?
also, strings are just one example. of course I won't list everything.

...

10s of thousands is really not that much.

so explain specifically how it's superior instead of listing buzzwords fam-a-lam


kek. fizzbuzz isn't "exploring" a language

Pure Python — 0.123 seconds for 128MB file on same machine, almost as fast as Cython:
from os import SEEK_END
from time import perf_counter
import gc

import numpy


def measure(fun):
    print(fun.__name__)
    t1 = perf_counter()
    result = fun()
    t2 = perf_counter()
    print(result)
    print('time = {} seconds'.format(t2 - t1))


# noinspection PyTypeChecker
def main():
    gc.disable()
    measure(sum_numpy_chunked)


def for_each_chunk(filename, chunk_size, fun_map, fun_reduce, acc_start):
    with open(filename, 'rb') as f:
        start_position = f.tell()
        f.seek(0, SEEK_END)
        total_size = f.tell()
        f.seek(start_position)
        remaining_size = total_size
        result = acc_start
        buffer = bytearray(chunk_size)
        while True:
            size_to_read = min(remaining_size, chunk_size)
            if not size_to_read:
                break
            read_size = f.readinto(buffer)
            remaining_size -= read_size
            result = fun_reduce(result, fun_map(buffer, read_size))
        return result


def sum_numpy_chunked():
    return for_each_chunk('benis', 2 ** 17,
                          lambda ch, size: numpy.frombuffer(ch, dtype='uint8', count=size).sum(dtype='uint64'),
                          lambda acc, r: acc + int(r),
                          0)


if __name__ == '__main__':
    main()
For those who said that Cython is cheating :)

fuck, accidentally posted dirty unfinished code.
here's the real thing:

from time import perf_counter
import gc

import numpy


def measure(fun):
    print(fun.__name__)
    t1 = perf_counter()
    result = fun()
    t2 = perf_counter()
    print(result)
    print('time = {} seconds'.format(t2 - t1))


# noinspection PyTypeChecker
def main():
    gc.disable()
    measure(sum_numpy_chunked)


def for_each_chunk(filename, chunk_size, fun_map, fun_reduce, acc_start):
    with open(filename, 'rb') as f:
        result = acc_start
        buffer = bytearray(chunk_size)
        while True:
            read_size = f.readinto(buffer)
            if not read_size:
                break
            result = fun_reduce(result, fun_map(buffer, read_size))
        return result


def sum_numpy_chunked():
    return for_each_chunk('benis', 2 ** 17,
                          lambda ch, size: numpy.frombuffer(ch, dtype='uint8', count=size).sum(dtype='uint64'),
                          lambda acc, r: acc + int(r),
                          0)


if __name__ == '__main__':
    main()

the fact that you think it matters whether a language "supports utf-8" (whatever that means)

10s of thousands in each, not in total. now continue to be a bunch of faggots who can't do anything more than bikeshed about the credentials of some user poster on the internet

SML is just a fucking PL. It lets you define and construct/deconstruct variant types and modules and they just do what the fuck is advertised. It doesn't have 500 weird constructs that combine in strange and surprising ways because some asshole had a fetish for them at the time and put them in the language. It doesn't have unimeme syntax built in (and only has some ascii or latin-1 whatever bullshit they use for pragmatic reasons).
Everything is in SML for a reason. All the other mainstream PLs just put a bunch of assorted bullshit in because it's popular at the time.
Java:
Python:
>yield inside generator expression (stackoverflow.com/questions/15955948/how-does-this-lambda-yield-generator-comprehension-work)
Javascript:
Go:
SML has none of this bullshit, and is still more powerful because of variant types and modules.
SML is what you get by default when you design a PL as a means to write code. Every other mainstream PL is what you get when you hire a marketing department and have them drive the design. SML is like what a dude living in a cave would make, and the others are what you get from a capitalist society (given some task that each has the resources to solve equally well, such as building a chair). The SML chair is just a piece of wood and shit. The capitalist chair is spring-loaded and malfunctions, shooting a rod into your ass.
as for your second point, that's ironic to say when everyone who shills Go has literally never written anything more than fizzbuzz in it (aside from ClownFlare, but they are clowns)

oh really? so I'm a fucking genius then?

example for IDEA?

how is that a bad thing?

not a big deal, it's still possible to write fast code by optimizing where necessary

nobody forces anybody to write like this.


how is that a good thing?

Also why not OCaml?
it seems like it's more practical
and, despite all claims that MLton is such a cool compiler, it's not even the fastest implementation of SML (SML/NJ beats it) and all SML implementations lose to OCaml

thebreakfastpost.com/2015/05/10/sml-and-ocaml-so-why-was-the-ocaml-faster/
tl;dr best times:
SML/NJ — 9.3
MLton — 10.5 (epic fail, given all the boasts they are stating)
OCaml — 7.3
so, for SML's best try, OCaml is 27% faster.

still not that much

You are the one who brought it up in the first place!

Storing Unicode code points wastes an absurd amount of space and doesn't fix anything, since code points aren't glyphs.

fucked up code is possible in every language

automatic coercion is not weak typing, get your terminology straight if you want to look like a CS student on the internet

they don't

you can shadow, but the semantics of := is confusingly non-recursive.

I'm not really sure what your beef with the java memory model is, it seems fine to me.

memory safety and thread safety are two different things.

Let me do a critique of SML:

How is this different from other programming languages? Of course you don't know exactly what code does without knowing how the programming language works.

>>yield inside generator expression (stackoverflow.com/questions/15955948/how-does-this-lambda-yield-generator-comprehension-work)
You can do fucked up things in any language. The most you can ask for is for fucked up things to look weird and out of place, and that particular fucked up thing looks very weird and out of place.

They're moving away from this, and some name resolution is static already:
docs.python.org/3/tutorial/classes.html

Only in Python 2.

I wonder what's holding back OSS.

There are no good new programming languages. Anytime you adopt early you trade in security, reliability and your own odds of success for a higher chance of suckering other idiots like you into failing.

How about the IPFS guys.
Using a three-year-old meme language for the reference implementation.


I ran your Python on my machine; the output says 0.281 seconds, but time says a bit more. I guess that is the Python environment startup.
real    0m0.482s
user    0m0.444s
sys     0m0.036s

I slapped something together in Ada. It uses Streams, which are an Ada mechanism for I/O. Streams can be serial ports, files on disk, network connections, etc.
Types have a 'Read and 'Write procedure which handles the serialization, and you can define custom ones.
The default valArray'Read just uses uint8'Read for each of its elements, i.e. reading byte by byte.
This is obviously slow as fuck, about 7 seconds on my system.
The solution is to convert (or deserialize, if you wish) the whole array at once, and this is what read_valArray does.
Note that this way the same function can be used to read valArrays from a socket or anything else that can be handled as a stream in Ada.
Quite a bit faster than Python, and except for the read_valArray procedure, fairly straightforward.
real    0m0.168s
user    0m0.128s
sys     0m0.036s

with ada.Text_IO; use ada.Text_IO;
with Ada.streams.Stream_IO; use ada.streams;
with ada.Unchecked_Conversion;
with system;

procedure Main is
   type uint8 is mod 2**8 with Size => 8;
   type uint64 is mod 2**64 with Size => 64;

   Benis : ada.streams.Stream_IO.File_Type;
   BenisStream : Stream_IO.Stream_Access;

   type valArray is array (Integer range 1 .. 4096) of uint8;

   procedure read_valArray(Stream : not null access Root_Stream_Type'Class; Item : out ValArray);
   for valArray'read use Read_valArray;

   procedure read_valArray(Stream : not null access Root_Stream_Type'Class; Item : out ValArray) is
      use System;
      Item_Size : constant Stream_Element_Offset := valArray'Object_Size / Stream_Element'Size;
      type SEA_Pointer is access all Stream_Element_array (1 .. Item_Size);
      function As_SEA_Pointer is new ada.Unchecked_Conversion (System.Address, SEA_Pointer);
      lastnum : Stream_Element_Offset;
   begin
      Ada.Streams.Read (Stream.all, As_SEA_Pointer(Item'Address).all, lastnum);
   end read_valArray;

   sum : uint64 := 0;
   valArr : valArray := (others => 0);
begin
   Stream_IO.Open(Benis, Stream_IO.In_File, "benis");
   BenisStream := Stream_IO.Stream(Benis);
   while not Stream_IO.End_Of_File(Benis) loop
      valarray'read(BenisStream, valArr);
      for foo of valArr loop
         sum := sum + uint64(foo);
      end loop;
   end loop;
   Put_Line(uint64'image(sum));
end Main;

yeah, that's environment startup time, there's no point in measuring it.

Also try running several times, first run might be slower disk read and on subsequent accesses OS may read it faster.

Yet, there's still no Go code posted that performs this task. Either Go can't compete with Python in speed (that would be an epic fail, btw), or everyone who shilled for Go on 8ch can't actually write Go, even for such simple challenges.

(OTOH, a bunch of other solutions have already been posted, including Rust and Ada, both of which are believed to be more difficult than Go)

And either of these facts would say something.

type uint8 is mod 2**8 with Size => 8;
type uint64 is mod 2**64 with Size => 64;
these lines look really interesting btw.

Do you use Ada in some serious projects? How is it going?

Go can probably compete with Python and many other "good" languages on technical merits. But adopting it early, before it has a comparable ecosystem, is stupid unless it's a job requirement.

Adopters and evangelists of trendy languages are why we have retards out there running npm install TROJAN or curl VIRUS_URL | bash

WHERE IS THE CODE MOTHERUCKER??
REEEEEEEEEEEEEEEEEEEEEEEEEEE

package main

import (
    "fmt"
    "io"

    "golang.org/x/exp/mmap"
)

func must(err error) {
    if err != nil {
        panic(err)
    }
}

func main() {
    rd, err := mmap.Open("benis")
    must(err)
    defer rd.Close()
    var off int64
    var result uint64
    buf := make([]byte, 1
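For completeness, here is a plain, unoptimized Go take on the challenge (simple chunked reads, no mmap, assuming the same "benis" test file); treat it as a baseline sketch rather than a tuned solution:

package main

import (
    "fmt"
    "io"
    "os"
)

func main() {
    f, err := os.Open("benis")
    if err != nil {
        panic(err)
    }
    defer f.Close()

    // Read the file in fixed-size chunks and sum every byte value.
    buf := make([]byte, 1<<20)
    var sum uint64
    for {
        n, err := f.Read(buf)
        for _, b := range buf[:n] {
            sum += uint64(b)
        }
        if err == io.EOF {
            break
        }
        if err != nil {
            panic(err)
        }
    }
    fmt.Println(sum)
}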

i ran all of those:
rust: 81 ms
python: 260 ms
go: 207 ms

get rekt faggots. rust is the best

I'm not using it for anything serious at the moment, but I found it really nice to work with. The syntax can be a bit wordy, but there are good reasons for that. Too bad the board can't handle it; the ' throws the syntax highlighting off.
And it has tasks, exceptions, real-time, and systems programming.
And it should be able to do OO, although I still need to look into that.
The whole concept of splitting your code into a specification and a (hidden) body is also really nice.

What's the state of tooling (IDE support, etc.) if we compare it with mainstream shit like Java and Python?

Is this thing libre.adacore.com/tools/gps/ any good?

GPS is the only IDE available, I think, unless you want to go the Eclipse way. But it runs on all platforms and has more or less what you expect.
The community is a lot smaller, although there are some really nice projects out there. And you can target things like 8-bit AVR and ARM Cortex boards, which people expect to be the sole domain of C.

0.078 seconds vs previous 0.123
still pure python
from time import perf_counter
import gc

import numpy
from numba import jit, uint8, int64


def measure(fun):
    print(fun.__name__)
    t1 = perf_counter()
    result = fun()
    t2 = perf_counter()
    print(result)
    print('time = {} seconds'.format(t2 - t1))


# noinspection PyTypeChecker
def main():
    gc.disable()
    measure(sum_nb_buffered)


def for_each_chunk(filename, chunk_size, fun_map, fun_reduce, acc_start):
    with open(filename, 'rb') as f:
        result = acc_start
        buffer = bytearray(chunk_size)
        while True:
            read_size = f.readinto(buffer)
            if not read_size:
                break
            result = fun_reduce(result, fun_map(buffer, read_size))
        return result


@jit(int64(uint8[:]))
def calc_sum_nb_loop(a):
    result = 0
    for i in range(a.shape[0]):
        result += a[i]
    return result


def sum_nb_buffered():
    return for_each_chunk('benis', 2 ** 18,
                          lambda ch, size: calc_sum_nb_loop(numpy.frombuffer(ch, dtype='uint8', count=size)),
                          lambda acc, r: acc + int(r),
                          0)


if __name__ == '__main__':
    main()

Nicely done. I tried to run it myself but I could not get numba to build.
So @jit basically optimizes the summing loop for you? That seems nice.
I've got my (Ada) code working using mmap instead of streams, which resulted in a nice reduction in the sys time.
Not quite there yet, but I might have a trick up my sleeve...

on my machine:
your ada: 0.407s (measuring the full run from the shell)
vfx forth: 0.334s (same)
gforth: 0.677 (same)
iforth: 0.329 (just summing bytes) & 0.467 (slurping and summing bytes)
gforth -e 's" benis" slurp-file : sumbytes 0 -rot bounds do i c@ + loop ; sumbytes . bye'
assuming 64-bit gforth,
: slurp ( c-addr u -- c-addr u )
  r/o open-file throw dup file-size throw abort" too big"
  dup allocate throw locals| a n fid |
  a n fid read-file throw n abort" failed to read entire file"
  fid close-file throw a n ;
: sumbytes ( c-addr u -- u ) 0. 2swap bounds do i c@ 0 d+ loop ;
: go ( . . . . ) 4drop s" benis" slurp sumbytes d. bye ;
' go is EntryPoint
save main2
last for vfx, but it's basically the same in other forths - you just don't need the GO , and when 64-bit can use the gforth version of SUMBYTES instead of this one.

I couldn't get any of the python versions to even work.

on the same machine 's go does it in 130ms

geez.

I can get 61ms with just sumbytes, or 200ms with the previous slurp, or 150ms with normal file I/O and letting iforth mmap it behind the scenes.

: sumbytes ( c-addr u -- u )
  0 LOCAL sum
  begin dup cell > while
    swap @+ dup $FF and
    over 8 rshift $FF and +
    over 16 rshift $FF and +
    over 24 rshift $FF and +
    over 32 rshift $FF and +
    over 40 rshift $FF and +
    over 48 rshift $FF and +
    swap 56 rshift + +to sum
    swap cell -
  repeat
  begin dup while over c@ +to sum 1 /string repeat
  2drop sum ;

: slurp-sumbytes ( c-addr u -- u )
  r/o open-file throw LOCAL file
  0
  begin pad 4096 file read-file throw dup while
    pad swap sumbytes +
  repeat
  drop file close-file throw ;

timer-reset s" benis" slurp-sumbytes .elapsed cr . bye

1. do very easy naive Forth solution
2. faster than Ada but more than 2x slower than Go
3. apply obvious low-level optimizations, get within 10-20ms of Go (can shave off a little more by summing buffers larger than 4K at a time).
4. it's still obvious where I'd go next for more speed (direct mmap use; SSE2 instructions)
vs. Go, where it's magically fast on this particular benchmark, and if it's not fast enough then wtf, I dunno, try something else.

saging cause this is enough about Forth in a go (and unfortunately python) thread

because you didn't install dependencies? then that's quite an expected outcome

What compiler setting did you use? I go from 570 ms with no optimization to 160 ms with -O1 and -O2, and further down to about 115ms with -O3.

ah. 140ms with -O2


every language has its own completely incompatible dependency resolver. I learned and then dropped Python before it adopted += assignment, so I don't know Python's. I still think it's interesting that none of the Python posted here works with a bare Python install. Not 'batteries included' anymore is it? Or if you're writing benchmark-dialect Python, you don't even bother trying to use the core language?

Why would I if there are tools like numpy?

so core python is useless?

golang? more like gulag amirite?

HAHAHAHAHA
good one reddit
anyway rust is the best language

...

so your brain is useless?

let me guess you also think perl is shit because it is too difficult for your pajeet brain to comprehend?

where?
perl has shit syntax, almost like PHP, which is shit (I hope you won't try arguing with that, it would just be stupid). also, performance is crap, and unlike Python there's nothing like Numpy or Cython to fall back on.
latest perl also has undecidable grammar (like C++) which is of course bad because it won't have any reasonable tools. inb4 i write all code in notepad hurr durr

Did you just say Perl has worse performance than Python?

With this one small tweak this guy is using Ada to sum bytes at incredible rates and basically you are retarded. :)
real    0m0.075s
user    0m0.060s
sys     0m0.004s

Perl is basically a mix of C, Lisp, and awk/shell. Oh and it has its own regex, more powerful than the POSIX ones. But otherwise, it's pretty straightforward and not much in there is really original or new.

They're completely different. Go has great concurrency, good performance. It's meant for writing services. Python has no concurrency (GIL lol) or performance, it's used for data science/ML (calling C libraries), simple scripts, tests and by people who can't code. It has a pretty big webdev community. Shit performance often doesn't matter there.

There's nothing more infuriating than running a Python program for an hour and getting an AttributeError at the end because you made a typo. Python needs tests for everything, and Python code with no tests will quickly stop working. Go code won't break so easily.

I use Python for TensorFlow, matplotlib, etc. and C++ for the rest. If Go had simple generics, sum types and pattern matching I would like it. Rust is cool but the lifetime system is a bit too complex (good luck teaching that to some code monkey) and AFAIK it has no libraries.

Core python is bad at being sanic fast. You could easily implement this without using third party libraries, but its speed wouldn't be competitive with fast programs in the other languages posted.

If you want a fast Python program you need to move the bottleneck to a fast language, usually by using a library that isn't pure Python.
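For a concrete idea of the gap, here is a rough sketch of both approaches for the byte-summing task from earlier in the thread (assuming the same 128 MiB file named 'benis'; the function names are mine, not from any post above):

# stdlib-only version: correct, but expect it to be an order of magnitude (or
# more) slower than the numba/numpy code posted above.
def sum_bytes_pure(filename, chunk_size=2 ** 18):
    total = 0
    with open(filename, 'rb') as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            total += sum(chunk)  # iterating a bytes object yields ints in Python 3
    return total

# Library version: the loop runs inside numpy's C code instead of the interpreter.
def sum_bytes_numpy(filename):
    import numpy
    data = numpy.fromfile(filename, dtype='uint8')
    return int(data.sum(dtype='uint64'))  # explicit accumulator so it can't overflow

if __name__ == '__main__':
    print(sum_bytes_pure('benis'))
    print(sum_bytes_numpy('benis'))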


He said that Perl and Python both have crap performance by default but at least Python has libraries and dialects to compensate for it when needed.

This is also not really true: the PyPy implementation can deliver really good speeds, and it's exactly that, an alternative implementation of the standard Python language.

Cython/Numba are just even faster for straight number crunching, so I didn't bother posting PyPy timings. But I guess PyPy will be faster than Perl most of the time (given comparable effort for the codez in both).

Protection of the codebase from code monkeys is a good thing.
There are quite a lot of libraries and even some really interesting ones, but not as many as in Java™ or Python.

Nobody said it's too hard to write. It's just shit.

some random comparison where, among others, PyPy and Perl are tested.
blog.famzah.net/2016/02/09/cpp-vs-python-vs-perl-vs-php-performance-benchmark-2016/
Perl is 14 times slower than PyPy

More up to date version
blog.famzah.net/2016/09/10/cpp-vs-python-vs-php-vs-java-vs-others-performance-benchmark-2016-q3/
now Perl is 16 times slower than PyPy

...

Who cares? I don't use Perl when I want raw speed. It's mainly a tool for sysadmin tasks, web dev, database interface/reports, and such.
Next someone is going to tell me PL/SQL is slower than Python. Or Tcl is slower than Go. Well I don't give a shit, because I know when not to use them.

It's a highly optimized implementation of core Python. Is there a highly optimized implementation of core Perl you'd want to compare it to instead? This is one program they used, written in the core Python language:
raw.githubusercontent.com/famzah/langs-performance/master/primes.py

Compare normal perl to normal python for accurate results

Why should your arbitrary choice of what "normal python" is be used, if PyPy doesn't even require changing your program?

is poopoo installed by default?

What do you mean? Installed by default where?

on Windoze (for example) none of them is installed by default, not even C
probably there are more distributions which don't have them either.
so it's not an argument

and if you look at repositories, many of them do have pypy

who cares about your lame excuse? you (or someone similar) said that Perl is faster than Python. I simply countered that with real evidence. What to use or not to use is a different topic, and nobody said that Python or Perl are to be used for everything on earth.

what does "core Perl" mean?
You want to say that every language has only 1 "real" implementation and all others are somehow "fake"?
Like, Clang is fake? ICL, MSVC — fake?
Standard ML has more than 5 useful implementations and only one of them is real? (which one then?)
What stupid horseshit.

added results for ada:
rust: 81 ms
python: 260 ms
go: 207 ms
ada: 173 ms

hmmmmmmmmmmm maybe the reference perl interpreter?
he never used those words bro
are you a stupid nigger?

lol

who said that only reference implementation's performance does matter?

nobody you retarded nigger

not an argument
rust is still the best

microbenchmarks aren't arguments either

there is nothing micro about 128MiB :^)

kek

it's trivial to increase it to any greater size, btw

#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>
#include <inttypes.h>

#define BUFSIZE 128*1024*1024

int main(void)
{
    int16_t f = 0;
    uint8_t *p = NULL;
    uint64_t i = 0;
    uint64_t x = 0;

    if ((f = open("shit.bin", O_RDONLY)) < 0) {
        perror("shit.bin");
        exit(EXIT_FAILURE);
    }
    if ((p = mmap(0, BUFSIZE, PROT_READ, MAP_SHARED, f, 0)) == MAP_FAILED) {
        perror("mmap");
        exit(EXIT_FAILURE);
    }
    for (i = 0; i < BUFSIZE; i++)
        x += p[i];
    printf("%" PRIu64 "\n", x);
    munmap(p, BUFSIZE);
    close(f);
    return EXIT_SUCCESS;
}

% dd if=/dev/urandom of=shit.bin bs=1M count=128
% gcc -Ofast map.c -o map
% time ./map
17112286902
./map  0.03s user 0.00s system 99% cpu 0.032 total
too wasted to write it in assembly

edit:
#pragma omp parallel shared(x)
#pragma omp for reduction(+:x)
for (int i = 0; i < BUFSIZE; i++) {
    x += p[i];
}

% gcc -std=c11 -fopenmp -Ofast -o map map.c
% time ./map
17112286902
./map  0.06s user 0.00s system 572% cpu 0.011 total

And here I thought I'd set the new lowest record with muh Ada MultiTasking, only to find we all got btfo by C and compiler magic.
How does that magic work btw? I just made two tasks that each do half of the file.
real    0m0.051s
user    0m0.056s
sys     0m0.016s

en.wikipedia.org/wiki/OpenMP
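Short version: the parallel for splits the loop iterations across a pool of threads, and reduction(+:x) gives each thread its own private partial sum that gets added back into x when the loop ends. A conceptual sketch of that split-and-combine idea in Python (this is only an analogy, not how OpenMP is implemented; it uses multiprocessing to sidestep the GIL, assumes the same 'shit.bin' test file, and the worker count is arbitrary):

import os
from multiprocessing import Pool

FILENAME = 'shit.bin'  # same test file as the C version above (assumed)

def sum_slice(args):
    # Each worker sums its own slice of the file and returns a partial result.
    offset, length = args
    with open(FILENAME, 'rb') as f:
        f.seek(offset)
        return sum(f.read(length))

def parallel_sum(workers=4):
    size = os.path.getsize(FILENAME)
    chunk = (size + workers - 1) // workers
    slices = [(i * chunk, max(0, min(chunk, size - i * chunk))) for i in range(workers)]
    with Pool(workers) as pool:
        # Adding up the partial sums is the moral equivalent of reduction(+:x).
        return sum(pool.map(sum_slice, slices))

if __name__ == '__main__':
    print(parallel_sum())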

i updated github.com/InaneBob/sum_bytes accordingly

rust single thread: 110 ms
rust multiple threads: 74 ms

c single thread: 133 ms
c multiple threads: 93 ms

rust is the fucking best

lol

"this one thing takes $ms on my machine" is almost completely worthless. At least have two different things, so that you're not just showing off how you brought a desktop to a laptop fight.

www.rust-lang.org/en-US/conduct.html
archive.is/ennYu

not an argument


what are you talking about?

It's duct tape, and sometimes you need that.
If all you do is ivory tower stuff in tightly controlled environments where there's no stress or deadlines and you're not exhausted from overtime and shitty management decisions, then I guess you can hate Perl. But it has served me well.

I was laughing because I saw that you had a different loop than the C variant. Or have you also modified the C loop to sum 16 bytes at a time?

I haven't followed this thread so idk if the C guy is then supposed to improve his or however this shit works.. but anyway, that was my "lol".

i posted my version nearly a month ago.
not my fault when he brings a laptop to a desktop fight

I don't see why Python can't be used for all tasks that you use Perl for.

man perlrun
-a
-F
-l
-n
-p
-i

Python+awk+sed can replace Perl. Ruby can replace Perl. Python by itself is like having half of the multitool.

all they do is trivially doable from Python unless you are some kind of disabled person
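For what it's worth, the -n/-p style read-transform-print loop those flags give you is a few lines of stdlib Python (a rough sketch only; the s/foo/bar/ substitution is just a placeholder, and yes, the Perl one-liner is terser):

import re
import sys
import fileinput

# Roughly `perl -pe 's/foo/bar/'`: loop over the lines of the files named on
# the command line (or stdin if none are given), transform each line, print it.
# fileinput.input(inplace=True) covers perl's -i in-place editing, and
# line.split() gives you -a style autosplitting into fields.
for line in fileinput.input():
    sys.stdout.write(re.sub(r'foo', 'bar', line))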

not an argument

Are you saying you didn't fix the invariant and compile and run all the programs on your own machine?

yes

Can you name a few things you can do with python+awk+sed but not (easily/properly) with just python?

1: i can use awk instead of python
2: i can use sed instead of python
3: i can use awk and sed instead of python

answer to that question dude

Once you know one of these scripting languages well, it's a waste of time to switch to another. I've been using Perl since the 90's and have no desire to switch just because something is currently popular. I managed to avoid wasting time on PHP, and I'll avoid Python as well. So long as Perl works adequately, I'll stick with it.
Instead I've been playing with Forth. That's different enough to be interesting.

so you're saying that possibility of other people reading your code is not important?

FUCKING DROPPED

golang is a toy language supported by idiots and 9front hipsters

python is used for actual work and is the best choice within its niche

C isn't going anywhere, neither is python, Go is only going to the dustbin :^)