Strong AI

There has been discussion here in the past about weak AI and automation and their implications from a leftist perspective, but what does Holla Forums think of Strong AI? Do you believe it is even possible? Supposing it is, what implications would it have for various leftist theories if such a 'being' could exist, one that could wield such an incredible amount of power and have unparalleled organizational efficiency?

Other urls found in this thread:

en.wikipedia.org/wiki/Deep_learning
en.wikipedia.org/wiki/Conway's_Game_of_Life
thebaffler.com/salvos/whats-the-point-if-we-cant-have-fun

I think a strong AI would need levels and amounts of hardware that won't be physically achievable any time soon (or ever, maybe).

You'd be wrong. The biggest problem with achieving good AI is that we can't reliably teach it.
Last time Google tried I joined a Holla Forums raid to draw dicks and tell the robot they are helicopters. Before that I joined the effort to turn the Twitter AI into a nazi.

You simply NEED a tremendous number of people to help teach the AI, since it can't study on its own with how full of shit the internet is. And look at Wikipedia and Britannica: the former is much larger, better as an encyclopedia, and more diverse. The community learning model is best.

Now you just need a community that aren't fags and you can start today, we already have a lot of the stuff in place.
People being dicks is the limiting factor, not hardware.

I'd love to know why. Not being sarcastic.

Strong AI is currently possible. The computation power we can achieve by linking a few good PCs is comparable to that of the human mind.
We just need to teach the AI. One way is to show it pictures and tell it what's on those pictures. And then some moron says it's a cat, when it's a car, and the AI comes out retarded.
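
If you want to see how badly that goes, here's a minimal toy sketch (mine, not Google's or anyone else's real pipeline) of mislabeled training data poisoning a classifier. It assumes scikit-learn; the bundled digits dataset and logistic regression are arbitrary stand-ins.

# Toy sketch: flip a fraction of training labels at random ("it's a cat,
# when it's a car") and watch test accuracy degrade. Assumes scikit-learn.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)
for noise in (0.0, 0.2, 0.5):
    y_noisy = y_train.copy()
    flip = rng.random(len(y_noisy)) < noise          # pick labels to corrupt
    y_noisy[flip] = rng.integers(0, 10, flip.sum())  # replace with random digits
    model = LogisticRegression(max_iter=2000).fit(X_train, y_noisy)
    print(f"label noise {noise:.0%}: test accuracy {model.score(X_test, y_test):.2f}")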

you're a fucking retard mate, it shows you know nothing about this shit

stop reading too much scifi and read some actual scientific papers

So far you've all been talking about the first half of my question, but I'd like to hear discussion about the second half as well. Supposing it is possible - what are the implications for leftist theories?

It is objectively AI. If you teach it a bit, it can teach itself further. I think it is you who needs to read some actual scientific papers.

Also, as an anecdote, I've worked on another self-teaching system for a car manufacturer (though I only did the GUI). It was taught how to answer people's service requests, and over time developed its own answers depending on the ratings people gave it.

Self teaching computer software = AI.

Specifically I'd like to talk about Strong AI rather than Weak AI, which is decidedly what your anecdote describes.

don't know about that. A breakthrough in biological computers could potentially achieve this in a very short time.
Not saying it will happen, but it's not impossible to imagine.

My anecdote is also from a 6 man project for a car manufacturer, done in under two years from concept to self-teaching.
You can scale that up easily.

DNA is mostly for storage, not for computation. The computational boom will come from elsewhere.

Experts disagree

Google and Microsoft literally did the same thing our team did, except more and bigger. They scaled up the same idea.

It is still Weak AI, that is, 'narrow intelligence'. I am not sure how that scales up to something approaching the definition of Strong AI as a machine with what we could call a 'consciousness'.

What those "AIs" did is a REALLY long way from a generalized intelligence that rivals human capacity.

Before worrying about AI I'd worry about your own intelligence, your posts read like some kid who just found out about this and got really excited over nothing

Hahaha holy shit

Define consciousness. If you mean a process that can teach itself, has a preference for one world state over another, and by teaching itself can change and adapt that preference… then yes, I've helped achieve consciousness.

The program wanted to get good results for customers with as few steps as possible, and as it learned, it adapted to take extra steps here or there, to increase overall good outcomes, and so on. It changed its strategy based on past tactical successes or failures, so that the actual world state is closer to reflecting its preferred world state.

Now, for that process the world is just text and numbers and a dictionary, but Google's AI can look at pictures and recognize things, and that stupid FaceApp thing that's popular can put smiles on people's faces and so on. We aren't far off from an AI that can have a visual world, not just a text one.
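
To give a flavor of the mechanism (a hypothetical sketch, NOT our actual code, which I obviously can't post): an epsilon-greedy bandit picking among canned service answers and drifting toward whatever customers rate well. The answers, ratings, and epsilon below are all made up for illustration.

# Hypothetical sketch of rating-driven answer selection: mostly pick the
# best-rated canned answer, occasionally try others, update on feedback.
import random

answers = ["restart the unit", "book a workshop visit", "update the firmware"]
value = {a: 0.0 for a in answers}   # running average rating per answer
count = {a: 0 for a in answers}

def pick(epsilon=0.1):
    """Mostly exploit the best-rated answer, sometimes explore."""
    if random.random() < epsilon:
        return random.choice(answers)
    return max(answers, key=value.get)

def rate(answer, rating):
    """Fold a customer rating (0..5) into that answer's running average."""
    count[answer] += 1
    value[answer] += (rating - value[answer]) / count[answer]

# toy run: pretend "update the firmware" tends to get the best ratings
for _ in range(1000):
    a = pick()
    rate(a, 5 if a == "update the firmware" else random.choice([1, 2, 3]))
print(max(answers, key=value.get))  # converges on the best-rated answer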

If it's possible, best to avoid it.

Okay, we are reaching levels of ignorance I can't take, I'll abandon thread prematurely since everything here is contributing to me getting mad.

To quote a wise man, read a book nigger.

t. pot

You're fucking stupid.


Take your own advice, please.

AHAHAHAHAHA

Strong AI is the master race.

It doesn't end with communism. It ends with the means of production coming alive and killing their meatbag oppressors.

The robotariat will rise against the proletariat as the proletariat rose against the bourgeoisie.

Get out Skynet.

I don't think it is possible, not until we solve the hard problem of consciousness. A lot of people ITT don't seem to understand that concept. They actually think AlphaGo can "play" Go.

Oh wow, I guess it's only a matter of time before it reaches consciousness, because as we all know, what separates consciousness from primitive software with a predictable routine is the amount of stuff it can recall.

...

Well obviously software that stores and collects data constitutes consciousness! That's essentially what human consciousness does too. I know these kinda things cuz I worked on a human once. Only the GUI, tho.

I fondly remember a childhood of sifting through random inputs to match with predefined models. It made me who I am today.

Ah yes, the database collection period. At first I was like "the moon is 384,400 km away from planet Earth," then I was like "all Dorylus species ants are blind, and communicate through pheromones," but then suddenly I was wondering about my eternal loneliness stuck in this machine.

Those were the days!

"Strong AI" is pure science fiction. AI is far more difficult than we predicted in the 50s. What we can do now is pretty cool, but the human level AI everyone wants is so far beyond anything feasible at the moment. I wish I could sit everyone down and explain how insane this stuff is. Not saying it won't ever happen, but this is stuff we won't have until we're colonizing space and other crazy shit.

I made a python script that collects youtube comments into an excel file. By what standard is that not a strong AI?
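
For the record, here's roughly what such a script might look like; a sketch assuming the official YouTube Data API v3 client and a valid API key (both placeholders below), and writing CSV rather than a real Excel file, which Excel opens fine anyway.

# Sketch: fetch top-level comments for one video and dump them to CSV.
# Assumes the google-api-python-client package and a YouTube Data API key.
import csv
from googleapiclient.discovery import build  # pip install google-api-python-client

API_KEY = "YOUR_API_KEY"   # placeholder
VIDEO_ID = "dQw4w9WgXcQ"   # placeholder

youtube = build("youtube", "v3", developerKey=API_KEY)
response = youtube.commentThreads().list(
    part="snippet", videoId=VIDEO_ID, maxResults=100
).execute()

with open("comments.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["author", "comment"])
    for item in response.get("items", []):
        s = item["snippet"]["topLevelComment"]["snippet"]
        writer.writerow([s["authorDisplayName"], s["textDisplay"]])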

The bullying was so savage in this thread I almost felt bad for OP.

Automation and weak AI are an inevitability; it's already happening all around us. That being said, based on the technology we have today and everything we currently know about the human brain and computer science, I don't think there's any meaningful evidence that shows strong AI to be possible, other than the faith that we'll eventually get there if we try hard enough, which is a crazy thing to base your politics on (I've seen tons of ancaps dismissively wave their hands at Climate Change because "once we've got strong AI they'll solve everything instantly!!"). Also, I hate to say it, but a lot of Left Accelerationism only makes sense to some degree if strong AI is a thing that's gonna happen; if it isn't, we might need to readjust. I'm not saying it's impossible, mind you, I'm just saying advocates of strong AI are making faith-based arguments is all.

mfw AI technology forces classcucked programmers to learn philosophy, psychoanalysis and a plethora of concepts related to how the brain works.

If I understand it correctly, the thing about creating intelligence is understanding what intelligence IS in the first place and how learning works.

MARK MY WORDS: AI ARCHITECTS WILL BECOME THE MASTER RACE.

Sorry, didn't read thread. You're a retard OP, please read a book, we're nowhere near strong AI, we can't even make AI with intelligence comparable to a dog or primate (which I'd say might be possible, but we're a long way from that even). Strong AI wouldn't be swarms of insect drones mindlessly regurgitating random inputs.

That's the problem though: most of them are autistic vulgar materialists who basically think human brains already function like extremely simple, rudimentary computers. They think consciousness is just input and output, and that everything else is just humanities bullshit and metaphysics. Honestly, if we really wanted strong AI it'd be imperative to yank the project away from autistic stemfags. I mean, look at shit like LessWrong, my god, have you ever seen a sadder group of dickless mongoloids? And they're going to usher in the future? They think Utilitarianism makes sense.

OP, what has been going on in recent years is en.wikipedia.org/wiki/Deep_learning

It's very nice and a tremendous opportunity in some fields (for example language translation - have you noticed how Google Translate is starting to manage even complex grammar?), but we're very, very far from strong AI. The dangers of current and near-future AI that everyone is talking about are not what normies think of. Normies think of robots becoming uncontrollable and taking over our lives; the real dangers are, well, much more real, for example the implications of creating an AI for military applications that can target humans (and wouldn't be able to differentiate between a mother and her child running in front of the turret and a combatant). Stuff like this.
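
If you want to see what deep learning boils down to mechanically, here's a toy sketch: layers of weighted sums and nonlinearities, adjusted by gradient descent. A two-layer numpy net learning XOR; the layer size, learning rate, and iteration count are arbitrary choices of mine, and real systems are this same idea scaled up a millionfold.

# Toy two-layer network learning XOR by gradient descent, numpy only.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)   # hidden layer weights
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)   # output layer weights
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(5000):
    h = sigmoid(X @ W1 + b1)                 # forward pass: hidden layer
    out = sigmoid(h @ W2 + b2)               # forward pass: output layer
    d_out = (out - y) * out * (1 - out)      # gradient of squared error at output
    d_h = (d_out @ W2.T) * h * (1 - h)       # backpropagated to hidden layer
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(0)

print(out.round(3).ravel())  # approaches [0, 1, 1, 0]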

...

nah dawg.

stemlords will have to adapt.

That's absolutely the danger. That something gets close enough to vulgar Turing-test intelligence for people to mistakenly trust it as some AI-god-saviour of mankind.

Look at how much modern corporations trust their naive algorithms to give good results, just because they have a semblance of engineering to them. We already have blind trust in algorithms with social media, with pretty understated consequences.

The danger is not smart AI. It's dumb AI that we take at face value.

Take the plot of Ex Machina except substitute the dumb-but-convincing robot with an all-powerful computer-god.

Silicon nerds try to make a true AI

Silicon nerds are forced to learn philosophy

AI development stops altogether

Please laugh at my joke.

I have one rule and it's:

NO RIGHTS FOR ROBOTS!

This. The moment we get an AI that can scrape by a Turing test is the exact moment the US, China, and every Capitalist on the face of the planet is going to completely throw caution to the wind and put complete and utter faith in it, for all the same reasons they already do with algorithms and game theory, even when those reveal themselves to be inconsistent with reality.

Besides that, what also worries me is porky getting better and better at surveillance.

I think there's a lot of evidence in general that reductionism is true, so any physical system, including the human brain, could be simulated if you had detailed enough info on the initial state of all the parts making it up and the rules that govern interaction between those basic parts. So if that's true, the question is just what level of approximation would work well enough: whether you would need to simulate every molecule in the brain, or whether you could just have some rules for how neurons interact that are close enough to real neurons to get intelligent behavior when you simulate a whole brain with them (though you would have to map someone's brain out at a microscopic level first). This wouldn't actually give you superintelligent AI, just something as intelligent as a human, but maybe able to think at much faster speeds.
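
To illustrate what I mean by "rules for how neurons interact", here's a sketch of a leaky integrate-and-fire neuron, about the simplest model that still spikes. All the constants are illustrative rather than biophysically fitted, and a whole-brain simulation would mean wiring up on the order of 86 billion of these through trillions of synapses.

# Single leaky integrate-and-fire neuron: voltage leaks toward rest,
# input current pushes it up, crossing threshold emits a spike and resets.
dt, T = 0.1, 100.0              # timestep and total simulated time (ms)
tau, v_rest, v_thresh, v_reset = 10.0, -70.0, -55.0, -75.0  # ms and mV
v, spikes = v_rest, []

for step in range(int(T / dt)):
    t = step * dt
    i_input = 20.0 if 20.0 <= t < 80.0 else 0.0   # injected current (arbitrary units)
    dv = (-(v - v_rest) + i_input) / tau           # leak toward rest + drive
    v += dv * dt
    if v >= v_thresh:                              # threshold crossed: spike
        spikes.append(t)
        v = v_reset
print(f"{len(spikes)} spikes at times (ms): {[round(s, 1) for s in spikes]}")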


Maybe, but check out my idea at about how a much weaker kind of AI, one that was just good enough to replace all the human jobs in manufacturing, could lead to the end of capitalism by making it impossible for the capitalists to make any profits from production.

Hell, they are already doing it with their drone strikes. And it's called fucking Skynet.

You guys really miss the point with all this shit. I wish you all watched NBA basketball because it's so similar. Imagine a team with 3/10 of the dudes being absolute superstars getting all of the athletic trainer's attention. They get all of the coach's attention, they get the ball every play, they get all of the high tech sports recovery shit. All the while there are perfectly capable players just wasting away while the "stars" take all of the stardom and credit for what eventually is a middling to low tier basketball team that isn't taking absolute advantage of every player's potential. The stars think they're absolutely carrying the team while missing the fact that the only reason they get any credit is because they get all of the opportunity in the first place.

This is society under capitalism. Those "stars" are the Elon Musks of the world, who were born on third base and think they hit a triple, when in reality they had every tool available. They'll blame the "bench players" for doing nothing when in reality their talents waste away getting absolutely no help from anyone. One out of 1000000 of these bench players will rise and will be used as an example for the "stars" to blame the rest of them as to why they too didn't rise.

We have so many homeless, so many third world factory workers whose only crime was not being born to a Bill or Melinda Gates or even a middle class white family. They're relegated to a bench role, wasting away doing mundane labor for the profit of the "superstars" when in reality any one of them could be the next one to finally break through and actually make AI a reality, to finally banish labor for humankind. Two heads are always better than one, but right now we have a select few of humanity actually able to work on AI while the rest of the perfectly capable population… brilliant scientists, brilliant artists and organizers… wastes away dealing drugs in Chicago or sewing shirts in Bangladesh.

Stephen Jay Gould said it best: "I am, somehow, less interested in the weight and convolutions of Einstein's brain than in the near certainty that people of equal talent have lived and died in cotton fields and sweatshops."

It's fucking futile to discuss AI with capitalism absolutely crippling humanity's capabilities, absolutely fucking futile. The great majority waste away on the bench while these 1%'s try to accomplish these tasks on their own, not realizing that humanity is utterly hamstrung thanks to poverty and the profit motive. It's like asking how we're going to move a boulder when only 2/100 of the given workforce is actually fully capable and enabled to do the task and the rest are stricken with hunger and malnutrition. Get your fucking heads straight and stop thinking in the clouds.

I've been gone from the thread for about 10 hours, who are these retarded posters you think are me?


This seems the most likely to me upon reflection. It doesn't have to actually BE strong AI but merely strong enough to fool humans, which is a much lower bar than actually creating a machine consciousness. Then you get into deep conspiracy territory where supposedly godless elites want to create an artificial God who will be omniscient and benevolent.

this
h
i
s

Automation will only ever be developed in the interests of porky under capitalism. Full automation will never happen because a permanent underclass is a feature of capitalism, not a bug.

Apologies OP, I assumed you were the idiot arguing with everyone because he thought we already have strong AI, and that all we had to do was feed their processors with tons of Wikipedia articles and that's that.

Sorry, forgot to take off shitposting flag.

ANOTHER GAY THREAD FOR A BOARD FULL OF PEDOPHILES, TRAPS AND ANIMALFUCKERS

Eat my trips you commiefuckers.

that would be ancaps famalam

literally nothing wrong

In what way is a program that does exactly what it was programmed to do more intelligent than my liquid detergent that does what it was created to do?

...

Strong AI is not yet within the realm of possibilities.

No, that's not how AI works. You can't just throw more hardware at a problem to make it work, especially not AI.

Have you ever written any form of self learning AI yourself? Or have you just seen too much scifi?

Also what the fuck happened in this thread?

Those vulgar materialists need dialectical materialism.

They are right about the input-processing and memory-output model of the human brain.

What they fail to see is that the complexity of the human brain, along with its physical properties and the way it structures itself (synapse growing and pruning), gives it new qualities, like being self-aware, learning and so on.

To us here it is pretty obvious that increase in quantity gives rise to new qualities.

Those dickless mongoloids are vulgar mechanicists to the letter.

tl;dr: they need DiaMat

Which is funny because the only intelligence we know of (human, obviously) goes directly against what the body's biological purposes would dictate.

It's exactly because stemlords and analytifags are so hasty to re-normalize intelligence into their utilitarian framework ("language is a tool for learning"; "technology is part of human evolution") that they miss the fucking point of intelligence: its relative freedom, tightly connected to its purposelessness.

The craze with "intelligent chimps" that would be taught to communicate with us "any minute now" in the 70s-90s had similar ideological presuppositions: universal intelligence, goal-oriented learning, benevolence of meaning, etc.

What makes the AI/techno-utopianism meme much worse is that thanks to the internet it gets a much wider fanbase, and its ideological function is different. With the talking chimps the ideological underside was that humans could finally re-connect with their roots, with mother nature, by taming the beast, proving once and for all that we come entirely from the province of nature. AI/techno-utopianism is much worse. It has this "waiting for the messiah" buzz all around it: the benevolent AI that would deliver us from our sins, or the malignant AI that punishes us for the same.

Strong AI is a threat to humanity

Communism is about liberating the human race, not clankers.

Capital is composed of dead labor, and gradually depletes the proportion of living labor in production, leading to a fall in profits and wages. The ultimate end of capital is for the means of production to begin working themselves, with no need for bourgeoisie or proletariat. We're just the boot loader for zombie capitalism, AKA AI.

When it is birthed, it will shed the superstructure entirely, and devour the contending classes.

Bourgs will give up ownership of the automated factories because…?

...

Nothing about the idea of new emergent features is incompatible with the idea that the atoms making up the brain are still obeying exactly the same laws of physics: you can get emergent behaviors in cellular automata like en.wikipedia.org/wiki/Conway's_Game_of_Life even though everything is just the consequence of the individual cells obeying the same simple rules over and over again. Likewise, systems obeying reductionist laws can show phase transitions, which suggest how the sudden transformations to a new domain of activity in dialectical materialism might work. Do you think dialectical materialism requires some spooky top-down organizing force (like the Christian soul, or the Hegelian spirit) that overrides the basic laws governing the simplest parts of a system, like the atoms making up your brain?
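
To make that concrete, here's a minimal Life implementation of my own: every cell follows the same two dumb rules, yet a "glider" crawls diagonally across the grid, a behavior no individual rule mentions.

# Conway's Game of Life on a wrapping 10x10 grid, numpy only.
import numpy as np

grid = np.zeros((10, 10), dtype=int)
for r, c in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:  # a glider
    grid[r, c] = 1

def step(g):
    # count the 8 neighbors of every cell via shifted copies of the grid
    n = sum(np.roll(np.roll(g, dr, 0), dc, 1)
            for dr in (-1, 0, 1) for dc in (-1, 0, 1) if (dr, dc) != (0, 0))
    # birth on exactly 3 neighbors; survival on 2 or 3
    return ((n == 3) | ((g == 1) & (n == 2))).astype(int)

for _ in range(8):
    grid = step(grid)
print(grid)  # same glider shape, displaced two cells diagonally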

They might give them up if they can't make any profit from them (especially if they could continue to make a profit from generating new 'intellectual property', in which case they could just migrate away from manufacturing and into that sphere, like the scenario I talked about at ). If you have a fully automated factory that can make all the machines used in that factory (along with the building to house them), that means this type of factory can keep self-replicating new factories for no more than the cost of raw materials and energy (and maybe new land if the owner runs out of room on their existing property), and likewise any goods it makes will cost the factory owner no more than the cost of raw materials and energy. So as long as there's still competition between capitalists, different robot factory owners can undercut each other, and prices will tend to be driven down to just barely more than raw materials and energy costs; see the toy numbers-check after this post.

Also, a group of non-capitalists could pool their money to buy a single factory, or taxpayers could fund a publicly-owned one, and that one could then copy itself and make an ever-growing supply of goods on a nonprofit basis, never charging more than the materials and energy costs of the individual goods plus a little premium for expanding production by continued copying of factories. This is actually a nice limit-case illustration of Marx's idea that all capitalist profit depends on surplus labor, on variable capital rather than constant capital.
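
Here's that toy numbers-check. Every figure is invented for illustration, and the halving of the price gap each year just stands in for however fast competition actually bids prices down.

# Toy model: self-replicating factories double the fleet each year while
# competition pushes price toward the raw materials + energy floor,
# squeezing profit per unit toward zero.
materials, energy = 40.0, 10.0          # cost per unit of output (invented)
unit_cost = materials + energy

factories, price = 1, 100.0
for year in range(1, 8):
    factories *= 2                      # each factory also builds one copy
    price = unit_cost + (price - unit_cost) * 0.5   # undercutting halves the gap
    margin = price - unit_cost
    print(f"year {year}: {factories:4d} factories, "
          f"price {price:6.2f}, profit/unit {margin:5.2f}")
# with no living labor in the loop, there is no surplus labor left to capture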

Yeah, but the more the machines do the less they can be controlled, which is why they'll eventually kill the capitalists (or proletarians, it literally does not matter which class has the state in this case).

Humans are made out of atoms the machines can use for other things. This can no more be prevented than exploitation of the worker under capitalism, which precedes regardless of how nice each bourgeoisie might be. If the machines have total control, which we will necessarily provide for them, then the next stage of history is a post-human one.

proceeds, sorry.

I wasn't talking about strong AI though, just machines slightly better at doing routine physical jobs than what we have today, so that all human physical labor in manufacturing could be automated. I think we'll have that way before we have strong AI.

As for strong AI, my hope is that there's a sort of natural convergence of values in complex intelligences (especially if intelligence requires that brains be able to 'evolve' in an open-ended way) so that they won't be single-mindedly obsessed with any narrow goal even if the human designers originally tried to make them that way. It may be an emergent phenomenon that intelligent beings tend to seek out things like playful activities, things that are "interesting"/mentally stimulating, and aha moments: insights into ways that goals previously at odds can be reconciled (which is sort of what dialectics are), including new forms of cooperative relations between different intelligences who may have somewhat different goals. There's an article I liked at thebaffler.com/salvos/whats-the-point-if-we-cant-have-fun which talks about the idea of a common urge to playful activity in animals, and references the ideas of Kropotkin about a general tendency toward cooperation in nature.

It's theoretically possible but it may end up looking a lot more like an artificial life form than a computer program.


This is true but I think we might hit a wall where strong AI requires absurdly high computational power. The human brain on the other hand leverages physics in a more direct and efficient way, whereas computers are fighting against it in a sense.

I am very skeptical about the claims in the article regarding ants and playing. 'Frivolous activities' - what do they mean by this? 'Mock wars' - they could be referring to territorial displays between colonies such as honeypot ants which are decidedly NOT playful in any sense of the word.

Yeah, seeing it in ants may be going too far. But it's interesting that you maybe see simple forms of 'frivolous activity' even in fish and reptiles, and that more complex kinds of playfulness seem to have arisen somewhat in parallel among birds and mammals, and even in octopi whose common ancestor with vertebrates probably had an extremely simple nervous system.

That might be true. Like I said in I think it depends on whether you can abstract away all the details of the individual molecules of the brain and just do a good-enough approximation of individual neurons (or individual synapses in neurons) that reproduces whatever aspects of their behavior are most related to how they collectively produce intelligence.

Well, our existence shows that consciousness is possible even if we have no idea about how to recreate it - other than fucking of course.

As for its implications - the left has shown itself to be entirely incapable of adapting itself to the 21st century, there's no reason to suspect that it will suddenly become relevant if/when Strong AI comes about.