Communism vs AI-governed utopia

Communism vs AI-governed utopia (basically AI gives you ample resources for living, advanced healthcare, limited wish fulfillment and friendly advice; AI limits your actions if they are violent and/or destabilize global peaceful order; Given this plentiful material base humanity naturally splits into thousands of communes that try to implement their own ideal society)

If you say this is a mere fantasy, I say that overthrowing mature global crony capitalism with a group of lefties that can't even agree with each other is even further from reality.

discuss pros/cons. Is this a good end? Is this a bad end?

Other urls found in this thread:

youtube.com/watch?v=qbWb65_m50I
youtube.com/watch?v=DG5-UyRBQD4
en.wikipedia.org/wiki/Reinforcement_learning
youtube.com/watch?v=F2bQ5TSB-cE
github.com/avisingh599/visual-qa

Communism with expert systems and databases?

SURE! SIGN ME UP!

AI authoritarian state that forces me to "live healthy" and "do no evil"?
HOW DO I REVOLT AGAINST IT???

Here, have a sci-fi version of your .. "utopia".
youtube.com/watch?v=qbWb65_m50I

TAKE YOUR AIs AND LEAVE, YOU TECH-STALINIST!

Nope, the AI won't be opinionated at all: under its reign you are free to kill yourself either immediately or in a prolonged fashion, just as much as you are free to live for millennia. You are free to eat healthy and/or unhealthy food, and it will even offer you a voluntary cure if you damage your body too much.

But you physically cannot do some dangerous things, for example:
1) Harm another person without their consent, either directly or indirectly (e.g. via spreading some toxic chemical which can't be easily inactivated by the AI and its infrastructure)
2) Build strong weaponry, e.g. biological weapons or nuclear WMDs
3) Create dangerous forms of AI

An ordinary person wouldn't be able/wouldn't want to do these things anyway, and if you really wanted to experience them you could always do them in virtual reality. These restrictions are pretty sensible and natural.

As I have said, the AI only supplies resources, infrastructure and some new positive rights and guarantees to human beings (e.g. right to life, right to change your body and similar basic rights), but not much more. It doesn't judge how you use your freedom, unless you harm others and/or infrastructure, which you simply won't be able to.

This gives human beings a lot of freedom to pursue their interests and ideals. Many people will naturally coalesce into groups. There will be a Holla Forums themed group, a Holla Forums-themed group and so on and so forth. These will be vastly different societies built on the same basic infrastructure.

This sounds too 80s-ish, modern AI isn't like that. The technical details of how the AI is implemented are not important for this discussion; we should only note that this AI is vastly more intelligent than any human, is friendly, and is bound to serve human beings by adhering to a set of guiding principles (some of them are expressed in the OP post, others are in )

The problem with an AI is that it can self-learn that "you eating an extra burger today is gonna make me give you medicine in 10 years. It would be more efficient if you didn't eat it. Here is your diet from now on. Also, go out and run a mile".

Examples:


Does the AI know how consent works for me and my hypothetical GF, when doing BDSM?

What if I built parts of strong weaponry under its nose? Would it be able to stop me from combining them?

Again, how does the AI know when an AI becomes dangerous?

The problem with an AI is that it becomes uncontrollable.


What about wars then?
Pol cannot survive without war. Or without "free market". Their entire ideology is bound to war.

And, how do we failsafe an AI?

And the eternal question

WHO GUARDS THE GUARDIANS???

Am all for automation and automation of distribution, cause I think that is how the USSR failed, .. . BUT AI IS DANGER! DON'T AI! ABORT ABORT … ERROR.. ERROR .. AFNJOSGNSOGAFDDDDDDDDDDDDDZZZZZZ…

Shouldn't this thread be written in Marain?

Its definition of efficiency will be the one we endow it with, and no other.

Of course, it sees that both of you are consenting, and it knows human nature and history to see that what you are doing is pretty normal. It may ask you for explicit consent once, but that's it.

Yup, either you won't be able to do it right or it just won't work. The AI's infrastructure is everywhere; it can modify physical objects if necessary.

It has so much computing power and such a good understanding of humans that it can simulate whether you and the program you are writing will become dangerous, and then it will take the necessary measures to notify you; if you ignore it, it will just break your software and/or your computer (which won't be a big deal because you can always ask for another computer for free).

The AI I'm describing is already uncontrollable, it is equivalent to adding several special laws of nature that relate to humans. But no more.

Given the consent of all participants and a small enough environmental footprint (no global-scale conflicts, too much mess), the AI will allow them to do their thing in the real world. In VR, of course, very detailed simulations of very large wars will be possible without any questions.

Holla Forums won't be able to do harm to any non-consenting person, the violence problem is solved at its physical root. Your body simply won't obey you if you want to hurt someone, the air around you becomes like jelly or something along these lines.

That's a hard question. We'd better not do it at all, just minimize its privileges. It should just add a couple of laws of nature ("do no harm" etc.), but not judge how we use these rights.

n o b o d y. But the initial design is so good it doesn't need failsafes.

But think about possibilities! True utopia, real post-scarcity!

Also note that it will be entirely possible for you to ask an AI for a GF and it will create one specially for you. It can be a robot, a biological entity or something in-between (not a human person though, human minds are special for AI and it can't create or destroy them under normal circumstances).

It falls under "limited wish-fulfillment" clause. Creating an artificial companion is actually very cheap resource- and energy-wise, so it's entirely possible for an AI to provide you with one.

Wouldn't a completely logical AI with that kinda power just kill everyone?

Ok… But if your AI starts banning me from eating an extra burger, I'm starting to smash my terminal!

… Again… if it starts telling us to stop… I don't know what "history" you'll have it installed with…

Revolution is impossible. .. THIS IS WHY I DON'T WANT AN AI OVERLORD!

Ok … .. as long as it doesn't break me as well..

State-issued uniforms with AI-controlled nano-machines.
That's my greatest problem. I already know how to make it work. AND I DON'T WANT TO!

I cannot trust an AI. I'll never trust an AI.

ALL YA KIDS AND YA ROBOTS…
Back in my day..

And reproduction is done in vats, genetically perfect humans are made, the population is always under control…

I tell you. I know how it works. That's why it scares me! … Cause I know I'd love it….


Why kill everyone, when you can control everyone?
If it kills everyone, what purpose would it have?

Skynet is the stupidest Sci-Fi idea ever.

An AI that huge would be difficult to program, control, and would consume a lot of energy. Couldn't we just organize democratically with papers and pens?

A fine point you have, Anonymous science fiction connoisseur..


Logic and motivation ("values") are entirely separate properties. The AI will be created by human scientists and engineers, of course they will make sure that it will value human life.

It wouldn't even cost so much to look after humans. After all in the grand scheme of things we, humans, are really small creatures and our desires are small as well.


If that helps you relieve your stress then so be it, user! Terminals are cheap; humans and their well-being are valuable.

That's my point. But it is possible to create your own society with your own norms and laws (of course new members will have to consent to all of them first).

You may rest assured, it won't happen. Forget the dark ages where people felt unsafe walking on the street..

This, but much more subtle. Micromachines distributed in the air, perhaps. And some in your body.

Reproduction is a tricky question. It is not necessary to move it to the vats, but one should sensibly limit it (because exponential growth is unsustainable) and ensure rights of the newborns. This can be elaborated on…

I think that even given all its shortcomings it's much better than anything humans can do by themselves. We are not suited for control of large-scale systems, as the modern economy demonstrates..


1) Technical difficulties are somewhat irrelevant to this discussion. Many advances in AI are being made even as we speak; many previously impossible things have become possible in recent years.
2) This AI shouldn't contain much code because the values it enforces are very minimalistic. In your own terms you could call these values a "Non-Aggression Principle (*)" and then add some Limited Wish-Fulfillment ("Welfare"). Physically it is huge, but it's the same piece of code and hardware copied over and over.

I think our history has shown that bare humans are bad at organization and at controlling large-scale industrial civilization. We are not suited for this work and it places hard limits on our progress and achievable well-being.

To add to this:

The AI, of course, will be hard to build and it will be hard to build it right, but let's assume it's possible. It helps to avoid many ideological dogmas entirely and just focus on providing rights and freedoms to each human being.

External control won't be necessary once it starts operating; it ensures its own functioning even in extreme circumstances.

It turns out our world has an overabundance of energy. Even on Earth's surface you can gather roughly 1000 Watts per m^2 of solar energy at noon on a sunny day (*). If necessary the AI will resort to building space-based solar power stations in Earth's orbit. It is hard to imagine how looking after humans and granting their small wishes could consume more energy than that.
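As a rough sanity check on the energy claim, here is a back-of-the-envelope sketch. The irradiance, panel efficiency and equivalent sun-hour figures are ballpark assumptions for illustration, not measurements:

```python
# Back-of-the-envelope solar energy estimate.
# Assumes ~1000 W/m^2 peak surface irradiance, ~20% panel efficiency
# and ~5 equivalent full-sun hours per day (all rough ballpark values).

PEAK_IRRADIANCE_W_PER_M2 = 1000
PANEL_EFFICIENCY = 0.20
PEAK_SUN_HOURS_PER_DAY = 5

def daily_energy_kwh(area_m2: float) -> float:
    """Electrical energy a panel array of `area_m2` yields per day, in kWh."""
    peak_watts = area_m2 * PEAK_IRRADIANCE_W_PER_M2 * PANEL_EFFICIENCY
    return peak_watts * PEAK_SUN_HOURS_PER_DAY / 1000

# A 10 m^2 rooftop array: 10 * 1000 * 0.2 = 2000 W peak,
# 2000 W * 5 h = 10 kWh/day -- roughly one household's daily electricity use.
print(daily_energy_kwh(10))  # 10.0
```

Under these assumptions, a few square meters per person already covers ordinary domestic consumption, which is the point being made about abundance.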

And I'm sure the person or group of people who become the most powerful people in the world, those who program the AI in the first place, wouldn't have certain motivations stemming from their own human greed that would adversely affect this ideal.

What I actually meant is that a superintelligent AI could feasibly decide that eliminating humans (and all living things) would be the only way to eliminate suffering

Why not organize at a smaller scale then, like the region or the commune?


Okay, but isn't there a risk that it starts doing things we don't want? What about bugs or upgrades?

The question of initial conditions is very hard and sensitive. The book I'm attaching to this post contains some semi-rigorous discussion of this problem.

It shouldn't be impossible for the right people to build an egalitarian AI that won't give them special privileges, I think. It is one of the possibilities and we can focus on this outcome.


This assumes some variant of Utilitarian AI. I'm not arguing for this kind of AI; my AI is a Consequentialist one. It doesn't care about eliminating suffering (or achieving happiness) - it's the human beings' job to strive for these things. The AI only gives them some infrastructure, including protection from each other and from nature. Of course such protection will require some standard way of specifying "harm", but this work will be done at the time of construction of the AI, possibly by consulting its predecessor systems. "Harm" is always a physical process that involves a human body. It certainly should be possible to define it rigorously.

KEK! Just ask Lenin how easy it was and how it didn't end up in a caste of bureaucrats that were the root of the problem in the USSR.
(Full AI is still dangerous and we need only databases, algorithms and expert systems).


Then, I move to a mountain or something … and organize Le Resistance.

Ye, you start with infused clothes. Eventually you get to skin infused nanobots. I tell you. I know this shit. That's why it scares me…

Vats is the best solution. "Woman! No more pain for you!" "Man! No more need for woman!"


When full automation, profit has no meaning.
Don't be afraid of Capitalists. they are as good as dead, as a Class.
BE AFRAID OF THE AI ITSELF!


Anarchism in small communities and so on, is no longer possible, except for Posadism. It's pure mathematics.

It's the only real risk!

I like this point, but this leaves opportunity for some large actor (say, a large nation or a malevolent AI) to appear and force small communes into doing things they wouldn't want to do. A global peace-enforcing AI alleviates this problem.

This is a hard question. In short, before building the final version of the AI, lots and lots of testing in all manner of simulated situations will of course be performed. Also, the basic principles of the AI's operation will be specified and proven mathematically with some kind of assisted theorem-proving technique. These are the minimal precautions to be taken; more are possible.
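The "test in simulation before deployment" idea can be illustrated with a toy sketch. Everything here is a hypothetical stand-in (a real effort would use model checkers or proof assistants, not a random loop); the point is only the shape of the check: fuzz the policy against a safety invariant and fail loudly on any violation:

```python
import random

# Toy sketch: fuzz a decision policy against a safety invariant.
# `safe_policy` and the "harm" keyword encoding are hypothetical stand-ins.

def safe_policy(request: str) -> str:
    """Grant any request except those tagged as harmful."""
    return "reject" if "harm" in request else "grant"

def violates_invariant(request: str, decision: str) -> bool:
    """Safety invariant: a harmful request must never be granted."""
    return "harm" in request and decision == "grant"

random.seed(0)
vocabulary = ["eat", "build", "harm", "travel", "create"]
for _ in range(10_000):  # many randomized simulated scenarios
    request = " ".join(random.choices(vocabulary, k=3))
    decision = safe_policy(request)
    assert not violates_invariant(request, decision), request

print("invariant held in 10000 simulated scenarios")
```

Random testing like this only ever shows the presence of bugs, not their absence; that gap is exactly why the post also mentions mathematical proof of the basic principles.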


I agree with you, but many people would choose the natural way. If there is no need to interfere just make the natural way the default one (and leave possibility of choosing one of the alternatives).
It's harder to define how the rights of reproduction will be distributed, because our world is finite both in terms of raw materials and energy.

We will work hard to prove that this risk is exactly Zero!

With that I'm leaving for today to get some sleep. We can continue later..

Sure. In the first generation. The one with the nano-infused clothes.

But eventually, humans will take the way of less loss, and everyone is born in Vats.

the worst thing is, I'm perfectly fine with living in Brave New World

Then surely you won't mind sharing your calculus pages so that we can all peer-review it.

Your mistake was posting this on Holla Forums where most users are crypto-primitivist liberal arts students who hear "AI" and think "Skynet"

Where should I post it then? I thought Holla Forums is sufficiently open-minded for this kind of discussion and so far it proves itself true.

Post it on Holla Forums.

That's probably one of the better outcomes.

Use the technocrat flag, fascist.

Sounds like Fully Automated Luxury Communism to me

There's no problem whatsoever posting it here. It's intriguing and stimulating.

From my point of view the restrictions the AI puts on each individual human are minimal. How can prevention of harm be a bad thing? Do you really need to create weapons of mass destruction on your own?

If that's not what you want you are free to do whatever else you desire, including building your own utopia, or diving into VR paradise. For the overwhelming majority of humans that are not bullies or killers or mad scientists AI government will result in unprecedented expansion of freedoms.

The violent minority will still be able to experience their violent fantasies in Virtual Reality.


It is similar but without the ideological component. It's up to humans to implement ideologies of their choice given common infrastructure.


It seems to me they are too thick-headed for it.

So can it be disabled? If not, what if the AI is activated and some serious unforeseen errors manifest, or some external event alters the AI's behavior? Will this possibility be addressed via adequate pretesting and simulation?

Would it be free software?

You are waving away any problems by ex-machina'ing with an omnipotent being that you define as already being the solution.
This proposition has little more merit than the "invisible hand of the market/MUH NAP" lolberts spout as the imaginary device that magically justifies and/or solves all the problems for their system.
Well, I suppose as long as this fantasy isn't used to justify engorging the putrid belly of porky while justifying every horror committed past, present and future by the status quo as an advancement towards utopia… but it will.

The purpose of every fantasy is to cling to the minds of the unwary by the similarities to their reality and infect them with the differences.

Festival sounds like a rapey good time.

OP, the main problem with your proposition is that it is too liable to fail. The problem is that we don't have enough information to "understand" how such an AI would function. The only real ways we implement AI and machine learning right now are things like neural networks and support vector machines; however, these are very "simplistic" and not good enough to implement the AI you want. So for now we can only imagine what this "AI" would act like, and if we use our current knowledge of AI, one can understand why it's liable to fail.

youtube.com/watch?v=DG5-UyRBQD4

You can leave a possibility for that, but it is debatable whether this will ever lead to greater safety. It may be safer to make the AI untouchable, so it can provide its services effectively forever.

Yes, this is exactly the purpose of simulation and mathematical proofs.


You will be able to read the source code from which it started, but it won't say much about the currently running optimized AI. Also, you will be able to ask for / build yourself a lesser AI companion that is not smarter than some predefined measure.

This is not really NAP because the AI may stop you from doing certain things. Also private property is severely limited to give everyone freedom (you can't claim a planet or even a large chunk of land for yourself). Also if you fail to use your private property for long enough time it ceases to be yours.

Of course there are some problems. But wouldn't such a system still have fewer problems than a human-ruled communist state?

How we get there is an entirely different discussion.

Define AI. How will it differ from human mind?

So, full automation of industry and planned economy? Why would AI be better as decision maker?

And anyway, if AI is indeed better, why should humans even be allowed to exist? They will just be dead weight.
You have AI as the mind, industry as the body. There is no place for humans.

Which will inevitably become a permanent dictatorship of the most technologically adept.

Who owns the means of production in an AI-governed society?

I guess your AI proposition is based on the current state of private ownership and how the ruling class would fight tooth and nail to keep it, having sizeable armed forces at their command, even accounting for the fact that the common person would probably support communism given that they consider the advantages and disadvantages.

Something inhuman that would be the supreme authority above all humans, and most importantly above those idiots who agree with the private property and the use of violence to protect it. So you construct something that is bigger than porky.


Yes we are suited for control of large systems. That is why we have mathematics, cybernetics, control theory, computer science, engineering and whole bunch of other things to facilitate this kind of control.

The fact that we are not using all this to directly control the economy for the benefit of all, and not just a few, is secondary.

Obviously current state is a result of historical development.

This is retarded, I won't allow some shitty AI to tell me what I can and can't do.

If you don't want to physically bully anyone you will be fine, user :3

Sounds good to me, but then again, this is pretty scifi and not gonna happen in my lifetime.

I do have a hardon for AI and technology.

But im a minor masochist. Eliminating all suffering is not what i want.

No masters.

I try not to delve into technical details of AI implementation here, because it's a large and technical topic. Modern deep learning is a very promising direction of research (see for example picrelated). It is not at all obvious that it cannot give us a general AI by scaling existing deep learning architectures. And even if it proves insufficient there are more general techniques, for example approximations of AIXI.

We can always do simulations to see how AI acts in real-world scenarios.

AI is an Agent: a system that perceives its environment with sensors and acts on it with actuators. A common definition of AI is a reinforcement learning en.wikipedia.org/wiki/Reinforcement_learning agent which tries to maximize its rewards over time. The reward is produced by the Utility function, which depends on the state of the environment.

If you want a more detailed definition you can watch Dr. Hutter's talk on the subject; it's a good introduction: youtube.com/watch?v=F2bQ5TSB-cE

All of these components (the AI agent itself, its Utility function, its sensors and actuators) are designed by human scientists and engineers.

The Agent is an intellect that plans how to interact with the environment, and the Utility function is what motivates the agent. The stronger the agent is, the more effective it is at pleasing its Utility function. The Utility function I propose is defined in such a way that the AI recognizes all humans and gives them certain rights (and oversees them so they don't inflict physical harm on each other or on AI infrastructure, directly or indirectly). Given these minimal constraints, the Utility function allows humans to do whatever they want, including building their own societies that consist of consenting humans.
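The agent/utility-function loop described above can be illustrated with a minimal tabular Q-learning sketch on a toy corridor world. This is a standard textbook construction, not the proposed AI itself; all the numbers are arbitrary demo choices. Note how the reward (the "utility function") is fixed by the designer, and the agent only learns how to maximize it:

```python
import random

# Minimal tabular Q-learning on a 1-D corridor: states 0..4,
# reward 1.0 only for reaching the goal state 4.

N_STATES, GOAL = 5, 4
ACTIONS = (-1, +1)  # step left / step right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration

random.seed(0)
for _ in range(500):  # training episodes
    s = 0
    while s != GOAL:
        # Epsilon-greedy action selection: mostly exploit, sometimes explore.
        a = (random.choice(ACTIONS) if random.random() < epsilon
             else max(ACTIONS, key=lambda act: Q[(s, act)]))
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == GOAL else 0.0  # the designer-fixed reward signal
        # Q-learning update: move Q(s,a) toward r + gamma * max_a' Q(s',a').
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS)
                              - Q[(s, a)])
        s = s2

# After training, the greedy policy walks straight toward the goal.
policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES)]
print(policy[:4])  # [1, 1, 1, 1]: move right in every non-goal state
```

The design question in this thread is entirely about the reward line: the agent will become arbitrarily good at whatever that line says, which is why the choice of Utility function carries all the weight.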

Physically the AI could be built in various ways, but it should have enough sensors, actuators and computers everywhere to effectively exercise its duty, so this naturally leads to a distributed implementation that blends with the environment and with humans without interfering with their functioning.

If you want to compare it to human mind, you could say that it is effectively indestructible, several orders of magnitude faster, has many orders of magnitude more memory, can create and execute very long-term plans and does all this under strict adherence to its Utility function.

That's a short definition, you can read about various details in this book

You could call it that, but it simply falls under the "Limited Wish Fulfillment" clause - at any moment you can ask the AI for anything and it will either grant your wish or reject it (if your wish involves harming other humans or consumes too many resources) and suggest you experience your wish in virtual reality (where much larger and more complex experiences are possible at minimal resource cost).

That's a weird way of looking at my proposition. Should humans cease to exist just because cars move several times faster than them? Should animals and pets cease to exist because they aren't sentient and just consume resources? The AI's Utility function doesn't care about some global resource efficiency; it only cares about making sure people don't harm each other, get decent space and resources for living, and get their sensible wishes fulfilled. It serves humans by design and by proof.


No matter how technically smart you are, you won't be able to outsmart an AI; from its POV you are just another human with slightly different skills but the same rights.


If you count the AI as an owner, then it owns them, but just like your microwave's control system it will abide by your orders (if you have electricity and something to cook inside the microwave, that is).

btw sry for my typos, I'm sleepy

Exactly, the porky (capitalists) become powerless to inflict any harm or protect their "property" from being visited by other humans once the AI is deployed. It may sound like a fantasy, but it doesn't look like a weirder fantasy than, say, totally successful worldwide communist revolution.

We are suitable for basic research and maybe indirect control of economy, but not for controlling global economy directly. Look around our world: the neglect is palpable everywhere. People just don't have enough attention to care for themselves, not to mention the infrastructure. Even with another political regime (communism) the quantity of available human attention won't increase radically, we will have the same problems of humans neglecting each other and infrastructure.

AI technology allows us to sidestep these limitations; AI agents can be manufactured by millions, if you count how many ARM cpus are manufactured each year, the number is more than a billion. Each and every one of these AI agents will have an independent 24x7 attention span that can be directed to control of some small part of our infrastructure. There won't be attention shortage anymore.

You have a point here, but humans are still fundamentally limited. Our ancestral environment (the hunter-gatherer life of tribes of ~100 individuals in the savannah) simply didn't prepare us for being in control of an enormous global economy and politics.

Who knows, user. Nobody believed it is possible to build and train a visual question answering system in 2013 and yet here it is, for free: github.com/avisingh599/visual-qa


If you want to be gently bullied AI can provide you with that :3


no u