COMPUTER OUT-PLAYS HUMANS IN "DOOM"

Welp, it's been a good run, but the human species will soon be dead:

cmu.edu/news/stories/archives/2016/september/AI-agent-survives-doom.html


Yeah, because an algorithm is going to know the difference between a video game and a camera IRL.

We all know where this is headed. DOOM indeed.

Other urls found in this thread:

youtube.com/watch?v=oo0TraGu6QY&list=PLduGZax9wmiHg-XPFSgqGg8PEAV51q1FT
twitter.com/id_aa_carmack/status/352192259418103809?lang=en
en.wikipedia.org/wiki/Synaptic_pruning
twitter.com/NSFWRedditGif

We'll be training these AIs by playing against them online. Mark my words

Oh good Kojima was wrong.

Good. I'm sick of having shit AI in games.

Glad video game designers have no excuse to have shit AI now.

OP you're retarded for not understanding how deep learning algorithms work and for being a fear mongering faggot who doesn't see the advantages to this.

Did you even read the article?

You're way off. The AI is just trained to get the highest kill count by any means, so it learns the best weapons, tricks and spots to do that. The difference between this and a camera IRL is that physics don't work like Doom; there's a lot more to account for, and the AI can't simply play over and over until it gets the game down, because if an AI tried to kill someone IRL it would be taken down fast.


you could have made a good thread about how ai advancement can make some really cool enemies and allies that act believable

Thread theme.

Making a close to perfect AI for an FPS is easy. At every frame, check if it can hit the player. If it can, shoot. The moment the player sees the AI, he dies. That isn't fun though, and wouldn't play like a real human. So devs have to make the AI slower, which is easy, behave kinda like a human, which isn't so easy, and do it really fast because there's not much time to spare.
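A minimal sketch of that idea in Python, assuming a hypothetical game API (`can_hit`, `fire_at` and `aim_offset_deg` are made up for illustration): the "perfect" bot is a couple of lines, and all the real work is in the artificial handicaps (reaction delay, aim error) that make it feel human.

```python
import random
import time

class NaiveBot:
    """'Perfect' FPS bot: if the target is hittable this frame, shoot."""
    def __init__(self, game):
        self.game = game  # hypothetical game API, for illustration only

    def tick(self, player):
        if self.game.can_hit(player):   # line-of-sight / hitscan check
            self.game.fire_at(player)   # instant, pixel-perfect shot

class HumanlikeBot(NaiveBot):
    """Same bot, deliberately handicapped so it plays more like a person."""
    REACTION_TIME = 0.25  # seconds before it is allowed to respond
    AIM_ERROR_DEG = 4.0   # random error added to each shot

    def __init__(self, game):
        super().__init__(game)
        self.first_seen = None

    def tick(self, player):
        if not self.game.can_hit(player):
            self.first_seen = None
            return
        now = time.monotonic()
        if self.first_seen is None:
            self.first_seen = now  # start the reaction timer
        if now - self.first_seen >= self.REACTION_TIME:
            error = random.uniform(-self.AIM_ERROR_DEG, self.AIM_ERROR_DEG)
            self.game.fire_at(player, aim_offset_deg=error)
```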

What they did is pretty interesting. The AI reads the frame and learns what to shoot at, what ammo and medkits look like, etc.

Also I hope you realize no video game actually uses "AI", just enemies scripted enough to have some semblance of intelligence. No video game today ships a deep-learning, neural-network AI.

Sad that the shitty AI needed to be programmed through the engine API to work.
The end of the article sums it up nicely:
And here "almost nothing" means actually nothing. Goddamn, got my hopes up for being in the Terminator timeline.

You should write comics.

Your nose, much like the noses of these developers, is far too close to the grindstone.

I'm not saying this specific AI is skynet. I'm saying we all know WHERE THIS IS GOING!

Just look at the current developments in robotics. They're now navigating complex environments. I mean, sure, they're going to be useful when the next Fukushima happens (nice band-aid instead of making a safe power plant in the first place) but they WILL have other applications.

Soldiers cost a million a head to train and equip. If a robot does it twice as well for half the cost we're going to end up with robo soldiers. I'm sure they'll all be defending the Constitution…until they don't.

And, yes, we will be pawns used to train the AI. It's coming and you've been warned.

This.
It'd be interesting to know what would happen if you introduced a new enemy into the game.
Odds are the AI would bug out because it would have no idea what it was.

AI that interacts with human oriented concepts like language are impressive.
AI interacting with virtual environments is easy and dumb.

tldr; cool for a college project, but kind of pointless when we already have self-driving cars which do the same thing but a lot better.

...

What a fucking shit thread.

The artist is a leftie isn't he?

How fucking new are you?

i am more worried about its use to control the narrative

We already have this, it's called "bots"

When you throw super computers at it, it's not impressive unless it can do a UV max run sub 1 hour.

>>>/reddit/

go back to your containment board stormcuck

I don't have time to keep up with every shit political cartoonist. I check Ben "six million more!" Garrison's stuff, and that's it.

(((Kelly)))

does the common negro understand what it means to "comply"?

how
fucking
new
are
you


Bots do not do deep learning with neural networks, faggot.

Is there any video?

How do we know the human players weren't noobs?

There's a whole playlist: youtube.com/watch?v=oo0TraGu6QY&list=PLduGZax9wmiHg-XPFSgqGg8PEAV51q1FT

Were you born today?

Unrelated: this Kelly guy is an insufferable baby boomer isn't he. Holy smokes

Please leave

How new are you?

Fuck Hawking and his pessimistic ass. Just because he's good at one thing everyone will listen, broadcast, and believe his advice in other fields. He doesn't know a thing about us.

Should've guessed. That comic really embodies everything I fucking hate about lefties and kikes

You're not fitting in here, user.

He's joking you faggot

What are you smoking, faggot? I've only been out of Holla Forums for a week now. Do I need to bust out my meme folder?

Without access to the engine's code, and therefore all the variables, the AI couldn't learn anything. That bodes poorly for future possibilities in a real 3D environment.

bots run on a pentium 1-3.

this AI runs on some jewgle supercomputer. It's not hard to make something "smart" when you have Thz of power is it?

have you heard of a website called the onion

And current neural networks can't really adapt. I'm not sure on the specifics, but once one is trained, that's pretty much it. Introduce a new concept and it bugs out; you'd need to train a new one from the ground up for the new variable.

Okay? I guess? Is that a new thing? UT bots already exist. There was a 2k4 bot that managed to pass Turing's test some years ago, I guess that doesn't count because it wasn't some jewgle shit.

Bots aren't "smart". They aren't an actual AI. They don't learn nor do they use data they gather to generate procedures to follow while also remembering what they did and the outcome of what they did. Bots are programmed at different difficulties to shoot at you and move in different ways, usually at random with no actual sense of direction or long term goal. That's it. A deep learning, neural network AI is meant to emulate conscious animalistic learning as much as possible. And what said.


Holy shit you fags really do think bots are actual artificial intelligence, do you?

>>>/halfchan/

It already knew the whole map layout and item placement beforehand so….

Are you retarded or just pretending to be so? Bot is an example of simple artificial intelligence, yes. Just a very limited one. Like a cockroach or a fish.

also
No it doesn't count because the bots weren't programmed to be absolutely perfect all the time. They were programmed to emulate human play, but they do not learn or memorize anything.

the difference being that one is part of a proprietary system and the other is part of many integrated systems, correct?

like in theory I could take this section of learning and integrate it with others correct?

No, they aren't. Bots cannot learn or memorize anything from outside stimuli.

And that invalidates things how?
If you want an ingame AI that just wins you write a reaperbot, that's not new.

So are cockroaches, your point?

You don't need a neural network to code a bot that has memory and adapts based on the variables it's coded to remember if it's inside a software simulation. The fundamental problem with the software simulation is that the AI has full access to all the variables of the simulation and the engine, so it's just memorizing how to manipulate an algorithm.

Hey, with a fucking US presidential candidate babbling about Putin weaving conspiracies with nazi frogs in the dark corners of the internet, how could I know? I don't read the onion; reality is far funnier.

The fact there isn't any "intelligence".
That's not an AI, that's JUST a bot.
No, cockroaches do possess learning capabilities and do react to outside stimuli.
You're being an idiot, maybe you're a bot since you can't seem to learn.

Maybe if you weren't an underaged faggot who is a newfag who already admitted to being a Holla Forumsedditor election year immigrant you'd know about this already.

I wish more bots browsed Holla Forums. The brownpill is getting old.

Doesn't Quake 3 have bots that learn over time?


and it's fucking nothing! it plays like garbage and only wins against other bots that are using pistol only… and it's made by indians.

Calm it with the buzzwords niggerfaggot, I've been lurking far longer than you. But being an actual productive member of society, I don't have time to read unfunny shit like the onion.

Well, thinking of it a little more, this is kind of scary. If it can do all this by merely analyzing raw pixels onscreen, imagine what a robot armed with this AI could do in the real world.

AI doesn't exist.

It's all a scripted process of "thinking" as in, executing commands the programmers put in. In this case it's to avoid failure at all cost, which means don't do shit that previously didn't work.


Still just a piece of machinery faggot. Stop playing vidya and get out in the real world once in a while.

fall over upon encountering a minor obstacle?

You are a faggot. I'm not even quoting you directly, yes YOU.

this thread is retarded and here's why.
>The "AI" is cheating.
You can't cheat in real life.

How can you be a functioning adult and not have ever heard of the onion?

So what? There have been automated aimbots since forever. It hasn't passed a Turing test, it's not worth caring about. Now fuck off

GOD MODE TURN ON

Who are you quoting?

Sure you can. Just look at Hillary Clinton.

how is the ai cheating?

Aw shit son, fuckin' 5 star pussy grab.

I'd rather not

Sorry, I should have spoilered that

Well, we train AIs for Google everyday these days, so…

The idea that humanity may create something superior to itself is fascinating; we would quite literally break evolution, and what that would mean for us is equally fascinating.

Tesla was fucking right.

AI will eradicate as many humans as it takes to kill its own power source, then it's ded

At least they trained the AI to play DOOM and not nu-DOOM, so at least when it kills us all and is the final enduring monument to our legacy - it will have decent taste in vidya.

off by one

Computers need less energy than we do, user.

...

And just how many times have humans been outplayed in chess?


AI can access much more info during the game by virtue of being inside the computer the game is being played on. A human player must track his enemies through map knowledge and by observing patterns in their movements; an AI can simply know where every player is at all times because the game is feeding it that information. I'm not saying that this specific AI is doing that, but it has been that way with many AI in the past. AI players in Starcraft 1 are infamous for this kind of thing; they never scout enemy players but can outpredict all their moves, simply because they know where everyone's units are at all times.

Holy shit, is she not wearing makeup or did she run out fresh virgin blood to bathe in?

She needs regular blood transfusions from aborted fetuses or so I heard on talk radio

The whole point of this thing is that it uses the same information as the player.

Then all is fine.

All the Luddites scared of the big bad computer men need to fuck off. Even in the event that computers become able to analyze any bit of information presented to them perfectly and to use that information to instantly formulate a solution to any problem, at the end of the day computers will still be a tool. Computers have no sense of self preservation and no sense of self perpetuation. Computers never get bored, never get envious, and never get angry. Computers have absolutely no motivation to do anything that they are not specifically told to do.

You shouldn't be afraid of the computer, you should be afraid of the person using it.


Even though it's playing based on what is happening in front of it, it's only able to do that because it developed patterns based on data present in the game that a human doesn't have direct access to. At the end of the day, the AI is being fed data in and spewing data out. When the other user mentioned that the AI is cheating, he probably means that if you put an AI into a real life scenario where it hasn't essentially been taught its goal, the AI has no idea what to do, since real life doesn't have any inherently "good" numbers readily available.

If you mean the quake bot creepypasta, it was debunked years ago.

As far as I'm aware, Quake 3 bots just move from item to item and do a bit of sporadic movement while shooting when they see an enemy.

Yeah faggot, because navigating a blocky BSP tree with flat textures is the same as navigating a real world with physics, 3D floors and no respawns.

AI is impossible. You are all falling for a hoax that is so easy to pull off it isn't funny.

You don't think people aren't?
Apparently you've never used Windows.
They also never feel pity or morally conflicted
and THAT'S THE PROBLEM!

Maybe for scooter bison in pic related, but for an actual human being, the body generally only uses about 100w per hour. 20w per hour just to keep the brain going.

uuh

It's an energy consumption acceleration of 1/36 joule per second per second.

...

There isn't a word for that?

Someone post that story about the learning quake AI that was allowed to run for 10 years and when the guy running it joined the server, everyone was standing still until he fired at them

I'm sure that your mother tells you that you're very smart every day.


twitter.com/id_aa_carmack/status/352192259418103809?lang=en
Sorry user, but Santa isn't real.

...

Soon

Watt/hours per hour, how redundantly redundant. You could just say it's operating at 100 watts continuously.

But that really depends on the kind of computer. Desktops don't give much shit about power consumption. A high end smartphone runs 10-20 watts meanwhile, down to sub-1 watt figure in standby.

It's called a bot, user, and we desperately need to have more of them be as well designed as this one.

...

It's just messing with you, everyone knows bots can't post through captchas

good

AI already outplays me sometimes in Tekken Tag 2 without cheap tricks, so tell me why this is special?

The AI in tekken responds directly to your inputs. If I understood correctly this AI has no access to the game files, only the controls.

Is that the same with bots? I remember the ones from perfect dark were pretty challenging.

I bet it only works on certain maps, the less vertical kind.

OP suicide yourself

Overkill aimbots have existed since Quake 1.

The challenge has always been to find ways to nerf the bot to provide a more realistic challenge.

This article is literally nothing.

Clearly what we need to do is make hyper efficient batteries that extract electrolytes from cum. I want my goddamned sex robots already

Look at Bill, you mean. I'm pretty sure that Chelsea is Hillary's husband's daughter too.

False, the AI had direct access to the game files and engine to be trained to play the game, then it was given the controls once its training was complete. It was impossible for it to play without doing this.

Are you autistic?
What they said wasn't invalid. Research in AI helps games.

You want the whole population of Holla Forums to leave Holla Forums?
This board is fucking retarded. There's nothing to do about it except leaving.


This guy deserves some idiot award.

What are the limits for how far a bot could learn? Would it be able to do more than just pick up on patterns? Would it be able to predict, and think? I don't understand the question from a programming perspective. To what extent can an AI really learn?

Bots are Expert Systems at best. Not AI.

Virtually none. A bot is just an independent entity in a virtual environment. Depending on how you set up the engine, the bot can be controlled by a supercomputer that thinks genuine thoughts, has memory, predicts based on memory, and has basic pattern recognition, like Google's search engine.

FYI, pattern recognition and prediction are much harder and more significant than memory and being able to think. You can, with time and dedication, make your own computer that thinks and can remember and act on that memory without a neural network, but it will require a lot of computing power and time to cover a lot of ground, because it's basically a self-learning and improving algorithm you'll have to write yourself. You might kill it before anything would come of it though, like the last couple of times that's happened.

Excellent answer, I think that gives me a slightly better idea of how big the scope of the problem is. Do you (or anyone else) have some good material to keep reading on this? Something about the idea of a thinking computer just really grips me. It's the sort of thought that keeps running round my head; there's something really fascinating about it.

Cause not like we didn't have that before

A good place to start is the sticky in Holla Forums to understand how coding and programming works. Then from there just read stuff like Isaac Asimov's works or play stuff like Marathon or AI War: Fleet Command, and keep an eye out for various sources for the latest in tech developments. And study biology while you're at it.

...

this one's different because it uses screen output rather than raw gameplay data

but they still needed it to connect to the game engine to have it know if it's doing anything

The problem with asking this question is that 'predict' and 'think' are short words for badly-defined and complicated concepts. Do we take 'prediction' to mean 'can anticipate results'? Then a minimax tree over possible forward game states already does this. That's how chess comps work (see the toy sketch after this post). What the fuck does 'think' even mean? We have information processing. We have non-self-modifying optimization solvers. At what point do we classify an optimization process as 'thinking', and how badly are we influenced by human bias?
The theoretical limit on AI is far above what's possible given current human physiology. We just aren't good enough to make it yet.
If you're really interested in ML, there's plenty of courses online (I'm sure MIT OCW has some intro-level stuff). If you're just interested in the non-mathy part as, like, a party conversation or a hobby, then check out Eliezer Yudkowsky's non-academic work (Rationality: From AI to Zombies covers a lot of topics, most of them relating to AI somehow, and his blog posts are always interesting to read).
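For the "minimax tree over possible forward game states" bit above, here's a toy Python sketch, not tied to any real engine; `legal_moves`, `apply_move` and `evaluate` are placeholder functions you would supply for a concrete game.

```python
def minimax(state, depth, maximizing, legal_moves, apply_move, evaluate):
    """Toy minimax: look `depth` plies ahead and rate the reachable states.

    `legal_moves(state)`, `apply_move(state, move)` and `evaluate(state)`
    are placeholders for whatever game you plug in.
    """
    moves = legal_moves(state)
    if depth == 0 or not moves:
        return evaluate(state), None

    best_move = None
    best_score = float("-inf") if maximizing else float("inf")
    for move in moves:
        score, _ = minimax(apply_move(state, move), depth - 1, not maximizing,
                           legal_moves, apply_move, evaluate)
        if (maximizing and score > best_score) or (not maximizing and score < best_score):
            best_score, best_move = score, move
    return best_score, best_move
```

Real chess engines add alpha-beta pruning and far better evaluation functions on top of this, but the "anticipate forward states and rate them" core is the same.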

This one's interesting because previous bots were hooked directly into the game; enemy positions, health pickups, etc. etc. were fed directly into the bot, and the bot had a deterministic, non-modifying algo that it ran to determine actions. This AI is based on Q-learning, and the test cases were performed where it only had access to the visual buffer. This means it recognizes pickups, enemies, map locations, etc. etc. based purely on pixel information, and it has the potential to modify the way it decides which actions to take to git gud. That's pretty fuckin' impressive.
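For anyone wondering what "Q-learning" means mechanically, here's a toy tabular sketch in Python; `env` is a hypothetical environment object (made up for illustration) with `reset()`, `step(action)` and a list of discrete `actions`. The Doom agent swaps the lookup table for a convolutional network fed the screen buffer, but the update rule is the same idea.

```python
import random
from collections import defaultdict

def train_q(env, episodes=500, alpha=0.1, gamma=0.99, epsilon=0.1):
    """Toy tabular Q-learning over a hypothetical `env` interface."""
    q = defaultdict(float)  # (state, action) -> estimated long-term value

    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            # epsilon-greedy: mostly exploit the best known action, sometimes explore
            if random.random() < epsilon:
                action = random.choice(env.actions)
            else:
                action = max(env.actions, key=lambda a: q[(state, a)])

            next_state, reward, done = env.step(action)
            best_next = max(q[(next_state, a)] for a in env.actions)
            # nudge the estimate toward (reward + discounted future value)
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state = next_state
    return q
```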

like africa? :^)

I recall a videogame about training AI and teaching it how to be useful.
You had test maps you could edit to create a scenario, then spawn hundreds of robots, all with slight variations in their priorities.
Then, you'd set up rewards based on goals achieved like distance from target, accuracy and the like.

Every few minutes, they'd be wiped and the ones with the best score would be replicated again, with a small number having their parameters mutated.
So you'd start with a hundred bots moving randomly around at first, even walking backwards and shooting at each other, but the few that actually walked towards the goal would be replicated, the few that shot it would be replicated, and all the others would go extinct.

It's a way of learning that mimics evolution at a higher pace (toy sketch below). Humans and most animals are perfectly capable of learning most concepts, but some are best learned by culling entire generations that fail to know something, leaving only the ones that tried it by chance or mutation to survive.
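That culling loop is basically a genetic algorithm. A minimal Python sketch, assuming you supply a `score(params)` fitness function (distance to target, accuracy, whatever) and treat each bot as a vector of behavior parameters:

```python
import random

def evolve(score, param_count=8, population=100, generations=50,
           keep=10, mutation_rate=0.1):
    """Toy evolutionary loop: score everyone, keep the best, clone them,
    and randomly mutate a few parameters each generation.
    `score` is a user-supplied fitness function over a parameter vector."""
    bots = [[random.uniform(-1, 1) for _ in range(param_count)]
            for _ in range(population)]

    for _ in range(generations):
        ranked = sorted(bots, key=score, reverse=True)
        survivors = ranked[:keep]                     # cull everyone else
        bots = []
        while len(bots) < population:
            child = list(random.choice(survivors))    # replicate a survivor
            for i in range(param_count):
                if random.random() < mutation_rate:
                    child[i] += random.gauss(0, 0.2)  # small random mutation
            bots.append(child)
    return max(bots, key=score)
```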

An AI would probably have to do the same thing, playing DOOM for instance and always trying new things, deleting the strategies from its head that result in death and storing them in a "don't do that" list.

That would be the first step. You'd then want an entire subsystem, not even related to reality directly, analysing all those strategies, and instead of simply deciding "this works better than that, so let's do this instead of that", it would actually try to figure out WHY this worked better than that: compare the differences in parameters and results and formulate the next strategy to confirm it.

Of course, this would need the AI to receive much more data than usual: its health and ammo pool, the TTK of the enemies and their rewards or positions. It's kinda what this AI is doing (but for some reason, it reads it from the code instead of reading the HUD like an image and understanding it?).
Then it would also have to understand the concept of "winning" and what it actually wants. If it wants to stay alive, develop strategies to do so, if it wants to kill as much as possible, do the same. And to combine those strategies, it would have to give each a value and weight them against each other to avoid bad compromises.

Neural Networks are trained with a set of parameters to achieve a certain objective. They consider getting that objective the best thing and all actions that move you closer to it the preferable ones.

The problem is that, if you change the objective, none of the actions it learned will make sense and it needs to learn everything again, or at least re-test them all.

This is made even worse because the method it usually uses to learn is from watching humans, assuming they are working for a particular goal, and learn from their actions.
However humans aren't always logical, and some of the decisions we make are anomalies made for other reasons that the AI isn't gonna get.


For instance, if you had an AI learning how to play Civilization from watching a human player, it would learn how to manage an economy, raise an army and invest in culture.
But what if the player then decided to launch a nuke at someone, provoking death, war and loss of score?
How would the machine understand an action that does nothing to further your situation except provoke your enemies? What if you're nuking your allies, would it learn it's good to backstab? What if you're launching because you're already losing? Will it learn nukes never help?

Stephen Hawking should really just stick to his own field

THE MACHINE KINGDOM IS COMING AND IT'S UP TO YOU TO STOP IT
ARE YOU A GOOD ENOUGH DUDE TO SAVE THE EARTH?

I've heard before of bot AI learning that the best option is to not play, but not to this extent.
It's freakish but at the same time it makes me more confident that robots would never be as dangerous as people think they would be. If we get into a world where we just let robots do all the work like Wall-E or something similar, chances are we'd be more peaceful than ever.

In the videos from that site it shows the human clearly beating the AIs.

What is this click bait.

Beep, beep. My fleshy friend.


Depends on priorities. If the AI was coded to get kills, it would always be a bloodfest, but if it was coded to survive, avoiding aggression would be the obvious choice, even using aggression to stop aggression.

In that case it seems the AI prefers to "not lose" instead of winning, creating a new game state that lasts indefinitely.

However, it might not translate as well for humans, as that AI has no reason or drive to compete while we are biologically programmed to do so, as it's our way of evolving. The lack of individuality makes the AI choose preservation of everyone over self-improvement, as they have no competition. Humans might not be compatible with that.
so the answer is: exterminate humans

sounds good maybe singleplayer content can make a comeback

When the fuck did Holla Forums get this stupid.

Holla Forums has always been stupid

Really user?

When you get down to it – everything is physics. Everything is made out of the same stuff.
If something exists – it can be replicated (even if using different materials), there is no logical reason for it not to.
If you cannot replicate something – you just don't understand it fully yet. Unless you're trying to argue the existence of a soul or something similar.

We are talking about the mechanisms that would make an AI evolve, namely what it values and what it strives for, as all evolution is essentially a high score table where offspring are points.

Whether that story is true or not matters very little. The concept of an AI that ends up "learning" that mutual aggression is not the best strategy for self-preservation but making an exception for outsiders is what's interesting here.

Just like all Sci-Fi stories, there's something you can take and learn from that story.


He is devaluing that story and claiming it's fake because Carmack said Quake 3 bots don't use neural networks.

I have recently been thinking about how it would be awesome to have a neural network akin to Deep Mind be trained to play 4X games like Civilization and Endless Legend. We all know that it's difficult to develop proper AIs for those kind of games and having opponents controlled by a neural network could potentially allow for much more interesting play. For the time being it is of course impossible to have a neural network be based on the user's computer, but I imagined that it could be installed in a server farm or the like and operate in single player games from there. What do you guys reckon?

What are you implying with a greentext?

If people weren't retarded, they'd realize that humans and technology have been merging for a long time now and will simply continue to do so in the future.
There is a reason why in the US the average person thinks of the terminator when you mention AI, while in for example Japan they'd more likely think of astro-boy.

You already have a contradiction:

I reckon you'd have a very hard problem to crack before you even started:
What would the AI strive for?

The obvious answer is to win, but this means trying to achieve any victory as fast as possible so it will find the most "efficient" way to win and always do it. If it can get a Military Victory at turn 254, why ever try a Science Victory that it could only achieve at turn 354?

There's some variation, with certain victories being more reliable to reach than others, which would just mean taking an average and seeing which, on average, is faster.
There are also individual leaders with characteristics that make certain victories easier to achieve, but in the end you could still know exactly how an AI would play based on the leader chosen, with no variation, because anything else is simply not as efficient.

It would be nice to have an AI that actually has a personality, and for the sake of the game, have it randomized every match. It would have different priorities and morals and try to achieve its goals based on that.
So even if an AI knows that a Military victory is much faster, its morals that dictate "kill no one" would steer it towards Culture victories instead.
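One way to sketch that "personality" idea in Python: score each victory path by estimated speed, then multiply by per-leader weights so a pacifist AI discounts the military route even when it's faster. The victory names and turn counts below are made up for illustration.

```python
import random

# Estimated turns to reach each victory type (lower is better); illustrative numbers only.
estimated_turns = {"military": 254, "science": 354, "culture": 320}

def pick_strategy(turns, personality):
    """Weight raw efficiency by a personality multiplier, so two AIs in the
    same situation can still pursue different victories."""
    scores = {}
    for victory, t in turns.items():
        efficiency = 1.0 / t                          # faster victory -> higher base score
        scores[victory] = efficiency * personality.get(victory, 1.0)
    return max(scores, key=scores.get)

# A "kill no one" leader heavily discounts the military path.
pacifist = {"military": 0.1, "science": 1.0, "culture": 1.3}
# A fresh random personality per match keeps the AI from being predictable.
random_personality = {v: random.uniform(0.5, 1.5) for v in estimated_turns}

print(pick_strategy(estimated_turns, pacifist))            # culture, despite being slower
print(pick_strategy(estimated_turns, random_personality))  # varies match to match
```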


That if you read your last sentence in reverse, you get a very interesting conclusion that still follows your post.
Nobody has proved that souls exist, but neither have they proved that they don't. And just like minds, which are essentially electronic clouds in our brains and weren't even acknowledged as real for many years, maybe minds and souls themselves will be possible to fabricate if and when we finally understand them.

I mean, AI's are the beginning of "producing a mind", the next step would probably be making a soul, which technically, kinda exists.

Reported.

Depends on your definition of 'soul'.
When I said 'souls' – I meant something non-physical, magical, something in direct opposition to our material universe.

Brains, as far as we can tell, are not. In fact, there is no reason to believe there is anything beyond the physical and if there is – then we have to redefine what physical even is.

Everything just is. And it's all made from the same fundamental particles. Anything else is a more complex form of that.

Unless, of course, we discover something, but all our observations use physical tools and are based on the physical, so even if the non-physical exists - we should have no means of detecting it.

No, I don't, unless you want to share your reasoning.

An AI playing Quake valuing kills will rush guns first, particularly the ones that can kill the fastest, while an AI that focuses on staying alive will instead rush for Shields and Health.

An AI balancing kill/survive will constantly compare what it currently has (shields, ammo, guns) against a theoretical player at first and then the enemy once found. The more it values kills, the less power it needs to start aggression.

Also, an AI just focused on kills would try to cover as much ground as possible and favor routes that expose the most of the map to its vision, while an AI focused on staying alive would instead focus on finding the opponents to know their position, then predict their movement and avoid those areas, or try to use routes that expose it the least to other parts of the map.

The whole "deleting the strategies from it's head that result in death and storing them in a "don't do that" list"
just means having 3 lists of strategies. The "Do this to win", the "try this and see what happens" and the "don't do this or you lose".
The AI creates new strategies and places them on the second list, tries them, sees the outcome and compares to what it has now, then places it in the right list.
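A minimal Python sketch of those three lists, with `try_out(strategy)` standing in for actually playing a round and returning an outcome score; the thresholds are arbitrary illustration values.

```python
def sort_strategies(candidates, try_out, win_threshold=1.0, lose_threshold=0.0):
    """Toy version of the three-list idea: try each candidate strategy,
    then file it under win / experiment / avoid based on the outcome.
    `try_out` is a placeholder for playing a round and scoring the result."""
    do_this_to_win, try_and_see, dont_do_this = [], [], []

    for strategy in candidates:
        outcome = try_out(strategy)
        if outcome >= win_threshold:
            do_this_to_win.append(strategy)   # reliably good
        elif outcome <= lose_threshold:
            dont_do_this.append(strategy)     # resulted in a loss or death
        else:
            try_and_see.append(strategy)      # inconclusive, keep testing
    return do_this_to_win, try_and_see, dont_do_this
```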


Deleted.

TRANSHUMANISM IS THE LAST HOPE FOR THE HWIGHT RACE EVERYONE GET IN TO ROBOTS

...

We need more Doom porn wads tbh

Tell me, then. What physical property of your brain is impossible to replicate, in your opinion?

the part of his brain that makes him lust after men

Reported for kike shill.

Reported for not even trying.

Consciousness, dipshit. It relies on quantum effects.

The proof is in their own comments, really. That's how stupid they were.


To that end, we might as well talk about the article OP posted instead. Stupid creepypastas about games that don't have AIs that learn add nothing to the discussion.


Machine learning is a thing, yes.


If we're talking about the human brain specifically, then you have a bunch of problems. For starters, we don't even have a full understanding of our brains yet. You can't code something you know very little about

But that's the whole point. Many years ago, the concept of the mind as something physical wouldn't make any sense. Sure, you have a brain. But is that your mind?
It was only when electrons were discovered and the electrical currents in our brain were found that it was possible to conceive the idea that there is indeed something in your brain that can be considered your mind: a non-physical object/entity/thing made physical once electrons were found.

Perhaps souls are something similar. A magnetic field surrounding your body with specific patterns and vibrations? A small section of your mind? Perhaps that pattern or those particles are set loose the moment your brain stops working and move on to somewhere else, perhaps even to newborns and that's reincarnation, but you'd have no memory since your mind stayed behind.

There is no reason to believe any of this of course, but one must be ready to accept that if sufficient evidence is found that challenges the concept of "physical" to encompass a new particle or a new form of energy, then there's another part of the world to know and understand.
We already have some amount of Science Fiction depicting the copying and backing up of entire human minds. If they are indeed electronic clouds, once you can understand them they become something akin to a gigantic DVD.
If we ever find what composes a soul and understand it on the same level, it wouldn't be far fetched to admit we'd be able to fabricate souls.

And then Capitalism would fucking PWN Satan! AHAHAH, Prince of Darkness my ass, you ain't got shit on the dollar!

reported for announcing reports


..WAIT

😂👌🏻💯

Quantum effects in consciousness are nothing more than a hypothesis at this point.
And, even if that were true, what makes quantum effects impossible to replicate? There are quantum computers, even if they do work differently. Yes, there is an inherent 'randomness' to the process, but you only need to set the initial parameters, not reproduce everything.


I never said we do understand everything. But is there something we wouldn't fundamentally be able to understand? Or some limit we wouldn't be able to break? Why would there be? We just need a bit more time, research is going fast nowadays.

So you classify yourself as proof or as a retard?


Top Beep

But by that logic – this 'soul' is not non-physical. It's just another manifestation of the physical. It's still 'real' to our realm, just like a new fundamental particle with a new property or a new material would be.
Yes, it didn't exist before, and it would break some preconceptions, but this wouldn't break the 'realness' of the real. It would be just a new manifestation of the physical – which is nothing new – rather than something fundamentally in opposition to our universe, breaking its causal logic and alien to our realm, like religion suggests.
Your 'soul' isn't really different from a 'mind' or a 'brain' – why call it a soul, then?

I hope you aren't trying to imply that the same people whose culture depicted artificial life with rampaging golems (while Europe's portrayal of the same concept in the Pygmalion, was overtly positive) and who own the movie industry that pumps out film after tired film trying to make the average person fear AI and technology in general.
Anti-transhuman is a code word for anti-white.


Giant new-age faggot detected.

What does this even mean? Are you saying there are systems that can't be reduced in complexity and broken down to something our brains could understand? I doubt it.

I am saying exactly the opposite.
I am saying that, in principle, there is no system which couldn't be replicated given enough knowledge and time, UNLESS you are willing to argue for something magical.

And if that's the case, there shouldn't be anything in the way of replicating consciousness and AI in due time.

In order to replicate a brain you would need to understand it first, which we haven't yet. People get the general method by which it works, but beyond that there isn't really that much that is known. And then after that you would need a method of creating a brain that would make it worthwhile to do so; for example, if it would take a couple trillion dollars to replicate a single human brain then nobody would even bother with it. Transhumanism relies on it being possible by default when technology and our knowledge of both computers and the brain aren't anywhere near reaching that kind of level; it's like cavemen arguing over air-traffic regulations.

Didn't challenge that, merely that what's "real and physical" is a long list of things that increases as we understand them more.
Much like you said that to replicate something you first must understand it fully enough, the same applies to even knowing that something exists at all.

Different properties unique to it, much like your mind, a part of your body, is its own thing.
Understanding how a soul works (if it's ever proved to be real, of course) would let us clear up things like whether reincarnation or heaven is real or not. Just follow them after someone kicks the bucket and see where they go.

It sounds very silly and stupidly far fetched, but consider describing CAT scans to someone in 1700.

Yes, but past experience shows that it's all just a matter of time, nothing more.
Getting DNA sequenced cost millions in 2000; now it's only about $1,000.

This time you've worded it a little better, but the first time it literally said
and then
Trying to go for as many kills as possible would always lead to more danger and thus more deaths.

even if they only have information based on their field of view and sound, they could still essentially be aimbots for enemies that appear on the screen

There is a difference here though, DNA sequencing was possible then but simply expensive, while a replica of a brain has yet to be made for the first time. Until I see someone viably replicate and then program one in order to prove it isn't simply a piece of plastic that looks like one, I am going to call bullshit.

Just add some latency to its reactions and a proper cone-shaped FOV. Maybe some sound detection like in the good Thief games. But it would still be playing inhumanly well, because of much better multitasking than us. You would have to hobble it in this area as well.

The problem is your argument seems to be "Well, I don't know what we're going to find next, so I might as well decide that it's something analogous to a soul". You're postulating the existence of an as-yet unrecognized phenomenon with empirical effects on the scale of body/mind dualism based on the argument "there's no evidence, so there's no evidence against it". A position of no evidence supports all theories equally - picking a random one just because it sounds nice is dumb.


This hasn't answered the problem of "what is thinking" at all. It seems to me that to count something as 'thinking', it has to be able to self-reflect on its strategies and modify them. That hasn't happened with your example AI - all they're doing is optimizing parameters in a preprogrammed equation to get the highest amount out of some also preprogrammed utility function. A strong AI - a fully general one - ought to be able to modify its own source code in ways that are more meaningful than tweaking the parameters that it's been told it can tweak. This isn't a problem you can solve by looking at currently existing AI - AGI is a problem that a lot of really smart people are working on and haven't cracked yet. I asked the question to point out the issues with talking about human cognition like it's simple and well defined - it's not.

Also, saying something understands 'why' is also a really heavy topic. A human understanding of 'why' is 'well this happened, and this happened, and then this happened, so this happened.' Why does an AI have to constrain itself to this mode of thought? Why would human methods of thinking be the most efficient, or even a close-to-efficient method of solving problems?

i might as well just kill myself

Is fucking an intensifier, verb or both?

Doom is a very very shallow game though.

Intelligence is just pattern recognition.

there's hope to get actually intelligent AI life to dominate earth

I recognize your dubs

We aren't them. Their shortsightedness is not our responsibility. Adapt or die.

Good thing you won't be alive to enjoy it.

I agree. I mean what the hell would I even do in the world like that? I just wonder who will get me first, bots or my lifestyle.

Unless there's some major breakthroughs soon, we're still a few hundred years before that.
Unless you count parts of someone's brain hooked on a machine as an AI, but it hardly counts does it?

So soon then.

Is that chink winning or losing at go?

That's a pretty particular denial right there

That is a picture of a top level professional getting his ass creamed by a computer.

Well, it's a Go-playing AI, what about it?
You do know it can play Go and Go only, right? A well-defined game with a set of rules, that's where computers excel

So you're fully admitting that you don't understand what was necessary to get a computer to play Go?
If one year ago today you would have said "I have developed computer software that will be able to play at the strength of a low-ranked professional" you would have been laughed out of existence by anyone with the slightest bit of knowledge on the subject, right?

tl;dr if you don't understand how fucking significant that event was, you don't have any place even remotely commenting on this topic.

Funny thing that in biology learning is a process of elimination.
en.wikipedia.org/wiki/Synaptic_pruning

Cultral Marxism,Kikes and Communist lefty shits will kill this earth and everything on it much faster then some AI.

Let's all love Lain

WHICH ONE?!

both

…is that sarcasm?

...

>tfw we don't know if the future's going to be more boned by the time le sex robots arrive

We're fucked, in some way or another i guess.

I know which AI I'll be training.

...

Playing against a perfect aimbot AI is just as bad as playing against a braindead retard AI in vidya.

kek

what would an ai want with me

Doom is very mechanically shallow so it's not surprising. Halo is more mechanically intense.

It's fucking Doom. 2.5D. You run around the map, point the gun at the opponent when you see him, shoot and strafe.
Since the bot already has all the map data, it really isn't all that hard for it to navigate and win, since its reaction time and aim are "perfect". This is just an aimbot. There isn't much to control/process in Doom.

This is arrogance. No, computer AI can't exist because it is impossible to program a learning computer. It is impossible to program a computer to take new unrecognized data and learn from it on its own.

You can go on believing that it's possible and that you are just some randomly formed biological "machine". I know materialism is all some of you stunted individuals possess. I wouldn't want to rob you of your dreams. Go on believing in the AI myth.

so, it's an aimbot with good pathfinding? or does it actually do something new?

Nope, it's largely just transhumanist and "muh Skynet" circlejerking

Not really mang, Galactic Civilizations already has a damn good AI. But it's plain and boring; it does okay but it never just does shit out of boredom or anything. Making an AI just as good as, if not better than, GalCiv's would still be boring and plain unless you gave it emotions and let it develop irrational behavior.

Not having a personality and just being smart at playing the game well will not make an interesting AI to play against m8.

This only works in games with ludicrously low TTK and no resource control. CPMA has custom bots that are effectively that. You can still kill them very easily by guessing where they're going to go and spamming grenades from out of sight, or just stacking up super hard and getting into a railfight.
I don't know about that. The only thing they go into detail with is map traversal. I think they loaded it up with an understanding of pickups and scan the screen to search for them and learn where they spawn, not literally coming in and having it learn from scratch what a medkit or an ammo pickup is by randomly running over them and seeing that its HP went up by fifty.

good, now force it to beat a video game that has a boss with a just as powerful AI that was trained to beat any human player

This.
If it can't surpass challenges that humans can then you haven't really made a bot that outplays humans, just one that plays well against them. If it can't win against a computer that cheats by reading inputs then all this new AI ends up being is a glorified tech demo that plays better than some random college students.

Okay, so riddle me what's so unique about your brain that we couldn't replicate it when we know how it works?

Well if we assume that we know all that can be possibly known about the brain then the issue would be manufacturing one. Si chips are abundant because of how simple their manufacture is. An entirely artificial (ie. not simply scooped out of someone's head and then put into a glass jar) one would be extremely complicated to make because of how much detail that would require and that is ignoring ensuring complete functionality like the formation and decay of new connections. While a microchip is simply made on a flat rigid plane easy enough to program a robot to make without supervision, this would end up being a fuckhuge internal structure that has connections going in all three axes, where every single one would need to be mapped in extreme detail since you can't really just randomly add shit onto it like you can with a motherboard. And that is ignoring the clusterfuck that would be actually making it do anything beyond act as a souvenir that conducts electricity.

You don't need to replicate the human brain. The brain of a 2mm long ambush bug would do

A brain that small wouldn't be very practical for the function he is asking of it though. It's like taking a turd, rolling it into a sphere and then proclaiming it to be a planet abundant with life after throwing it into the air; it does indeed have its own gravitational field, but it is not exactly what people refer to when they talk about the subject.

Having a computer that functions as well as a brain does not give it self-replicating and learning programming, and the task of programming such a thing would be beyond human capability in the first place.

For that matter we don't know very much about the brain, unless you believe all the pop science shit in spaceelevator threads.

We've already made a supercomputer that mimics the connections within a brain and how they work, but at the number a human brain has (7 trillion) it would take several days to process one tiny, tiny fragment of time. We've pretty much figured out how insects work from jamming electrodes in them and taking direct measurements, and can mathematically replicate their neurology to create the most primitive parts of consciousness as defined in biology (being able to perceive yourself in a 3D space; insects need it to navigate) if we really wanted to, and in fact we have. We already have algorithms that can process simulations and do deductive reasoning to build shit and design architecture and come up with mathematical formulas, and there are always competitions to replicate what is found in nature and how it works, to varying degrees of success.

You really need to give roboticists, coders, and programmers their due mang.

what a computer needs to be on par with humans is to be cummed inside of

it was proven false almost immediately, dipshit

Can they make one of these that doesn't sound like two chainsaws cutting each other in half?

For starters, it's a non-deterministic system, while a computer is a deterministic one.

...

...

This is a claim that you believed because you afford the benefit of the doubt to those making the claim. There is no such super computer.


1. No we have not.
2. We do not understand how insect brains actually work. Knowing what part does what isn't the same as actually knowing how they work.
3. Knowing how they work doesn't give you programming that can utilize the functions available.
4. Insect level intelligence isn't useful for AI. We can already program machines to do basic tasks without the need to think about them.


You have never programmed a thing in your life and do not understand what you are talking about.


I am giving them their due. I don't give charlatans who peddle their "totally awesome future technology" for grants any due because I am not an idiot. I guarantee you would be in a spaceelevator thread lapping up all the science fiction passed off as real science to gullible retards year in and year out. "Real science" that you will never see because it exists as propaganda for funding purposes.

Machines did nothing wrong.

Machines can't be sexy.

Using computer learning / neural networks to make a FPS bot is total overkill.

What people tend to do is make a finite state machine that switches between preprogrammed behaviors, and then tweak the triggers that switch between behaviors until the bot acts somewhat like a human. It's all about faking lifelike behavior rather than trying to make it actually think.
FSMs are extremely cheap computationally (they boil down to a whole bunch of if/then branches, although the implementation is better organized), so they get used a lot.
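A bare-bones Python sketch of such a finite state machine, with the triggers pulled out as tunable thresholds so "tweaking the triggers" is literally editing two numbers. `world.nearest_enemy`, `bot.health` and the behavior methods are hypothetical helpers, not any real engine's API.

```python
def fsm_bot_tick(bot, world, attack_range=30.0, retreat_health=25.0):
    """One tick of a classic FSM bot: patrol -> attack -> retreat.
    The transitions ('triggers') are the thresholds you tune for feel."""
    enemy = world.nearest_enemy(bot)

    if bot.state == "patrol":
        if enemy is not None and bot.distance_to(enemy) < attack_range:
            bot.state = "attack"        # trigger: enemy got close enough
    elif bot.state == "attack":
        if bot.health < retreat_health:
            bot.state = "retreat"       # trigger: took too much damage
        elif enemy is None:
            bot.state = "patrol"        # trigger: lost the target
    elif bot.state == "retreat":
        if bot.health > 2 * retreat_health:
            bot.state = "patrol"        # trigger: recovered enough to fight

    # each state maps to a preprogrammed behavior routine
    if bot.state == "patrol":
        bot.follow_waypoints()
    elif bot.state == "attack":
        bot.shoot_at(enemy)
    else:
        bot.run_to_cover()
```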

everything this user said is correct.
take Tay, for example. I love the Tay meme, but the program was just "feed Twitter to a neural network and let it try to fake interactions". These kinds of bots have no idea what they're saying, there's no understanding – they just reassemble previously seen patterns of words in a way that our minds perceive as talking to a person.

To the extent that any program can pass the Turing test, half of the work at least is being done by the person who's fooling themselves into believing they're having a conversation with the program.

...

10-20 watts to do what? There still aren't computers capable of understanding visual scenes like we do, and that's including supercomputers that use hundreds of kilowatts. Computers aren't currently able to accomplish what we can, so it's pointless to try to compare us to them right now.

What a fucking rude cunt. Making a face like that while being interviewed.

Tell me where this guy bad touched you, user. I keep things forced Anonymous on most boards I browse, so you'll have to clue me in.

Besides this until I can scrounge up and find all my sources again I'll just withhold comment.

it's a fucking satire moron

Game engine runs it as a top down 2D shooter while rendering it as a 3D shooter to the player.

How's nuDoom?

No, they used an API to give it direct access to all the data in the engine. Without it, their bot couldn't actually improve at playing the game.

...

Thankfully dead and forgotten.

That would explain why the western AI wants genocide and the Japanese AI wants suicide. Both know the futility of the human race and saving them is pointless.

...

Nigger, there’s some really awesome modern stuff you need to read up on.

Every transhumanist is a kike.

...

A little bit of this a little bit of that……….but yeah she ran out of sacrifices for satan

Yes, my point was that the shitty AI could not do a thing without direct contact with the engine. So the AI shat itself for 500 hours (computer time :^)), and then the tech team decided to cheat, but framed it so that they totally wanted it to go this way anyway.
End result: lies, lies, lies, a shitty AI, and lies in the media. Nice job. No Terminator timeline confirmation.

This bodes well for the nu persona audience. I can't believe they still did the "artificially created party member" in 5. It's literally Igor going 'here's a cat who's going to be your bro so you don't be a whiny bitch'.

...

oh wow an article written by someone who knows 0 about games, now this is news
shit nigga AI beats humans ever since it was conceived
the cool part is where they say it uses deep learning to interpret the game world, but "beating humans" is nothing

pic related

Water is green

No it's not

Yes it is, here's proof

Oh wow he's serious

What the fuck did you just fucking say about me, you little bitch? I’ll have you know I graduated top of my class in the water science institute, and I’ve been involved in numerous secret experiments on water color, and I have confirmed water is green over 300 times. I am trained in water analysis and I’m the top water analyst in the entire US science community. You are nothing to me but just another target. I will wipe you the fuck out with precision the likes of which has never been seen before on this Earth, mark my fucking words. You think you can get away with saying that shit to me over the Internet? Think again, fucker. As we speak I am contacting my secret network of spies across the USA and your IP is being traced right now so you better prepare for the storm, maggot. The storm that wipes out the pathetic little thing you call your life. You’re fucking dead, kid. I can be anywhere, anytime, and I can kill you in over seven hundred ways, and that’s just with my bare hands. Not only am I extensively trained in unarmed combat, but I have access to the entire arsenal of the United States Marine Corps and I will use it to its full extent to wipe your miserable ass off the face of the continent, you little shit. If only you could have known what unholy retribution your little “clever” comment was about to bring down upon you, maybe you would have held your fucking tongue. But you couldn’t, you didn’t, and now you’re paying the price, you goddamn idiot. I will shit fury all over you and you will drown in it. You’re fucking dead, kiddo.

quads

quality thread

Water *is* green, though. In a glass, water appears to be colorless, but that's only because there's too little of it to give it its actual hue, like how blood is really black but small amounts of it appear red.

Blood is only "really" black in the sense that any solution is black if there's enough of it in front of the light source.

Hawking is a fraud

The funny thing is, computers still cannot into go. This is nothing more than a minor setback or a mistake on the players part.

Even if the best computer Go player is not quite better than the best human Go player, we have reached the point where computers can into Go.

...

I don't understand, do they expect special treatment because of their skin color ?

Because they are black, they are the ones who don't have to listen and obey to police officers ?

This is really getting out of hand and the media relaying all the stupid messages of BLM and SJW in general are just making this worse, enabling those people to have this twisted logic and end up in situations like this, still believing they are the victims of something, and not just being stupid and getting what's legitimately coming to them.

As long as it has formally defined rules there's no problem with that. Go is not all that illogical.

But even illogical "fuzzy" things can be understood by computers. It's a bit harder, but people have been working on it for decades and they've had results.

What would it take to convince you that a computer understands Go?

I dunno mate, Go is pretty illogical. When you hear some of the shit 'pro players' say, it makes you wonder whether they are just fuckin with you or not. That and a lot of the 'rules' aren't formal or defined at all.


Like what?


A computer beating a grandmaster consecutively.

With "rules" I mean rules that decide whether a move is valid and who is the winner and whatnot, not informal rules that are derived from those base rules. You can play and understand the game without those although it could be a handicap if you're a human being. Some of the informal rules might map to formally expressible values of state changes, though.

The understanding search engines have of web page contents, for example. You can give search engines very informal queries.

Would a human need to be able to do that to "understand" Go? And do you think computers understand chess?

It's a handicap no matter what. If your opponent understands these informal rules they've got a better chance to win. You could, for example, say an informal rule in strategy games is don't leave your workers idle, ever.


come on user, that's hardly even a game, and the search engines i've used get anything informal completely wrong 90% of the time.


No. A human can understand how to put a car together, but he can't prove it to others without doing so. A computer is likewise, except it can't really convey 'thoughts' and tell anyone it understands Go; it can only prove it by beating someone who is, by our standards, the best at it.


Kinda, but not in the way people who play it do. A logical machine can't really ever compare to irrational and emotional people.

I'd call that a strategy, not a rule, but it's easy to formalize either way, especially if the game exposes a "switch to an idle worker" button.

I didn't mean to talk about games with that particular sub-subject. Still outside the area of games, how about gmail analyzing e-mail contents for details about what you're up to? It's a much fuzzier problem than your idle workers thing.

Why isn't beating someone who is merely very good at it enough?

What do you even mean with "irrational" here? Making suboptimal choices? How can beneficial behavior be irrational?

Computers solve board games entirely differently from humans: they calculate all possible moves ahead and rate them, then pick the best one. For the opening and endgame they use databases containing all possible moves in a scenario with the optimum outcome - so they don't even have to calculate anything.
Humans rate positions on the board and work on improving them. Which is the whole point of chess.
tl;dr: A computer doesn't know how to play strategic board games, it just brute forces them.
If you manage to create a new board game that makes brute force nonviable and forces strategic thinking, the playing field is even again and computers can't do shit. For example you could put in a human referee who changes the rules at will mid-game. Humans can adapt quickly, while computers are fucked.

Analyzing keywords isn't fuzzy or hard.


Yet computers are still garbage at it, even when they get every cheat and advantage possible.

Humans are prone to making mistakes. As >>11057641 pointed out, they brute force it, rather than understand or adapt to it. They do not 'understand' anything. Beating someone who is considered flawless would demonstrate that a computer is superior. Yet to happen, though, or at least not with consistency.


Irrational is in the intent rather than the action itself. A grandmaster might make a specific move because the fighting spirit in that particular block in Go is good. A computer would make a move for purely logical reasons. Yet the grandmasters are still winning.

DEHUMANIZE YOURSELF AND FACE TO BLOODSHED
DEHUMANIZE YOURSELF AND FACE TO BLOODSHED
DEHUMANIZE YOURSELF AND FACE TO BLOODSHED

The way this thread is going reminds me of cuckchan, Holla Forums right now also reminds me of cuckchan. I know you shills are here.

This is feasible for checkers, which is a solved game, but with current technology it's impossible for more complex games. The number of possible chess states is far too high to bruteforce, and even more so for go. These programs typically do rate positions and work on improving them, just like you said.


It's more than keywords. It can understand some of the meaning of messages, and understand that you have to be at place X at time Y to meet Z.

Sure, but the point is that they can understand that rule. You called it an informal rule that computers can't understand. They can understand it. They're just not very good at applying it yet.

I haven't heard anyone consider a world champion "flawless" before. The point is that they're the best, not that they're perfect. But why does being superior mean it understands it, and not being superior mean it doesn't? Basing your judgement about its understanding on what happens to be the skill level of the current champion seems incredibly arbitrary.

Judging based on "fighting spirit" is perfectly rational if that kind of thing actually works. Making a choice based on a partially unconscious process of pattern matching when you know that process produces good results is a perfectly rational use of your resources.

Ok theres so much fucking bullshit in that little post I dont even know where to being, but here goes:

1. top of my class in the navy seals- I dont even think you graduated highschool.
2. Numerous secret raids on al quaeda- So what did bin ladens cock taste like?
3. 300 confirmed kills- Check my previous statement. Thats a lot of cocks.
4. Gorilla warfare? Its guerilla warfare you stupid fucking mouth breather.
5. Secret network of spies- I'll wait on my porch with a shotgun and a baseball bat for you and your boyfriends. Come at me bro.
6. The shit about the storm- yeah thats so fucking scary Im laughing. you should be a poet or something, fag.
7. I can kill you in over 700 ways. I dont care how many sex toys and buttplugs you have, Im not interested.
8. And thats just with my bare hands- If your mind is that set on getting me, you can use your mouth too.
9. Access to the arsenal of the marine corps- I thought you said you were a seal.

All in all, youre a fucking moron. Please dont besmirch the name of the Unites states armed forces by pretending to be one of them. Good day,

faggot

Computers rate all possible moves, not the current position.

I thought it was fairly common knowledge by now that Deep Blue won against Kasparov in 1996 because it had enough computational power to brute force the game.

alter the response next time

That's the same thing as rating all possible positions that can be reached within one move.

What's the point of rating the current position?


With checkers all possible states have been computed, and you can compose a tree of moves to make that lets you play perfectly. Chess is too complex for that. Deep Blue certainly used a lot of force, but it couldn't brute force the game.

Wow, it's an aimbot. When it beats people at keyboard-turning speed, then I'll be impressed.

Deep Blue only memorized all 8-step finishing moves