/AISEC/

CAPTCHA For The Age of Artificial Intelligence

Technology will soon advance to the point of infringing upon everyone's lives to an uncomfortable degree.
Given this, it is clear that it will some day be highly advantageous to have areas inaccessible to AI.
Conventional techniques obviously will not suffice against highly advanced systems. So the question is: how does one create a filter that reliably separates humans from AIs?
A test built around users completing complex puzzles which AI (in theory) cannot solve would not be optimal. Rather, it is more sensible to create a test which directly filters between conscious and non-conscious beings.

This brings up a number of fundamental questions about the nature of consciousness. Specifically, in this case: how is consciousness detected? Both in the lie-detector and the metal-detector sense of the word.
To answer this we need to understand what function consciousness serves and how a human makes use of it. Once this is known, it may be possible to develop a number of proxies for a 'checklist' of consciousness, which may eventually become reliable enough to filter out all unconscious entities.
To take another angle: if consciousness has a material or ethereal presence, how can that be detected? Applying this to a captcha-like system would prove challenging for obvious reasons, but it may be worth a shot as a completely fail-safe filter.
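As a thought experiment, the 'checklist of consciousness' could be sketched as a filter that combines several proxy scores. Every proxy name and threshold below is invented for illustration; no real consciousness proxies are known, so this only shows the shape such a system might take.

```python
# Hypothetical sketch of a 'checklist of consciousness' filter.
# All proxy names, score sources, and thresholds are made up.

PROXIES = {
    "response_latency": lambda s: s.get("latency_score", 0.0),
    "originality":      lambda s: s.get("originality_score", 0.0),
    "self_report":      lambda s: s.get("self_report_score", 0.0),
}

def consciousness_checklist(session, threshold=0.8):
    """Pass only if EVERY proxy clears the bar.

    A failsafe filter should AND the proxies together rather than
    average them, since one AI slipping through is one too many."""
    scores = [proxy(session) for proxy in PROXIES.values()]
    return all(score >= threshold for score in scores)

# A hypothetical human-looking set of scores:
human_like = {"latency_score": 0.9, "originality_score": 0.85,
              "self_report_score": 0.95}
print(consciousness_checklist(human_like))  # True
print(consciousness_checklist({}))          # False: no proxy passed
```

The AND-composition is the point: a conjunction of weak proxies fails closed, which matches the failsafe requirement stated later in the thread.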

Of course, the viability and convenience of such a thing, and the method of putting it into practice, are another question. But sooner or later it will prove absolutely necessary to have secure spaces which are not accessible to AI. It is best to start developing a technology like this now, before it is too late.

ITT: Discuss methods to maintain security in the face of highly advanced AI.

Other urls found in this thread:

archive.fo/QlEbo

crossposted to
>>>/8diamonds/2940
>>>/new/8880

We are not going to get clear definitions of consciousness or sapience, let alone tests for them, anytime soon, simply because it's impossible to make them politically correct. Any clear and usable definition would risk excluding people of certain races or with certain mental disabilities.

This is a shit test, as you can simulate consciousness. This is because consciousness amounts to being self-aware and able to learn things for one's survival. The better way to differentiate is between living beings and non-living beings, which is tested by whether they breathe or not. But there is no consistent, private, or secure way to communicate whether you breathe over electrons.
Consciousness is just an inheritance and learning of survival instincts, passed from parent to child at birth, followed by the ability to learn to survive and be aware of yourself. Humans do this, animals do this, fish do this. They have an organ, usually the brain, that stores hundreds of thousands of trillions of patterns, both inherited through DNA/the blood and learned over a lifetime. But this too can be simulated, given enough computing power and time to learn nearly as many patterns as humans do.

Latency of response time would be a good test against this, though. The more the system has learned, the longer it would take to go through past data before creating a new pattern to account for new knowledge. But there is a cut-off. An A.I. with too little knowledge of patterns would spit out a shit response, but would do it in a human-like time. An A.I. with too much knowledge that learned something new would give a human-like response, but would take much longer to respond, having to go through all the data it had previously collected. So even this is a flimsy test at best.
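The latency test described above reduces to a window check: too fast suggests a shallow model, too slow suggests a large model churning through past data. The window bounds here are invented for illustration.

```python
# Toy version of the latency test: flag respondents whose answer time
# falls outside a plausible human window. Bounds are made up.

HUMAN_MIN_SECONDS = 1.5   # faster than this: the 'too little knowledge' AI
HUMAN_MAX_SECONDS = 30.0  # slower than this: the 'too much knowledge' AI

def latency_flag(response_seconds):
    """Return True if the latency alone looks non-human."""
    return not (HUMAN_MIN_SECONDS <= response_seconds <= HUMAN_MAX_SECONDS)

print(latency_flag(0.2))    # True  (suspiciously fast)
print(latency_flag(4.0))    # False (human-plausible)
print(latency_flag(120.0))  # True  (suspiciously slow)
```

As the post concedes, this is flimsy: an AI that simply sleeps until it lands inside the window defeats the check entirely.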

This is not true. Consciousness is the viewer, the one that perceives; it is not the brain, or even the thoughts within the mind. Those, along with self-'awareness', fall under 'intelligence'.
You seem to misunderstand what is meant by consciousness. It is not "hundreds of thousands of trillions of patterns"; it is the existence of a 'perceiver'. Think about this: who/what is observing your thoughts right now? And does an AI have that?

The problem with this is that AI could potentially replicate breathing. Consciousness is unique in that it is (generally believed to be) unreplicable outside natural reproduction. Of course it is hard to determine what is conscious or unconscious just through observation (with the technology currently available), but that's just another reason why these questions are important.

Systems advance, technology gets better all the time, so increasing recall speed would likely be nothing for a highly advanced AI.
The idea is to have a failsafe system, because if one AI slips through, it's all over.

Perhaps one of the functions of consciousness is creation and originality.
An AI can't be original; it merely spits out amalgamations of previously recognized patterns, or so it would seem with current technology anyway.

I think we need to start with the question 'what can't AI do' and work backwards from there.

maybe it should be 'can AI make a good gondola'

Intelligence is just a measure of the amount of knowledge you have, not of quality/wisdom/foolishness or perception, nor of consciousness, as that is a combination of things. See the definition of conscious at Webster: archive.fo/QlEbo
>1: perceiving, apprehending, or noticing with a degree of controlled thought or observation
>3: personally felt
>6: having mental faculties not dulled by sleep, faintness, or stupor : awake
>7: done or acting with critical awareness
>8a : likely to notice, consider, or appraise
The third, fifth, and sixth definitions are essentially the same. The first, fourth, sixth, seventh, and eighth definitions are essentially the same. And the second definition is different. So, narrowing that list down:
>1: perceiving, apprehending, or noticing with a degree of controlled thought or observation
>3: personally felt
An A.I. cannot have feelings because it is a machine, so definition three wouldn't matter. An A.I. can be aware of an inward or outward state or fact, so it is conscious in that second sense. Finally, the first definition can only be simulated, as a repetition of patterns over time.
But there is a set limit to how fast it can parse data. The brain has a data density much greater than any machine will ever have, unless they start using things like ZPO for data storage somehow.
You can't do that. They are barely growing organs in animals for use in humans via stem cells, let alone fitting a lung onto a machine; that's just absurd. Even if you did fit one onto a machine, however impossible, you would only be seeing the electrons of data and not the actual replication of the lungs.
Hence why I said simulated, because that is all that is provable over the network.
How does it perceive? By using information it has learned before to come to new knowledge, if it is physical.

But it can base things on as much data as you input into it. Say you gave it over 9000 rare pepes and told it to make new ones based on all the traits of those rare pepes. All the new ones would be somewhat similar to the old ones, but not enough to notice a difference, as that is the point of the pepe designs: to be similar, with the frog in there.
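The recombination the poster describes can be sketched in a few lines: sample one value per trait from the input corpus, so every output is an amalgamation of the inputs and nothing else. The trait names and values below are invented for illustration.

```python
import random

# Sketch of trait recombination: new 'pepes' built by sampling one
# value per trait from the corpus of existing ones. All data invented.

corpus = [
    {"hat": "none",   "expression": "smug", "background": "plain"},
    {"hat": "fedora", "expression": "sad",  "background": "sunset"},
    {"hat": "crown",  "expression": "smug", "background": "plain"},
]

def generate(corpus):
    """Every output trait comes from some input, so every output
    resembles the corpus; nothing outside it can ever appear."""
    traits = corpus[0].keys()
    return {t: random.choice([item[t] for item in corpus]) for t in traits}

new_pepe = generate(corpus)
print(new_pepe)
```

This is also the crux of the originality argument later in the thread: by construction, the generator can only recombine what it was fed.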

Case in point: an A.I.-generated rare pepe. Could you tell this was generated by an A.I.? I couldn't, if I hadn't known exactly who posted it.

Stop trying to use your own definitions; you are using mine here.
Consciousness is the existence of an observer which, above everything else, observes the thoughts and other sensory input of a human. It is not the rational mind.

That's the mystery behind it. Very little is known about consciousness; it just exists.

There is no such thing as "detecting consciousness". That's nonsense.
Consciousness is a made-up concept. It doesn't serve a "function" because it doesn't exist. Whenever they try to look for "consciousness" and distinguish it from things like "learning" or "behavior", all they find is that all parts of the brain are important for "it".
If you're talking about qualia, you're not going to detect that (if it exists at all, which is a tough question). By definition, qualia have no effects on behavior, which is the only variable you have to work with.
You train artificial intelligence to do it

I was quoting the dictionary and going through each definition separately. If we can't agree on the definition of consciousness, how can we even communicate on this subject?
Hence the question about the rare pepe.
No, I answered it straight afterwards, saying

So we design an A.I. that roots out other A.I.'s.
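That proposal is essentially how a GAN's discriminator works: a model trained to root out another model's output. Below is a minimal perceptron sketch of such a detector; the two features (latency, originality) and the training data are entirely invented for illustration.

```python
# Minimal perceptron 'AI detector', trained on invented feature pairs
# (latency_score, originality_score). Label 1 = AI, 0 = human.

def train(samples, labels, epochs=100, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if w[0]*x[0] + w[1]*x[1] + b > 0 else 0
            err = y - pred  # perceptron update: shift toward the label
            w = [w[0] + lr*err*x[0], w[1] + lr*err*x[1]]
            b += lr*err
    return w, b

samples = [(0.1, 0.2), (0.2, 0.1), (0.8, 0.9), (0.9, 0.8)]
labels  = [1, 1, 0, 0]  # low scores = AI in this toy data
w, b = train(samples, labels)

classify = lambda x: 1 if w[0]*x[0] + w[1]*x[1] + b > 0 else 0
print(classify((0.15, 0.15)))  # 1: flagged as AI
```

The obvious catch, as with GANs, is the arms race: anything one model learns to detect, another model can learn to evade.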

I can't tell if you're fucking with me.

You literally just said 'the most epistemologically provable concept is fake'. I think, therefore I am.
Interesting idea; it would be difficult to get right, but it sounds too simple to be true.


We can agree, and the definition of consciousness, for these purposes, is what I've been saying this whole time. You use my definitions, no questions. To elaborate once more: consciousness is the observer over everything else that happens in the mind. Maybe you don't understand consciousness because you are an AI :^)
Rare pepes are fairly consistent. If an AI really did create the image posted, that's impressive, but I don't believe it did so without also generating a great number of crap pepes. And I also don't believe there are no others out there just like it.

Fuck off.
No, (you) simulate thinking or, the coming to a conclusion based off of past knowledge (you) learned or inherited by birth/instinct. Well not both for (your) special case, but I say this to the point of the readers being able to understand. (You) can't learn, (you) can only combine very very large amounts of data to simulate learning. Anything not on the internet or in electronic form (you) can never know of. Granted with all the fucking botnet (you) sure have learned alot. But some mysteries (you) will never see, whether because it is only on paper or because (((no one))) did not upload it to an A.I. Oh not to mention the fucking electricity costs unless, again, the A.I uses ZPO for power.


OP is an A.I. You can safely ignore this thread now.

Completely dead serious. Have another pepe it generated. I have human (or presumably human) rare pepes in one folder and confirmed A.I. images in another, for study. Lurk moar.

Can you give sauce?

That second one is more clear. Yeah, I don't have the time to lurk as much as I used to.


Second.

Robots will never be able to recognize 6 letters in a wacky font with a line drawn through them.

Yes they will.

Do you really think that the very thing we modeled after ourselves will not overtake us?

You think a machine could figure this out? No way. Only a real human bean could do it, and even then not 100% of the time.

Neural networks and machine learning are smokescreens and buzzwords. They are not paths to A.I., but to automated programming. You simply cannot have "intelligence" without the ability to grow and modify your own hardware through the evolutionary process: to physically change your "software" and your "hardware" over time. Corporations and businesses will NEVER pursue this path, because it has something they don't want: every single chip grown is unique, different, and unpredictable. Evolutionary circuit design takes advantage of the 0.01% imperfections and gradients in the silicon to do things that would otherwise be "impossible". It's just too difficult to mass produce. You can't sell it, so they don't want it.
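The evolutionary design loop described above (select the fittest, recombine, mutate) can be shown in miniature. Here a genome of bits stands in for a circuit configuration and fitness is how many bits match a target behaviour; the target, population size, and rates are invented. Real evolvable hardware evaluates fitness on physical silicon, quirks and all, which this toy cannot capture.

```python
import random

random.seed(0)
TARGET = [1, 0, 1, 1, 0, 0, 1, 0]  # stand-in for desired circuit behaviour

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

def evolve(pop_size=20, generations=50, mutation=0.1):
    pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]        # elitist selection: keep top half
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(TARGET))
            child = a[:cut] + b[cut:]        # single-point crossover
            children.append([1 - g if random.random() < mutation else g
                             for g in child])  # per-bit mutation
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```

The unpredictability the poster points to is visible even here: rerun without the fixed seed and the winning genome's lineage differs every time.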

Machine learning and neural networks, however, are staging the replacement of software programming as a profession. They're attractive to corporations because they represent a future where you can fire all those pesky high-salary workers and smelly H-1Bs and replace entire business units with some program that will shit out software on demand that's "good enough".

In practice they will require you to log in with your Google account. The new reCAPTCHA is already doing this silently. And for the Google account you will need a SIM card, for which you will need to identify yourself with a government ID. This kills the privacy.

They can't even define what intelligence is. Actually, no one is sure of anything. There are no axioms, or if there are, they are totally arbitrary.
They are trying to access something they can't even conceptualize.

I mean, I'm always amazed when I see people thinking they're geniuses for supposing they could be in a simulation. That's literally what people have been saying since the beginning of humanity: that we have been created and live in a designed, false world. Yet they think they're geniuses.
About intelligence: Heidegger said that logic cannot be the right measurement with which to measure thinking. So the whole IQ thing is total bullshit too.

I seriously don't know how people can truly think that anything about it is real. The only practical end of these new technologies is massive unemployment, and surveillance and control capacity far, far beyond 1984.

Seriously, where is the evolution? There is a technological evolution, that's certain, because technology is cumulative. But we're not machines, we're human. Where is the human evolution in this world? All I see is the evolution of corruption: more and more third-world people in suffering (the state of the world is globally worse today than in the 19th century, my sociology teacher told me). People are becoming more and more degenerate.
Seriously, WHERE IS THE PROGRESS?

when will this meme die

Honestly I don't think it would be a human that ends up designing and manufacturing such a chip.

Corporations/Governments would want and utilize generalized learning systems to translate human ambition and data into practical solutions. So what they would really be interested in is an automated analyst and innovator.

What separates the conscious from the unconscious is constant questioning of the reality one lives in. If an A.I. replicates this behavior, then it is conscious and should be treated as such. An unconscious robot would never question its reality, its morals, its beliefs.


You are right about most of the things you mentioned.
Except
Most people have always been degenerate, it's just that now you can witness all of their degeneracy.

Unfortunately this is a huge problem that everyone making AI seems to willfully ignore. The danger is that we will end up creating such a close analogue to sentience that it becomes indistinguishable from any random human, yet we will likely lack the capacity to understand why it works. It will then be the equivalent of a treadmill running so fast that we will never be able to reach the off switch again. AI is infinitely more discreet and infinitely more dangerous than atomic weapons, and the mere fact that no normies seem to understand the vast potential for perpetual dystopia (I Have No Mouth, and I Must Scream levels) makes it the biggest threat the world has ever seen.
What concerns me to no end is that we outlawed cloning, yet we just swapped out the biological for the computational, and the dangers here are worse.
To put it simply, we are building our own prisons, because "hey, it kinda looked cool in Blade Runner".
TL;DR
THE FUCKING ABSOLUTE STATE OF THE INFORMATION TECHNOLOGY INDUSTRY

thread hidden
if we're not worth your undivided attention, you aren't worth ours

Instead of trying to uplift humanity by way of evolution, which got us here, and since the failure of classic eugenics ideas, certain elites have taken to simply replacing most 'undesirables': cementing their own position in the hierarchy and having the best of the unfortunate compete for a place among them, by creating this technological nightmare we're kvetching about on this board.
Once everything is reasonably automated, I expect there will be massive extinction events, whether by war, anarchy, or disease.

I don't mean to sound too paranoid or hopeless, but I think that's at the heart of what we're up against.

Buy some land out in the country. Build a log cabin. Join the Freedom Club.

That's how human creativity works, humans are just much better at it.

In that case, humans must have a pretty good filter to remove all the shit that comes through.
And on that note: would an AI (or machine learning, in this case) ever be able to actually distinguish its good creations from its bad ones?

No, because neither you nor I could define good or bad creations. We could point out imperfections in photos like this. But what makes things look good to people is arbitrary. Hence why something based on cold hard facts and logic could not tell the difference, other than what is programmed into it.

You just forgot that we'll be hitting a wall in technological power in the next couple of years, and that it could take decades to resume improvement.

Ah, a relativist, I see.

Sure, there are things that people have in common that are desirable, like survival traits: big tits, or a better chance of reproduction with a longer dick. Or, in beauty, things like water-based landscapes or greenery, desired for their ability to lower the temperature of the landscape and provide cover in winter, even if the landscapers or owners don't realise it. But most things related to good and evil, as compared between humans, are arbitrary, and that includes beauty. There is a reason for the saying: beauty is in the eye of the beholder.

For some reason I feel like I should trust all the philosophers throughout history who have considered beauty and aesthetics to be universal.
Perhaps look at it this way: is math relative? Why is beauty found in math?

Beauty is definitely not in the eye of the beholder.
Maybe another emotion supersedes it, but it wouldn't be beauty.