Technology will soon advance to the point of intruding on everyone's lives to an uncomfortable degree. Given this, it is clear that some day it would be highly advantageous to have areas inaccessible to AI. Conventional techniques obviously will not suffice against highly advanced systems. So the question is: how does one create a filter that reliably separates humans from AIs? A test built around users completing complex puzzles which AI (in theory) cannot solve would not be optimal. Rather, it is more sensible to create a test which directly filters between conscious and non-conscious beings.
This raises a number of fundamental questions about the nature of consciousness, and specifically, in this case, how consciousness is detected, both in the lie-detector and the metal-detector sense of the word. To answer this we need to understand what function consciousness serves, and how a human makes use of it. Once this is known, it may be possible to develop a number of proxies forming a 'checklist' of consciousness, one which may eventually become reliable enough to filter out all unconscious entities. To take another angle: if consciousness has a material or ethereal presence, how can that be detected? Applying this to a captcha-like system would be challenging for obvious reasons, but it may be worth a shot as a completely fail-safe filter.
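The 'checklist' idea could be sketched as a filter that admits a subject only if every proxy test passes. This is a minimal sketch under stated assumptions: the proxy names below (`plausible_latency`, `claims_breathing`) are hypothetical placeholders, since the thread never settles on what the actual proxies would be.

```python
from typing import Callable, Dict

ProxyTest = Callable[[dict], bool]

def build_checklist(proxies: Dict[str, ProxyTest]):
    """Combine proxy tests into one filter: all must pass to admit."""
    def run(subject: dict) -> tuple[bool, list[str]]:
        # Collect the names of every proxy the subject fails.
        failed = [name for name, test in proxies.items() if not test(subject)]
        return (len(failed) == 0, failed)
    return run

# Example with two made-up proxies; real ones would need to be far stronger.
checklist = build_checklist({
    "plausible_latency": lambda s: 0.3 <= s.get("latency_s", 0.0) <= 30.0,
    "claims_breathing": lambda s: s.get("breathes", False),
})

admitted, failed = checklist({"latency_s": 1.2, "breathes": True})
```

The design choice here matches the OP's "completely fail-safe" goal: the filter is conjunctive, so a single failed proxy is enough to reject.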
Of course the viability and convenience of such a thing, and the method of putting it into practice, are another question. But sooner or later it will prove absolutely necessary to have secure spaces which are not accessible to AI, and it is best to start developing a technology like this now, before it is too late.
ITT: Discuss methods to maintain security in the face of highly advanced AI.
We are not going to get clear definitions of consciousness or sapience, let alone tests for them, anytime soon, simply because it's impossible to make them politically correct. Any clear and usable definition would risk excluding people of certain races or with certain mental disabilities.
Grayson Howard
This is a shit test, as you can simulate consciousness. Consciousness amounts to being self-aware and able to learn things for one's survival. The better way to differentiate is between living beings and non-living beings, which is tested by whether they breathe or not. But there is no consistent, private, or secure way to communicate whether you breathe over electrons. Consciousness is just an inherited set of survival instincts passed from parent to child at birth, followed by the ability to learn to survive and be aware of yourself. Humans do this, animals do this, fish do this. They have an organ, usually the brain, that stores hundreds of thousands of trillions of patterns, both inherited through DNA/the blood and learned over a lifetime. But this too can be simulated, given enough computing power and time to learn nearly as many patterns as humans do.
Response latency would be a good test against this, though. The more a system has learned, the longer it would take to search through past data before forming a new pattern to account for new knowledge. But there is a cutoff here. An A.I. with too few learned patterns would spit out a shit response, but would do it in a human-like time. An A.I. with too much knowledge would give a human-like response to something new, but would take much longer, since it has to go through all the data it had previously collected. So even this is a flimsy test at best.
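The latency heuristic above could be sketched as a naive threshold check. The bounds here are invented placeholders; a real test would need empirical calibration, and (as the post itself admits) the whole approach is flimsy.

```python
import time

# Hypothetical bounds on plausible human response latency (milliseconds).
# These numbers are placeholders, not measured values.
MIN_HUMAN_MS = 300.0
MAX_HUMAN_MS = 30_000.0

def classify_by_latency(latency_ms: float) -> str:
    """Too fast suggests a small canned model; too slow suggests a large
    model grinding through its stored patterns; in between passes."""
    if latency_ms < MIN_HUMAN_MS:
        return "suspiciously fast"
    if latency_ms > MAX_HUMAN_MS:
        return "suspiciously slow"
    return "plausibly human"

def time_response(prompt_fn) -> tuple[str, float]:
    """Measure wall-clock latency of a response callback in milliseconds."""
    start = time.monotonic()
    answer = prompt_fn()
    return answer, (time.monotonic() - start) * 1000.0
```

Usage would be `answer, ms = time_response(ask_subject)` followed by `classify_by_latency(ms)`, where `ask_subject` is whatever callback poses the challenge.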
James Parker
This is not true. Consciousness is the viewer, the one that perceives; not the brain, nor even the thoughts within the mind. Those, along with self-'awareness', fall under 'intelligence'. You seem to misunderstand what is meant by consciousness: it is not "hundreds of thousands of trillions of patterns," it is the existence of a 'perceiver'. Think about this: who/what is observing your thoughts right now? And does an AI have that?
The problem with this is that AI could potentially replicate breathing. Consciousness is unique in that it is (generally believed to be) unreplicable outside natural reproduction. Of course it is hard to determine what is conscious or unconscious just through observation (with the technology currently available), but that's just another reason why these are important questions.
Systems advance and technology gets better all the time, so increasing recall speed would likely be nothing for a highly advanced AI. The idea is to have a failsafe system, because if even one AI slips through, it's all over.
Jacob Taylor
Perhaps one of the functions of consciousness is creation and originality. An AI can't be original; it merely spits out amalgamations of previously recognized patterns, or so it would seem with current technology, anyway.
I think we need to start with the question 'what can't AI do' and work backwards from there.
maybe it should be 'can AI make a good gondola'
Jason Roberts
Intelligence is just a measure of the amount of knowledge you have, not of quality/wisdom/foolishness or perception, nor of consciousness, as that is a combination of things. See the definition of conscious at Webster: archive.fo/QlEbo
>1: perceiving, apprehending, or noticing with a degree of controlled thought or observation
>3: personally felt
>6: having mental faculties not dulled by sleep, faintness, or stupor : awake
>7: done or acting with critical awareness
>8a : likely to notice, consider, or appraise
The third, fifth, and sixth definitions are essentially the same. The first, fourth, sixth, seventh, and eighth definitions are essentially the same. And the second definition is different. So, narrowing that list down:
>1: perceiving, apprehending, or noticing with a degree of controlled thought or observation
>3: personally felt
An A.I. cannot have feelings, because it is a machine, so the third definition wouldn't matter. An A.I. can be aware of an inward or outward state or fact, so it is conscious in that second sense. Finally, the first definition can only be simulated, as a repetition of patterns over time. But there is a set limit to how fast it can parse data; the brain has a data density much greater than any machine will ever have, unless they somehow start using things like ZPO for data storage.
You can't do that. They are barely growing organs in animals for use in humans via stem cells, let alone fitting a lung onto a machine; that's just absurd. And even if you did fit one onto a machine, however impossible, you would only be seeing the electrons of data and not an actual replication of the lungs. Hence why I said simulated, because that is all that is provable over the network. How does it perceive? By using information it has learned before to arrive at new knowledge, if it is physical.
But it can base things on as much data as you feed into it. Say you gave it over 9000 rare pepes and told it to make new ones based on the traits of all those rare pepes. All the new ones would be somewhat similar to the old ones, but not enough to notice a difference, as that is the point of the pepe designs: to be similar in having the frog in there.
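The "amalgamation" point can be made concrete with a toy generator: every output trait is sampled from values seen in training, so nothing outside the training set can ever appear. The trait names and values below are made up for the example.

```python
import random

# Toy training set of "pepes" described by a few hypothetical traits.
training_set = [
    {"frog": "smug", "hat": "fedora", "background": "plain"},
    {"frog": "sad", "hat": "none", "background": "sunset"},
    {"frog": "smug", "hat": "crown", "background": "plain"},
]

def generate(rng: random.Random) -> dict:
    """Recombine traits: sample each one independently from the values
    observed in the training set. Purely an amalgamation, never novel."""
    return {
        trait: rng.choice([example[trait] for example in training_set])
        for trait in training_set[0]
    }

rng = random.Random(0)
new_pepe = generate(rng)
# Every generated trait value is guaranteed to come from the training data.
assert all(new_pepe[t] in {e[t] for e in training_set} for t in new_pepe)
```

The combination (`frog`, `hat`, `background`) may be new, but each individual value is recycled, which is exactly the "amalgamations of previously recognized patterns" claim from the earlier post.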
Parker Adams
Case in point: an A.I.-generated rare pepe. Could you tell this was generated by an A.I.? I couldn't have, if I hadn't known exactly who posted it.
Jose Sullivan
Stop trying to use your own definitions; we are using mine here. Consciousness is the existence of an observer which, above everything else, observes the thoughts and other sensory input of a human. It is not the rational mind.
That's the mystery behind it. Very little is known about consciousness; it just exists.
Oliver Lopez
There's no such thing as "detecting consciousness". That's nonsense. Consciousness is a made-up concept; it doesn't serve a "function" because it doesn't exist. Whenever they try to look for "consciousness" and distinguish it from things like "learning" or "behavior", all they find is that all parts of the brain are important for "it". If you're talking about qualia, you're not going to detect that (if it exists at all, which is a tough question). By definition, qualia have no effect on behavior, which is the only variable you have to work with, and whatever behavior you test for, you can train an artificial intelligence to do it.