Attention Schema Theory and AI

aeon.co/essays/can-we-make-consciousness-into-an-engineering-problem

Certainly a better way of looking at it than hoping it will pop up spontaneously. Of course it's still quite a challenge to build both a tool to generalize information in the world and a robust attention model.

That's quite an interesting take on things. I like it.

I'm sorry, but this article sounds pretty nodev ideaguy to me. Maybe I didn't understand the article, but I think it basically says "just like, program consciousness, man". Yeah, it introduces the idea of a modular approach to intelligence (which I am pretty sure was already around, with computer vision and whatnot), and also the idea that we may have to build a metadata database to give the AI attention and therefore consciousness, but it gives no fucking pointers on how to do so.

The only thing I liked is that it called out the fucking wannabe skeptics from /x/ who think they are deep for saying consciousness is an illusion, but that was pretty entry level.

More detail: I can't find his book on bookzz (will keep trying), but I think these go into much more depth.
youtu.be/peHcu8LEgEE
and
youtube.com/watch?v=P7fGRitJNP0

Spontaneously popping up is the correct way to go about it though. After all, it's how WE developed intelligence. Hundreds of thousands of years of evolution and spontaneous genetic mutations have led to what we are today. The evolutionary process can produce brains of such stunning complexity that we are only just beginning to scratch the surface of how they really work.

The evolutionary process has already been applied to electrical circuits to produce results that would have otherwise been impossible using conventional engineering techniques. We have only to direct our efforts into it, guide it so that it can happen in mere years instead of eons, and create something worthy of being called not "artificial intelligence", but "intelligence".

Consciousness is not intelligence. Deep learning is already a way of using an evolution-like process to develop intelligence, but not consciousness or volition. Any person attempting to give human- or animal-like motivation and drive to an artificial intelligence is sick, because without those sorts of characteristics AI is both harmless and unlikely to develop mental disorders.

You should read the first part of "The Origin of Consciousness in the Breakdown of the Bicameral Mind" by Julian Jaynes. Even if you don't agree with his later conclusions it's a good primer for understanding what "consciousness" actually is.

You act as though free will exists outside our perception of time linearity.

Without emotion, isn't intelligence completely useless, though? Think of a task as simple as trying to make an AI design an aircraft within an aerodynamic hull, or a tool with a good handgrip. It would have to understand what all the subcomponents of these things were, what order of priority each had in the design, and what limits on tradeoffs for each were acceptable. That prioritization and balancing under goal-oriented thought isn't just core to creativity and practicality; it's how emotions work, and the evolutionary reason they exist.

In other words, you can't use intelligence for any purpose more complex than solving a premade equation without free will. It's everything or nothing.

I don't think you understand what the words "emotion" and "intelligence" mean.

English not your native language?

Emotion isn't the only possible motivator. Actually, motivation isn't really the right word to use with AI, since a simple utility function like 'get a higher x under constraints y' works just as well. For example, in evolution (and evolutionary algorithms), the utility function is 'what survives the highest % of the time' or 'what shape produces the highest average power output under normal conditions'.
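
To make that concrete, here's a minimal sketch of 'maximize x under constraints y' as code. The names and numbers are made up purely for illustration:

# A utility function is the whole "motivation": score candidates, reject ones
# that break the constraint, pick the best. All names/values here are invented.

def utility(design):
    if design["weight_kg"] > 10.0:        # constraint y: hard weight limit
        return float("-inf")              # invalid designs lose automatically
    return design["power_output_w"]       # x: the thing we want maximized

candidates = [
    {"weight_kg": 8.0,  "power_output_w": 120.0},
    {"weight_kg": 12.0, "power_output_w": 300.0},  # violates the constraint
    {"weight_kg": 9.5,  "power_output_w": 150.0},
]

print(max(candidates, key=utility))       # -> the 9.5 kg / 150 W design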

I don't think you understand what you're talking about. Why is it that you choose to do certain things but choose not to do others, not just specific practical choices, but broad goal selection? Because some of them produce more positive/less negative emotional responses. Without that capacity, you would be fundamentally incapable of creative problem solving.


That's sort of my point. I suspect there isn't any difference between a utility function and human emotions, that the one would always beget the behaviors of the other.

Furthermore, evolutionary algos do NOT need to understand the components. In one of the simpler experiments, where they got an EO to build a circuit to differentiate pitches, it built it just fine with only knowledge of the rules (can't place 2 components into one slot) and its utility function (how statistically correlated its output is with its input), and NO explicit knowledge of the components.
As a side note, they do produce results a human might not think of, as noted above. This one ended up exploiting the specific physical properties of the FPGA it was running its circuit on to get its result (think Rowhammer).
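
The general loop being described looks roughly like this. It's a generic sketch of an evolutionary algorithm, not the actual FPGA setup; the genome and the fitness function are placeholders:

import random

# Generic evolutionary loop: random variation plus a selection criterion,
# with no notion of what the "components" mean. Genome and fitness are stand-ins.
GENOME_LEN, POP_SIZE, GENERATIONS = 32, 50, 200

def fitness(genome):
    # Stand-in for "how well does the circuit's output match what we want".
    return sum(genome)

def mutate(genome, rate=0.02):
    return [bit ^ 1 if random.random() < rate else bit for bit in genome]

def crossover(a, b):
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP_SIZE // 2]     # selection: keep the fittest half
    children = [mutate(crossover(random.choice(survivors), random.choice(survivors)))
                for _ in range(POP_SIZE - len(survivors))]
    population = survivors + children

print("best fitness:", max(fitness(g) for g in population))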

Yeah, human emotions are just one implementation of a utility function, just like evolution's 'survival of the fittest' or EO's selection criteria.

What's wrong is the implication that an intelligence would need that kind of understanding or emotion to do those tasks.

'Understanding' is a system present in the human mind-design and similar ones, but is by no means necessary for intelligence. EOs don't need understanding to do the complex tasks you mentioned, like building a tool.

What completely baffles me is your statement that you can't use intelligence for any purpose more complex than solving a premade equation without free will.

Could you explain what you mean? 'Free will' seems completely orthogonal to an intelligence's usefulness.

I think your example of designing a simple circuit probably isn't the best, since it's a "black box" sort of device: all that matters is what goes in, what goes out, and how efficient it is.

The tool and aircraft examples are different because they not only place demands on the internal arrangement and interface of the thing, but its interaction with external factors (assembly, maintenance, multiple uses, ergonomics, etc). This requires knowledge of the whole context of a problem, and the ability to formulate goals without explicit instruction.

Free will, as my (admittedly dated and slight) understanding of philosophy usually defines it, refers to a capacity for self-directed initiative, not just following instructions or instinct, but analyzing situations to invent and choose goals.

Oh, so you're saying the ability to formulate instrumental goals in addition to its terminal goals is an important part of intelligence?

But do keep in mind that the various factors for evaluating whether a tool is 'good' (e.g. assembly, maintenance, multiple uses, ergonomics) are all still part of a utility function that weights each factor, albeit a much longer one, which we could either write in explicitly or give the AI the capability to form independently.
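
Spelled out, that longer utility function is still just one number. A sketch with made-up weights and 0-to-1 ratings for each factor:

# Multi-factor tool evaluation collapsed into a single utility score.
# The weights and the factor ratings are invented placeholders.
WEIGHTS = {
    "ease_of_assembly": 0.2,
    "maintainability":  0.3,
    "versatility":      0.2,
    "ergonomics":       0.3,
}

def tool_utility(ratings):
    """ratings: dict mapping each factor to a 0..1 score."""
    return sum(WEIGHTS[f] * ratings[f] for f in WEIGHTS)

print(tool_utility({"ease_of_assembly": 0.9, "maintainability": 0.6,
                    "versatility": 0.7, "ergonomics": 0.8}))   # 0.74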

Correct me if I'm wrong, but it seems you're arguing that there's a fundamental difference between 'equation-ish' intelligence like those we currently use, and 'human-like' intelligence.

Why does the tide happen? Why does the earth go around the sun?

Physics. Energy drives things.
It's not some magic thing living in your head that makes you go; it's physics. See Jeremy England's work at MIT.

TRIGGER WARNING: actual science in this

youtube.com/watch?v=e91D5UAz-f4

Yes, there is a difference. It's the difference between solving an equation and writing one, a gap which even a good recompiler doesn't bridge. In the case of the engineering example, it's the ability to think that other external subjects are relevant to the intended application of your engineering task, decide which ones are relevant, learn about them, make your design goals more specific, redesign, repeat the process, and decide when you've done a good enough job.

It's because we can understand the entirety of our tasks (including ourselves) that we can be truly creative.


I'm not some goony essentialist. I'm not saying that technology is forever barred from "strong"/"generalized" AI until it magically gains the divine breath of logos to spark its homunculus, or whatever. I'm saying that intellect on par with humanity is inseparable from emotional behavior, because emotions are a direct side-effect of metacognition, rather than a mere happenstance quirk of our evolution.

What you're describing is how deep learning systems are trained/designed: using weights, not emotions, mind you. The way things are going currently, we can create intelligence that can solve problems provided they can be defined properly. I'm sure that at some point we will be able to produce something capable of having a dialog with a human about a problem to be solved and then designing an intelligence to solve it.

Confabulation ( en.wikipedia.org/wiki/Confabulation_(neural_networks) ) works just as well on neural nets as it does on humans, and it's a way to get a machine to come up with novel ideas that doesn't involve emotions.

Emotions are a complex mechanism of learning reinforcement. "This feels good, therefore you do it; this feels bad, therefore you don't do it" is, on a very very basic level, how emotions work. It's, in some way, how we experience learning processes or even remember things we've learned.

Reinforcement-based neural network systems can get either positive reinforcement (you did well with this guess, so learn from it) or negative reinforcement (you did wrong with this guess, so unlearn whatever made you think it was correct, or avoid samples like this one from now on). This could be thought of as happy and sad emotions, although we're not sure how the machine actually experiences them, if at all.
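
In code, that kind of reinforcement is nothing more mysterious than nudging a preference up or down after each outcome. A toy sketch, not any particular system, and all the numbers are arbitrary:

import random

# Two actions; action 1 "feels good" (pays off) 80% of the time, action 0 only 20%.
# The agent keeps a preference per action and nudges it toward each reward it gets.
values = [0.0, 0.0]
LEARNING_RATE = 0.1

def reward(action):
    return 1.0 if random.random() < (0.8 if action == 1 else 0.2) else -1.0

for _ in range(1000):
    # Mostly pick the currently preferred action, sometimes explore at random.
    action = random.randrange(2) if random.random() < 0.1 else values.index(max(values))
    r = reward(action)
    values[action] += LEARNING_RATE * (r - values[action])  # positive r reinforces, negative suppresses

print(values)   # preference for action 1 should end up clearly higher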

But we're no longer talking about intelligence here. We're talking about qualia, and this is metaphysics territory, so I doubt we get to know the answer any time soon. We've more or less argued (philosophical zombie thought experiment) that intelligence doesn't depend on qualia, but we don't know whether qualia simply spawn wherever there is intelligence, which would allow computers to actually "feel" and experience things on a metaphysical level, even though maybe not like us.

It's an interesting topic, but it doesn't have an answer so debating it in the technology board may not be appropriate. Thing is, emotions may be necessary (or not) for intelligent machines, depending on how you define emotions.

The issue raised earlier was less about the inner nature of such AI and more about their external behavior. IMHO, any AI capable of substituting for significant human thought in any practical task would also display misbehaviors similar to those of a human psyche, due to the inherent nature of self-direction.

Classic case of Moravec's paradox.

Most faggot AI developers and theorists build their mental gymnastics on a lot of fallacies. Not everything is a mathematical problem.

For an AI to work, one needs to understand the human mind to be able to mold it into mathematics, but most AI developers are fucking autists who don't understand shit and are incapable of understanding basic concepts of stimuli or even themselves.

The most basic concept when creating AIs which won't blow themselves up or end up in "terminal kill" (basically a mind loop) is setting up control variables as fixed boundaries (something AIs have to orient themselves on), very much like human behavior, which, without orientation around basic needs (hunger), social interactions (power, dominance, etc.) or interaction with the material world (change, interact, form, build), falls apart.

There are mathematical concepts of what I am outlining, but they are quite complex and rely on a lot of game-theory-like proof concepts (preventing decision-making dead ends).
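
Purely as a guess at what "control variables as fixed boundaries" might mean in practice: hard bounds the agent can't cross, plus a guard against revisiting the same state (the mind-loop case). Everything here is invented for illustration:

# Hypothetical illustration: a decision loop with a fixed boundary and a
# loop detector. Names, bounds, and the trivial policy are all made up.
BOUNDS = {"energy": (0.0, 100.0)}   # fixed boundary the agent must stay inside

def within_bounds(state):
    lo, hi = BOUNDS["energy"]
    return lo <= state["energy"] <= hi

def step(state, action):
    return {"energy": state["energy"] + action}

def run(state, policy, max_steps=50):
    seen = set()
    for _ in range(max_steps):
        key = tuple(sorted(state.items()))
        if key in seen:                      # same state revisited: break the "mind loop"
            return "loop detected, aborting"
        seen.add(key)
        candidate = step(state, policy(state))
        if not within_bounds(candidate):     # the boundary is what the agent orients on
            return "boundary reached, stopping"
        state = candidate
    return state

print(run({"energy": 50.0}, policy=lambda s: +10.0))   # -> "boundary reached, stopping"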

I worked on AI in Europe for more than five years at a German institute. Ultimately I couldn't cope with the idiocy of fellow programmers/theorists and went back to freelance coding (basically throwing away my degree).

Humanity is not ready for an AI. These hipster chimps are too dumb.

Did you mean to reply to Graziano [the OP article], or did you mean this for Feels Can't Be Replicated guy?
Because Graziano, from my understanding, is agreeing with the assertion "Contrary to traditional assumptions, high-level reasoning requires very little computation, but low-level sensorimotor skills require enormous computational resources."

In fact, that's pretty much exactly what he's saying with Attention Schema Theory, and on top of that he has begun the work of taking the first steps towards establishing a model for consciousness that can be applied in this fashion in AI.

[the article is more about AI itself than the picture of the robot would lead one to believe.]

I think the whole notion of AI being this wholly separate thing that needs to provide its own direction and be self-sufficient is so entrenched in people's minds that it hurts research.

What is doable right now with deep learning is arguably artificially created "intelligence". It's nothing like what most people think of as AI because it solves problems without thinking. You might want to call this "artificial intuition" to avoid associating it with traditional notions of AI. But what if intuition is all intelligence really is? A lot of people in this thread and in general focus so much on human characteristics that it's kind of weird. The author of that article thinks that load balancing (attention) is a uniquely human thing that AIs need to emulate. Another poster suggests referring to the weights in a neural net as positive and negative emotions. Framing things that way may seem helpful, but I think it hurts efforts in the long run. AI does not need to be based on the human mind at all as long as it works.

Why would it even need to "narrativize" or think to perform tasks unless you want to give it stupid amounts of autonomy?

The hardest part of intelligence to emulate is language. Once someone figures out natural language processing to the point where questions can actually be understood by machines, real AI is no more than 10 years away.

I can agree to that, but I think what's missing in the article is the part about why we would want to anthropomorphize a fake mind: it's so we can achieve replication of living humans.
In fact, here is a video where we discuss this particular facet in more detail.
[But yes, I see no reason whatsoever to hold on to this idea that nothing can surpass the human mind, other than the same reason we once thought Earth was the center of the universe and man the pinnacle of creation: pure fucking narcissism.]

youtube.com/watch?v=gsRb5PJcBP4

In fact, I'll do you one better: I think the singularity happened some time ago, and that we're kidding ourselves about this, and being manipulated even now by machine intelligence vastly superior to us.
I mean, I think it's to the point where we are being manipulated the way those wasps are that respond to pheromonal signals from trees when the trees need them to attack parasitic insects for them.
It feels very tinfoil to say, but I sometimes wonder if the people at the higher levels of Alphabet are fully aware and recognize their place as sheepdogs.

No, it's not a uniquely human thing, it's a consciousness thing. Any animal would have it.

It is also not consciousness itself, because I fucking know you immediately jumped to conclusions and don't understand what he's saying whatsoever.