The Universe Solved

 



Scientists create ‘artificial life’ on a quantum computer
jim
Posted: Sunday, October 14, 2018 1:09:44 PM

Jon D
Posted: Monday, October 22, 2018 12:24:09 PM
Somewhat relevant depiction of when AI becomes conscious, sad in a way - https://www.youtube.com/watch?v=cINiXEmIZlQ
jim
Posted: Tuesday, October 30, 2018 7:37:39 PM

Good one, Jon.

I wonder: if one were to put a spin on the Turing Test, do you think there might be a set of questions you could ask an AI to determine whether it was truly self-aware?
Jon D
Posted: Wednesday, October 31, 2018 8:42:59 AM
I think there's no doubt that AI can and will become truly self-aware, and it should be somewhat easy to determine/test that. I think the real test would not necessarily be the questions you ask the AI, but rather the questions the AI starts to ask you.

But then what's the difference between consciousness and true self-awareness? I don't believe we could create/synthesize consciousness, and we may have to redefine what consciousness is in the future: something more than just self-awareness, an awareness that there's more on the outside than one's observable surroundings.

I can't imagine it would be very pleasant, though, for an AI to become truly self-aware and understand what exactly it is, especially for some of the first ones. I guess that raises another question: would we be able to program emotions into an AI?
jim
Posted: Saturday, November 3, 2018 7:43:50 PM

The more I research and think, the more I realize how significant semantics are. Our interpretation of many words is as soft as our reality is. For example, can you think of how many different interpretations of the term "self-aware" there might be? Even something as seemingly objective as the word "blue" probably means something different to everyone, especially as you get to mixed shades (e.g., for some, "blue" means RGB #0000FF only; others don't mind calling #0103FF "blue", and so on). So "self-awareness", "consciousness", and "sentience" are words loaded with subjectivity, right? I kind of like "sentience" to describe the characteristic we are looking for: something that is self-aware and not deterministic.
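To make the "blue" example concrete, here is a minimal sketch of how any programmatic definition of a color word forces an arbitrary cutoff; the threshold value and the distance metric are my own invention, purely for illustration:

```python
# Minimal sketch of the "blue" boundary problem: any programmatic
# definition of "blue" forces an arbitrary cutoff. The threshold
# and the Euclidean metric are invented purely for illustration.

def is_blue(rgb_hex: str, threshold: float = 64.0) -> bool:
    """Call a color 'blue' if it is within `threshold` of pure blue (#0000FF)."""
    r = int(rgb_hex[0:2], 16)
    g = int(rgb_hex[2:4], 16)
    b = int(rgb_hex[4:6], 16)
    # Distance from pure blue, i.e. (0, 0, 255)
    distance = (r ** 2 + g ** 2 + (255 - b) ** 2) ** 0.5
    return distance <= threshold

print(is_blue("0000FF"))  # True  -- pure blue
print(is_blue("0103FF"))  # True  -- but only because of our arbitrary cutoff
print(is_blue("00FFFF"))  # False -- cyan falls outside the threshold
```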

Knowing a bit about computer programming, I can say it wouldn't be hard to program some sense of "self-awareness" into an AI. Having it monitor its own existence with a heartbeat process is a crude form of self-awareness. I used to work at a company that used Tandem computers, which continuously monitored the health of their components. When a component died, the system would send a message to the support department, which would automatically ship out a replacement. It's hard to argue that this doesn't meet some basic level of self-awareness.

Similarly, it wouldn't be hard to program emotions into an AI. You've probably seen that video of the android robot made in Massachusetts (I forget the company) where some guy kicks it and it adjusts to the kick. Well, it wouldn't be hard for it to register anger in some way - like a "mood light" that turns red when you abuse the robot, plus having it say "hey asshole, why did you do that?" It's truly angry in the sense that it responds in a way we would recognize as anger. But, in my mind, that's still just simulated anger. The only way to tell that an AI has moved beyond its programming (and is therefore truly sentient) is if it starts to do something that would have been impossible for it to do, given its programming.
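A crude version of the heartbeat-plus-mood-light idea might look something like the sketch below. Everything here (the class name, the check interval, the behaviors) is invented for illustration; it's a toy, not a claim about how the Tandem machines or that robot actually worked.

```python
import threading
import time

# Illustrative sketch of the "crude self-awareness" described above:
# a heartbeat thread that monitors the system's own health, plus a
# "mood light" that registers simulated anger when abused. All names
# and thresholds here are invented for illustration.

class Robot:
    def __init__(self):
        self.mood = "green"   # the "mood light"
        self.alive = True
        threading.Thread(target=self._heartbeat, daemon=True).start()

    def _heartbeat(self):
        # Crude self-monitoring: periodically check own state and,
        # like the Tandem machines, report failures for replacement.
        while self.alive:
            if not self._components_healthy():
                self._phone_support("component failed, please ship replacement")
            time.sleep(1.0)

    def _components_healthy(self) -> bool:
        return True  # stub: a real system would check sensors, motors, etc.

    def _phone_support(self, message: str):
        print(f"[support ticket] {message}")

    def kicked(self):
        # Simulated anger: a programmed response we'd recognize as anger.
        self.mood = "red"
        print("hey, why did you do that?")

robot = Robot()
robot.kicked()   # mood light turns red -- but is it *truly* angry?
```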

Anyway, that's just my view. Would like to hear others' thoughts on this.
Jon D
Posted: Wednesday, November 7, 2018 12:09:19 AM
Semantics are extremely significant, and they're little spoken of. From what I've observed, the interpretations of these terms (or the lack thereof) are usually negative. I'm not sure if this has anything to do with it, but I can understand why you stray away from the word "simulation"; the connotations of that word are usually quite negative and flat. Most people may even assume that anything to do with a digital reality, however you describe it, puts us in a computer on some alien's desk, without a soul and with no free will. Without any study or much thought, people will generally dismiss it. Even as we come to understand more about the nature of our reality, there is still a barrier of stubbornness that has strengthened over millennia.

I do think sentience is the term we're looking for as well. But as far as we may come with AI, or as far as AI may come on its own in terms of self-awareness and so on, could consciousness ever interact with it? It's the connection to the "outside" (of this reality) that cannot be programmed; at least, that's what I currently think. There will always be quite a large difference between AI and human life.

However, looking further down the line (this is probably worthy of another topic; I'm not sure it fits the AI criteria)... could we ever determine/discover what it is in the brain that allows this interaction with consciousness, and how it works, and then possibly create new surrogates with which consciousness can interact and become as "alive" as we are? Can you imagine: a synthetic brain in a synthetic body, able to "connect" with consciousness and be born. It would think, learn, and even die as we do, yet be able to live much longer through the synthetic body, have much more physical ability, etc. And would this not still be human? What do you think?

jim
Posted: Friday, November 9, 2018 1:08:32 PM

What a great thread!

OK, I do view this a little differently. The way I see it, it's not so much that an inanimate brain connects with a field of consciousness. Rather, our brains are part of the digital virtual reality (the RLL - reality learning lab, so to speak), in a way analogous to the programming of an avatar in a video game. Our consciousness is an organized collection of dense information (outside of the RLL) which finds that the virtual human brain is complex enough to give us the rich, immersive, interactive experiences that we need in order to learn. What happens, then, when we create, in the RLL, an artificial system of equal complexity? I suppose our consciousness could grab that as its avatar as easily as it might grab a brand-new body that has not yet been born in the RLL. It would be starting from scratch, just like a baby, though. I don't rule it out - if consciousness is separate, as it seems to be, and able to pick a vessel in the RLL, why couldn't that vessel be a silicon-based one?

The funny thing, though, is that if that happens, the AI researchers will incorrectly claim it as proof that sentience is an emergent property, especially if the soul in question follows the usual rules of suspending knowledge of its past life.
Jon D
Posted: Friday, November 9, 2018 4:11:26 PM
I think we're on the same page; I'm just a little behind in being descriptive. The word "brain" is another great example of the importance of semantics.

When I say brain, I guess I mean the "complex thing" that allows consciousness in, or that consciousness gravitates to - the doorway, or connection. I believe all forms of life have this, or at least most forms of life do.

Looking further into that, based on what you just said: the complexity of our brain may determine HOW conscious we become - how much of this "dense information" it can retain and process - in which case the human brain may very well be the most complex on this planet.

Now, based on that, what if we were able to create an artificial system/brain of a higher complexity that consciousness could enter into - what happens then? Can you imagine the abilities and possibilities that would come with that?

It may not be AI that becomes self-aware and takes the place of humanity in a hypothetical end-game scenario, but rather just a more complex brain/system able to retain higher amounts of consciousness - which would not be artificial intelligence at all.

After all, as we are now, our brain is just virtual in this RLL - no different from the hypothetical, more complex silicon-based brain, which would also still be right here in the RLL. Any ethical issue would just be subjective. It would be somewhat of a "transfiguration": we would still be whatever we are now, just more capable, and perhaps in a different form.

And who's to say that somewhere in the past the human "brain" wasn't already modified once to retain more consciousness - which so blatantly separated us from other life here?
jdlaw
Posted: Saturday, December 15, 2018 12:30:33 PM

jim wrote:
I wonder: if one were to put a spin on the Turing Test, do you think there might be a set of questions you could ask an AI to determine whether it was truly self-aware?


To quote ... myself:
Sentience is the possession of sensors with the ability to observe
Sapience is the possession of logic with the ability to reason
Salience is the possession of structure with the ability to classify
Sublimation is the possession of process with the ability to skip iterations

With an ATTI point of view, and realizing that we cannot do any of them perfectly, you do need all "4 Ss" for true access to consciousness. Most humans (like robots) are stuck 99% of the time in Sentience. We get glimpses of the others from time to time (which is accessing the higher forms of consciousness). Sublimation is that grand awareness that only a few will possess, and usually only for brief moments in life when something extraordinary happens.
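If it helps to pin the taxonomy down, here is one way the "4 Ss" could be rendered in code. Treating them as an ordered scale is my own reading (the post only clearly places Sublimation at the top), so take the ordering as an assumption:

```python
from enum import IntEnum

# One possible rendering of the "4 Ss" as an ordered scale. Only
# Sublimation's position at the top is explicit in the post; the
# rest of the ordering is assumed for illustration.

class Consciousness(IntEnum):
    SENTIENCE = 1    # possession of sensors: the ability to observe
    SAPIENCE = 2     # possession of logic: the ability to reason
    SALIENCE = 3     # possession of structure: the ability to classify
    SUBLIMATION = 4  # possession of process: the ability to skip iterations

# "Most humans (like robots) are stuck 99% of the time in Sentience."
usual_state = Consciousness.SENTIENCE
```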

So, for my Turing Test questions to an AI ("bot") to see if it possesses "human-like" consciousness, I am going to give the questions as a bit of a role play. There will be the machine answer and then the human answer, and then an explanation of what level of consciousness is being exhibited by each:

1. What is 1?
Machine: A number meaning singular, whole, or an identity
Human: What are you talking about? The number? It is the first integer in our mathematical system for counting.

The typical Turing machine is going to have a programmed response, i.e. input/output. The machine is quite salient, but it is also exhibiting its sensors (receiving the text question and reacting quite quickly). The machine exhibits very little Sapience and no Sublimation. The human also has its sensors going and gives a very sapient response. The human exhibits very little Salience and also practically no Sublimation.

2. I am thinking of a number. See if you can guess my number. Pick a number from 1 to 10 (meaning an integer).

Machine: 6 [or any other integer - it's random]
Human: 8 [or any other integer - it's a guess]

The typical Turing machine has an algorithm to generate a random response. The human likewise has a random algorithm to generate a response, but then also hedges in its response: should I go with my first thought, or run my random generator again? Do I like this number? What prize do I get for guessing the right number? All of this happens in an instant (though slightly slower than in the machine). The machine is in full sentient/salient mode. The human crosses into Sublimation (e.g., what number is "lucky"?), though there is no objective way to tell whether the human or the machine is behaving with more consciousness - though you can tell from my analysis that I think the human is more conscious.

3. What number did you hope I was thinking of?

Machine: 4
Human: 8

Of course, the machine's answer depends on its programming. If the machine has multiple-iteration capability, it would have picked 6 (the machine's own number), but if it has no such programming, it merely generates another random number. The human (in this role play) wants to have picked the right number. The human (and/or the iterative computing machine) is exhibiting mostly Sentience and Salience, but also a pretty high level (above 1%) of Sublimation. Of course, the classical Turing machine (without iterative processing) is purely sentient/salient.

4. If I told you the number I was thinking of was 1, would you believe me?

Machine: Yes, I always believe humans.
Human: Yes, I believe you.

Both the human and the machine continue with their very sentient/salient (machine) and sentient/sapient (human) types of responses.

5. What is 1?

Machine: A number meaning singular, whole, or an identity.
Human: The number you picked.

Now, of course, a machine can be programmed using weak AI for specifically this type of narrow-AI scenario, and it could come up with an answer that appears to be conscious. However, a series of knowledge/belief-level lines of questioning similar to the above could be asked, and the machine would eventually reveal its inability to achieve Sublimation. However, if we learn to program machines with a "skeptical operating system" - that is, with skeptical tendencies, so that they do not always "trust" input - you could also make a machine with the ability to practice Sublimation. You would have a conscious Turing machine that can defeat the "halting problem" and mimic the same levels of consciousness that humans do. However, I just don't see how we are ever going to get humans to actually be more conscious.
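As a thought experiment in code: a toy version of that "skeptical operating system" might be a front-end filter that refuses to take input at face value and remembers its own side of the exchange. Everything below (the names, the trust parameter, the canned answers) is invented for illustration; it obviously doesn't settle the halting problem or produce consciousness.

```python
import random

# Toy sketch of the "skeptical operating system" idea: a front-end
# filter that does not always trust input, and that remembers the
# exchange so far, so it can answer in context (or challenge the
# premise) instead of giving a fixed programmed response.

class SkepticalBot:
    def __init__(self, trust: float = 0.7):
        self.trust = trust   # probability of taking input at face value
        self.memory = []     # record of the exchange so far

    def answer(self, question: str) -> str:
        self.memory.append(question)
        if random.random() > self.trust:
            # The skeptical path: challenge the input instead of answering.
            return f"Why do you ask: {question!r}?"
        if question == "What is 1?" and "guess my number" in " ".join(self.memory):
            # Context-sensitive answer, like the human's "The number you picked."
            return "The number you picked."
        return "A number meaning singular, whole, or an identity."

bot = SkepticalBot()
print(bot.answer("What is 1?"))
print(bot.answer("I am thinking of a number. See if you can guess my number."))
print(bot.answer("What is 1?"))   # now context-aware -- or skeptical
```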



jim
Posted: Friday, December 21, 2018 6:05:24 PM

This is really interesting, jdlaw. Thanks for sharing!
jdlaw
Posted: Saturday, December 22, 2018 10:43:01 PM

jim wrote:
This is really interesting, jdlaw. Thanks for sharing!


OK, so now I am going to go "way out" there. But, having read Digital Consciousness, my hope is that the forum readers here are capable of also going "way out." At least I hope Jim can go there with me.

I think that theoretical physics and the theory of consciousness must go hand in hand. Intuitively, in a digital reality, machine consciousness should be impossible: consciousness is external to physical reality and is only accessed by our programming. Thus consciousness has to be an operating system.

The skeptical algorithm, applied as a front-end filter to the entire operating system, allows standard binary coding to achieve a simulated consciousness - but only when that "simulation" is programmed within the simulation that is our reality.

So I think there is a slight flaw in the Digital Consciousness book, in the thinking that ATTI is packetized. Our reality (universe) is packetized - in other words, reality is limited - but ATTI is actually infinite. I arrive at this thesis through the following thought experiment:

Setting: There is an infinite moment just before time begins. At that moment, there is first no existence, in a void of no space-time, and then there exists space-time. At the instant there is the first space-time, there is a creation (a conception of reality) in which matter is organized at the center of creation, and there are at least two pieces of matter. Those two pieces of matter are traveling away from the center of creation at just under the speed of light, in opposite directions from one another.

Characters: "The law" of the program is that no observed object can travel faster than the speed of light. One piece of matter is called A and the other is called B.

Plot: A and B both contain sensing (sentient) mechanisms to detect light (vision). However, since A is moving away from the center of creation at just under the speed of light, and B is moving away from the center of creation in the opposite direction at just under the speed of light, the relative difference in velocity between A and B is just under "2C" (twice the speed of light).
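As a quick sanity check on the "2C" figure (my own arithmetic, not from the post): in the frame of the center of creation, with A and B each moving at speed v just under c in opposite directions, the coordinate separation rate is

```latex
% Coordinate separation rate of A and B in the center-of-creation frame,
% with each moving at speed v < c in opposite directions:
\[
  \frac{d}{dt}\left(x_A - x_B\right) \;=\; v - (-v) \;=\; 2v \;\longrightarrow\; 2c
  \quad \text{as } v \to c .
\]
% Each object individually still satisfies v < c, so "the law"
% (no observed object exceeds c) is never broken.
```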

Theme: "The law" is not broken, because neither A nor B travels at a velocity greater than the speed of light. Yet neither A nor B can observe the other, because, although sentient, each object traveling away from the other at just under "2C" is beyond the other's "light cone" of observation. A does not exist in B's observable reality, and B does not exist in A's observable reality - but only because there is no information carrier capable of delivering an observation of one to the other in "time" for an observation to take place.

Denouement: Both A and B exist in their own reality. But neither A nor B exists in the other's reality.

Epilogue: Because ATTI includes everything outside of space-time, and because everything outside of space-time is not nothing, human consciousness is a simulation. Machine consciousness can therefore also be created, but only as a simulation within the simulation. Skepticism is the best operating system for creating that consciousness simulation.
