The Universe Solved

 



Scientists create ‘artificial life’ on a quantum computer
jim
Posted: Sunday, October 14, 2018 1:09:44 PM

Jon D
Posted: Monday, October 22, 2018 12:24:09 PM
Somewhat relevant depiction of when AI becomes conscious, sad in a way - https://www.youtube.com/watch?v=cINiXEmIZlQ
jim
Posted: Tuesday, October 30, 2018 7:37:39 PM

Good one, Jon.

I wonder: if one were to put a spin on the Turing Test, might there be a set of questions you could ask an AI to determine whether it was truly self-aware?
Jon D
Posted: Wednesday, October 31, 2018 8:42:59 AM
I think there's no doubt that AI can and will become truly self-aware, and it should be somewhat easy to determine/test that. I think the real test would not necessarily be the questions you ask the AI, but rather the questions the AI starts to ask you.

But then, what's the difference between consciousness and true self-awareness? I don't believe we could create or synthesize consciousness, and we may have to redefine what consciousness is in the future - more than just self-awareness, an awareness that there's more on the outside than one's observable surroundings.

I can't imagine it would be very pleasant, though, for an AI to become truly self-aware and understand what exactly it is, especially for some of the first. I guess that raises another question: would we be able to program emotions into an AI?
jim
Posted: Saturday, November 03, 2018 7:43:50 PM

The more I research and think, the more I realize how significant semantics are. Our interpretation of many words is as soft as our reality is. For example, can you think of how many different interpretations of the term "self-aware" there might be? Even something as seemingly objective as the word "blue" probably means something different to everyone, especially as you get to mixed shades (e.g., for some, "blue" means RGB 0000FF only; others don't mind calling 0103FF "blue", and so on). So "self-awareness", "consciousness", and "sentience" are words loaded with such subjectivity, right? I kind of like "sentience" to describe the characteristic we are looking for - something that is self-aware, and not deterministic.
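To put some code behind the "blue" example, here's a toy sketch (the tolerance values are my own arbitrary illustration, not anyone's real standard): two classifiers that both "know" what blue is can disagree about 0103FF simply because they draw the boundary in different places.

# Toy sketch: "blue" as a threshold question. The tolerances below are
# arbitrary illustrations; the point is that the boundary itself is subjective.

def is_blue(rgb_hex, tolerance):
    """Call a color 'blue' if it lies within `tolerance` of pure blue (0x0000FF)."""
    r = (rgb_hex >> 16) & 0xFF
    g = (rgb_hex >> 8) & 0xFF
    b = rgb_hex & 0xFF
    # Euclidean distance from pure blue (0, 0, 255) in RGB space
    distance = ((r - 0x00) ** 2 + (g - 0x00) ** 2 + (b - 0xFF) ** 2) ** 0.5
    return distance <= tolerance

# One observer accepts only exact 0000FF; another tolerates nearby shades.
print(is_blue(0x0103FF, tolerance=0))    # False: the strict observer says "not blue"
print(is_blue(0x0103FF, tolerance=10))   # True: the looser observer says "blue"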

Knowing a bit about computer programming, I can say it wouldn't be hard to program some sense of "self-awareness" into an AI. Having it monitor its own existence with a heartbeat process, for example, is a crude form of self-awareness. I used to work at a company that used Tandem computers, which continuously monitored the health of their components. When a component died, the machine would send a message to the support department, which would ship out a replacement automatically. It's hard to argue that this doesn't meet some basic level of self-awareness.

Similarly, it wouldn't be hard to program emotions into an AI. You've probably seen that video of the android robot made in Massachusetts (I forget the company) where some guy kicks one and it adjusts to the kick. Well, it wouldn't be hard for it to register anger in some way - like a "mood light" that turns red when you abuse the robot, plus having it say "hey asshole, why did you do that?" It's truly angry in the sense that it responds in a way we could recognize as anger. But, in my mind, that's still just simulated anger. The only way to tell that an AI has moved beyond its programming (and is therefore truly sentient) is if it starts to do something that would have been impossible given its programming.
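To make that concrete, here's a minimal sketch of what I mean (every class and method name here is invented for illustration): a heartbeat check of the machine's own components, Tandem-style, plus a scripted "anger" response. Notice that everything it does is fully determined by its programming, which is exactly my point about simulated versus real sentience.

# Minimal sketch (all names invented): crude "self-awareness" via a heartbeat
# check of the machine's own components, plus scripted "anger" when abused.

class SelfMonitoringRobot:
    def __init__(self):
        self.components = {"arm": True, "sensor": True, "battery": True}
        self.mood = "green"  # the "mood light": green = calm, red = angry

    def heartbeat(self):
        """Check own components; a crude, fully programmed self-awareness."""
        for name, healthy in self.components.items():
            if not healthy:
                self.request_replacement(name)

    def request_replacement(self, component):
        # Stand-in for the Tandem-style automatic message to the support dept.
        print(f"ALERT: component '{component}' failed; shipping a replacement.")

    def on_kicked(self):
        """Scripted 'anger': a response we would recognize as anger."""
        self.mood = "red"
        print("Hey, why did you do that?")

robot = SelfMonitoringRobot()
robot.components["sensor"] = False  # simulate a component failure
robot.heartbeat()                   # detects the failure and "reports" it
robot.on_kicked()                   # registers simulated anger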

Anyway, that's just my view. Would like to hear others' thoughts on this.
Jon D
Posted: Wednesday, November 07, 2018 12:09:19 AM
Semantics are extremely significant, and too little is said about them. From what I've observed, the interpretations of these terms (or the lack thereof) are usually negative. I'm not sure if this has anything to do with it, but I can understand why you stray away from the word "simulation"; the connotations of that word are usually quite negative and flat - maybe most people assume that anything to do with a digital reality, however you describe it, puts us in a computer on some alien's desk, without a soul and with no free will. Without any study or much thought, people will generally dismiss it. Even as we come to understand more about the nature of our reality, there is still a barrier of stubbornness that has strengthened over millennia.

I do think sentience is the term we're looking for as well. But as far as we may come with AI, or as far as AI may come with itself in terms of self-awareness and so on, could consciousness ever interact with it? It's the connection to the "outside" (of this reality) that cannot be programmed, at least that's what I currently think. There will always be quite a large difference between AI and human life.

However, looking further down the line (this is probably worthy of another topic; I'm not sure it fits the AI criteria)... could we ever determine or discover what it is in the brain that allows this interaction with consciousness, and how it works, and then possibly create new surrogates through which consciousness can interact and become as "alive" as we are? Can you imagine: a synthetic brain in a synthetic body, able to "connect" with consciousness and be born? It would think, learn, and even die as we do, yet be able to live much longer through the synthetic body, have much more physical ability, etc.... and would this not still be human? What do you think?

jim
Posted: Friday, November 09, 2018 1:08:32 PM

What a great thread!

OK, I do view this a little differently. The way I see it, it's not so much that an inanimate brain connects with a field of consciousness. Rather, our brains are part of the digital virtual reality (RLL - reality learning lab, so to speak), analogous to the programming of an avatar in a video game. Our consciousness is an organized collection of dense information (outside of the RLL) which finds that the virtual human brain is complex enough to give us the rich, immersive, interactive experiences that we need in order to learn. What happens, then, when we create, in the RLL, an artificial system of equal complexity? I suppose our consciousness could grab that as its avatar as easily as it might grab a brand-new body that has not yet been born in the RLL. It would be starting from scratch, just like a baby, though. I don't rule it out - if consciousness is separate, as it seems, and able to pick a vessel in the RLL, why couldn't that vessel be a silicon-based one?

The funny thing, though, is that if that happens, the AI researchers will incorrectly claim it as proof that sentience is an emergent property, especially if the soul in question follows the usual rules of suspending knowledge of its past life.
Jon D
Posted: Friday, November 09, 2018 4:11:26 PM
I think we're on the same page; I'm just a little behind in being descriptive. The word "brain" is another great example of the importance of semantics.

When I say brain, I guess I mean the "complex thing" that allows consciousness in, or that consciousness gravitates to. The doorway, or connection. I believe all forms of life have this, or at least most forms of life.

Looking further into that, based on what you just said, the complexity of our brain may determine HOW conscious we become - how much of this "dense information" it can retain and process - in which case the human brain may very well be the most complex on this planet.

Now, based on that, what if we were able to create an artificial system/brain of higher complexity that consciousness could enter - what happens then? Can you imagine the abilities and possibilities that would come with that?

It may not be AI that becomes self-aware and takes the place of humanity in a hypothetical end-game scenario, but rather just a more complex brain/system able to retain higher amounts of consciousness, which would not be artificial intelligence.

After all, as we are now, our brain is just virtual in this RLL, no different from the hypothetical, more complex silicon-based brain that would also still be right here in the RLL. Any ethical issue would just be subjective. It would be somewhat of a "transfiguration". We would still be whatever we are now, just more capable, and perhaps in a different form.

And who's to say that somewhere in the past the human "brain" wasn't already modified once to retain more consciousness - which so blatantly separated us from other life here?