The Universe Solved

 



Am I Wrong Here?
Tracy
Posted: Thursday, July 28, 2011 2:00:02 PM
Rank: Advanced Member
Groups: Member

Joined: 3/30/2009
Posts: 448
Points: 1,347
Location: N. Lewisburg, OH, US
http://www.nytimes.com/2011/07/28/science/28life.html?_r=1
Quote:
Biologists do not agree on what the definition of life should be or whether it is even useful to have one.

Consciousness, I would think, is the thing that defines life.
Or could life be a dead, lifeless, mechanical sort of process?
I just thought that one line in the article was a bit odd. d'oh!

It reminds me of Jim's post:
http://blog.theuniversesolved.com/2011/05/02/is-lida-the-software-bot-really-conscious/
Quote:
If true, a soul may someday make a decision to occupy a machine of sufficient complexity and design to experience what it is like to be the "soul in a machine".
ebb101
Posted: Friday, July 29, 2011 3:49:58 AM
Rank: Advanced Member
Groups: Member

Joined: 7/5/2010
Posts: 80
Points: 255
It seems that when many scientists encounter things that classical physics doesn't explain -- or that lead them in directions they don't want to go -- they simply say: it doesn't matter.
Shut up and calculate.
Consciousness is one of those hang-ups.
In a weird kind of way, this attitude has become much like the Church's stance during the Copernican revolution.
Shut up and pray.
That's not going to work much longer.
jdlaw
Posted: Wednesday, August 10, 2011 7:48:56 AM

Rank: Advanced Member
Groups: Member

Joined: 3/30/2008
Posts: 435
Points: 1,132
Location: USA
Now you may find my following comments either overly complicated or a little oversimplified (if not downright confusing, depending on your frame of reference), but I define sentience as simply being able to pause -- but to pause for a purpose. In other words, our brains can only categorize. We do not compute facts like a computer does. We categorize and then pause.

I think consciousness defines sentient life, but other, simpler things may define "life" in general. For example, an amoeba is alive, but is an amoeba sentient?

Think about it. I define sentience as simply being able to believe (i.e. to actually make a choice). Computers can randomize and give you a random result, but they have no ability to actually choose. "Pick a number between 1 and 10," for example. Sure, a computer will roll its dice and give a number, but did it actually "pick" a number? That is why computers (at least so far) have not been sentient.
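Just to make that concrete (a little toy sketch of my own, nothing more), here is roughly what a computer's "pick" amounts to under the hood: a pseudo-random number generator mechanically deriving a value from a seed.
Code:
import random

# A minimal sketch: the computer's "choice" of a number between 1 and 10 is
# just a deterministic function of whatever seed the generator starts from.
# Re-seed with the same value and the exact same "pick" comes out again.
rng = random.Random(42)                   # hypothetical fixed seed
print(rng.randint(1, 10))                 # the "pick"
print(random.Random(42).randint(1, 10))   # the identical "pick", every time
No pausing, no believing -- just arithmetic on a seed.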

Amoebas and higher animals like dogs, horses, and cats can make some minuscule choices (though my cat likes to drink water out of the fish bowl even though he constantly gets scolded for doing so). That is because amoebas and other animals have only a very limited capacity for choice. Mostly they are like robots -- programmed to act on instinct. We humans, however, are the true great experiment of the programmers (the Akasha ... the Matrix ... God's VR world ... programmed reality).

But human consciousness is this higher ability to think about things and then form a belief -- which to me means to make a choice -- which to me, in turn, simply means to "pause" for a moment.

We think that we as humans compute facts (with our brains). We do not. Computers compute facts; our brains compute choices. "Validation" and "verification" in Bayesian-type models are crippled and perturbed by endless heuristics, weights, probabilities, probabilistic qualifiers, and levels of importance. The great human experiment by the programmers was to give us ("players" in this VR game of life) the ability for "demon" arrest -- in a mathematical sense, the ability to overcome the combinatorial explosions that probabilities cause. We humans are the programmers' triumph at working around the combinatorial explosion of knowledge representation within the finite computational resources of a "physical" world (the local universe): Bayesian probabilities are replaced by a simulated reality (the non-local universe), in which belief representation, rather than knowledge representation, terminates the probabilities and cannot be crippled by combinatorial explosion.
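Just to put a rough number on what "combinatorial explosion" means here (my own illustration, not a formal model): a full joint probability table over n binary facts needs 2^n entries, which outruns any finite "physical" computer very quickly.
Code:
# A full joint probability table over n binary facts has 2**n entries --
# the combinatorial explosion that finite, "local" computation cannot keep up with.
for n in (10, 20, 40, 80):
    print(n, "facts ->", 2 ** n, "table entries")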

So, to paraphrase Descartes only slightly ... I believe, therefore I am.


spearshaker
Posted: Wednesday, August 10, 2011 9:30:23 AM

Rank: Advanced Member
Groups: Member

Joined: 10/3/2009
Posts: 31
Points: 93
Location: Canada
Well, it would seem that the absurd things people can be persuaded to do after being subjected to effective hypnotic suggestion strongly support your view, jdlaw. (And they can invariably justify these actions!) I had never thought about it this way. It seems a straightforward conclusion, based on this well-recognized bit of direct evidence. But I'm not convinced we can't actually compute in the conventional sense also, given the chance and enough incentive.
jim
Posted: Wednesday, August 10, 2011 10:15:12 PM

Rank: Advanced Member
Groups: Member

Joined: 3/19/2008
Posts: 980
Points: 2,952
I guess I have a slightly different perspective. AI researchers often talk about consciousness as an emergent property of the complexity of the brain. I am very interested in things that are emergent - it is a concept that applies to many systems in many different fields, including software design. For example, rather than designing software up front, a design can emerge from developing software using the practice of test-driven development (TDD).
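For anyone unfamiliar with TDD, here is a minimal sketch of the rhythm (a toy example of my own, not from any real project): the test is written first, then just enough code to make it pass, and the design emerges from repeating that cycle.
Code:
import unittest

def add(a, b):
    # The simplest implementation that satisfies the test below; the design
    # grows only as new tests demand it.
    return a + b

class TestAdd(unittest.TestCase):
    def test_adds_two_numbers(self):
        self.assertEqual(add(2, 3), 5)

if __name__ == "__main__":
    unittest.main()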

I talked about emergent properties a bit on the Reality Transformed panel last week. We could categorize emergence into two different types: continuous and discontinuous. Continuous emergence would be a property that results from some process no matter how small or large the input to the process is. If consciousness is a continuously emergent property of neural complexity, then one would have to say that any animal has some level of consciousness. Not only that, but any computational device, no matter how small (a flip flop, say), would have some level of consciousness. Personally, I find that argument a little hard to accept, to say the least.
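To show just how small "small" is here (a sketch of my own; the forbidden set-and-reset-together case is ignored), an SR latch -- the core of a flip flop -- is nothing more than one stored bit:
Code:
class SRLatch:
    # One bit of state -- about the smallest "computational device" there is.
    def __init__(self):
        self.q = False

    def update(self, set_input, reset_input):
        if set_input and not reset_input:
            self.q = True       # set the bit
        elif reset_input and not set_input:
            self.q = False      # reset the bit
        # otherwise the latch simply holds its previous state
        return self.q

latch = SRLatch()
print(latch.update(True, False))   # True  (bit set)
print(latch.update(False, False))  # True  (bit held)
print(latch.update(False, True))   # False (bit reset)
If continuous emergence were real, even that would have to be a tiny bit conscious.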

However, one could make the argument that in "living" things only, consciousness exists at a level proportional to the complexity of the neurological system. Such an argument makes some sense to me. As an example, if consciousness is a manifestation of a creator experiencing physical reality through living organisms (as many philosophers and mystics have proposed), it would stand to reason that the level of consciousness that can be experienced is somewhat proportional to neurological complexity.

On the other hand, there could be such a thing as discontinuous emergence of a property or process. As an example, consider nuclear fusion in a star. You can pile more and more matter into the system and let gravity generate higher and higher heat as the matter collapses inward. But no nuclear fusion happens until the magic density and quantity of matter is reached. Then, boom, the star blinks on. Could consciousness be similar in nature, not emerging from neurological complexity until a particular threshold is passed? Maybe that threshold is just below that of humans? Or at some other arbitrary point in the animal kingdom?
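The distinction is easy to caricature in code (purely a toy illustration of my own): a continuously emergent property shows up in some degree at any scale, while a discontinuously emergent one is entirely absent until a threshold is crossed.
Code:
def continuous_emergence(complexity):
    # some amount of the property at any scale, however tiny
    return complexity

def discontinuous_emergence(complexity, threshold=100.0):
    # nothing at all below the threshold, then the property "blinks on"
    return 0.0 if complexity < threshold else complexity

for c in (1, 50, 99, 100, 1000):
    print(c, continuous_emergence(c), discontinuous_emergence(c))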

But I doubt it, because there is no evidence of that. How might we determine empirically whether something is conscious or not? By apparent intelligence, or ability to learn? Maybe not, because synthetic neural nets can learn. By apparent ability to make decisions? Again, logic gates are simple and don't even require silicon. How about by jdlaw's method, observing a pause before a decision? I kind of like that, but of course it isn't foolproof. The fact is, we have no better way of determining consciousness from empirical data than we do of knowing how somebody else feels.

Still, wouldn't it seem that you would be able to notice a distinct difference between the behavior of something that is conscious and something that isn't? But we don't. Each animal of incremental complexity displays incremental changes in behavior patterns that are entirely consistent with incremental consciousness. Amoebas have been shown to learn. Birds can work out puzzles that they haven't seen before and can think symbolically. Apes have vocabularies of hundreds of words. Some have posited that the "encephalization quotient," a measure of brain weight relative to what would be expected for the animal's body weight, might define intelligence. If so, dolphins are very close to humans!

Finally, there is tremendous evidence that consciousness survives death (e.g. see Pim van Lommel's research). Then there is John von Neumann's estimate that we experience 280 trillion bits of data in our lifetimes, while the brain contains only 100 billion neurons. And yet, under hypnosis, we can be made to recall nearly any particular experience in full detail, not the 1 in 3000 that those numbers would imply.
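For what it's worth, the "1 in 3000" is just the ratio of those two figures as given above:
Code:
bits_experienced = 280e12   # von Neumann's lifetime estimate, as cited above
neurons = 100e9             # rough neuron count of the human brain
print(bits_experienced / neurons)   # ~2800 -- roughly one neuron per 3000 bits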

So, the only thing that makes sense to me is that consciousness is not emergent, but it is greatly assisted by the brain during our lives. The seat of it appears not to be fixed in the brain, which is entirely consistent with programmed reality. And, on the evidence, other animals have it as well. Or they are so well modeled as to be indistinguishable from what humans think of as consciousness.

I am totally going to blog on this!
spearshaker
Posted: Thursday, August 11, 2011 12:23:02 PM

Rank: Advanced Member
Groups: Member

Joined: 10/3/2009
Posts: 31
Points: 93
Location: Canada
The great neurosurgeon Wilder Penfield would seem to have viewed much of the preceding discussion as complementary. As I recall, the summarized conclusions from his book The Mystery of the Mind are these:

Under electrical stimulation at localized brain sites while fully conscious, patients were able to experience past events as if they were happening in the present, while remaining completely aware that they were on the operating table and that the experience was coming from their past because of the electrical stimulation. This certainly brings the “computer/brain” analogy to mind!

Nowhere in the human brain is there a localization of the capabilities to “decide,” to “assess,” to “believe” or “disbelieve.” These are functions entirely of “mind” (whatever that might be!).

With apologies to Wilder Penfield, I further summarize his conclusions after his 50 years of work in the mechanics of the brain: There is a quality in us that has nothing to do with the mechanical gadget, the brain. I don’t recall if he used the word “spirit” (my word for it), but that’s what he described; there’s apparently a ghost in the machine.

Since personal experiences aren’t today considered worthy “scientific” input to basic understanding (a defining symptom of our own dark age), Penfield’s conclusions are worth keeping in mind. He’s hard to argue with.

jim
Posted: Thursday, August 11, 2011 4:25:47 PM

Rank: Advanced Member
Groups: Member

Joined: 3/19/2008
Posts: 980
Points: 2,952
Thanks for that, spearshaker, very interesting. Another book to put on my reading list. Applause
jdlaw
Posted: Saturday, August 13, 2011 7:26:38 AM

Rank: Advanced Member
Groups: Member

Joined: 3/30/2008
Posts: 435
Points: 1,132
Location: USA
Quote:
I guess I have a slightly different perspective. AI researchers often talk about consciousness as an emergent property of the complexity of the brain. I am very interested in things that are emergent - it is a concept that applies to many systems in many different fields, including software design. For example, rather than designing software up front, a design can emerge from developing software using the practice of test-driven development (TDD).


I absolutely do not disagree that "emergent" consciousness is perhaps the one concept gaining popularity in AI that actually shows promise. This emergent-behaviour theory of consciousness has also been related to "embodied cognitive science," meaning you have to look at "intelligence" from a holistic perspective: not just the mind, but the entire living system and its environment.

I think emergence definitely explains "life" intelligence but does not fully explain "sentient" intelligence. If you buy into Descartes, then this becomes a "duality" thing. Is the intelligence "self-aware"? Are we really "self-aware," or are we just fooled into "thinking" we are aware? "I think, therefore I am."

Beyond TDD, you can put actual sensors on robots and connect them directly to motor control (e.g. proximity sensors that steer away from walls, or physical grasping devices that can only pick up objects of a certain shape and size). When these embodied mechanisms are put into action, discriminating patterns evolve that certainly mimic intelligence, or at least some sort of central processing, when in fact there is no central processing. Does that define life, however?
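To give a feel for what such an embodied mechanism looks like (a hypothetical robot of my own, not any particular platform): each proximity sensor is wired straight to the opposite motor, so the robot steers away from walls with no central processing at all, yet from the outside its behaviour looks purposeful.
Code:
def motor_speeds(left_proximity, right_proximity, base_speed=1.0):
    # Sensor readings are in [0, 1]; 1.0 means an obstacle is very close.
    # An obstacle on the left slows the right wheel, turning the robot away
    # from it -- that mapping is the entire "intelligence" of the controller.
    left_motor = base_speed * (1.0 - right_proximity)
    right_motor = base_speed * (1.0 - left_proximity)
    return left_motor, right_motor

print(motor_speeds(0.0, 0.0))  # open space: drive straight ahead
print(motor_speeds(0.9, 0.0))  # wall on the left: veer away to the right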
jim
Posted: Tuesday, August 16, 2011 8:49:10 PM

Rank: Advanced Member
Groups: Member

Joined: 3/19/2008
Posts: 980
Points: 2,952
I'm with you, jdlaw. So much of this is semantics. One person's "life" is another person's "programmed autonomy." I think that life is actually very difficult to define. Personally, I think there is more to life than programmed autonomy, but I can't put my finger on it. Sentience, to me, is synonymous with soul and free will.