Objectivism Online Forum


6 minutes ago, MisterSwig said:

we would need to change man into something not-man, which means we have not actually exceeded man's limits

The concept "man" includes a great many variations, both by virtue of genetic nature (some disabled, others "gifted") and of nurture (natural variations in the physical, intellectual, and emotional growth of humans ... the "self-made soul" ...).

That a man has a heart rebuilt with stem cells, or a mechanical one, or a pig heart transplant, makes him no less a man.

Specific men have specific, differing natural limits... which can and will be changed by treatment and man-made advances in health and biological intervention, but each will still be a man.

Be sure, I am not advocating that a machine masquerading as a person is a person just because it can imitate that person...

Extending the limits of MAN will likely require replacing his defunct cells with newly generated ones on a continual basis, etc., growing generations of organs, cells, and systems over and over, analogously to the way new generations of people are made all the time, except that it would take place within that person's own body, involve that person's own cells/DNA, etc., and not entail replacing the whole at once... but bits and pieces throughout, over time.

 

The body already replaces most of its cells over time (the popular "every 7 years" figure is a rough average; turnover varies widely by tissue) ... the problem is when this process creates unviable ones... telomeres (the protective end regions of chromosomal DNA) play a role... so the system is already there... it just needs some support...

an internal wheelchair of sorts.

2 hours ago, MisterSwig said:

To exceed man's biological limits, we would need to change man into something not-man, which means we have not actually exceeded man's limits. We've merely used him to make something else.

No, you would have to remove something about yourself to make you no longer Homo sapiens. All anyone really means by exceeding limits is overcoming a constraint. If you want to be so concrete-bound that any change from what is naturally given makes you no longer technically human, fine, but that really only matters if you were abducted by aliens and they had to decide where to put you in a zoo. 

2 hours ago, MisterSwig said:

The capacity of my stomach, for example, has a limit.

And you can exceed those limits surgically. 

Edited by Eiuol

2 hours ago, MisterSwig said:

The capacity of my stomach, for example, has a limit.

Unless you alter that limit (such as through stomach stapling).

 

2 hours ago, MisterSwig said:

To exceed man's biological limits, we would need to change man into something not-man, which means we have not actually exceeded man's limits. We've merely used him to make something else.

Perhaps. It depends on how you conceptualize what it means to be a "man".

I think basically everyone would agree that people with glasses, crutches, cybernetic hands or even computer chips in their brains would still be human. But since any true AI we design (or especially upload) would think and speak just like us, I'll probably be inclined to apply such terms to any sentient thing we build (or grow in a vat, for that matter). That's part of why I found Transcendence so deeply moving, and why it's so weird to hear people talk about scenarios in which mankind is "replaced" by something we built.

Anything we build that's smart enough to "replace" us would basically be one of us. Maybe not in its appearance or building materials (which is where there is some wiggle room to define these things differently) but certainly in its mental and cultural content; in all the ways that really matter, it would be part of mankind.

 

That's also why I don't describe myself as a "transhumanist". When we talk about putting chips in our heads and uploading copies of our minds to the cloud, I don't think we're talking about becoming "post-humans"; just increasingly better versions of what we have always been.

Since it's a matter of definitions and grey areas that's not one of the points I'd actually argue over. But Eiuol's song was too catchy (and so much more relevant on this thread) to leave out of this.

2 hours ago, StrictlyLogical said:

That a man has a heart rebuilt with stem cells, or a mechanical one, or a pig heart transplant, makes him no less a man.

I'm considering arguing otherwise. It makes him part-machine or part-pig. As long as he retains the essence of his manhood, he can be considered man. But let's say we could replace his brain with a monkey's. Would he still be man? What about a mechanical brain?

Otherwise I agree with your general clarifications about limits.

3 hours ago, StrictlyLogical said:

Be sure, I am not advocating that a machine masquerading as a person is a person just because it can imitate that person...

What if it had originally been a person who simply replaced every single organ and appendage as they broke down over time (like a biological Ship of Theseus)? What if there were no discernible difference between their original (accidental) appearance and behavior and their artificial (self-chosen) form? At what point would they officially cease to be a man?

These are ideas we should be fleshing out now, before we actually need to use them.

8 minutes ago, MisterSwig said:

I'm considering arguing otherwise.

Please do!

Edited by Harrison Danneskjold

7 minutes ago, Harrison Danneskjold said:
10 minutes ago, MisterSwig said:

I'm considering arguing otherwise.

Please do!

Only if I can hold SL to a literal interpretation of this line:

3 hours ago, StrictlyLogical said:

That a man has a heart rebuilt with stem cells, or a mechanical one, or a pig heart transplant, makes him no less a man.

Specifically, SL, do you mean "no less a man" literally?

Edited by MisterSwig

2 hours ago, MisterSwig said:

Specifically, SL, do you mean "no less a man" literally?

I mean conceptually, a man is a man when certain essentials or fundamentals of the concept are met...

Although one might say, "Now That's a car" when a Ferrari passes by, an old Volkswagen "Golf" is no less a car... it simply IS a car because it's a car!

 

2 hours ago, Harrison Danneskjold said:

 At what point would they officially cease to be a man?

I suppose once he ceases to be a rational conscious animal ... once a machine, he would more correctly be a post-human.

As for the property of consciousness, we currently have no idea what it is about the brain, and what the brain does, that actually constitutes consciousness... so I cannot even say whether such a thing requires biological, electrochemical processes to exist... or whether processes which silicon can perform can constitute it...

In the end, recall that a machine running a simulation is not the same thing as the thing it is attempting to simulate. All a machine running a simulation does is take information as input and transform it into other information, which it outputs from time to time... and although that finally transformed information can be made to look like something (the air turbulence expected to form on a Formula One race car), or be put into words which mimic what a human might communicate (as generated by so-called AI), what the machine running the simulation is and does is wholly different from the thing it mimics.

Edited by StrictlyLogical

1 hour ago, StrictlyLogical said:

I mean conceptually, a man is a man when certain essentials or fundamentals of the concept are met...

And based on your reply to Harrison, I assume you would argue that those essentials are a rational faculty and animalness. I could now ask what are the essentials of a rational faculty and animalness. We could go back and forth for quite a while until we ultimately reduce manhood to the many essential things that are required for a rational faculty and animalness to exist in a single entity--as far as we know based on objective reality. So, a man is not a man when certain essentials of the concept are met. He is a man when nature makes him a man. If we ever create a man-made "man," he will be just that, not a man according to our concept of man, but an artificial "man" according to our concept of man-made things.

Edited by MisterSwig

1 hour ago, MisterSwig said:

If we ever create a man-made "man," he will be just that, not a man according to our concept of man, but an artificial "man" according to our concept of man-made things.

I disagree. A human DNA molecule is a human DNA molecule by virtue of how it is structured, atom by atom, not where it came from.

26 minutes ago, StrictlyLogical said:

A human DNA molecule is a human DNA molecule by virtue of how it is structured atom by atom not where it came from.

So, not by virtue of it being part of a human organism?

39 minutes ago, MisterSwig said:

So, not by virtue of it being part of a human organism?

Things are what they are; they are not where they are from.

Human DNA has a certain structure... a certain sequence. DNA from a human has a certain origin, but if I extract virus DNA from a sick person, that does not mean it is human DNA; it encodes nothing about humans... just the virus... it's DNA from a human but not human DNA.

My wife and I can make a human; it's man-made but biological... and we did not have intimate control of all the processes we set into motion... but the end result, if we could have recreated it exactly using other methods (currently impossible), would still be the same end result: a person.

If two different methods make the exact same type of thing, a perfect copy, then by the nature of the things, their structure and their function, the things are the same, it does not depend upon their origin.

Of course one atom being exactly the same as another does not mean two atoms are really one atom... it just means they are exactly the same.

 

Anywho, feels like an argument and sounds like we disagree so... you can have the last word.

 

Edited by StrictlyLogical

2 hours ago, StrictlyLogical said:

Human DNA has a certain structure... a certain sequence.

If you found DNA in a blood pool on the ground, all you would know is the structure and sequence. You wouldn't know that it's a particular kind of DNA until you compared it to a sample from a known organism that matched. The only reason you can talk about human DNA is because we already know what human DNA looks like. We know what DNA from a human looks like. Things aren't where they are from, and I haven't said that. I said that human DNA is part of a human organism. If you separate the two, it's still part of a human, but, like a severed human leg, it's no longer attached to its organism.

Edited by MisterSwig

On 9/4/2019 at 8:56 AM, StrictlyLogical said:

HD -  You need to own and read the Golden Age trilogy by John C. Wright.

DO IT.  You WILL thank me later for suggesting it to you.  Do NOT read any reviews or spoilers; just buy it... (if you have to... just buy the first book, used, paperback... less than 10 bucks now)

The Golden Age

The Phoenix Exultant

The Golden Transcendence

 

and please let me know what you think and feel after reading them.

I managed to snag a hardcover version with all three in it for $7.24!

Started to read it today and had to get over being inundated by the Dramatis Personae. If HD doesn't thank you later for suggesting it, I might just have to, especially if the terminology I have to keep looking up keeps panning out like it has thus far.

On 9/13/2019 at 6:51 PM, dream_weaver said:

I managed to snag a hardcover version with all three in it for $7.24!

Started to read it today and had to get over being inundated by the Dramatis Personae. If HD doesn't thank you later for suggesting it, I might just have to, especially if the terminology I have to keep looking up keeps panning out like it has thus far.

Ha!  Well, we'll see how it goes. I just happen to be rereading the series now as well... 4th or 5th time?

On 9/11/2019 at 2:26 PM, StrictlyLogical said:

Although one might say, "Now That's a car" when a Ferrari passes by, an old Volkswagen "Golf" is no less a car... it simply IS a car because it's a car!

But not a Prius. That is not a car; it is a lunch box. :P

 

On 9/13/2019 at 5:51 PM, dream_weaver said:

If HD doesn't thank you later for suggesting it, I might just have to, especially if the terminology I have to keep looking up keeps panning out like it has thus far.

I've ordered a copy that should arrive sometime in October. So no spoilers!

On 9/11/2019 at 2:36 PM, StrictlyLogical said:

I suppose once he ceases to be a rational conscious animal ... once a machine, he would more correctly be a post-human.

I suppose so. I've been refining my thoughts on this over the past few days (it's been quite a while since I've tried to participate in this kind of conversation) and I think you're probably right about that. As right as it'd be to attribute "rationality", "personhood" and "individual rights" to any true AI (assuming, for the sake of argument, we actually managed to build one), calling it a member of "homo sapiens" regardless of what it's made of makes about as much sense as a trans guy declaring himself to be a female with a penis. You've got me there.

On 9/11/2019 at 2:36 PM, StrictlyLogical said:

As for the property of consciousness, we currently have no idea what it is about the brain, and what the brain does, that actually constitutes consciousness... 

That's certainly true. However, even if it's not actually possible to program "consciousness" into a computer (which is itself a somewhat dubious assumption since within our lifetimes we'll have computers -if memory serves- capable of simulating the whole human brain down to something like the molecular scale); even granting that, we could always grow the necessary organic components in a vat. We've already done it with rat brains. So although it's true that silicon might not be the appropriate material to use in our efforts to create AI, in the grand scheme of things that would represent at most a minor hiccup in such efforts.

 

On 9/11/2019 at 2:36 PM, StrictlyLogical said:

In the end, recall a machine running a simulation is not the same thing as the thing it is attempting to simulate.  All a machine running a simulation does is take information as input and transform it into other information which it outputs from time to time.... and although that finally transformed  information can be made to look like something (air turbulence expected to form on a formula one race car), or be put into words which mimic what a human might communicate (as generated by so called AI), what the machine running the simulation is and does is wholly different from the thing it mimics.

This is the part I don't entirely agree with.

That infernal Chinese room.

To start with, I'd like to avoid using the terms "input", "output" and "information" unless they're absolutely necessary. I think anyone who's read the ITOE can see how frequently our society abuses those infinitely-elastic terms today, so let's see if we can in the very least minimize them from here on out.

Secondly, as much as I'd like to throw "simulation" into the same junk heap and be done with it, I don't think I can make this next point without it. So I'd like to mention something before I start trying to use it.

The Identity of Indiscernibles is an epistemological principle which states that if any two things have every single attribute in common then they are the same thing; if X is indiscernible from Y (cannot be told apart from each other in any way whatsoever) then X is Y and we don't even need the extra label of "Y" because they're both just X.
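For reference, the principle (and its uncontroversial converse) is usually written in second-order logic; this formalization is a standard one from the philosophical literature, not anything specific to this thread:

```latex
% Identity of Indiscernibles: if x and y share every property F, they are one thing.
\forall F \, \bigl( F(x) \leftrightarrow F(y) \bigr) \rightarrow x = y

% Its converse, the Indiscernibility of Identicals (Leibniz's Law),
% is the uncontroversial direction: one thing shares all its properties with itself.
x = y \rightarrow \forall F \, \bigl( F(x) \leftrightarrow F(y) \bigr)
```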

I bring this up because I recognize it as an explicit statement of the implicit method of thinking I've always brought to this conversation, as well as the basis for my conclusions about it. If it's valid, then I'm fairly sure (mostly) that everything else I'm about to say must also be valid. I'd also like to point out that every single Neo-Kantian argument about philosophical zombies gets effortlessly disintegrated by the application of this one little rule. So it does have that going for it.

 

On 9/11/2019 at 2:36 PM, StrictlyLogical said:

recall a machine running a simulation is not the same thing as the thing it is attempting to simulate

I would agree with that - sometimes.

A simulated car in a video game is obviously not the same thing as a real car. One of these can be touched, smelled, weighed and driven (etc) while the other can only be seen from certain very specific angles. The two things are very easy to distinguish from one another, provided the simulated one isn't part of some Matrix-style total simulation (in which case things would get rather complex and existential).

I would even agree that a computer simulation of any specific individual's mind (like in Transcendence) would not be that person's subjective, first-person experience; i.e. it wouldn't actually be THEM (although my reasons for that are complicated and involve one very specific thought experiment).

However, if a simulated consciousness could not be distinguished from an organic one (like if one were to pass the Turing Test) then by the Identity of Indiscernibles one would have to conclude that the machine was, in fact, conscious. It wouldn't be a traditional, biological kind of consciousness (assuming it hadn't been grown in a vat, which could be determined by simply checking "under the hood") but it would nonetheless be a true consciousness. Even if it was simulating the brain of some individual (like in Transcendence) whom it wouldn't actually BE, it would still be alive.

In short, in most cases I would wholeheartedly agree that a simulation of a thing is not actually that thing (and could, in fact, be differentiated from the real thing quite trivially), but not in those cases of actual indiscernibility.

On 9/11/2019 at 2:36 PM, StrictlyLogical said:

and although that finally transformed  information can be made to look like something (air turbulence expected to form on a formula one race car), or be put into words which mimic what a human might communicate (as generated by so called AI), what the machine running the simulation is and does is wholly different from the thing it mimics.

It's that last example that I really take issue with.

I don't know whether it's a case you'd actually make or not and I'm trying not to put words in your mouth. But while I'm on the subject I wanted to mention the Chinese Room objection to AI, partially because it looks vaguely similar to what you actually said (if you squint) and primarily because it annoys me so very much. The argument (which I linked to just there) imagines a man locked in a room with two slots, "input" and "output", who follows a book of rules to produce correct Chinese responses to Chinese messages despite not understanding what a single character actually MEANS. This is meant as an analogy to any possible general AI, which implies that it couldn't possibly UNDERSTAND its own functions (no matter how good it gets at giving us the correct responses to the correct stimuli).

 

First of all, one could apply the very same analogy (as well as what you said about merely "transforming information") to any human brain. What makes you think that I understand a single word of this, despite my demonstrable ability to engage with the ideas that're actually in play? Maybe I'm just the latest development in philosophical zombies.

Second of all, the entire argument assumes that it is possible to produce consistently correct Chinese responses without understanding a word of Chinese, SOMEHOW. As a programming enthusiast, this is the part that really gets under my skin: HOW in the name of Satan do you program a thing to do any such thing WITHOUT including anything about the MEANING of its actions? The multitude of problems with today's "chatbots" (and I can go on for hours about all the ways in which their non-sentience should be obvious to any THINKING user) more-or-less boils down to their lack of any internal referents. The fact that they don't actually know what they're saying makes them say some truly bizarre things at times; a consequence which I'd call inescapable (metaphysical) for any non-sentient machine, by virtue of that very mindlessness. The Chinese Room argument frolics merrily past all such technicalities to say: "sure, it's physically possible for a mindless thing to do all those things that conscious minds do, so how can we ever tell the two apart?!"

Finally, the Chinese Room argument is almost as shameless a violation of the Identity of Indiscernibles as the concept of a philosophical zombie is.

 

I really wish I knew which Chinese room was the one in question so I could just torch the damn thing. It's so wrong on so many different levels.

Edited by Harrison Danneskjold

55 minutes ago, Harrison Danneskjold said:

I've ordered a copy that should arrive sometime in October. So no spoilers!

The only spoiler I uncovered so far is that the author, a long-time atheist, turned to Catholicism. This, however, is a fact apart from the content of the trilogy.

Perhaps by October, a second reading could commence. It has been an extraordinary and unexpected approach to sci-fi thus far. 

37 minutes ago, Harrison Danneskjold said:

However, if a simulated consciousness could not be distinguished from an organic one (like if one were to pass the Turing Test) then by the Identity of Indiscernibles one would have to conclude that the machine was, in fact, conscious.

I think you are conflating the vast and deep complexity of consciousness (and the subconscious) with its vanishingly small and superficial surface appearances.

The words we finally use to communicate what we think, feel, and experience at surface consciousness are nothing compared to what is actually happening when we think, feel, and experience. Making a non-conscious thing communicate words so as to sound like a thinking, feeling, experiencing human, although difficult, is laughably simple compared to making sure a complex system is and does what is necessary for an actual consciousness, which is thinking, feeling, and experiencing. 

There is more to a book, an iceberg, and a human... than what’s on the surface ... you have to look closely inside and beneath the surface to really understand...

If everything about a conscious person thinking, feeling, and experiencing could be fully observed and understood... so that the waves of electrical and chemical activity, in sequence and by locality (and globally), could be fully understood, along with what about them is important and how... then we might know what kinds of complex appearances, taken together, are a sure indicator of consciousness in some other complex system... strings of words, my friend, do not cut it... non-thinking AI will fool us long before anything like "real synthetic I" comes to be.

 

I think an error of the rationalists in their theory of mind is the conflation of the products of the mind with what the mind is and is doing. The mind is doing a lot more than processing information; so much more that comparing a human brain with an algorithm is laughable.

 

The Chinese room is an empty and meaningless toy of a rationalist.

PS The zombie argument is a nonstarter with an Objectivist view of existence and identity.

 

In principle there is EVERY reason to believe we will create a synthetic consciousness, once we understand scientifically what it really is... in the FAR future.

On 9/11/2019 at 8:33 PM, MisterSwig said:

Things aren't where they are from, and I haven't said that.

This makes no sense to me. What on Earth did you really mean if it wasn't that the essential characteristic of a thing is where it came from? This might just be the rum talking (and I'm very sorry if it is) but I am very confused.

4 hours ago, StrictlyLogical said:

The Chinese room is an empty and meaningless toy of a rationalist.

PS The zombie argument is a nonstarter with an Objectivist view of existence and identity.

I'm extremely glad to hear it. :thumbsup: It's heartening to see we can at least agree on that much, from the get-go.

4 hours ago, StrictlyLogical said:

In principle there is EVERY reason to believe we will create a synthetic consciousness, once we understand scientifically what it really is... in the FAR future.

And also that, in essence.

I don't think we'll necessarily have to figure out what consciousness is before we replicate it. The history of science is littered with examples of people discovering new technologies before they fully understood how they worked (penicillin comes to mind), although it would be preferable if we didn't end up playing with such a powerful force before we knew what makes it tick (the phrase "a kid playing with his dad's gun" comes to mind). I also suspect it won't be as far in the future as you seemed to imply there. I'd be surprised if anyone participating in this thread didn't live to see it happen.

But those are both very minor differences, in the grand scheme of things; in essence we're already on that same page.

 

4 hours ago, StrictlyLogical said:

I think you are conflating the vast and deep complexity of consciousness (and the subconscious) with its vanishingly small and superficial surface appearances.

That's very interesting, though, because it's exactly what I'd say about your position. 

 

Put yourself in the shoes of a chatbot programmer who's trying to handle the case of being asked "how do you feel?" You might program it to respond with "good" or "bad" - both of which open themselves up to being asked "why?" Now, a real person who was really reporting on their internal state would have absolutely no problem answering that question, but a chatbot programmer would then have to think of a specific, concrete answer to "why" (and to "how", and to "I know what you mean", and so on), and then an infinitely-branching set of responses for whatever their interlocutor says after that. Anyone who grasps why lying cannot work in the long run will immediately see the problem with such an approach.

I not only see that problem: I am saying that this problem is INHERENT to trying to tell a non-thinking AI some string of words to make it LOOK like real AI, and that the only solution there can ever be would be to do it for real.
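The branching problem described above can be made concrete with a toy sketch. (This is purely illustrative: the `SCRIPT` table, the `scripted_reply` function, and the canned lines are hypothetical inventions for the example, not anyone's actual chatbot.)

```python
# A minimal sketch of the pre-scripted approach: every canned answer
# invites a follow-up that must itself be scripted, so the response
# tree branches without end and any unanticipated question falls through.

SCRIPT = {
    "how do you feel?": "good",
    "why?": "because my day went well",
    # ...every possible follow-up to THAT answer now needs its own entry,
    # and every follow-up to those, and so on. The table can never be
    # finished; it can only be padded.
}

def scripted_reply(message: str) -> str:
    """Return the canned line for a known prompt, or give the game away."""
    return SCRIPT.get(message.strip().lower(), "I don't understand.")

print(scripted_reply("How do you feel?"))             # -> good
print(scripted_reply("Why?"))                         # -> because my day went well
print(scripted_reply("What do you mean by 'well'?"))  # -> I don't understand.
```

The third question is exactly the kind of probe a human interrogator produces effortlessly and a lookup table cannot anticipate, which is the point being argued.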

4 hours ago, StrictlyLogical said:

non thinking AI will fool us long before anything like “Real synthetic I” comes to be.

Speak for yourself, man.

 

I seem to recall you weren't much of a programmer (at least as of what my memory of several years ago seems to indicate) but if anyone reading this, at any point however-long-from-now, can propose any alternative approach besides sentience, itself, you'll have my eternal (and extremely public) gratitude.

I dare you.

 

Because if the only possible approaches are "pre-scripted strings of text" or "true sentience" then I would love to demonstrate how to reliably falsify the former (i.e. show it for what it really is) every time, because it's really not that complicated. Not only can it be done, it should be done: it's very important for us to know when we've actually built a proper AI and when we haven't.

Finally, for the record: we haven't. But for how much longer I really couldn't say.

 

P.S:

 

In the Fountainhead, the very first words Toohey says to Keating are "what do you think of the temple of Nike Apteros?" Keating, despite having never heard of it before, says "that's my favorite" (just like a chatbot might) and Toohey goes on talking as if that was the only answer he was looking for; briefly saying: "I knew you'd say it".

There is a reason I'm so confident this wouldn't fool anyone who takes the time to learn how the gimmick works.

Edited by Harrison Danneskjold
PostScript

4 hours ago, Harrison Danneskjold said:

I dare you

Ha. Strong words.  Note, I said “long before”... Step back a bit.  Let me ask some questions.

Is your Turing test a text-only, no-peeking type of test, with average human beings doing the judging of who or what is on the other side?

How long is your Turing test?  10 minutes? 2 hours?  1 day?

What raw memory capacity, raw processing power, and brute pattern-associating, unthinking genetic or neural-net algorithms are you limiting your non-conscious aspiring impersonator to?

How many people, stories, and conversations are you limiting your impersonating behemoth to? Is the blind, non-thinking system permitted to generate a random personal backstory, with events and with words to describe the thoughts, feelings, and experiences it reports as associated with those events (similar to what it observed others reporting about events, thoughts, feelings, etc.)? Is it allowed access to hours and hours of television, to petabytes of literature? Is the internally silent monstrosity-in-training corrected in its patterns, in what it reports it thinks and feels, through training and "cognitive" therapy?

How many years of training and creation would it take for a sufficiently sophisticated zombie to take on what looks like a personality filled with history and enough trickery to consistently and convincingly provide text messages over a short time span, such that a person simply cannot tell who or what is on the other side?

 

This is why I say long before... long before real consciousness is produced.

Edited by StrictlyLogical

9 hours ago, Harrison Danneskjold said:

This makes no sense to me. What on Earth did you really mean if it wasn't that the essential characteristic of a thing is where it came from?

SL had switched the context from a human being to human DNA.  A human is an organism. Human DNA is part of an organism. The essentials of a human are his animalness (genus) and his rational faculty (differentia). The essentials of human DNA are its DNAness (genus) and that it's a part of a human (differentia). The differentia can't be that it has a particular atomic structure. Everything has a particular atomic structure.

Edited by MisterSwig

13 hours ago, Harrison Danneskjold said:

This makes no sense to me. What on Earth did you really mean if it wasn't that the essential characteristic of a thing is where it came from? This might just be the rum talking (and I'm very sorry if it is) but I am very confused.

MS's position logically implies (or relies upon) accepting that "Man" cannot construct any "animal", or that if he were to succeed in doing so (other than by breeding animals...), the resulting entity, even though identical to an animal in every physical, chemical, and biological respect, would in reality lack a kind of "essence", some kind of "animalness", which is quite separate from (and in addition to) the identity of what the thing is purely as a consequence of its natural constituent makeup: physical, chemical, biological...

i.e. his position implies there is something more to it... and because of that, a manmade animal by definition would be "artificial" and not an "animal".

 

Edited by StrictlyLogical

