Objectivism Online Forum

Senescence

Harrison Danneskjold

1 hour ago, StrictlyLogical said:

MS's position logically implies (or relies upon) accepting that "Man" cannot construct any "animal", or that if he were to succeed in doing so (other than by breeding animals...)

No. If you begin with human DNA and grow a human from it, then it's a human, because you began with a part of a human. You now have a clone, or some altered clone. If you don't begin with a human part, then you are making something else, a faux "human."


21 minutes ago, dream_weaver said:

MisterSwig, what if the beginning components are not DNA? What if they are the amino acids and other chemicals and proteins, and, once assembled, they are indistinguishable from human DNA? Are you then beginning with human DNA?

No, but if you grow a human from it, then it'll be part of a human, and hence human DNA. This is an issue of causality. You can't have human DNA without a human to which it belongs. It's not the DNA itself that makes it human DNA. If that were the case, then human DNA existed before humans, which is nonsensical.


2 hours ago, MisterSwig said:

No. If you begin with human DNA and grow a human from it, then it's a human, because you began with a part of a human. You now have a clone, or some altered clone. If you don't begin with a human part, then you are making something else, a faux "human."

What about a human which was assembled ... not grown from human DNA: a human assembled atom by atom?


15 hours ago, StrictlyLogical said:

What about a human which was assembled ... not grown from human DNA: a human assembled atom by atom?

That's basically god-like power, and I have my doubts that it's even possible, given how many atoms are in a human and the unknown factors. But I would still call such a thing a human. Though I'd want to broaden the concept of "human" to include man-made (versus metaphysically given) humans. Sort of like how we can think of the mythical god-made humans (Adam and Eve) versus nature-made ones (who evolved from earlier primates).


On 9/16/2019 at 6:29 AM, StrictlyLogical said:

Ha. Strong words.

Yeah; sorry about that. I was already a little bit tipsy. Thanks for not giving me the kind of answer you definitely could have.

 

On 9/16/2019 at 6:29 AM, StrictlyLogical said:

Is your Turing test a text-only, no-speaking type of test, with average human beings doing the judging of who or what is on the other side?

Sure, it can be text-only, but I wouldn't be comfortable with the kind of emotionalist "average human being" that'd fall for any program sufficiently capable of tugging at their heartstrings. Obviously I'd prefer to be the judge, myself, but the average Objectivist should suffice.

 

On 9/16/2019 at 6:29 AM, StrictlyLogical said:

What raw memory capacity, raw processing power, brute pattern-associating, unthinking genetic or neural-net algorithms are you limiting your non-conscious aspiring impersonator to?

Let's not limit the processing power or memory capacity.

 

On 9/16/2019 at 6:29 AM, StrictlyLogical said:

Is the blind nonthinking system permitted to generate a random personal backstory with events and words to describe thoughts and feelings and experiences reported as associated with those events (similar to what it observed others reporting about events and thoughts and feelings etc).

Well, that's just it. If it was generating any of its own content on the fly, then it wouldn't be part of today's "chatbot" paradigm (in which absolutely every response is pre-scripted).

But even if it could generate its own content on the fly, if it had no basis to know what its words REFERRED to (as if it were just a neural net trained to imitate human speakers), then it would still end up saying some equally bizarre things from time to time; things that could not be explained by the intentions of some bona fide consciousness.
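To make the "pre-scripted" point concrete, here is a minimal sketch of that kind of chatbot. The keyword table and responses are invented for illustration; real chatbots are far larger, but the principle is the same: every response is canned, keyed off surface patterns in the input.

```python
# Hypothetical rule table: surface keywords mapped to canned replies.
RULES = {
    "premise": "Check your premises!",
    "society": "There is no such thing as society; only individuals.",
    "feel": "Emotions are not tools of cognition.",
}
DEFAULT = "By what standard?"

def reply(message: str) -> str:
    """Return a canned response keyed off the first matching keyword.
    The bot never knows what any of these words refer to."""
    lowered = message.lower()
    for keyword, response in RULES.items():
        if keyword in lowered:
            return response
    return DEFAULT

print(reply("I feel that society owes us something"))
# -> "There is no such thing as society; only individuals."
```

No matter how large the rule table grows, the program never has any referent for its words; it can only match strings to strings.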

 

How long it'd take for it to make such a mistake isn't really the point. A more sophisticated system would obviously be able to mimic a real person for longer than a more rudimentary one could; all I'm saying is that sooner or later they would ALL show obvious signs of their non-sentience, unless they truly were sentient.

On 9/16/2019 at 6:29 AM, StrictlyLogical said:

How many years of training and creation would it take for a sufficiently sophisticated zombie to take on what looks like a personality filled with history and enough trickery to consistently and convincingly provide text messages over a short time span, such that a person simply cannot tell who or what is on the other side?

Alright; maybe we'll see certain things that could fool MOST people in the short run. That's not really what I was trying to get at (and I do see that I wasn't being very clear about it; sorry).

 

I think the better way to phrase this principle is that any non-sentient system can eventually be shown to be non-sentient by an outside observer (who perhaps has no idea what's going on "under the hood") who's at least somewhat capable of thinking critically.

 

I have to start getting ready for work soon, but maybe it'd help if I showed some examples of what I mean later on?


On 9/16/2019 at 10:12 AM, MisterSwig said:

SL had switched the context from a human being to human DNA.  A human is an organism. Human DNA is part of an organism. The essentials of a human are his animalness (genus) and his rational faculty (differentia). The essentials of human DNA are its DNAness (genus) and that it's a part of a human (differentia). The differentia can't be that it has a particular atomic structure. Everything has a particular atomic structure.

 

On 9/16/2019 at 1:52 PM, StrictlyLogical said:

MS's position logically implies (or relies upon) accepting that "Man" cannot construct any "animal", or that if he were to succeed in doing so (other than by breeding animals...) the resulting entity, even though identical to an animal in every physical, chemical, and biological respect, would in reality lack a kind of "essence", some kind of "animalness", in the thing, which is quite separate from (and in addition to) the identity of what the thing is purely as a consequence of its natural constituent makeup: physical, chemical, biological...

i.e. his position implies there is something more to it... and because of that, a man-made animal by definition would be "artificial" and not an "animal".

 

 

The Aristotelian concept of "essences" being metaphysical (rather than epistemological) seems applicable.


1 hour ago, Harrison Danneskjold said:

Alright; maybe we'll see certain things that could fool MOST people in the short run. That's not really what I was trying to get at (and I do see that I wasn't being very clear about it; sorry).

 

I think the better way to phrase this principle is that any non-sentient system can eventually be shown to be non-sentient by an outside observer (who perhaps has no idea what's going on "under the hood") who's at least somewhat capable of thinking critically.

I see what you are getting at, and I tend to agree with you. I do note, however, that the time period for “eventual” discovery and the level of critical investigation required to reveal the masquerade would increase in proportion to the sheer size and power of the algorithm and the training it had... (Imagine a super Watson: 1000 programmers, writers and trainers developing a fictional backstory for a specific fake person with a consistent fake history in loving detail, years “conversing” with people to get its “personality” straight and perhaps decades of Turing tests with random people..).

“Eventually” could mean a very long interrogation of highly sophisticated testing (the same kind of thing used to train the thing!!!)... likely time durations longer than what was used to test and correct it during training would be required to finally discover the sham. And imitation of human flaws, of course, would be built in as well...

This thing would likely satisfy Turing’s original test quite handily...  but eventually... perhaps the sham would reveal itself... using your more strict test.

I can’t help but think that if the developers know the rules of the tests, the kinds of statistics or cues relied on to detect a sham... they would figure out a way to train the behemoth to game those aspects as well...

Anywho... this is all statistics and child’s play compared to making something which undeniably IS conscious.

 

 


On 9/10/2019 at 6:58 AM, MisterSwig said:

Perhaps. But Eiuol and I have decided to discuss this topic on our new YouTube show. So I'll hold my thoughts until then.

Here is the episode.

We discuss dementia and free will starting around 23:30 until about 27:05. Basically, my idea is that bad choice-making might be a factor in some forms of dementia. I also wonder if the brain has a limit to how much memory it can store. If so, someone who's lived for, say, two hundred years might simply start "overwriting" critical memories for maintaining his sense of self. 


7 hours ago, StrictlyLogical said:

Anywho... this is all statistics and child’s play compared to making something which undeniably IS conscious.

This is nearly 3 hours of Sam Harris discussing AI with various people (Neil deGrasse Tyson comes in at around 1.5 hours and Dave Rubin at almost 2.5). I don't agree with everything he says (in fact it reminded me of all the aspects of Sam that I despise) but it ended up helping me reformulate precisely what I'm trying to say here.

He repeatedly mentions the possibility that we'll create something that's smarter and more competent than we are, but lacking consciousness; a "superhuman intellect in which the lights aren't on". What I was trying (very, very clumsily) to say by way of the Turing test example is that that's a contradiction in terms.

 

Consciousness is a process; something that certain organisms DO. This process has both an identity (influencing the organism's behavior in drastic and observable ways) and a purpose. I don't think there could ever be some superhuman artificial intellect that was better than us at everything from nuclear physics to poetry WITHOUT having all of its lights on; after all, such capacities are why any of us even have such "lights" in the first place.

 

This obviously is relevant to the Turing test, but in retrospect (now that I've formulated exactly what I mean) that really wasn't the best route to approach this from. But now that we're all here, anyway...

 

As any REAL Terran will already know, AlphaStar is Google DeepMind's latest AI system. Having already mastered Chess and Go, this one plays StarCraft. There'll be no link for anyone who doesn't know what StarCraft is - how dare you not know what StarCraft is?! Anyway; AlphaStar learned to play by watching thousands upon thousands of hours of human players and then practicing against itself, running the game far faster than it's supposed to go, for the equivalent of something like ten thousand years. It beat top professional players a year or two ago, so suffice it to say that it is very good at StarCraft.

My question is what it would be like if Google tried to train another Neural Net to participate on this very forum, as a kind of VERY strict Turing Test. What would such a machine have to say?

 

Well, from reading however-many millions of lines of what we've written there are certain mannerisms it'd be guaranteed to pick up; things like "check your premises" or "by what standard" (or maybe even irrelevant music videos). And from the context of what they were said in response to it'd even get a sort of "feel" for when it'd be appropriate to use any given thing.

Note that this approach would be radically different from the modern "chatbot" setup - and also that it could only ASSOCIATE certain phrases with certain other phrases (since that's all a neural net can really do), without the slightest awareness of what things like "check your premises" actually MEANT.

Given enough time, this system (let's call it the AlphaRandian) would NECESSARILY end up saying some bizarre and infuriating things, precisely BECAUSE of its lights being off.

 

In a discussion of the finer procedural points of a proper democracy it might recognize that the words "society" and "the will of the people" were being tossed around and say "there is no such thing as a society; only some number of individuals". And if questioned about the relevance of that statement it'd probably react like (let's face it) more than a few of us often do and make some one-liner comeback, dripping with condescension, which nobody else could comprehend. On a thread about the validity of the Law of Identity it might regurgitate something halfway-relevant about the nature of axioms, which might go unchallenged. On the morality of having promiscuous sex it might paraphrase something Rand once said about free love and homosexuals, which (being incapable of anything more than brute association) it would be totally incapable of making any original defense for, and most likely incapable of defending at all. It would very rapidly become known to all as our most annoying user.

And further: since the rest of us do have all our lights on, it'd only be a matter of time before we started to name what it was actually doing. There would be accusations of "anti-conceptuality" and "mere association, with no integration". And since this is OO, after all, it would only be a matter of time before it pissed someone off SO much that they went ahead and said that it "wasn't fully conscious in the proper sense of the term". We all would've been thinking it, long before that; past a certain point it'd only need to make a totally irrelevant "check your premises" to the wrong guy on the wrong day and he'd lay it out explicitly.

And if Google came to us at any time after that to say "you probably can't guess who, but someone on your forum is actually the next generation of chatbot!" we'd all know who, out of all the tens (or hundreds?) of thousands of our users, wasn't a real person.

 

Granted, that was one long and overly-elaborate thought experiment, and you might disagree that it would, in fact, play out that way (although I did put WAY too much thought into it and I'm fairly certain it's airtight). I only mention it as one (of what's going to be many) example of my primary point:

You cannot have human-level intelligence without consciousness.

 

7 hours ago, MisterSwig said:

Here is the episode.

We discuss dementia and free will starting around 23:30 until about 27:05. Basically, my idea is that bad choice-making might be a factor in some forms of dementia. I also wonder if the brain has a limit to how much memory it can store. If so, someone who's lived for, say, two hundred years might simply start "overwriting" critical memories for maintaining his sense of self. 

That's fucking amazing!


9 hours ago, MisterSwig said:

Here is the episode.

We discuss dementia and free will starting around 23:30 until about 27:05. Basically, my idea is that bad choice-making might be a factor in some forms of dementia. I also wonder if the brain has a limit to how much memory it can store. If so, someone who's lived for, say, two hundred years might simply start "overwriting" critical memories for maintaining his sense of self. 

How'd you do that?


8 hours ago, Harrison Danneskjold said:

Given enough time, this system (let's call it the AlphaRandian) would NECESSARILY end up saying some bizarre and infuriating things, precisely BECAUSE of its lights being off.

Define bizarre and infuriating.

If it can be defined, you can build a filter for it... or train around it.

 

 

8 hours ago, Harrison Danneskjold said:

And if Google came to us at any time after that to say "you probably can't guess who, but someone on your forum is actually the next generation of chatbot!" we'd all know who, out of all the tens (or hundreds?) of thousands of our users, wasn't a real person.

 

You forgot to mention that, after we guess who the fake person is, Google hires us at exorbitant salaries with decades-long contracts to train the thing to APPEAR to think... up to a certain wall of evasion, non-integration, and level of effort... where it is to APPEAR either unwilling or incapable of going any further...

This kind of wall IS a trait of many real humans. The behemoth need only APPEAR to have it.

 

I like this:

 

8 hours ago, Harrison Danneskjold said:

And further: since the rest of us do have all our lights on, it'd only be a matter of time before we started to name what it was actually doing. There would be accusations of "anti-conceptuality" and "mere association, with no integration". And since this is OO, after all, it would only be a matter of time before it pissed someone off SO much that they went ahead and said that it "wasn't fully conscious in the proper sense of the term". We all would've been thinking it, long before that; past a certain point it'd only need to make a totally irrelevant "check your premises" to the wrong guy on the wrong day and he'd lay it out explicitly.

BUT this annoying "person" is outwardly the same as a real person who might troll the forum.

 

This sounds nice:

 

8 hours ago, Harrison Danneskjold said:

You cannot have human-level intelligence without consciousness

 

but it is (inadvertently) a straw man. [It LITERALLY is likely true but you are attempting to use it to mean something else]

 

The claim that something can APPEAR to have human-level intelligence is NOT the same as the claim that something HAS human-level intelligence. Remember the iceberg, and remember that the communication of the product of intelligence is not the same as the presence of intelligence. IMHO you may have started thinking about the definition of intelligence in terms of the rationalists (the many non-Objectivists whom you have referenced)... some of whom no doubt equate the concept intelligence with anything which produces what we see intelligent things as communicating.

For centuries only rational humans (no animals or plants) could add 2+2 to get 4. A naïve person, looking at the output of a calculator (or an abacus, for that matter) and without knowing how it works, might equate the paltry superficial product of the symbol "4", in response to the input of the symbols 2, +, and 2, with the kind of intelligence we need to add 2 + 2 in our minds and say "4", and hence that person might ascribe human intelligence (even rationality of some kind) to the machine. In getting to the "4" which is communicated, a human and an abacus are not the same and do not do the same things: that they produce the same superficial result is not indicative of how that result was produced.

What intelligence IS, is not simply taking inputs and producing outputs... intelligence is not only "processing information".  In fact, the word intelligence, which predates calculating machines by centuries, implicitly means a specific kind of higher consciousness, which plants and insects lack, and which Dolphins, Chimps and Man possess (to varying degrees).  Although intelligent consciousnesses can process information, processing information is not the process of intelligent conscious thought itself.

The RATIONALISTS have taken the concept of a type of consciousness in reality and attempted to redefine it in terms of abstract information, which is disastrous and anti-conceptual... it involves some wall of evasion, non-integration, or anti-effort... therefore, I posit that a sufficiently trained behemoth CAN impersonate a RATIONALIST...  :)

...............

I'd like to add we are fallible, and finite. 

A sufficiently sophisticated machine can generate an image which looks absolutely real up to the precision of our eyes, in terms of resolution, our knowledge of shading and perspective, and our experience of things in the real world. Our ability to consciously (unaided by scientific instrumentation) identify aspects of reality is limited. Even if you were to ascribe high regard to intuition and pattern recognition, we can already be gamed by artificial pictures, sounds and video (to varying degrees in various contexts), and one day the behemoths will be able to fake all of these and more, to the point that an unaided human would be unable to tell an artificial scene from a real one. I propose that this kind of gaming of a finite, fallible individual consciousness is in principle unlimited (limited only by the then-current level of brute processing power), and that eventually an unaided individual can and will be fooled by a blind behemoth of sufficient training and capacity. THIS WILL HAPPEN FOR THE NORMAL TURING TEST RELATIVELY SOON (<100 years).

 

I WILL AGREE with you that, given an army of scientists, unlimited time, and scientific instrumentation with commensurate processing power on its side, studying text messages from a fixed-capacity behemoth "of disguise", the sham would EVENTUALLY be revealed through scientific investigation.

........

 

On the flip side, I would NOT support the claim that JUST BECAUSE a single particular individual (no matter how smart) was fooled (for no matter how long... a decade of texting?) into believing the entity on the other side was human, we MUST THEREFORE CONCLUDE that, irrespective of whatever WE KNOW the thing on the other side to BE, it ACTUALLY WAS conscious.

That would be a "GET AWAY WITH IT" card if I ever saw one. The mere fact that something APPEARS through text communication to possess human intelligence to a finite person over any finite time does NOT mean that something MUST be conscious... ALL it means is that it was sophisticated enough to APPEAR so... and appearances can be (and in this case ARE) deceiving. One look under the hood and this sham evaporates.

 

What should we call what we have achieved when this happens?  NOT consciousness, or sentience or human intelligence... indeed it is not intelligence at all.

Recall the story about the man who was confused about what kind of Elephant a Toy Elephant was... "we have big elephants and smart elephants and toy elephants...  they all are KINDS of elephants aren't they?"  He is conflating the identity of a REAL animal with variations in size and smarts, with something which is actually only a TOY in the shape of (i.e. which mimics the outer three dimensional form of) an Elephant and not a kind of Elephant at all. 

Artificial Intelligence is not intelligence of any kind, any more than a Toy Elephant is any kind of Elephant. But inasmuch as "Toy Elephant" is perfectly valid to describe a TOY which looks like an Elephant, "Artificial Intelligence" is perfectly valid to describe something artificial which takes on the appearance of intelligence.

 

............

The rationalists secretly dream of a day when they can FOOL everyone, FOOL them all about there being a human on the other side of a paltry little text machine... and through evasion and anti-concepts fool them into thinking that their blind gargantuan of a toy is sentient. They sigh with ecstasy at the thought of one day announcing to the world of fools:

  LO, WE HAVE CREATED CONSCIOUSNESS ... LOOK UPON IT AND WONDER!

 

That day, likely before my death,  I'll be shaking my head in disappointment and disgust... as any good Objectivist would.

 


3 hours ago, StrictlyLogical said:

The claim that something can APPEAR to have human-level intelligence is NOT the same as the claim that something HAS human-level intelligence.

And what is "human-level intelligence" in the first place? Is this a "you know it when you see it" sort of thing? Individual humans have different levels of intelligence. We have various standards of measurement. If an AI scores a 100 IQ (the human average), does that make it human?


22 minutes ago, MisterSwig said:

And what is "human-level intelligence" in the first place? Is this a "you know it when you see it" sort of thing? Individual humans have different levels of intelligence. We have various standards of measurement. If an AI scores a 100 IQ (the human average), does that make it human?

Agreed.


On 9/18/2019 at 9:45 AM, StrictlyLogical said:

IMHO you may have started thinking about the definition of intelligence in terms of the rationalists (the many non-Objectivists whom you have referenced)... some of whom no doubt equate the concept intelligence with anything which produces what we see intelligent things as communicating.

Maybe. I've been describing my reasoning skills as "rusty" but the more I review what I've been posting, the more atrocious thought-habits I discover. Don't be surprised if I drop off this site sometime soon: I'm considering taking one copy of Atlas and one of the ITOE to a remote mountaintop somewhere.

On 9/18/2019 at 9:45 AM, StrictlyLogical said:

The RATIONALISTS have taken the concept of a type of consciousness in reality and attempted to redefine it in terms of abstract information, which is disastrous and anti-conceptual... it involves some wall of evasion, non-integration, or anti-effort... therefore, I posit that a sufficiently trained behemoth CAN impersonate a RATIONALIST...  :)

Except for that cheeky line at the very end, I have no other "maybes" for the rest of that. That's exactly why I wanted to avoid using those terms in the first place. I won't be happy if you're right about such flawed concepts STILL being a factor in this thread - because that's exactly what I think of them, too.

On 9/18/2019 at 9:45 AM, StrictlyLogical said:

Recall the story about the man who was confused about what kind of Elephant a Toy Elephant was... "we have big elephants and smart elephants and toy elephants...  they all are KINDS of elephants aren't they?"  He is conflating the identity of a REAL animal with variations in size and smarts, with something which is actually only a TOY in the shape of (i.e. which mimics the outer three dimensional form of) an Elephant and not a kind of Elephant at all. 

Artificial Intelligence is not intelligence of any kind, any more than a Toy Elephant is any kind of Elephant. But inasmuch as "Toy Elephant" is perfectly valid to describe a TOY which looks like an Elephant, "Artificial Intelligence" is perfectly valid to describe something artificial which takes on the appearance of intelligence.

That's another excellent point. The kinds of "artificial intelligence" we have today really shouldn't be called "intelligence" at all; it only serves to confuse the issue.

It doesn't yet make me think I'm wrong about the nature of "artificial intelligence" whenever we manage to actually achieve it. But if you know a better term for our modern toys then I'd prefer to use something else.

Actually, the "bot" suffix might suffice. Speaking personally, that would convey to me exactly what we have today and be totally inappropriate for a truly thinking machine. I'll use that for now.

 

On 9/18/2019 at 9:45 AM, StrictlyLogical said:

The rationalists secretly dream of a day when they can FOOL everyone, FOOL them all about there being a human on the other side of a paltry little text machine... and through evasion and anti-concepts fool them into thinking that their blind gargantuan of a toy is sentient

Depending on exactly whom you mean, I would very much disagree with that.

In Computing Machinery and Intelligence Alan Turing said:

Quote

I propose to consider the question, "Can machines think?" This should begin with definitions of the meaning of the terms "machine" and "think." The definitions might be framed so as to reflect so far as possible the normal use of the words, but this attitude is dangerous. If the meaning of the words "machine" and "think" are to be found by examining how they are commonly used it is difficult to escape the conclusion that the meaning and the answer to the question, "Can machines think?" is to be sought in a statistical survey such as a Gallup poll. But this is absurd. Instead of attempting such a definition I shall replace the question by another, which is closely related to it and is expressed in relatively unambiguous words.

That doesn't sound like the thrill of tricking someone, to me.

Sam Harris and Isaac Arthur aren't eager to fool anyone, either; in fact, since they're both hard-line agnostic as to whether the Turing Test would indicate consciousness or not, they'd probably agree more with you than me. Nor can I think of any of the computer scientists who're currently working on it (because I've listened to a few in my time) who cared enough about what others thought to ever be accused of such a motive.

They're operating on some twisted premises about the nature of consciousness and most of them are wildly overoptimistic, but not that.

 

On 9/18/2019 at 9:45 AM, StrictlyLogical said:

THIS WILL HAPPEN FOR THE NORMAL TURING TEST RELATIVELY SOON (<100 years).

I believe the last time we had this discussion it was established that the Turing Test has already been "beaten" by a grotesquely crude sort of chatbot which was programmed to monologue (with an obscene frequency of spelling and grammatical errors) about its sick grandmother. The judges "just felt so strongly" it had to be a real boy. The thing I remember most clearly was being absolutely enraged when I reviewed some of the transcripts from this test, saw that a single shred of THINKING would've shown it to be an obvious fraud, and revised who I thought should qualify as the judge of a proper Turing Test.

On 9/18/2019 at 9:45 AM, StrictlyLogical said:

That day, likely before my death,  I'll be shaking my head in disappointment and disgust... as any good Objectivist would.

I'll be screaming at my computer screen.

 

On 9/18/2019 at 9:45 AM, StrictlyLogical said:

The mere fact that something APPEARS through text communication to possess human intelligence to a finite person over any finite time, does NOT mean that something MUST be conscious... ALL it means is that it was sophisticated enough to APPEAR so... and appearances can be (and in this case ARE) deceiving.  One look under the hood and this sham evaporates.

That's the thing, though. What would one expect to see if one looked at the inner workings of another consciousness? And would a machine consciousness (if possible) look anything like an organic consciousness "under the hood"?

The question of what one would find under your hood or mine is big enough to warrant its own thread, let alone some artificial kind of consciousness.

 

---

 

Since we agree that a REAL human-level intelligence necessitates consciousness (thank you) I'm not sure what else I want to start before I return from that remote mountaintop. But this, I really must share.

This is amazing.

 

According to Wikipedia, I'm not the first one to think of training a Neural Net to participate in an online forum.

They called it Mark V Shaney

And Mark was much more amazing than what I hypothesized in that last post. Mark was something really special...

Quote

It looks like Reagan is going to say? Ummm... Oh yes, I was looking for. I'm so glad I remembered it. Yeah, what I have wondered if I had committed a crime. Don't eat with your assessment of Reagon and Mondale. Up your nose with a guy from a firm that specifically researches the teen-age market. As a friend of mine would say, "It really doesn't matter"... It looks like Reagan is holding back the arms of the American eating public have changed dramatically, and it got pretty boring after about 300 games.

My friends, a new age is dawning.

 

PS: Do we already have a few bots hanging around here?!?!!
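For anyone curious how Mark V Shaney produced prose like the quote above: a word-level Markov chain records, for every pair of consecutive words in a corpus, which words were observed to follow that pair, then random-walks those statistics. Here is a minimal sketch of the idea (the function names and the tiny corpus are my own, not Mark's actual code):

```python
import random

def build_chain(text, order=2):
    """Map each tuple of `order` consecutive words to the list of words
    that were observed to follow it in the corpus."""
    words = text.split()
    chain = {}
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        chain.setdefault(key, []).append(words[i + order])
    return chain

def generate(chain, length=30, seed=None):
    """Random-walk the chain from a random starting key, emitting up to
    `length` words, stopping early at a dead end (an unseen key)."""
    rng = random.Random(seed)
    key = rng.choice(list(chain))
    out = list(key)
    while len(out) < length:
        followers = chain.get(key)
        if not followers:  # no observed continuation for this key
            break
        nxt = rng.choice(followers)
        out.append(nxt)
        key = key[1:] + (nxt,)  # slide the window forward one word
    return " ".join(out)
```

Feed it a few megabytes of Usenet posts instead of a toy corpus and you get exactly Mark's style: locally fluent phrases (every word pair really did occur in the source) stitched into globally incoherent rambling.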


On 9/18/2019 at 4:46 PM, MisterSwig said:

It's just the two of us, so lots of thinking and working together. We're trying to produce one 30-minute show per week.

No, like what programs did you use to put the slideshow together with your own audio?


6 hours ago, Harrison Danneskjold said:

Don't be surprised if I drop off this site sometime soon: I'm considering taking one copy of Atlas and one of the ITOE to a remote mountaintop somewhere.

Nonsense.  The value of discussion is to work out things... not to bandy about things one has already worked out.  You belong here as you are.

6 hours ago, Harrison Danneskjold said:

Depending on exactly whom you mean, I would very much disagree with that.

First, I attributed such a motive only to rationalists... there are many scientists who do not fall into that category. Second, I was mostly being colorful; in reality the mistake is an honest one, especially for rationalists, although being fooled by the fool who fools himself creates the same result, only by a slightly different route.

6 hours ago, Harrison Danneskjold said:

That's the thing, though. What would one expect to see if one looked at the inner workings of another consciousness? And would a machine consciousness (if possible) look anything like an organic consciousness "under the hood"?

The question of what one would find under your hood or mine is big enough to warrant its own thread, let alone some artificial kind of consciousness.

My point is that the sham evaporates when you see the simplicity and mechanistic brute force of fake intelligence. I agree that until we understand consciousness, looking at a real intelligence will be baffling; but once we have a science of consciousness, we'll be able to identify its fundamentals.

 

I do agree with most of what you say and perhaps now believe we are in agreement in principle.

I’ll not concede but state (I was never in disagreement with you on this) that the thing I think you see is that things are what they are, and the properties they exhibit, how they act, etc., are in accordance with their nature. This is solid Objectivism... in principle and in reality the fake behemoth will never exhibit everything a real consciousness does... the PRACTICAL problem with a text interface is that it is an EXCEEDINGLY poor instrument for the identification of things in reality.

Only a real Monet would look like a Monet to an expert under bright lights and close up... enough for people to pay millions via Sotheby’s based on that assessment of reality. But a common person wearing a partial blindfold at 100 feet in a dimly lit room?... well now, that’s not a fair test, is it?


6 hours ago, Harrison Danneskjold said:

No, like what programs did you use to put the slideshow together with your own audio?

Oh, just Windows Live Movie Maker for video editing. Picasa for photos. I record myself with the Lexis Audio Editor app on my phone. Eiuol records on his desktop mic, I think. And now we're using Skype to record a phone chat segment.


10 hours ago, StrictlyLogical said:

Nonsense.  The value of discussion is to work out things... not to bandy about things one has already worked out.  You belong here as you are.

Yes, but thinking is not a team sport. I've been working these ideas out in any odd moments I've been able to find, but the problems they've highlighted will take a bit more than that. The "remote mountaintop" bit was a dash of my own color, though; it shouldn't take more than a week or two, once I get to it.

 

Besides. The Golden Age arrived today. :thumbsup:


  • 2 weeks later...
On ‎9‎/‎20‎/‎2019 at 7:15 PM, Harrison Danneskjold said:

Yes, but thinking is not a team sport. I've been working these ideas out in any odd moments I've been able to find, but the problems they've highlighted will take a bit more than that. The "remote mountaintop" bit was a dash of my own color, though; it shouldn't take more than a week or two, once I get to it.

You never need isolate yourself while you are working things out.  I am not sure what kinds of "interactions" you have dealt with in the past, but please don't be uncomfortable working through ideas for yourself WHILE exchanging those ideas with us here.

We selfishly appreciate your presence and participation here.

:thumbsup:


On 9/30/2019 at 7:34 PM, StrictlyLogical said:

You never need isolate yourself while you are working things out.  I am not sure what kinds of "interactions" you have dealt with in the past, but please don't be uncomfortable working through ideas for yourself WHILE exchanging those ideas with us here.

We selfishly appreciate your presence and participation here.

:thumbsup:

Thank you. A lot.

It's mainly just thinking in non-essentials. I spent several weeks on the Immigration thread, looking at the whole thing in the wrong (specifically non-essential) way. And over here I moved the goalposts from "the Turing Test will work" to "it'd work with the right judge" to "human-level intelligence requires consciousness" - every step of which was better than the last, but all of which indicate superficial thinking. It's a little infuriating. But now that I've identified what it is, I've been trying to develop some better thought-habits. Nothing's gonna happen overnight, of course, but I think I'll be ready to restate my case (or possibly retract it) soon.

As for the kinds of "interactions" I'm used to - since being open about my mental struggles has occasionally taught some very good liars to get even better, whatever you suspect about it is probably true. But I solved those problems a long time ago.

And I'll be back with a vengeance soon! :thumbsup:

 


  • 6 months later...

Starting just a tad before 4/5ths of the way into Chapter 2 of the Storm-Sculptor better articulates the question being framed here.

Using the context leading up to the expression "I feel as if that meeting—"

Kind of a 'foreshadowing' parallel to the movie titled, The Perfect Storm

 

