Objectivism Online Forum


For those who don't know, at some point we will be able to cure senescence (or the aging process) as long as we don't blow ourselves up first.

 

Now, curing senescence is not the same as curing death. We'd still be able to starve (meaning we'd all still need jobs), catch Ebola, suffocate, or get run over. Nobody would be Ayn Rand's "immortal and indestructible robot"; there simply wouldn't be any hard cap on our potential life spans. In fact, given enough time without the biggest cause of death in the Western world (old age), it's a statistical certainty that we would all die in some other way eventually. One person might drown after 350 years while another might get shot at 5,000 years old, but sooner or later we would all kick the bucket, with or without senescence.
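That "statistical certainty" is just compound probability. A quick sketch (my own illustration, not part of the original post; the 0.05% annual risk is an assumed, made-up figure):

```python
# Sketch: survival under a constant per-year risk of accidental death.
# With aging cured, suppose each year carries an independent 0.05% chance
# of dying by accident, disease, etc. (a hypothetical, illustrative number).

def survival_probability(annual_death_risk: float, years: int) -> float:
    """Chance of surviving `years` consecutive years of constant risk."""
    return (1.0 - annual_death_risk) ** years

risk = 0.0005  # 0.05% per year (assumed)
for years in (80, 1_000, 10_000):
    print(f"{years:>6} years: {survival_probability(risk, years):.4f}")
```

Even at that tiny rate, the odds of reaching 10,000 years come out under one percent, which is all the claim above needs: no hard cap, but eventual death is overwhelmingly likely.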

 

My question is whether Egoism would still apply without that hard cap of having "say, 80 years left to live" and what (if anything) would change about its specific applications. I'm also assuming this would be an unqualified good but I'm sure that's gonna come up, too.

 

What do you think?

11 hours ago, Harrison Danneskjold said:

For those who don't know, at some point we will be able to cure senescence (or the aging process) as long as we don't blow ourselves up first.

 

Now, curing senescence is not the same as curing death. We'd still be able to starve (meaning we'd all still need jobs), catch Ebola, suffocate, or get run over. Nobody would be Ayn Rand's "immortal and indestructible robot"; there simply wouldn't be any hard cap on our potential life spans. In fact, given enough time without the biggest cause of death in the Western world (old age), it's a statistical certainty that we would all die in some other way eventually. One person might drown after 350 years while another might get shot at 5,000 years old, but sooner or later we would all kick the bucket, with or without senescence.

 

My question is whether Egoism would still apply without that hard cap of having "say, 80 years left to live" and what (if anything) would change about its specific applications. I'm also assuming this would be an unqualified good but I'm sure that's gonna come up, too.

 

What do you think?

You know this, HD.

Even if age did NOT kill (NO cap) and there were only a continuous tiny chance of death through accident, negligence, or murder, the alternative between life and death would still exist, and the necessity and possibility of values, the choice to live, morality, etc., would all still be applicable.

Only when something faces NO life/death alternative, no possibility of loss of life (imagine Rand's INDESTRUCTIBLE robot), would there be no objective values, and egoism would be inapplicable.

 


16 hours ago, Harrison Danneskjold said:

My question is whether Egoism would still apply without that hard cap of having "say, 80 years left to live" and what (if anything) would change about its specific applications. I'm also assuming this would be an unqualified good but I'm sure that's gonna come up, too.

Our potential life span affects the choices we make. For example, now that people don't typically die at thirty or so, they often wait until their 30s to have children, selfishly pursuing a business career first. Before modern science extended the average lifespan, women were compelled to marry young and have children before they died from something that we now cure with a simple injection. If we extend our lifespan to, say, 200 years, perhaps we'll see other changes in our general life choices.

8 hours ago, StrictlyLogical said:

You know this HD.

Yes, and I wish more of the people here did. I'm sure you remember all the times I've had to explain that curing senescence would not automatically fix every problem in everyone's life, that morality would still apply, and that it's something we should try to do if we can. These aren't just the isolated misconceptions of one or two people. Just the other day I found the same argument being had over on that Immigration thread (where it doesn't belong), between one guy who thought we'll all be immortal in 20-30 years, at which time we should just build a perfectly moral philosopher-AI-king, and another who thought that Egoism wouldn't apply to such a world.

I have so many things to say about that whole tangent (as I'm sure you know). Hopefully I can say them all one more time here and then just refer back to this whenever they come up again.

 

One thing off the top of my head is that a general AI, by its very definition, would come with all the weaknesses and pitfalls of a traditional brain. It couldn't be infallible or omniscient, and it would be neither automatically rational nor automatically "perfectly moral". Anything programmed to have no choice about being irrational or evil could not be conscious. That's not how it works.

 

2 hours ago, MisterSwig said:

If we extend our lifespan to, say, 200 years, perhaps we'll see other changes in our general life choices.

Exactly. Personally, one of the first things I'd do would probably be to read through the entirety of the Critique of Pure Reason (which I really ought to do anyway, but life is short - you know?).


Actually, on the subject of general AI, the fact that it'd have free will (by definition) is also why I don't think Asimov's Three Laws would even work. If something is actually conscious in the same way that we are, then it would be able to choose to be suicidal or homicidal, just like we can.

Which makes me doubt how profitable it'd be to actually build one. You wouldn't want to build it just to take care of household chores (essentially enslaving it), because that would come with all the practical difficulties of enslaving an organic person (and history shows this includes the possibility of a violent slave revolt).

 

The butter-passing-robot in Rick and Morty that can ask about the meaning of life (which, for it, is to pass butter) is funny, but not at all realistic. If you only wanted a butter-passing-robot then you'd only want it to be as smart as it'd need to be for that purpose.

 

And while I'm sure there are plenty of people who'll keep trying to build a general AI just to see if it's possible (which is why I've tried my hand at it a few times), in the future we'll probably build a whole lot of Roombas and very, very few HALs.

 


On 9/1/2019 at 9:15 PM, Harrison Danneskjold said:

I'm also assuming this would be an unqualified good but I'm sure that's gonna come up, too.

One counterargument made by some science fiction stories is that it would cause things to stagnate. I'm skeptical of this counterargument, but uncertain.

31 minutes ago, Doug Morris said:

One counterargument made by some science fiction stories is that it would cause things to stagnate. I'm skeptical of this counterargument, but uncertain.

If Ayn Rand, Immanuel Kant, Thomas Jefferson, and Sir Isaac Newton were all still alive today (and all caught up on the latest intellectual developments), "stagnant" is probably one of the last words I'd apply to the results. I'm not saying they'd all be making one contribution after another to human well-being (which is why I threw Kant into the mix), but they'd certainly be doing SOMETHING.

The much more common argument I've heard on OO is that Egoism wouldn't apply to a potentially-infinite life span and that it'd therefore be wrong of us to even attempt it. It's a misapprehension (either about Egoism or about what "curing senescence" would really mean) that SL seems to have already dealt with for me. Thanks, by the way.

 

 

The only other argument I've heard on OO was presented as "you'd certainly get bored after the first million years".

Personally, if I managed to live for a million years (in spite of all the dangers that will always exist) and if I did get bored, then I could always take a handful of ecstasy, moon the IRS building, and choose some form of death that'd be suitable for who I was in life. It's not like it will ever be hard for someone to die if that's what they want. And both of those "ifs" are pretty damn big ones, with or without senescence.

 

But if you're struck by some other thought that keeps you uncertain - that's exactly what this thread is for. :thumbsup:



You've made a good point about the stagnation argument.

If I somehow managed to live for a million years, how much would the me of a million years from now have in common with the me now?  How much would the me of a million years from now even remember about the me now? 

3 hours ago, MisterSwig said:

Unless we cure dementia, you probably won't even be volitional for more than one or two hundred years.

In which case I'd take a handful of ecstasy, organize an orgy on the White House Lawn and die a death that'd "fit with" who I was in life.

What argument are you trying to make?

 

5 hours ago, Doug Morris said:

If I somehow managed to live for a million years, how much would the me of a million years from now have in common with the me now?  How much would the me of a million years from now even remember about the me now? 

That, I don't know.

I don't know if I'd remember typing out this post (or even what this forum was) 1,000 years from now. Since a large part of what we call "forgetfulness" consists of mechanisms in the brain that destroy unused information to make room for more useful stuff, I could not honestly tell you how much I would have in common with the "me" from 10,000 years in the future. But there are a few things.

Just as each of us has a few things that have remained constant since our earliest memories (in spite of such waste-removal systems), I can be pretty sure that I would have SOME things in common with myself-10,000-years-from-now.

 

I can promise that this future me wouldn't have any problem explaining its disagreements to people, but would absolutely loathe the possibility of forcing any such agreement onto someone (or, at least, the pretense of an agreement, since it'd know that the real thing can't be gotten that way). I could tell you what kind of music it'd probably like, too.

As for the rest (the bits I couldn't currently account for) - can any of us actually say what we'll be like in our old age? Or how much of this we'll remember?

 

At least I know I'll be one of those old men who gets a cane just to beat little whippersnappers with. It's part of my personal character.


HD -  You need to own and read the Golden Age trilogy by John C. Wright.

DO IT. You WILL thank me later for suggesting it to you. Do NOT read any reviews or spoilers; just buy it... (if you have to... just buy the first book, used, paperback... less than 10 bucks now)

The Golden Age

The Phoenix Exultant

The Golden Transcendence

 

and please let me know what you think and feel after reading them.

 

 

 

5 hours ago, Harrison Danneskjold said:

What argument are you trying to make?

That the human mind probably has a time limit. It's very hard to get past 100 years without dementia.

49 minutes ago, MisterSwig said:

That the human mind probably has a time limit. It's very hard to get past 100 years without dementia.

That sounds like a general "argument" against any treatment or cure for senescence.

 

The human heart probably has a time limit, after which deterioration is inevitable OR the human heart is always and unavoidably deteriorating.

Human lungs probably have a time limit, after which deterioration is inevitable OR the human lungs are always and unavoidably deteriorating.

The human liver probably has a time limit, after which deterioration is inevitable OR the human liver is always and unavoidably deteriorating.

...

etc etc for all organs and natural systems of a human being.

 

Any treatment regime for senescence as such is, by definition, a global treatment of a human being. It implicitly includes any and all specific treatments aimed at reducing or eliminating deterioration of every specific organ and natural system, including the brain, so that all the various organs and systems function as they should, i.e., so that the mind (what the brain does) does not deteriorate over time.


14 hours ago, StrictlyLogical said:

HD -  You need to own and read the Golden Age trilogy by John C. Wright.

Alright! Alright! I have Audible; I'll see if I can get it tonight. But it had better not suck!

 

13 hours ago, MisterSwig said:

That the human mind probably has a time limit. It's very hard to get past 100 years without dementia.

And what if it does?

As SL pointed out, that'd be one of the things we'd be trying to cure. As Isaac Arthur pointed out in the video in the OP, it's something we've already been trying to address for as long as we've been extending human life. And as I apparently said last night (despite being blackout drunk), anyone who truly did not want to live any longer would not have to. Staying alive is hard; dying is ridiculously easy. And I don't see any evidence to suggest that dementia is some special kind of unsolvable problem in the first place.

I don't want to discourage you from voicing such concerns (which are, after all, exactly what this thread is for) but could you please be a bit more specific?

46 minutes ago, Harrison Danneskjold said:

I don't want to discourage you from voicing such concerns (which are, after all, exactly what this thread is for) but could you please be a bit more specific?

If I understand you, you're saying that curing senescence will allow us to live productive lives for hundreds or thousands of years. I'm saying that probably won't be the case, unless we can cure dementia, whose onset is not always age-related. So stopping aging is not enough.

There's also a deeper conceptual issue here, which I can get into once you've addressed my concern about dementia. 

On 9/4/2019 at 11:27 PM, MisterSwig said:

If I understand you, you're saying that curing senescence will allow us to live productive lives for hundreds or thousands of years. I'm saying that probably won't be the case, unless we can cure dementia, whose onset is not always age-related. So stopping aging is not enough.

There's also a deeper conceptual issue here, which I can get into once you've addressed my concern about dementia. 

 

Well, I started researching dementia again last night (which I have done at excessive length before, in order to prove a point to Don Athos) and couldn't find a single thing to suggest that it's not age-related. On the contrary: although every source agreed that age isn't the only factor in its development, every single one said that it is the main factor.

You understand my position perfectly, though, despite my current inability to understand yours.

 

Do you think it would violate some metaphysical rule to cure dementia (along the same lines as omniscience would)?

On 9/4/2019 at 9:34 AM, MisterSwig said:

That the human mind probably has a time limit. It's very hard to get past 100 years without dementia.

Harrison is on the right track, I think, but some things should be added to the discussion.

Not every case of dementia is age-related, to be sure, but it is a form of damage no matter what, and age will always be at least a factor. It is well documented that no matter who you are, you will experience cognitive decline, especially with memory; on some level, it's the brain breaking down over time. So you might say the purely biological human brain has a hard limit before neurons and the connections between them start to deteriorate (and who knows, maybe that limit can be extended very far). But if you can replace the parts that break down, then the hard limit doesn't matter anymore. Neural prosthetics are a thing these days. Those also might have hard limits, but then you replace them again.

To me, curing aging is more about finding the ways of going past biological limits caused by natural decay and disintegration.


Another great book (and I'm going to try to track that other one down tomorrow, SL) is The Terminal Man by Michael Crichton. It's about a man who gets an experimental brain implant to treat his "thought seizures". They accidentally put the chip too close to a pleasure center, so he eventually becomes addicted to the tiny electric shocks it gives him...

 

Also, Elon Musk has apparently founded a company (Neuralink) specifically for brain-computer interfaces. Hopefully not ones that'll make anyone psychotic, but you never can tell with Elon. ;)

On 9/6/2019 at 11:46 PM, Harrison Danneskjold said:

Do you think it would violate some metaphysical rule to cure dementia (along the same lines as omniscience would)?

Perhaps. But Eiuol and I have decided to discuss this topic on our new YouTube show. So I'll hold my thoughts until then.

On 9/8/2019 at 6:26 PM, Eiuol said:

To me, curing aging is more about finding the ways of going past biological limits caused by natural decay and disintegration.

Do you mean temporal limits? How does one exceed a natural limit?

4 hours ago, MisterSwig said:

How does one exceed a natural limit?

First you must recall that all natural limits are natural limits of something, i.e., of the WHAT to which the natural limit pertains.

Those limits of nature obey identity, just like everything else. The limits of any particular object, natural system, or material depend upon the identity of that object, natural system, or material. Change the identity of the object, natural system, or material (in the relevant way) and you change the natural limit, precisely because you changed the WHAT to which the natural limit pertains.

In fact, it is more correct to say that the natural limit of an X1 simply is not the natural limit of an X2 (if they differ in the relevant ways).

 

When dealing with the man-made, one must also identify the natural limits on what man can DO to the WHAT that has its own natural limits, e.g., in the act of changing an X1 into an X2.

 

"What IS the natural limit on what rock can be made into by man?"

Rock on its own could perhaps be made into bricks, but it must also be crushed and combined with water and lime, etc., to make bridges and buildings... but then some rocks are made of iron ore, and others contain silicon... and we know what these can be made into by man through things like smelting, casting, and reduction.

 

The natural limits on what man can DO to something natural are set by the WHAT and by what man can achieve... in the end it all comes down to what is physically, chemically, and biologically possible.

 

 

5 hours ago, MisterSwig said:

Do you mean temporal limits?

You could phrase it that way, but I said "biological" so that I could emphasize the biological parts of your body (in contrast to prosthetics or implants).

 

18 hours ago, StrictlyLogical said:

Rock on its own could perhaps be made into bricks, but it must also be crushed and combined with water and lime, etc., to make bridges and buildings... but then some rocks are made of iron ore, and others contain silicon... and we know what these can be made into by man through things like smelting, casting, and reduction.

To me, a "limit" is the point at which it becomes impossible to have more of something. It represents the line between a possible and impossible amount. The capacity of my stomach, for example, has a limit. I can only fit so much food in there at once. The uses for a rock also have a limit, based on its nature. But a brick is not a rock anymore. By shaping the rock, you've changed its identity, and therefore its natural limits. To exceed man's biological limits, we would need to change man into something not-man, which means we have not actually exceeded man's limits. We've merely used him to make something else.

