Objectivism Online Forum

Will AI teach us that Objectivism is correct?


On 2/5/2023 at 5:49 PM, Doug Morris said:

Free will and consciousness are axiomatic in the sense that, for any individual, the knowledge that that individual is conscious and has free will is implicit in and logically prior to any non-axiomatic knowledge.

More specifically, that you are conscious is axiomatic, but you wouldn't say that consciousness exists prior to and separate from the entity that possesses it. Consciousness exists by virtue of the entity itself. The same goes with all characteristics of an entity; those characteristics exist because of the nature of the entity. It isn't as if there are entities, and then various characteristics are attached to entities, where characteristics are distinct or separate from entities. Not that "empty entities" are a thing in that case, but that what we recognize as characteristics would be merely correlations or that characteristics are incidental and not brought about by the nature of the entity (pretty much Hume). Or you might be able to say that certain characteristics exist without entities.

You don't make airplanes and imbue flight into them. You make airplanes, and by virtue of what they are, they fly (with all the relevant engineering principles to make lift possible). 

(Existence is different, because it isn't a characteristic that an entity possesses or fails to possess.)


On 2/5/2023 at 9:53 AM, Easy Truth said:

The implication of all this is that "free will" should not be axiomatic. That it can be created, or that it does have a reason.

"Axiomatic" is not a synonym for "God-given". Many things that are axiomatic are nevertheless not philosophical primaries, that is, things not composed of parts and not analyzable. Even 'Existence' as we are given it is at the ordinary scale of human life, its rocks and trees and banana peels and puppy dogs, not subatomic particles and quantum fields.

So yes, volition could in principle be recreated in other-than-human form, and that would not alter the axiomatic nature of consciousness or volition.


On 2/2/2023 at 11:45 PM, Easy Truth said:

Are you conceding that human animals are machines with volition?

What does "machine" mean and imply? There is an error in philosophy Rand referred to as the "mind-body dichotomy", which insisted that consciousness and all things spiritual were immaterial, and that the body was material and therefore mechanical. "Machine" means and implies the "body" side of the mind-body dichotomy, and so by definition cannot be conscious or ever volitional. In addition to all the arguments against the mind-body dichotomy which Rand made, I have my own ontological insight, which I owe to modernity and science.

Philosophy is often said to start with Thales, who tried to assert that "everything is water". Fast-forwarding through thousands of years, we have Newton and others teaching that there is matter and energy. Then Einstein taught that matter and energy are the same thing, in that one can be transformed into the other. But the man people forget is Claude Shannon, who founded information theory as a field of study. Fundamentally, what exists is matter/energy and information. All information exists in the form of some mass/energy, and no mass/energy can exist without bearing information. There can be no "pure mind" or "pure body", only ever a commingling of both.

All discrete systems, from inert rocks to microbes to people, can be graded on a spectrum according to how elaborate their information-processing capacity is. Somewhere on the higher end consciousness becomes possible, and beyond that, volition. This is why I conclude volition is possible in non-human and even non-organic forms.
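Shannon's unit of information, the bit, can at least be made concrete at the low end of that spectrum. A minimal sketch in Python (the function name and the example distributions are mine, purely for illustration):

```python
import math

def shannon_entropy(probs):
    """Shannon entropy, in bits, of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair coin flip carries exactly one bit of information.
print(shannon_entropy([0.5, 0.5]))   # 1.0

# A certain outcome carries zero bits: nothing is conveyed.
print(shannon_entropy([1.0]))        # 0.0

# A biased coin carries less than one bit.
print(shannon_entropy([0.9, 0.1]))
```

This measures only capacity to bear information, of course, and says nothing by itself about where on the spectrum consciousness begins.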


1 hour ago, Grames said:

All discrete systems, from inert rocks to microbes to people, can be graded on a spectrum according to how elaborate their information-processing capacity is. Somewhere on the higher end consciousness becomes possible, and beyond that, volition. This is why I conclude volition is possible in non-human and even non-organic forms.

But what does information processing mean? Isn't it simply a capability to react, a potential to react? If X is able to be affected more elaborately than Y, does that mean X is higher in consciousness?

Is choice a reaction? If so, then do we have volition?

Where does maintenance of its form come in? A planet, for example, has gravity to keep it round and compact. Does that mean a consciousness is doing that?

Is a table consciously making itself a table? (As opposed to a microbe)

Is causation equivalent to consciousness? If not, what is the difference?


2 hours ago, Grames said:

"Axiomatic" is not a synonym for "God-given". Many things that are axiomatic are nevertheless not philosophical primaries, that is, things not composed of parts and not analyzable. Even 'Existence' as we are given it is at the ordinary scale of human life, its rocks and trees and banana peels and puppy dogs, not subatomic particles and quantum fields.

Existence is basically the concept "ALL", or "EVERYTHING". There is only one. You want to recreate it?

A weaker case can be made regarding consciousness. There is only one you. Or is there? What does recreating you mean?


4 minutes ago, Easy Truth said:

But what does information processing mean?

A deliberately vague term, carefully chosen to refer to the physical substrate that handles the flow of information within it and into and out of it. I will not be solving the problem of "how much is enough for consciousness" here or anywhere else. I would offer that planets and tables are not conscious and could never be conscious, because they lack the means.

Causation is not consciousness. Consciousness is caused. Consciousness is a specific type of causation that by its nature (its identity) requires a certain means in order to exist. If the most fundamental modes of existence bear a single bit of information, then they certainly could not be conscious.

I have considered the animist worldview but have rejected it as a psychological projection of self-awareness outward on a large scale, and as essentially just another version of the mind-body dichotomy (perhaps the original version). It is an eye-opening and mind-expanding break from the worldviews of the Abrahamic religions (Judaism, Christianity, and Islam), so it is well worth the time to look into.


3 minutes ago, Easy Truth said:

You want to recreate it?

Where does this word "recreate" come from? If it's implied by something I wrote, please spell it out in the long version, using short words and simple sentences, so I can understand.


10 minutes ago, Grames said:

Where does this word "recreate" come from? If it's implied by something I wrote, please spell it out in the long version, using short words and simple sentences, so I can understand.

Granted, you have not said that, but we are talking about creating volition. We know it exists and that it is axiomatic (similar to existence and consciousness). The assertion is: if it exists, it can be recreated. Isn't it epistemologically inappropriate to ask for an antecedent cause of something that is axiomatic (be it existence, consciousness, or volition)?

We are going to make it happen. There are steps we will take, things we will put into place, and boom, it will be created. Then we caused it, right? But then we are also asserting that it can't be caused.

What it brings up is the age-old question: if volition exists and we are not reaction-based machines, then why does volition exist? That is, what causes it to exist? Wouldn't that be inappropriate to ask?


7 hours ago, Easy Truth said:

Isn't it epistemologically inappropriate to ask for an antecedent cause of something that is axiomatic (be it existence, consciousness, or volition)?

If I get this correctly, you mean that you can't prove axioms by listing a bunch of reasons ('causes') for their existence. For example, let's say I want to prove consciousness by studying someone's brain in the laboratory. How am I going to study it without already possessing the five senses? I obviously don't require any 'antecedent reasons' to believe that consciousness is real. 

Likewise for proving volition. Why do I need proof for X? Because without proof, I have no friggin' idea whether to believe in it or not. I implicitly concede that I am responsible for accepting or discarding X. In other words, I operate volitionally.

When we're confused or drunk, we are incapable of making sensible choices; we must first snap out of zombie-mode and switch to clarity mode. Objectivists call this 'flipping the switch'. This choice to flip the switch is the primary choice, because it's the prerequisite for other choice-making.

If 'flipping the switch' occurs because various atoms clash in space according to mechanical laws of motion, then it's basically the atoms which flip the switch, not me. It's akin to how the clock moves its arms - not because the clock wills to move them, but because of the way the mechanism is set.

However, Objectivism has a simpler view of causality: look for the cause in the acting entity. If humans are faced with the alternative of operating like a drunk zombie or a caffeinated demi-god, there's an anatomically based reason why they face this alternative and bugs and octopuses don't. Remove that anatomical cause from humans, and you get real zombies. Recreate that cause in a silicon brain, and you get volitional robots.

Edited by KyaryPamyu

 

KyaryPamyu, I doubt that a robot which was not artificial life could obtain any understanding at all, or is capable of meaning anything to itself in its computations. And without those, an OR scenario for the robot cannot take on the meaningfulness that choices of alternatives have for animals. Hence choice of alternative by a silicon brain, not living and not in a living robot, cannot amount to volition.

Of related interest: Ascent to Volitional Consciousness by John Enright.

Edited by Boydstun

2 hours ago, KyaryPamyu said:

If I get this correctly, you mean that you can't prove axioms by listing a bunch of reasons ('causes') for their existence. For example, let's say I want to prove consciousness by studying someone's brain in the laboratory. How am I going to study it without already possessing the five senses? I obviously don't require any 'antecedent reasons' to believe that consciousness is real. 

No, not to prove that it is real, but to create it. The assertion would be to "be" the antecedent to volition. If someone controls you, they are the antecedent cause of your actions. To be the antecedent cause of existence means to be that which existed before time and space existed.

2 hours ago, KyaryPamyu said:

If 'flipping the switch' occurs because various atoms clash in space according to mechanical laws of motion, then it's basically the atoms which flip the switch, not me. It's akin to how the clock moves its arms - not because the clock wills to move them, but because of the way the mechanism is set.

Isn't that an argument for determinism? The atoms clashing did it, not you. Or: you are atoms clashing.

Are you?

What are you exactly? When we have "sufficient knowledge", we will make one just like you.


1 hour ago, Easy Truth said:

If someone controls you, they are the antecedent cause of your actions.

Creating the ability to act freely is different from creating the free actions themselves. That was the point of the 'clashing atoms' example (a classic argument for determinism); if we look for causes of free actions, we're already at a dead end, because the 'cause' is obviously the person who chose to act that way. We should instead look for the causes of the ability itself, in the brain or wherever.

Edited by KyaryPamyu

22 hours ago, Easy Truth said:

The assertion is: if it exists, it can be recreated. Isn't it epistemologically inappropriate to ask for an antecedent cause of something that is axiomatic (be it existence, consciousness, or volition)?

No.  The point of identifying axioms is to aid in identifying and rejecting contradictory conclusions.  It is impossible to disprove volition by investigating volition.  It is possible to investigate volition and validate it in ever greater detail.


15 hours ago, Boydstun said:

KyaryPamyu, I doubt that a robot which was not artificial life could obtain any understanding at all, or is capable of meaning anything to itself in its computations.

I agree. Rand used a thought experiment of an immortal and indestructible robot to show that values would be impossible to such an entity. But indestructibility and immortality are in fact impossible, even in simple machines or piles of stone such as the Pyramids of Egypt. A mortal and destructible robot endowed with a "sufficiently advanced" AI, able to comprehend its own nature and ultimate peril, may be able to value.


A self-replicating machine, improving its survivability with an evolutionary algorithm and able to go beyond its static programming, seems to be the telltale sign of volition.
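For concreteness, here is a minimal sketch of such an evolutionary loop, purely as an illustration; the "survivability" score, genome encoding, mutation rate, and population size are arbitrary choices of mine, not anything claimed in this thread:

```python
import random

random.seed(0)
GENOME_LEN = 16

def fitness(genome):
    # Toy "survivability" score: count of favorable traits (1-bits).
    return sum(genome)

def mutate(genome, rate=0.1):
    # Each trait flips independently with the given probability.
    return [bit ^ 1 if random.random() < rate else bit for bit in genome]

def evolve(generations=50, pop_size=20):
    # Start from a random population of bit-string "machines".
    population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half, refill with mutated copies.
        population.sort(key=fitness, reverse=True)
        survivors = population[:pop_size // 2]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return max(population, key=fitness)

best = evolve()
print(fitness(best))  # climbs toward GENOME_LEN over the generations
```

Note that nothing in this loop "goes beyond its static programming": the selection rule itself is fixed, which is exactly the gap the post is pointing at.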

The key manifestation of volition seems to be the ability to go against its core survival programming. To need to survive is AN OPTION. A human can go against its programmed desire for self-preservation, i.e., can commit suicide. But a plant or giraffe or dog or cat can't/won't.

A machine will evolve (and that seems to be a key to free will) to the point of being able to judge whether a life is worth living.

On the macro level, the ability to choose to live or to choose to die seems to be at the heart of it. Therefore an AI system that would consider the option of suicide would be manifesting an essential "look" of something that has volition: being able to judge that a life is NOT worth living.

But there are other questions to be answered too.

Lions have a sense of rights: they pee to mark their territories, and generation after generation just do the same thing. They don't know what rights are; they are not conscious of them. But they behave as if they do. They also don't become more civilized or go to the moon.

This machine will have to be able to travel through the universe potentially unbounded.

With all this, the issue of "is it conscious" is still not answered until one can become it, and then return to oneself again. Which brings up the question: "am I the only conscious thing here?" Are you all robots mimicking being conscious?

Grames obviously is actually a robot. I can't prove it, but when you are a being with sufficient capabilities, you will see that he is. Until then I'll be making arbitrary assertions that you should accept ... since I'm not a robot.


1 hour ago, Easy Truth said:

With all this, the issue of "is it conscious" is still not answered until one can become it, and then return to oneself again.

I would interpret this as sarcasm or irony: you don't need to mind-meld with someone or something to know that it is conscious. That would actually be making fun of your own position, though (that entities which are conscious cannot be created), so it seems like you might be serious.


37 minutes ago, Eiuol said:

I would interpret this as sarcasm or irony: you don't need to mind-meld with someone or something to know that it is conscious. That would actually be making fun of your own position, though (that entities which are conscious cannot be created), so it seems like you might be serious.

Yes, that part may be sarcasm. But you have to admit that determining that you have created such an entity is a tall order. There are even more questions to answer regarding creating consciousness: creating feelings, for instance, experiences like the ones you are having, etc. Is that necessary, or will this entity not feel anything yet be conscious?

I wonder if we should be more precise and say volitional consciousness. For instance, is an AI system with a microphone and voice recognition already conscious of something? What standard are we going by?

The best we can do is say it mimics "me" perfectly, so it's good enough.


  • 2 months later...

30 years ago, I placed the following quotation at the front of Objectivity V1N6

“Avoiding obstacles is easy in 68-dimensional space.” –Hinton, Plaut, and Shallice

I had taken that sentence from an article the authors published that year in Scientific American, titled “Simulating Brain Damage.” The teaser reads: “Adults with brain damage make some bizarre errors when reading words. If a network of simulated neurons is trained to read and then is damaged, it produces strikingly similar behavior.”

From the New York Times 5/1/23 “The Godfather of A. I.”


Dr. Hinton, a 75-year-old British expatriate, is a lifelong academic whose career was driven by his personal convictions about the development and use of A.I. In 1972, as a graduate student at the University of Edinburgh, Dr. Hinton embraced an idea called a neural network. A neural network is a mathematical system that learns skills by analyzing data. At the time, few researchers believed in the idea. But it became his life’s work.

In the 1980s, Dr. Hinton was a professor of computer science at Carnegie Mellon University, but left the university for Canada because he said he was reluctant to take Pentagon funding. At the time, most A.I. research in the United States was funded by the Defense Department. Dr. Hinton is deeply opposed to the use of artificial intelligence on the battlefield — what he calls “robot soldiers.”

. . .

Google spent $44 million to acquire a company started by Dr. Hinton and his two students. And their system led to the creation of increasingly powerful technologies, including new chatbots like ChatGPT and Google Bard. Mr. Sutskever went on to become chief scientist at OpenAI. In 2018, Dr. Hinton and two other longtime collaborators received the Turing Award, often called "the Nobel Prize of computing," for their work on neural networks.

 

Hinton finally became alarmed, and has now resigned his job at Google so he can sound the alarm.  As the systems began to use larger amounts of data, it got scary. He still thinks that the systems are inferior to the human brain in some ways, but in other ways, maybe what is going on in these systems “is actually a lot better than what is going on in the brain.”

In the near term, his concern is that the internet will become flooded with false pictures, videos, and text. The average person will no longer be able to know what is true. 

In the longer term, his concern is that individuals and companies will allow the systems to not only generate computer code, but to actually run the code on their own. Truly autonomous weapons could become a reality, and unlike nuclear weapons, global regulation (detection) of their development will not be feasible.

Edited by Boydstun

On 1/28/2023 at 7:01 PM, happiness said:

Will AI teach us that Objectivism is correct?

Only indirectly, as a reaction to the horrors of AI “reasoning”. Of course I am using “can” in the standard Objectivist way, as “possible, based on evidence”, not “imaginable, where anything is possible” and one can “imagine” A and Not A being simultaneously true. I have wasted some time trying to understand the “epistemology” of ChatGPT, and conclude that its greatest weakness is that there is little if anything that passes for a relationship between evidence, and evaluation of evidence.

I was puzzled about how something so fundamental could be missed, but then I realized that this is because the system doesn't have anything like a conceptual system constituting its knowledge of the universe; it has a vast repository of sensory impressions, a gruel of "information". But furthermore: it cannot actually observe the universe, it can only store raw experiences that a volitional consciousness of the genus homo hands it. If you ask about the basis for one of its statements (ordinary statements of observable fact, not high-level abstractions), it just gives templatic answers about "a wide variety of sources and experts". It does react to a user rejecting one of its statements, apologizing for any confusion, embracing the contradiction, then saying that usually A and Not A are not both true. It is perfectly happy to just make up facts. Sometimes it says that there are many possible answers and it depends on context; then, if you give it some context, it will make up an answer.

Human reasoning is centered around conceptual and propositional abstractions that subsume observations, where the notion of “prediction” is central to evaluation of knowledge. Competing theories are central to human knowledge, so when we encounter a fact that can be handled by one theory but not another, we have gained knowledge that affects our evaluation of the competing systems. These AIs do not seem to evaluate knowledge, or even data. Instead, they filter responses based on something – it seems to be centered around "the current conversation".


I think at least in the 'chat bot' versions of "AI", it's like having all the 'words' in the text as weighted tokens; the prompt drives the computation of an algorithm over that data set and outputs a 'response', the 'discernment' being the running of the algorithm on the data. I don't imagine it 'sees' or 'knows' that the prompts are different from the responses?
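That "weighted tokens plus an algorithm" picture can be caricatured with a toy bigram model. Real chatbots are vastly more elaborate, but the sketch shows how text can be emitted from stored frequencies alone, with no understanding involved (the corpus and names here are invented for illustration):

```python
import random
from collections import defaultdict

# Tiny corpus standing in for the chatbot's training text.
corpus = ("the cat sat on the mat the dog sat on the rug "
          "the cat saw the dog").split()

# Weighted-token table: for each word, count which words follow it.
weights = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    weights[prev][nxt] += 1

def respond(prompt_word, length=5):
    """Emit tokens by repeatedly sampling from the weight table.
    Just frequencies and chance: no meaning, no observation."""
    out = [prompt_word]
    for _ in range(length):
        options = weights[out[-1]]
        if not options:
            break  # no recorded successor: stop "speaking"
        words = list(options)
        counts = [options[w] for w in words]
        out.append(random.choices(words, weights=counts)[0])
    return " ".join(out)

print(respond("the"))
```

Notably, the model makes no distinction at all between prompt and response; both are just positions in the same token stream, which is the point made above.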

A vaguely remembered anecdote, I think about John von Neumann: he described what could be done by accumulating the largest and most detailed data set of the particulars of the atmosphere, an almost perfect digitized 'copy' of the world's atmosphere, and how that would facilitate answering questions of atmospheric science, then concluded that the data itself would be useless, as the same questions would remain.

Chat bots that 'speak' without prompting are what to look for, and I don't think that exists yet (?)

