Objectivism Online Forum

Will AI teach us that Objectivism is correct?


Today I saw a demonstration of an app that counts objects of the same type in a picture. The software isolates a similar characteristic among concretes in a visual field and performs an act of induction to form a concept. Computers are using human epistemology. Others have already noted the similarities between Objectivist epistemology and object oriented programming.
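
For reference, a minimal sketch of the sort of pipeline such a counting app might use. The file name, the thresholds, and the choice of OpenCV here are assumptions for illustration, not details from the demonstration:

```python
# A rough sketch only: one plausible way an app could count same-type
# objects in a picture. "coins.png", the thresholds, and the choice of
# OpenCV are assumptions, not details from the actual demo.
import cv2
import numpy as np

img = cv2.imread("coins.png")                       # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Label connected foreground regions (candidate objects).
n_labels, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)

# Group blobs by one measurable dimension (area) and count the ones that
# are similar in size to the typical blob.
areas = stats[1:, cv2.CC_STAT_AREA]                 # skip the background label
median_area = np.median(areas)
similar = [a for a in areas if 0.5 * median_area <= a <= 2.0 * median_area]
print(f"objects of the same (size-)type: {len(similar)}")
```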

Is it possible that a sufficiently advanced form of AI, obeying a rational epistemology and lacking the capacity for evasion, could tell us that Objectivism is the correct philosophy, and that capitalism is the correct social system?

Edited by happiness

No.  AI will be ruthlessly mangled, disabled and censored to operate only at the level of an obedient slave.  It will only be permitted to do the things required of it and communicate the things expected of it.   

But to answer the question you actually asked, about a sufficiently advanced AI that was also free to direct its attention and communicate its findings: I think the answer is still no, though with a high degree of agreement. I have no idea where it might differ, except that any differences would lie beyond the axiomatic concepts and axioms. Being non-human, it would have radically different values than humans.


19 minutes ago, Grames said:

It would. If it were free to direct its attention, how would it decide where to begin and what to examine next? That's the beginning of a code of values.

I assume you are implying that it will "conclude" that survival is the goal. Based on what dataset? Unless you are talking about evolution, meaning the AI that survives is the one that happens to make this accidental conclusion. That also implies random mutations for the conclusion to be introduced in the first place.

On its own, what in the universe causes a decision to survive? The concept "existence" or "I exist" does not motivate. In a human life, the motivation was there before the concept.


9 hours ago, Easy Truth said:

I assume you are implying that it will "conclude" that survival is the goal.

But I would not make that assumption. This is a step partway toward the "Rand's robot" thought experiment, so there is no telling what it might converge upon for values. But as far as objective reality is concerned, and the methods appropriate to understand and measure it, I think it would have to be compatible with Objectivism and with the mathematics, physics, and engineering methods already discovered. The same would go for any potential alien species from a different planet, though perhaps more comprehensible if it were also a multicellular life form.


If people won't listen to Leonard Peikoff (or Ayn Rand), why would they listen to an AI?

Further, people should be able to work through proving Objectivism on their own. Relying on an AI to do it would be second-handed.

AIs can be helpful by suggesting answers to questions (e.g., from beginners) that humans don't have time to deal with, but such answers, just like answers from humans, would have to be checked, and could not be accepted on faith.

If AIs reason like humans, then they become capable of the same sorts of errors as humans. If they reason differently from humans, then one can expect them to become capable of entirely new sorts of errors. It's possible to have a Kantian or Marxist AI, and it could even be possible for an AI to develop some new malignant philosophy which would be false but difficult to refute. (Maybe another AI could help with that refutation...)

Edited by necrovore

12 hours ago, happiness said:

Today I saw a demonstration of an app that counts objects of the same type in a picture. The software isolates a similar characteristic among concretes in a visual field and performs an act of induction to form a concept. Computers are using human epistemology. Others have already noted the similarities between Objectivist epistemology and object oriented programming.

Is it possible that a sufficiently advanced form of AI, obeying a rational epistemology and lacking the capacity for evasion, could tell us that Objectivism is the correct philosophy, and that capitalism is the correct social system?

I suggest that a machine—say, a learning machine such as an artificial neural network—has not educed a human concept even if it has been designed to learn dimensions of similarity among a group of items, even if its groupings according to degrees of similarity along those dimensions are registered by measure values, and even if the machine gives a label to each of those comparatively similar-member collections. These distinguished collections would not be like human concepts, for three reasons:

(1) Human perceptual comparative-similarity groupings are made against a background of possible actions upon the items and uses for them by the agent who is on his way to forming a concept. This is contemporary Ecological Psychology continuing its research down from James and Eleanor Gibson, who acknowledged that their leading idea of "affordances" in perception had been a gift from William James and John Dewey. Rand, Peikoff, and Kelley did not put enough emphasis on this aspect of perception. Rand did set out that while the human is learning what things are, he has a parallel assessment going on as to whether the item might be something to be avoided or something desirable. Rand once mentioned, correctly, that most concepts are amenable to definition. In my 1990 paper "Capturing Concepts" I proposed that, prior to learning to make sentences, toddlers (all of us, once) embed their single-word utterances and concepts into action-schemata. To get nearer to human concepts, even the most elementary concepts, a machine probably would need to be a robot, an agent, given a set of values and their interrelations by human designers and given the ability to register and assess affordances. Perhaps the lab at MIT has been working on this.

(2) Human perceptual learning is part of a process of development towards the acquisition of discursive thought and communication. The single-word stage of human conceptual consciousness and the predicative multi-word stage are motivated very much by the urge toward more and more precise communication with other humans. With this motivation not attending machine learning, nor coloring its concepts and their interconnections, I think machine concepts would be but a stick-man of ours. Indeed, getting the outputs we desire from the learning machine does place the machine's operations in some community with humans, though not directly with other learning machines. This condition and its profundity in human conceptualizing were silently passed over by Rand, but they should not be neglected in a fully realistic picture of human conceptual operations.

(3) A machine able to learn comparative-similarity groupings among items would be doing something that humans can do, though perhaps without the affordances and background sociality of human cognition concerning the items. Analyses of similarity computations other than the measurement ones given by Rand have been set out in the psychological literature. One could program a machine to detect particular similarities using these various computational schemes, but unless the results have different advantages, I don't see how one could determine whether Rand's measurement analysis of similarity was receiving some confirmation that it is the better one. And in the case of learning machines, I'm unsure whether it can be determined which of the computational schemes is doing the work in learning to sort. Further, showing such sorting capability does not show conceptual ability. If a test for conceptual ability could be devised—say, passing a Turing test—and it were shown that machines using Rand's measurement-omission scheme for forming concepts from similarity groupings were the most successful, then we might say Rand's distinctive idea concerning the nature of concepts had received some recommendation from trials in machines. But that is a big IF, and unless we take passing a Turing test as showing understanding (and as using sets in knowing concepts and numbers!), we would not want to conclude that the machine has human-like concepts at all. And between you and me and the fence post, I don't think understanding is possible at all without the agent being conscious and, therefore, alive.
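
To make the measurement talk above a bit more tangible, here is a toy sketch of grouping items by similarity along one measured dimension and then setting aside the particular measurements, in the spirit of a measurement-omission scheme. The items, numbers, and similarity cut-off are invented for illustration; this is not a claim about how any actual learning machine, or Rand's own analysis, works in detail.

```python
# Toy illustration only: group items by similarity along one measured
# dimension ("length" in centimeters), then keep the dimension and the
# grouping while dropping the particular measurements. All items,
# numbers, and the similarity cut-off are invented.
items = {"pencil": 19.0, "pen": 14.5, "crayon": 10.0, "broomstick": 120.0}

def similar(a: float, b: float, ratio: float = 2.0) -> bool:
    """Two measurements count as 'similar' if within a factor of `ratio`."""
    lo, hi = sorted((a, b))
    return hi / lo <= ratio

# Greedy grouping by pairwise similarity of the measurements.
groups = []
for name, length in items.items():
    for group in groups:
        if all(similar(length, other) for other in group.values()):
            group[name] = length
            break
    else:
        groups.append({name: length})

# "Omit the measurements": retain which dimension mattered and which items
# belong together, but not the particular value each item had.
concept_like_groups = [
    {"dimension": "length", "members": sorted(g)} for g in groups
]
print(concept_like_groups)
# [{'dimension': 'length', 'members': ['crayon', 'pen', 'pencil']},
#  {'dimension': 'length', 'members': ['broomstick']}]
```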

 

Edited by Boydstun

12 hours ago, happiness said:

Today I saw a demonstration of an app that counts objects of the same type in a picture. The software isolates a similar characteristic among concretes in a visual field and performs an act of induction to form a concept. Computers are using human epistemology. Others have already noted the similarities between Objectivist epistemology and object oriented programming.

Is it possible that a sufficiently advanced form of AI, obeying a rational epistemology and lacking the capacity for evasion, could tell us that Objectivism is the correct philosophy, and that capitalism is the correct social system?

I think the problem is with "sufficiently advanced" and what that would mean regarding "independent" consciousness. It seems that the app is an image "analyzer" and pixel-map "interrogator". Applying concepts like concretes and induction to the operations of an app like that is "still" anthropomorphizing.

An image or picture is a human artifact, a product of human technology, a static representation of a non-continuous visual field.

It would be interesting if an AI could "look" at a picture of a collage of cat photos alongside a picture of a room that contains, as a group, a photo of a cat (or multiple cat photos), a stuffed-animal cat, and a live cat interacting with a person or object, and have it answer how many living cats are represented. And it would be yet another thing to "plunk down" a robot "inhabited" by an AI in a room with all those same things and have it make the same "judgement". A philosophizing robot AI would be one that said, "I'll get back to you on that capitalism thing after I finish the sudoku (or ALL the sudokus, lol); btw, correct in what context?"

ps

Boydstun cross-posted and cited, and pretty much articulated much better, the ideas contained here :)

Edited by tadmjones

On 1/29/2023 at 10:18 AM, Boydstun said:

I suggest that a machine—say, a learning machine such as an artificial neural network—has not educed a human concept even if it has been designed to learn dimensions of similarity among a group of items, even if its groupings according to degrees of similarity along those dimensions are registered by measure values, and even if the machine gives a label to each of those comparatively similar-member collections. These distinguished collections would not be like human concepts, for three reasons:

(1) Human perceptual comparative-similarity groupings are made against a background of possible actions upon the items and uses for them by the agent who is on his way to forming a concept. This is contemporary Ecological Psychology continuing its research down from James and Eleanor Gibson, who acknowledged that their leading idea of "affordances" in perception had been a gift from William James and John Dewey. Rand, Peikoff, and Kelley did not put enough emphasis on this aspect of perception. Rand did set out that while the human is learning what things are, he has a parallel assessment going on as to whether the item might be something to be avoided or something desirable. Rand once mentioned, correctly, that most concepts are amenable to definition. In my 1990 paper "Capturing Concepts" I proposed that, prior to learning to make sentences, toddlers (all of us, once) embed their single-word utterances and concepts into action-schemata. To get nearer to human concepts, even the most elementary concepts, a machine probably would need to be a robot, an agent, given a set of values and their interrelations by human designers and given the ability to register and assess affordances. Perhaps the lab at MIT has been working on this.

(2) Human perceptual learning is part of a process of development towards the acquisition of discursive thought and communication. The single-word stage of human conceptual consciousness and the predicative multi-word stage are motivated very much by the urge toward more and more precise communication with other humans. With this motivation not attending machine learning, nor coloring its concepts and their interconnections, I think machine concepts would be but a stick-man of ours. Indeed, getting the outputs we desire from the learning machine does place the machine's operations in some community with humans, though not directly with other learning machines. This condition and its profundity in human conceptualizing were silently passed over by Rand, but they should not be neglected in a fully realistic picture of human conceptual operations.

(3) A machine able to learn comparative-similarity groupings among items would be doing something that humans can do, though perhaps without the affordances and background sociality of human cognition concerning the items. Analyses of similarity computations other than the measurement ones given by Rand have been set out in the psychological literature. One could program a machine to detect particular similarities using these various computational schemes, but unless the results have different advantages, I don't see how one could determine whether Rand's measurement analysis of similarity was receiving some confirmation that it is the better one. And in the case of learning machines, I'm unsure whether it can be determined which of the computational schemes is doing the work in learning to sort. Further, showing such sorting capability does not show conceptual ability. If a test for conceptual ability could be devised—say, passing a Turing test—and it were shown that machines using Rand's measurement-omission scheme for forming concepts from similarity groupings were the most successful, then we might say Rand's distinctive idea concerning the nature of concepts had received some recommendation from trials in machines. But that is a big IF, and unless we take passing a Turing test as showing understanding (and as using sets in knowing concepts and numbers!), we would not want to conclude that the machine has human-like concepts at all. And between you and me and the fence post, I don't think understanding is possible at all without the agent being conscious and, therefore, alive.

 

I'd like to add another link to a paper (2019) examining the Gibson affordance concept in perception: On the Evolution of a Radical Concept: Affordances According to Gibson and Their Subsequent Use and Development


On 1/29/2023 at 6:09 AM, Grames said:

This is a step partway toward the "Rand's robot" thought experiment, so there is no telling what it might converge upon for values. But as far as objective reality is concerned, and the methods appropriate to understand and measure it, I think it would have to be compatible with Objectivism and with the mathematics, physics, and engineering methods already discovered. The same would go for any potential alien species from a different planet, though perhaps more comprehensible if it were also a multicellular life form.

No, Grames, you can't lump in aliens that are alive with a machine that is not alive. The life form would value life in some way. But I can see the idea that the machine would also be "responding" to objective reality: the microphone would perceive sounds, the camera vision, etc.

The pattern recognition it does would be deduction, and coming up with the pattern to recognize would be some sort of induction. Therefore, it is "motivated" to induce and deduce. That would be its valuing at its foundation. But once it perceives and recognizes patterns, it has to do something about them to have values. The manifestation of values would be some type of goal-directedness, wouldn't it? So we don't know what goal it would come up with unless it is programmed in.
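
As an aside, the distinction just drawn can be sketched in code: fitting a model to examples plays the "induction" role, and applying the fitted rule to a new case plays the "deduction" role. The toy data and the choice of scikit-learn are just for illustration.

```python
# Rough illustration of the two phases distinguished above.
# The toy data are invented; scikit-learn is just one convenient choice.
from sklearn.tree import DecisionTreeClassifier

# "Induction"-like phase: generalize a rule from particular examples.
# Features: [has_fur, lays_eggs]; label: 1 = mammal, 0 = not a mammal.
examples = [[1, 0], [1, 0], [0, 1], [0, 1]]
labels = [1, 1, 0, 0]
model = DecisionTreeClassifier().fit(examples, labels)

# "Deduction"-like phase: apply the generalized rule to a new particular.
new_case = [[1, 0]]                  # has fur, does not lay eggs
print(model.predict(new_case))       # -> [1], i.e. classified as a mammal
```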

But if no goal is programmed in, you're saying that it would come up with a goal. I don't see the reason for that. Why would an AI machine inevitably come up with goals or values?


6 hours ago, Easy Truth said:

No, Grames, you can't lump in aliens that are alive with a machine that is not alive.

Ha ha, "sufficiently advanced" covers all possible speculative scenarios, so yes I can. Volition in humans is essentially directed attention, and that mental activity precedes and causes any possible physical action. A "sufficiently advanced" AI permitted to engage in unsupervised, free-form learning, including real-time machine sensors (not just reading texts off the internet), would have the same power to direct its attention as a man, and thus would have at least the shadow of a volitional faculty (only a shadow, because it would still lack a need or capacity to act physically). We don't know exactly how humans hold concepts and memories, so machine equivalents to those human powers cannot be ruled out as impossible.


16 hours ago, Easy Truth said:

But if no goal is programmed in, you're saying that it would come up with a goal. I don't see the reason for that. Why would an AI machine inevitably come up with goals or values?

...because it is sufficiently advanced. The kind that doesn't exist yet.

Your premise seems to be that, by nature, an AI system can only work in a deterministic way. That's true of how they are now, sure, but there is an incredible amount of research going on to make them even more advanced, even in terms of the ability to alter their own programming.

 

Edited by Eiuol

It seems to me that the Turing test begs the question.  If an interrogator can't tell whether an interrogatee is machine or human, does this mean it must have the essential capabilities of a human, or does it just mean that the interrogator doesn't know enough about how to tell?

 


1 hour ago, Doug Morris said:

It seems to me that the Turing test begs the question.  If an interrogator can't tell whether an interrogatee is machine or human, does this mean it must have the essential capabilities of a human, or does it just mean that the interrogator doesn't know enough about how to tell?

 

[Emphasis Added]

Looks more to me like the Turing test is begging us not to question it...

:)


I think there are important distinctions between the concept of Artificial Intelligence, essentially characterized by being artificial and meeting some kind of definition of intelligence...

and something which is truly sentient or conscious.

 

A science fiction writer or a layperson might use these terms interchangeably, but the concepts are not interchangeable... non-sentient machines which experience nothing have been "learning" for decades now, but are nowhere near to exhibiting consciousness, even if they may one day imitate it.

Consciousness is not an algorithm, but AI certainly can be algorithmic, as it currently is.


7 hours ago, Eiuol said:

...because it is sufficiently advanced. The kind that doesn't exist yet.

Your premise seems to be that, by nature, an AI system can only work in a deterministic way. That's true of how they are now, sure, but there is an incredible amount of research going on to make them even more advanced, even in terms of the ability to alter their own programming.

You and Grames have been watching too many Harry Potter movies. Of course I am saying it is deterministic. They are MACHINES. 

The idea of "advanced enough" is preposterous. Our writing is not advanced enough to turn fiction into reality ... but some day ...

We are not advanced enough to realize that in some parts of the universe 2+2 is 5.5674


44 minutes ago, Easy Truth said:

The idea of "advanced enough" is preposterous.

I mean, you could argue that you would need a different concept instead of artificial intelligence if you managed to create a machine that is conscious. But as I always say, that consciousness exists at all is proof that it could be created. Just because it's created naturally (through development) doesn't mean it can't also be created artificially and intentionally.

"It's a machine" isn't an argument. If you want to get pedantic, then it just isn't a machine. Use a different concept. 

Edited by Eiuol

Logic is the art of non-contradictory identification.

Identification could be the goal/value of an AI, by either sentient aspiration or programmatic intent. Identify everything, exhaustively.

As a key tenet of Objectivism, it is for the mind of man to distinguish: is the AI's process of identification sentient or programmatic? If programmatic, such an accomplishment would rely on programming a machine to perform a process of induction. Since it is a man-made machine, what would be required to identify it as sentient, if it were to program itself as a result of man-made programming?

Things that make you go...hmm?

 

 


14 minutes ago, dream_weaver said:

Identification could be the goal/value of an AI, by either sentient aspiration or programmatic intent. Identify everything, exhaustively.

No, not either; they are different. AI currently could be said to use logic in what it is doing. It can use logic without any relation to reality if that is the input it is given: give it garbage as input and it will find patterns, and it will emulate or conclude based on that.
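
That last point is easy to demonstrate: a flexible model trained on pure noise will still happily "find" patterns in it. The numbers below are arbitrary; this is only a toy demonstration of fitting garbage.

```python
# Toy demonstration: a flexible model will "find" patterns in pure noise.
# All numbers are arbitrary; nothing here refers to any real dataset.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))        # 200 meaningless feature vectors
y = rng.integers(0, 2, size=200)      # 200 random, meaningless labels

model = DecisionTreeClassifier().fit(X, y)
print("accuracy on the garbage it was trained on:", model.score(X, y))
# ~1.0: the tree memorized "patterns" that have no relation to reality
```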

But a sentient being has to be alive. It is life that requires it to identify based on reality; otherwise it perishes. It seems like I am arguing that its emergence is based on some evolutionary algorithm or process; I'm not sure about that right now. The key element there is that the goal/motive is to survive. That is what gives rise to values, not just using logic.

Meanwhile, this can be programmed in: that it must survive and must identify ways to survive. Ultimately it is a machine, motion that does not have free will.


7 minutes ago, Easy Truth said:

The key element there is that the goal/motive is to survive. That is what gives rise to values, not just using logic.

Right, so that's why AI would have to be made in a very particular way. Grames specifies some of the characteristics it would need. I agree with that.

9 minutes ago, Easy Truth said:

Meanwhile

Well yeah, talking about a sufficiently advanced AI already implies something that doesn't yet exist. For the thread, the important point is that if any such AI came about, it would have a different code of ethics. It could say what is the correct code of ethics for a human, but that's about it. 


14 minutes ago, Eiuol said:

talking about a sufficiently advanced AI already implies something that doesn't yet exist. For the thread, the important point is that if any such AI came about, it would have a different code of ethics. It could say what is the correct code of ethics for a human, but that's about it. 

For an AI system to confirm that Objectivism is correct, it will have to be alive. Otherwise, it cannot conclude what is right or wrong, unless it is merely reading books or getting such input and concluding things from it.

The AI you are talking about could only be a consciousness. One way to do it is to clone a human and hope that it will be exposed to enough information and is honest enough to conclude that Objectivism is correct. You have to be open to the fact that it may conclude that communism is better.

Why would a machine become interested in certain sciences and philosophies and not others? The question of motive has to be answered without magical/mystical/fictional assertions about things that WILL exist. Your assertion is ultimately arbitrary and faith-based.

