Objectivism Online Forum

What, objectively, is artificial intelligence?


I have seen many threads on this board that discuss artificial intelligence (AI) or computers with consciousness, etc. I haven't seen any that define the term.

(edit: If anyone knows of a thread that defines AI, could you post a link here? thanks!)

I think that defining a term that has no referents in reality is philosophically interesting. (At least, I personally need to chew on and practice the process of making a definition.) The interesting philosophical question for me in defining AI is whether the concept is arbitrary. Nobody is suggesting that there are any actual entities to which it applies today. The question of whether it is even possible hinges on what you mean by "possible". If "possible" means "having some evidence that such a thing exists in reality", then I don't think anyone is even arguing that AI is possible.

However, it seems obvious to me that many concepts in science are valid, even though they fall into a similar category. For example: alien life. (Life existing on planets other than the Earth.) Now we obviously know that life does exist somewhere in the galaxy -- because we know that life exists on Earth. This establishes that it is metaphysically possible for planets to support life, when the conditions are right. This seems like enough of a basis to justify some peeking around at neighboring planets to see what the conditions on them are like. And thus it also seems to me that the concept of alien life is not arbitrary -- at least to the scientists who know the relevant facts.

Now on to AI:

DEFINITION: 

Artificial Intelligence (n.): an entity of volitional consciousness designed by human scientists and built out of non-conscious parts.

If you find that this is not the definition you are thinking of then please reply to this post and begin your response with your own definition (genus & differentia).

ELABORATION:

* genus is "entity of volitional consciousness".

* differentia is "designed by human scientists and built out of non-conscious parts."

Note that if human scientists merely take (for example) human stem cells, grow a brain from them, and then find a way to connect it to sensors/motors and find that the result is conscious (and not completely insane) then they will have created a CYBORG not an AI.

But if the scientists design the entity from the level of some non-conscious building blocks up, it is an AI even if the designers use chemicals & carbon instead of silicon and magnetic oxides.

DISCUSSION

First of all, note the status of this definition: the existence of any entities that would be covered by it is purely hypothetical. It is not formed by looking at a set of AIs and then inducing what they have in common that differentiates them from non-AIs in the larger genus.

Rather, it is formed by taking a valid concept (a being of volitional consciousness) and adding a differentiating characteristic that does exist in reality, but never in conjunction with this genus.

It can be arbitrary to combine genus & differentia this way: 'unicorn' is not valid as a concept describing an animal, so you couldn't define it as a "horse" "with a horn". But it does seem that the word 'unicorn' has a valid definition under the genus "a mythical beast".

I think the concept AI falls into a special category of "subjects of scientific research".

But have I identified the correct genus & differentia? I.e., is this combination valid?

-- Josh


I think the concept AI falls into a special category of "subjects of scientific research". 

But have I identified the correct genus & differentia?  i.e. is this combination valid?

Josh, I'm a bit confused as to your purpose here. Are you trying to sharpen the definition of AI in order to bring it into the realm of science? Clearly your definition stands in defiance of virtually any definition or characterization of AI that can be found in the scientific literature for decades. Your use of "volitional consciousness" is sufficient to accomplish this, since such is essentially a foreign notion to the cognitive sciences. But, regardless, what you wind up with is a more specific definition of that which is arbitrary, so what is the purpose?


DEFINITION: 

Artificial Intelligence (n.): an entity of volitional consciousness designed by human scientists and built out of non-conscious parts.


Uhh...

I don't understand why you'd want to define something that doesn't exist. What do you need a hypothetical definition for? Artificial intelligence is a valid term in the world of science, and even in the computer gaming world, and it isn't anything like what you are talking about. It refers only to how well a computer is programmed to mimic the actions of real intelligent beings. The better the AI, the better the level of interaction you can have with characters in a game or with a computer program.


I don't support any attempt to "legitimize" anti-concepts that have their roots in the non-field of cognitive science, the two worst of these being "artificial intelligence" and "computation." The history of these terms stems from very bad philosophy; that is, they were created in a non-field in order to express anti-concepts in opposition to the concept of consciousness, which is simply misapplied there.

It's my opinion that they should be done away with entirely, written as entries in the annals of our philosophical horror files.


I don't support any attempt to "legitimize" anti-concepts that have their roots in the non-field of cognitive science, the two worst of these being "artificial intelligence" and "computation." The history of these terms stems from very bad philosophy; that is, they were created in a non-field in order to express anti-concepts in opposition to the concept of consciousness, which is simply misapplied there.

It's my opinion that they should be done away with entirely, written as entries in the annals of our philosophical horror files.

Why do you lump computation with AI? What's the problem with that concept?


Well, neither "computation" nor "Artificial Intelligence" was born of Cognitive Science. CogSci is a new field, starting to pick up steam in the 80s. "Artificial Intelligence" goes as far back as Alan Turing in the 1940s-50s, who may be termed one of the founders of Computer Science as a theoretical field (as opposed to simply the practical task of programming). "Computation" is a much older term, and if you mean it in the strict sense of applying only to computing machines, then it goes back to the 19th century and Charles Babbage.

I am still debating it, but perhaps "Artificial Intelligence" is a concept of questionable use. "Computation" on the other hand, is an important concept in mathematical and computer science theory. Why would you want to eject it as well?


Well, neither "computation" nor "Artificial Intelligence" was born of Cognitive Science. CogSci is a new field, starting to pick up steam in the 80s. "Artificial Intelligence" goes as far back as Alan Turing in the 1940s-50s...

Different commentators date cognitive science to different eras, some as far back as Turing. Regardless, Turing was a spiritual founder of the cognitive science movement in every sense of its modern incarnation.

I am still debating it, but perhaps "Artificial Intelligence" is a concept of questionable use. "Computation" on the other hand, is an important concept in mathematical and computer science theory. Why would you want to eject it as well?

Good point; I would just reject its application to the study of consciousness, because it is at the root of the computer/brain analogy that is so ingrained in cognitive science.


I don't support any attempt to "legitimize" anti-concepts that have their roots in the non-field of cognitive science, the two worst of these being "artificial intelligence" and "computation." The history of these terms stems from very bad philosophy; that is, they were created in a non-field in order to express anti-concepts in opposition to the concept of consciousness, which is simply misapplied there.

It's my opinion that they should be done away with entirely, written as entries in the annals of our philosophical horror files.

'Artificial intelligence', both as a term and as a discipline, has very little to do with trying to develop machine consciousness, and I would be curious as to why you thought the field was founded on "bad philosophy". Most research in AI, as far as I know, has to do with developing systems capable of performing actions which had previously been associated solely with intelligent entities (hence the term 'artificial' intelligence) - for instance natural language parsing, perception, expert systems, playing convincing games of chess, and so on. The purpose, generally speaking, is not to build machines with 'volitional consciousness' or whatever, but rather to (a) design systems which are able to deal with practical problems in an efficient manner, and (b) mimic certain aspects of the behavior of 'naturally' intelligent entities.

Saying that a computer capable of passing the Turing test, for instance, is artificially intelligent sounds like a perfectly valid application of the term, and doesn't in any way commit the speaker to any beliefs regarding consciousness.


Hi guys,

I'm sorry, I thought I made my purpose clear. I have noted a reasonable number of threads in this forum that discuss artificial intelligence or the concept of consciousness projected onto computers. I thought a definition would help to clarify these discussions. I also hoped that others would at least pose other definitions of the term if they disagreed with defining it this way.

My real motivation in discussing this topic was to get to a discussion of the metaphysical vs epistemological possibility, and whether something is still arbitrary for the scientist if it is metaphysically possible.

I chose poorly in selecting AI as the concrete basis of that discussion. AI is philosophically very controversial, and the target of AI researchers may not even be consciousness. (I thought it was.) Perhaps the search for extraterrestrial life would be a better subject for a discussion of science exploring something for which there is evidence that it could potentially exist, but no evidence that it actually exists.

In any case, I found a thread in the anger management blog that discusses metaphysical and epistemological possibility, and clarifies the subject. (What I got out of that entry: The identification of a "potential" is a new item of knowledge. It justifies scientific exploration, and is not arbitrary. It does not imply that the potential has been actualized. It is still arbitrary to assert even that an actualization is "possible". But science can validly explore potentials without claiming that they are possible.)

-- josh


'Artificial intelligence', both as a term and as a discipline, has very little to do with trying to develop machine consciousness and I would be curious as to why you thought the field was founded on "bad philosophy". Most research in AI, as far as I know, has to do with developing systems capable of performing actions which had previously been associated solely with intelligent entities ...

I am not going to take the time now to detail the entire history of "artificial intelligence," but essentially AI grew out of 1950s information theory, game theory, and what was then known as cybernetics. In the mid-50s MIT and Stanford pioneered hardware and programming tools for von Neumann machines, and the term "artificial intelligence" was coined when the "perceptron" was developed, a precursor of what we now refer to as neural net machines. The 60s and 70s saw the List Processing language (LISP), robotic communication, and psychological toys like Eliza, the Rogerian psychotherapist program.

But all this represented a very small group until the micro-electronics revolution helped AI fully blossom, and the 80s saw the foundation for the field which has evolved today, spreading across the whole realm of the cognitive sciences. There still remains the engineering-minded contingent of this group, but it is the world of ideas, both philosophical and scientific, which define the field. Here are the words which established the basis for modern AI, back in 1985.

"The fundamental goal of this research is not merely to mimic intelligence or produce some clever fate. Not at all. AI wants only the genuine article: machines with minds, in the full and literal sense. (Artificial Intelligence, The Very Idea, J. Haugeland, MIT Press, 1985.)

Bowzer is certainly right as to the fundamental base and goal of AI and its essential connection to consciousness. Sometimes the concepts are hidden beneath a veneer of pseudo-scientific jargon, but nevertheless the facts remain and even those who deny the existence of a volitional consciousness seek to make their "artificial intelligence" act as if such a consciousness did exist. The bad philosophy lies in the same sort of nonsense that Bowzer, myself, and others have been arguing against with the consciousness-as-a-digital-algorithm crowd.


"The fundamental goal of this research is not merely to mimic intelligence or produce some clever fate. Not at all. AI wants only the genuine article: machines with minds, in the full and literal sense. (Artificial Intelligence, The Very Idea, J. Haugeland, MIT Press, 1985.)

There are only two types of people who could have said that:

1. Computer illiterates

2. People who know nothing about consciousness.

I reject this "definition" of AI and I stand by what I said: Computer programs can't and never will be able to surpass their own programming. The above definition states exactly that - that the goal of the AI research is to develop a machine which will be able to grow and evolve beyond the algorithms used to make it. This is an absurdity.

However, I don't think that the whole idea of AI should end up in a horror file just because someone misstated its purpose. The fact that he did, and that he was taken seriously, probably should, but not the whole concept of AI. There are applications of it already, and programmers work on it. I have done it too, although it was just for fun. Computer game programmers are doing it when producing algorithms to guide NPC movements. Robots too have AI. Consider, for example, the drones that NASA is planning to send to the surface of Mars. I've seen lots of documentaries in which, in combination with their sensors, the drones are supposed to find their way through or around certain obstacles. These too possess AI.

The goal of AI should thus not be entirely rejected, but instead redefined. I think the proper definition would be that AI is programming that enables a machine to perform a certain task in an environment that has not been predefined. (Note that this environment can be either artificial or real.)


There are only two types of people who could have said that:

1. Computer illiterates

2. People who know nothing about consciousness.

...

However, I don't think that the whole idea of an AI should end up in a horror file, just because someone misstated its purpose.

I'm not sure that my point is being understood. It wasn't/isn't just one guy claiming that artificial intelligence = conscious machines; from its very inception (as Stephen has shown), the field of artificial intelligence has had conscious machines as its aim. In fact, an entire field of study has spawned from the idea that machines can be/are conscious: cognitive science. Haugeland's statement is by no means the exception here.

I agree with others that the technology coming from people who say that they design "artificially intelligent systems" is wonderful. It's the philosophers that I have a problem with, the Minskys and the Dennetts.


I'm not sure that my point is being understood. It wasn't/isn't just one guy claiming that artificial intelligence = conscious machines; from its very inception (as Stephen has shown), the field of artificial intelligence has had conscious machines as its aim. In fact, an entire field of study has spawned from the idea that machines can be/are conscious: cognitive science. Haugeland's statement is by no means the exception here.

Yes, exactly. Earlier there existed perfectly good concepts, such as robotics, that could be expanded to accommodate an ever-increasing technology. But artificial intelligence blurred the distinction between man and machine, between a volitional consciousness and deterministic behavior; intelligence is exactly what a machine is not.

Intelligence is an attribute of consciousness, not a property of a deterministic machine composed of inanimate matter. The choice of "intelligence" in the creation of AI amounted to an attempt by the materialists to smuggle in a concept that does not apply. The attempt was successful, and the entire field is now polluted with vague ambiguities that give rise to notions like consciousness arising from digital algorithms. That we have had to spend such an inordinate amount of time on this forum refuting such nonsense is itself a testament to how insidious the basic premises of AI are. An entire generation is growing up accepting uncritically, as a matter of faith, notions that obliterate the distinctions between life and inanimate matter, and ideas unable to separate the different causal principles behind the actions of a volitional consciousness and deterministic behavior.


Yes, exactly. Earlier there existed perfectly good concepts, such as robotics, that could be expanded to accommodate an ever-increasing technology. But artificial intelligence blurred the distinction between man and machine, between a volitional consciousness and deterministic behavior; intelligence is exactly what a machine is not.

Intelligence is an attribute of consciousness, not a property of a deterministic machine composed of inanimate matter. The choice of "intelligence" in the creation of AI amounted to an attempt by the materialists to smuggle in a concept that does not apply. The attempt was successful, and the entire field is now polluted with vague ambiguities that give rise to notions like consciousness arising from digital algorithms. That we have had to spend such an inordinate amount of time on this forum refuting such nonsense is itself a testament to how insidious the basic premises of AI are. An entire generation is growing up accepting uncritically, as a matter of faith, notions that obliterate the distinctions between life and inanimate matter, and ideas unable to separate the different causal principles behind the actions of a volitional consciousness and deterministic behavior.

Robotics doesn't cover the whole issue of what is now called artificial intelligence (not even by restricting yourself to the way I defined the term). And I don't think that a mere word is confusing the whole generation. I think it is movies such as AI, I, Robot, The Matrix, Terminator, Star Wars and Star Trek (to mention just a few among many others) which do the job. You can see the same thing happening to genetic engineering (even though that term is better coined). When I knew nothing about it, I too thought that by cloning an individual you copy his thoughts as well, and it was the movies which made me think so. Now I know that cloning a human being is practically an impossibility. And here are all the institutions on their feet to ban human cloning and stem cell research. It's an absurdity that arises from an absurdity. I think it's the same thing with artificial intelligence. If someone can coin a better term, I'll be alright with it, but I don't think that, defined properly and in a proper context, this one would be a problem.


Robotics doesn't cover the whole issue of what is now called artificial intelligence ...

Did you miss the part where I said that a concept such as robotics "could be expanded to accommodate an ever-increasing technology?" Anyway, you seem to be missing the whole point that Bowzer originally made, and that I amplified upon. I cannot say it more clearly than I have already done, so other than suggesting that you re-read what we wrote, I have nothing more to add.


I am not going to take the time now to detail the entire history of "artificial intelligence," but essentially AI grew out of 1950s information theory, game theory, and what was then known as cybernetics. In the mid-50s MIT and Stanford pioneered hardware and programming tools for von Neumann machines, and the term "artificial intelligence" was coined when the "perceptron" was developed, a precursor of what we now refer to as neural net machines. The 60s and 70s saw the List Processing language (LISP), robotic communication, and psychological toys like Eliza, the Rogerian psychotherapist program.

But all this represented a very small group until the micro-electronics revolution helped AI fully blossom, and the 80s saw the foundation for the field which has evolved today, spreading across the whole realm of the cognitive sciences. There still remains the engineering-minded contingent of this group, but it is the world of ideas, both philosophical and scientific, which define the field. Here are the words which established the basis for modern AI, back in 1985.

"The fundamental goal of this research is not merely to mimic intelligence or produce some clever fake. Not at all. AI wants only the genuine article: machines with minds, in the full and literal sense." (Artificial Intelligence: The Very Idea, J. Haugeland, MIT Press, 1985.)

I don't know much about the history of AI, but I have read a few contemporary textbooks, and the vast majority of what I have encountered pertains to producing machines which solve problems, rather than producing consciousness. A search of an academic AI research journal results in 0 matches for 'consciousness'. The fact that some AI researchers might believe that machine consciousness is possible, or even the ultimate goal of their discipline, doesn't seem particularly relevant. Many people in philosophy, including the 'founders' of the subject, held that it constitutes merely the disinterested pursuit of "knowledge for the sake of knowledge". Many physicists believe that physics has no goal other than to produce models that adequately describe empirical phenomena. Does this mean that we should dismiss philosophy and physics as invalid disciplines? Or should we instead concentrate on the positive things that they have achieved while ignoring the more irrational comments coming from individual practitioners?


I don't know much about the history of AI, but ...

Then learn it, and also learn to read the vast literature in the field and see the level of interest and debate that surrounds AI and consciousness.

I have read a few contemporary textbooks
Graduate and professional texts, or popular books?

and the vast majority of what I have encountered pertains to producing machines which solve problems, rather than producing consciousness.

If you focus on a "how to" book you are less likely to discover the broader ideas.

A search of an academic AI research journal results in 0 matches for 'consciousness'.

You chose a free internet journal, not a professional subscription journal. You should look instead at Minds and Machines, Lecture Notes in Artificial Intelligence, Neural Networks, Journal of Experimental and Theoretical Artificial Intelligence, Computers in Human Behavior, or any of the dozens of other journals that discuss these issues. For instance, from Minds and Machines, here is Paul Schweizer of the Centre for Cognitive Science, University of Edinburgh, Scotland, defending against criticism by those of the standard view, taking, at least in part, my perspective:

"My conclusion is that the functionalist/computationalist approach is unable to provide an adequate theory of consciousness, precisely because we are not conscious in virtue of computational structure. Hence I reject the standard functionalist line that any physical system realizing the same computational arrangement that's implemented in my brain would perforce have the same conscious experiences as me." (Mind and Machines, V. 12, pp. 143-144, 2002.)

Or, read Owen Holland's very recent "The future of embodied artificial intelligence: Machine consciousness?," Lecture Notes in Artificial Intelligence, 3139, pp. 37-53, 2004.

The fact that some AI researchers might believe that machine consciousness is possible, or even the ultimate goal of their discipline, doesnt seem particularly relevant.

You are ignoring the foundation on which the field was built, and you are evidently not at all aware of the degree that this issue permeates the literature in every related cognitive field.


Did you miss the part where I said that a concept such as robotics "could be expanded to accommodate an ever-increasing technology?" Anyway, you seem to be missing the whole point that Bowzer originally made, and that I amplified upon. I cannot say it more clearly than I have already done, so other than suggesting that you re-read what we wrote, I have nothing more to add.

I'm not missing the point, I'm disagreeing with it. I can't deny the history of AI, but people have made mistakes before, and that has never resulted in dismissing a whole area of research. As a programmer, I can dismiss the whole idea of conscious machines and still claim that the term AI be used for certain pieces of code (which I previously defined). And as I previously stated, I don't think that the term itself is causing the confusion; if that were so, then why doesn't the phrase "it's raining cats and dogs" cause the same kind of confusion? It's the same thing, only that this phrase describes something more tangible, concrete and everyday. Artificial intelligence is a concept which can be used legitimately but has been defined improperly; and mind you, it's not the first one to suffer that kind of fate; just think of the concept "selfishness."

I'm not forgetting that machines cannot be intelligent, though. That is why I said that if someone can come up with a better term to describe the concept as I defined it earlier, I'll be fine with it.


"The fundamental goal of this research is not merely to mimic intelligence or produce some clever fate. Not at all. AI wants only the genuine article: machines with minds, in the full and literal sense. (Artificial Intelligence, The Very Idea, J. Haugeland, MIT Press, 1985.)

It is my understanding that AI researchers are motivated by a long-term goal of producing a "mind" in a machine. This is why I went with "volitional consciousness" in my definition.

Thank you Stephen for finding this reference.


It is my understanding that AI researchers are motivated by a long-term goal of producing a "mind" in a machine. This is why I went with "volitional consciousness" in my definition.

Thank you Stephen for finding this reference.

You're welcome (I think), but I do not understand your point. Could you explain it to me in a little more detail?



This is my favorite hobby...

First, I want to reference some sites: AI Horizon: An introduction to general AI, and Perceptron Tutorial.

This sparked me yet again to study the subject of AI. I looked into Holmesian logic for a reference on reasoning; through reasoning, the perceptrons could store in memory (computer memory) what happens. Through experiments, the AI could infer the answer to a problem, and after accomplishing this it could move on to other subjects and expand.

Borrowing Freud's terms, the ego, superego, and id play a role in intelligence. The ego is the consciousness of self; the superego would be the liaison between the conscious and subconscious, being essentially the unconscious part of the AI with unconscious morality; the id would be the reasoning part. (I won't try and hide it: I am no shrink, so if anyone knows about this it'd help a lot!)

Using the C++ source code, would it not be possible to create AI?


There are several threads throughout this BBS concerning the topic of "Artificial Intelligence." Please search the board lest we all repeat ourselves yet again.

<FC: ... Please check before starting a new topic next time.>

There may be many threads on this forum that discuss AI, but none of the others define what it is. Because the term is used in different ways, and there are significant philosophical implications in these differences, the specific purpose of this thread was to define it.

Plato's post would have been better suited for one of the other threads.


<FC: I merged the two threads that both had, for their purpose, figuring out the meaning of AI. Notice that I didn't merge the two Existentialism threads, though they have identical titles, because one is in "Basic Questions", and the other is in "Metaphysics and Epistemology", and their contents reflect their forum locations.>

