Objectivism Online Forum

Animal And Human Consciousness



Recommended Posts

First of all, hey everyone!

I have mostly been studying Objectivism by myself so far, and I agree with everything I have read. I've only been reading about it for 3 months or so, though, so I guess I am still a newbie here :D. I discuss some parts of it with others, but it can be a rather tedious process, especially when I talk about it with my cousin, because he (in my opinion at least) takes the few pages he's read from VoS out of context. For example, one of the things we've disagreed about is the claim that man's means of survival is reason, because for some reason he refuses to accept that Rand is not talking about momentary, physical survival, but about long-term survival.

Anyway, I was talking about the difference between the consciousness of animals and humans a couple of days ago, and the following issue came up. From my current understanding, the main difference between humans and every other kind of animal is that we can function on the conceptual level, and they cannot (and the fact that we have volition, but I think that is part of being able to think in terms of concepts).

However, he gave this example of someone teaching a gorilla (I think it was) sign language, and he said that the woman who was doing that eventually got the gorilla to start making new combinations from the "words" it knew. Does this ability not require a conceptual faculty? If it does, then that would mean that the difference between humans and other types of living creatures isn't as black and white as I thought earlier. It does not change anything about what is part of the human nature, of course, but still. Does anyone here have more knowledge about this they are willing to share? I could very well be overlooking something here, and I'd love to know if I am...

Something else that I have not found the solution to is the way animals learn. Because their consciousness is not able to function on a conceptual level, they must learn things on the perceptual level (or lower, but I don't think that is possible?). But then, if animals can learn certain actions that way, why can't humans learn similar things without using their conceptual faculty, and use that to survive? Is this because we are volitional, and the actions an animal "knows" are necessitated by its nature?

Another thing I wasn't sure about is that, just as humans can imitate their parents without thinking much (and survive in an agricultural society, for example), animals can as well. I think I read in one book on Objectivism that for humans someone had to have thought of the method in the first place, which is a perfectly valid argument to sustain that we do need to think to survive; but how, then, did the first animal learn a certain task? How is this different from the way a human figures something out? Or does the nature of an animal automatically steer it towards certain types of behaviour without any conscious process of learning?

This distinction between how animals and humans function and survive is important to validate the statement that reason is man's (sole) means of survival, because if you can't justify this distinction properly, you can't refute the claim that humans can survive the way animals do. To me it seems practically self-evident that humans need to use reason to survive, due to the enormous mountains of evidence, but I would prefer to be able to answer questions such as this.

Thanks in advance for your time.


Anyway, I was talking about the difference between the consciousness of animals and humans a couple of days ago, and the following issue came up. From my current understanding, the main difference between humans and every other kind of animal is that we can function on the conceptual level, and they cannot (and the fact that we have volition, but I think that is part of being able to think in terms of concepts).

One thing that's important to remember on this particular topic is that the question of what kind of consciousness animals have is a scientific one, and not a philosophic one. Ayn Rand wrote a couple of times that only humans have a conceptual consciousness, and she may have been right, but it would probably take a zoologist who specializes in higher primates to say for sure whether or not they can use concepts.

That said, using primitive signs does not necessarily indicate use of concepts. What it indicates is that they have the ability to memorize a large number of perceptual concretes (signs) and combine them in different ways. I've read that higher primates might be able to use the signs to deal with things we would normally integrate into first-level concepts, but that there is absolutely NO evidence that any process of abstraction is occurring. Evidence of abstraction would consist of moving beyond first-level concepts and making abstractions from abstractions to form concepts like "furniture" and "fruit" as opposed to "table" and "chair" or "banana" and "apple."

Although Ayn Rand said that no animal except humans can use concepts, what Objectivism qua philosophy has to say on the subject is that if an animal can use concepts, then it will use them in such and such a way and have rights. Personally, although I do not have any sort of expertise in zoology, I agree with Ayn Rand entirely on this until and unless I see an animal do something highly conceptual (like multiplication or building a primitive machine).

If it does, then that would mean that the difference between humans and other types of living creatures isn't as black and white as I thought earlier.
The line is very black and white. Have you ever seen an animal build a house, use an ATM, drive a car, go to the moon, get married, read a clock, or any of the other many, many uniquely human activities that require making abstractions from abstractions?

Or does the nature of an animal automatically steer it towards certain types of behaviour without any conscious process of learning?

Ding, ding, ding! This can be validated by introspection. On the perceptual level, you really have no control over your consciousness. You can choose to look here and not there, or close your eyes and not see anything at all, but that is a selection of stimuli more than it is actually directing perception itself. On the conceptual level, however, you have complete control over what you think about, how hard you think about it, what facts you use in thinking about it, how long you think about it, and all sorts of other things. Dealing with abstractions is volitional, while dealing with percepts is not.

This distinction between how animals and humans function and survive is important to validate the statement that reason is man's (sole) means of survival, because if you can't justify this distinction properly you couldn't contradict the claim that humans can survive the way animals do. To me it seems practically self-evident that humans need to use reason to survive, due to the enormous mountains of evidence, but I would prefer to be able to answer questions such as this.

Well, it's important to realize that a human can survive physically without using reason, but they would be surviving qua animals, not qua rational beings.


I think we're going to need to define a criterion for when something is 'operating with concepts'. As far as I'm concerned, a computer running a fairly low-level AI program can operate with concepts. For instance, when a neural network has been trained to recognise trees, I think it's fair to say that it has formed the concept of 'tree'. Obviously this is very different from the way humans use concepts, since a) the computer isn't conscious, and b) the concepts are isolated and fragmented rather than being integrated within a complete language and form of life (since computers aren't embodied).
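To make the 'trained to recognise trees' example concrete, here's a rough sketch of a single artificial neuron (a perceptron) learning a toy tree/not-tree split. The features, numbers and learning rate are all made up for illustration; a real network is far larger, but the principle of adjusting weights from labelled examples is the same.

# Minimal sketch (illustration only): a single artificial neuron trained on
# made-up feature vectors to separate "tree" from "not tree".
# Features per example: [height_m, leafiness, is_green] -- purely hypothetical.
examples = [
    ([12.0, 0.9, 1.0], 1),  # tall, leafy, green -> tree
    ([8.0,  0.8, 1.0], 1),
    ([0.5,  0.0, 0.0], 0),  # small, bare        -> not a tree
    ([1.8,  0.1, 0.0], 0),
]

weights = [0.0, 0.0, 0.0]
bias = 0.0
lr = 0.1  # learning rate

def predict(x):
    s = bias + sum(w * xi for w, xi in zip(weights, x))
    return 1 if s > 0 else 0

# Perceptron rule: nudge the weights whenever a training example is misclassified.
for _ in range(100):
    for x, label in examples:
        error = label - predict(x)
        if error:
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error

print(predict([10.0, 0.85, 1.0]))  # prints 1: an unseen tree-like input is classed as "tree"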

And of course, computers which can use concepts do not have rights.

Edited by Hal

Ah, you answered my questions superbly. Thank you.

That was something that crossed my mind as well: if some animals did manage to operate on the conceptual level, that would require us to "update" some definitions and perhaps create a new class of concepts to describe them, if they do not fit the current ones; but because knowledge is contextual, it wouldn't invalidate that part of her writings, for example. (I assume that when she wrote most of her work, the current state of knowledge was that animal minds couldn't do conceptual thinking.)

The line is very black and white. Have you ever seen an animal build a house, use an ATM, drive a car, go to the moon, get married, read a clock, or any of the other many, many uniquely human activities that require making abstractions from abstractions?
You're right, of course. I didn't think of it like that.

Well, it's important to realize that a human can survive physically without using reason, but they would be surviving qua animals, not qua rational beings.

I know, and from my experience anyone arguing that therefore reason is not required for our survival is just doing it for the argument's sake, because I don't think anyone would actually want to live like that. When you properly define survival, however (over the whole lifespan of a rational being), then as far as I can tell you can make a very strong case for our survival requiring reason.

The reason I don't discuss these topics much with people in my immediate vicinity is that, because most people hold a philosophy radically different from Objectivism, you need to define everything you talk about very exactly.

And still, I sometimes run into a brick wall, when someone merely shrugs and says something to the effect of: Well, if you don't like it here, you can always leave the country (in regard to politics, in this case), which I think is such a cheap argument. That's mostly how my friends defend our welfare state (in the Netherlands, where I live), when it comes down to it.


I think we're going to need to define a criterion for when something is 'operating with concepts'. As far as I'm concerned, a computer running a fairly low-level AI program can operate with concepts. For instance, when a neural network has been trained to recognise trees, I think it's fair to say that it has formed the concept of 'tree'. Obviously this is very different from the way humans use concepts, since a) the computer isn't conscious, and b) the concepts are isolated and fragmented rather than being integrated within a complete language and form of life (since computers aren't embodied).

And of course, computers which can use concepts do not have rights.

Unless your computer has a mind (i.e., is conscious), it isn't using concepts--in the Objectivist sense of the term, anyway. I don't know if you're using a different concept of "concept" here, but I'm positive you know that, in Objectivism, a concept is a mental integration of two or more similar units.


b) the concepts are isolated and fragmented rather than being integrated within a complete language and form of life (since computers aren't embodied).

Hmm, isn't integration an essential part in forming concepts? How can you call what a computer does conceptual "thinking" if they don't integrate their concepts?


As far as I'm concerned, a computer running a fairly low-level AI program can operate with concepts. For instance, when a neural network has been trained to recognise trees, I think it's fair to say that it has formed the concept of 'tree'.

I think this is accurate if you're very literal about it: the computer can make use of the concept "tree" . . . which was entirely defined and created by its human programmer/trainer. The computer didn't form the concept, it was given the concept. People can actually operate this way if they so choose: it's like forming a floating abstraction.

Of course, I could be misunderstanding how this "training" process works; I haven't been keeping up with computer developments. Does this neural network thing actually exist, or are you projecting future developments?

The neatest on-the-market things I've seen at this stage are handwriting and voice recognition programs, and even then you have to train yourself to be specific enough for the computer; its ability to recognize patterns only extends so far, whereas a human can still identify things when they are seriously distorted.

Heck, I use that "word recognition" thing on Blogger to prevent bots from posting comments.


Dr. Binswanger makes a point in one of his lectures, saying: "computers do not add". Strictly speaking, humans add, using abacuses or computers. [Caveat: paraphrasing from memory and from my own understanding.]

[I don't mean to hijack a thread that is not about computers and AI, so if the discussion on this gets any more involved, let's split it into its own thread.]

Edited by softwareNerd

However, he gave this example of someone teaching a gorilla (I think it was) sign language, and he said that the woman who was doing that eventually got the gorilla to start making new combinations from the "words" it knew. Does this ability not require a conceptual faculty?
Briefly, no. Koko has not learned words, nor does she combine them to systematically express "ideas". The scientific claims regarding ape language abilities are vastly overstated and, from an experimental POV, undercontrolled. I have not seen evidence that they can learn first-level concepts, much less abstractions built on those concepts. These experiments also tend to take the animal's intent to be self-evident, but careful studies (see especially the work with Nim) have shown that apes tend to generate vast strings of random signs weighted towards reward signs like "banana", "cookie", "play".

Ok then, that's clear now. However, to form even first-level concepts is a huge jump from the perceptual level, and I think that if an animal were able to do that, but not go further, it would still qualify as having a conceptual faculty. I think the definition Rand uses comes down to the ability to integrate perceptual data into concepts, and I don't think there is a condition there that says you need to be able to form higher-level concepts...

This would of course depend on the exact requirements you set upon the conceptual faculty. But if they are very strict (if you have to be able both to express ideas and pretty much to function like a human in order to qualify), then I think we would need to make another category for those things that fall in between (or, in case there are no such beings at the present time, we would need to make one later on). Or do you all think that the perceptual level plus first-level concepts would cover it adequately?

Edited by Maarten

Unless your computer has a mind (i.e., is conscious), it isn't using concepts--in the Objectivist sense of the term, anyway. I don't know if you're using a different concept of "concept" here, but I'm positive you know that, in Objectivism, a concept is a mental integration of two or more similar units.

Is being conscious necessary to have a mind? We often talk about unconscious mental processes, for example, and a lot of concepts are formed unconsciously. A computer is capable of integrating two or more similar units, and I see no basis for objecting to the claim that it is performing mental processes, regardless of whether it's conscious.

Hmm, isn't integration an essential part in forming concepts? How can you call what a computer does conceptual "thinking" if they don't integrate their concepts?

I mean that they don't integrate their concepts with each other. A computer can see lots of specific trees and form the concept 'tree' by integrating the specific trees it has seen, but this might be the only concept which it has. For humans, we don't possess isolated concepts - our concepts exist within a holistic system of language. A human baby couldn't just learn the word/concept 'tree' without first learning other parts of the English language.

I think this is accurate if you're very literal about it: the computer can make use of the concept "tree" . . . which was entirely defined and created by its human programmer/trainer. The computer didn't form the concept, it was given the concept.
This isn't true; computers can form concepts which they weren't given explicitly. When you're training a neural network, for instance, you have to be careful which test cases you give it, because it's possible for it to learn a concept completely different from the one you wanted it to (and a concept which may well not exist within the English language). I remember reading about a case where one was being trained to recognise tanks hidden in forests, and it ended up forming the concept of 'tank covered in leaves' or something similar (i.e., it could correctly identify tanks surrounded by trees, but failed to identify tanks that weren't in forests).
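A toy version of that failure mode can be sketched like this (a made-up two-feature learner, only meant to illustrate the anecdote, not reproduce it): when a spurious cue such as 'in a forest' happens to separate the training examples, that cue is what gets learned.

# Illustration only: when a spurious cue ("in_forest") separates the training
# data, a simple learner latches onto it instead of the intended concept.
# Features per example: [dark_blob_present, in_forest] -- both hypothetical.
training = [
    ([1, 1], 1),  # tank, photographed in a forest -> "tank"
    ([1, 0], 0),  # dark rock in an open field     -> "no tank"
    ([0, 0], 0),  # empty field                    -> "no tank"
]

weights, bias, lr = [0.0, 0.0], 0.0, 0.1

def predict(x):
    return 1 if bias + sum(w * xi for w, xi in zip(weights, x)) > 0 else 0

# Same perceptron-style training as before.
for _ in range(50):
    for x, label in training:
        error = label - predict(x)
        if error:
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error

print(predict([1, 0]))  # tank in an open field -> 0: missed, as in the anecdote
print(predict([0, 1]))  # empty forest          -> 1: a "tank" found where there is none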

Does this neural network thing actually exist?
Yeah, training a NN to recognise trees would be quite trivial these days; an advanced NN actually managed to drive a car across America. They are quite interesting, since they represent a very different approach to concept formation than the classical models. Previously it was thought that concept formation was largely a matter of identifying the necessary and sufficient conditions for an object to fall into a category (i.e., forming the concept of 'tree' would require you to learn a rule that was entirely sufficient to classify trees - a set of features which all trees possessed, and nothing apart from trees possessed). The neural network (connectionist) approach throws all this out the window and operates in an entirely non-rule-based manner, using techniques of statistical pattern recognition and 'family resemblance' type approaches instead. A NN couldn't state the explicit rule it's using to classify trees as trees (since it isn't actually using one), but then neither could a human.
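The contrast can be sketched roughly as follows (toy features and thresholds, invented purely for illustration): an explicit necessary-and-sufficient rule on one side, and classification by similarity to stored examples on the other.

# Classical approach: an explicit necessary-and-sufficient rule (hypothetical).
def rule_based_is_tree(height_m, has_leaves, is_woody):
    return height_m > 3 and has_leaves == 1 and is_woody == 1

# "Family resemblance" approach: no explicit rule, just closeness to stored examples.
tree_examples = [(12.0, 1, 1), (8.0, 1, 1), (5.0, 0, 1)]   # includes a leafless tree in winter
non_tree_examples = [(0.3, 1, 0), (1.5, 0, 0)]             # a shrub, a fence post

def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def similarity_based_is_tree(sample):
    nearest_tree = min(distance(sample, t) for t in tree_examples)
    nearest_non = min(distance(sample, n) for n in non_tree_examples)
    return nearest_tree < nearest_non

sample = (4.0, 0, 1)  # a smallish, bare, woody thing
print(rule_based_is_tree(*sample))       # False: the rule rejects it (no leaves)
print(similarity_based_is_tree(sample))  # True: close enough to trees already seen

The similarity-based classifier never states a rule; it just answers "closer to the trees I've seen than to the non-trees."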

If there's one thing that modern AI has shown, it's that terms like 'reasoning with concepts' and 'mental processing' are inherently vague, and need to be sharpened up a lot before you can say that a given entity is or isn't carrying them out.

Edited by Hal

However, to form even first-level concepts is a huge jump from the perceptual level, and I think that if an animal were able to do that, but not go further, it would still qualify as having a conceptual faculty.
That would be a correct conclusion; however, as I said, animals don't learn first-level concepts. As a hypothetical experiment to be carried out by some psycho-biologist, it would be interesting to see what it would be like to design an animal that could form first-level concepts but not concepts integrating lower-level concepts. I think after consulting with a decent psycho-epistemologist, they would discover that it is utterly impossible and self-contradictory.

For instance, when a neural network has been trained to recognise trees, I think it's fair to say that it has formed the concept of 'tree'.
No, I think it would be fair to say that the man who trains it has constructed a device that at some crude level behaves like a person who has formed the concept "tree". Contrast what the child does and what the neural net does: the child observes everything, and integrates it so as to identify this and that as instances of one thing more general, and integrates other things into other units, and also sorts out the labeling. As far as I know, there is no neural net that can just "go learn", thereby identifying trees, dogs, cats, tables, cars, shrubs, chairs and who knows about "furniture" and "animal". The concepts have to be formed by the experimenter, and by drilling the machine you can create a statistical model. That model starts off very concrete-bound, so it's unsurprising when it fails to fully get the extension of a particular concept.
A NN couldnt state the explicit rule its using to classify trees as trees (since it isnt actually using one), but then neither could a human.
The difference is that a neural net can never state the explicit rules, since it simply doesn't have them; whereas humans are not aware of the nature of these rules, but they can be discovered, because we do have them.

That would be a correct conclusion; however, as I said, animals don't learn first-level concepts. As a hypothetical experiment to be carried out by some psycho-biologist, it would be interesting to see what it would be like to design an animal that could form first-level concepts but not concepts integrating lower-level concepts. I think after consulting with a decent psycho-epistemologist, they would discover that it is utterly impossible and self-contradictory.

A concept is just a group of percepts that have been synthesised and retained in the mind. Now, a dog seems able to do this; given the choice between eating a brick and eating a plate of dogfood, he will obviously go for the dogfood. Why? Because he is somehow able to identify dogfood as being 'something I can eat'. In other words, he has abstracted certain features of edible foods he has encountered in the past, and is now roughly able to classify food as being edible or not. There are many other examples of this sort of thing; a housetrained cat knows that it is only meant to poo in the litter tray. And what is a litter tray? Well, it's something that looks roughly like that.

Being able to make perceptual distinctions like this involves something being retained in the mind. In order to be capable of classifying things into different categories, some form of abstraction from previously encountered percepts must take place. It seems very obvious that animals do classify objects, and as such, they are abstracting from percepts. Whether or not you want to call this 'concept formation' seems like a matter of semantics. I suppose you can stipulate that it isn't 'really' a concept unless it exists within a system of language, but this seems rather ad hoc.

Think about how you'd actually train an animal to perform an action when it sees a certain type of object, using operant conditioning or whatever. The animal is quite clearly abstracting and retaining something.

As far as I know, there is no neural net that can just "go learn", thereby identifying trees, dogs, cats, tables, cars, shrubs, chairs and who knows about "furniture" and "animal". The concepts have to be formed by the experimenter, and by drilling the machine you can create a statistical model.
This is what happens with humans too, though. A child doesn't invent concepts out of thin air - it learns its conceptual scheme when it learns its first language, and this scheme preexists its birth. A child will generally classify the world in the same way as people in its surrounding culture, because that is how it has been taught to do things. Colour classification is the most obvious example of this. Children generally don't invent the concept of 'yellowish purple' - they internalise the same way of breaking up the colour spectrum as those around them.

I doubt that a child can just 'go learn' a language, nor can it form the concept 'table' on its own, independently of any kind of reinforcement learning or whatever. If it could, it would be possible for humans to function quite well without being socialised. This does not appear to be the case (e.g., feral children).

edit: That's not to say that adults aren't capable of challenging the common-language framework. Ayn Rand's objection to the word 'selfish' would be an obvious example of this occurring. But this is not what happens with children who are learning a language/worldview for the first time.

The difference is that a neural net can never state the explicit rules, since it simply doesn't have them; whereas humans are not aware of the nature of these rules, but they can be discovered, because we do have them.

There's no reason at all to think that humans use rules based on necessary and sufficient conditions. To ask the classic Wittgensteinian question, what do all 'games' have in common? What do light blue and dark blue have in common, other than that we happen to call them both 'blue'?

When we form definitions, we identify the most important attributes. But this does not mean that every single object in this category possesses these attributes, and no other objects outside it. Given the way languages evolve, it would be amazing if there were actually any rigid rules governing the application of our common-language terms.

edit: If you think you have discovered an explicit rule for the way we classify things, it would be very easy to program this rule into a computer (dog(x) <=> Legs(x,4) && Barks(x)). If you try this, however, you'll almost certainly find that it doesn't work, since your rule will have obvious exceptions and misclassify large numbers of objects.
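Written out as runnable code (the rule is the hypothetical one above; the counterexamples are just illustrations of the point), the problem shows up immediately:

# The hypothetical rule, taken literally.
def is_dog(legs, barks):
    return legs == 4 and barks

print(is_dog(legs=4, barks=True))   # an ordinary dog              -> True
print(is_dog(legs=3, barks=True))   # a dog missing a leg          -> False (misclassified)
print(is_dog(legs=4, barks=False))  # a Basenji, which rarely barks -> False (misclassified)
print(is_dog(legs=4, barks=True))   # a fox that barks             -> True  (misclassified)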

Edited by Hal

Now, a dog seems able to do this; given the choice between eating a brick and eating a plate of dogfood, he will obviously go for the dogfood. Why? Because he is somehow able to identify dogfood as being 'something I can eat'. In other words, he has abstracted certain features of edible foods he has encountered in the past, and is now roughly able to classify food as being edible or not. There are many other examples of this sort of thing; a housetrained cat knows that it is only meant to poo in the litter tray. And what is a litter tray? Well, it's something that looks roughly like that.

Hal, these are non sequiturs. The simple fact that an animal can learn behavior does not imply the existence of concepts. It only implies that they have a capacity to learn - it does not imply abstraction. A dog chooses meat over bricks because meat smells and tastes good. It will eat anything that tastes like meat, even if this is some crunchy stuff from a bag. In other words, dogs are preprogrammed with the inclination to eat things with certain smells and tastes. With cats, they can be trained to use a litter box, but this says nothing about the existence of a concept. It only says that the cat's brain has made a connection: if(need to poo) then (go to sandy box), a process which can take place entirely on a perceptual level. This behavior is probably instinctual anyway - I've seen cats outside bury their scat in sandy areas.
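To make that concrete, here's a rough associative-learning sketch (a generic Rescorla-Wagner-style update with made-up numbers, added only for illustration): the cue-response link simply gets stronger with each rewarded trial, and no category or abstraction appears anywhere in the process.

# Minimal associative-learning sketch (Rescorla-Wagner-style update):
# the strength of a cue-response link grows with reinforcement, with no
# categories or abstraction anywhere in the process. Numbers are invented.
strength = 0.0          # association: "sandy box" -> "dig and bury"
alpha = 0.3             # learning rate
reward_level = 1.0      # asymptote when the behaviour is consistently rewarded

for trial in range(20):
    # Each rewarded trial moves the association a fraction of the way toward
    # the maximum; this is pure strengthening of a perceptual link.
    strength += alpha * (reward_level - strength)

print(round(strength, 3))  # close to 1.0: the habit is learned, no concept formed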


Hal, these are non sequiturs. The simple fact that an animal can learn behavior does not imply the existence of concepts. It only implies that they have a capacity to learn - it does not imply abstraction. A dog chooses meat over bricks because meat smells and tastes good. It will eat anything that tastes like meat, even if this is some crunchy stuff from a bag. In other words, dogs are preprogrammed with the inclination to eat things with certain smells and tastes. With cats, they can be trained to use a litter box, but this says nothing about the existence of a concept. It only says that the cat's brain has made a connection: if(need to poo) then (go to sandy box), a process which can take place entirely on a perceptual level.

Being able to identify an object as a sandbox isn't purely perceptual, since if I haven't managed to abstract the features of the previous sandboxes I've seen, I won't know that this object in front of me is a 'sandbox'.

Litter trays can look slightly different from each other; knowing that these different objects are all litter trays requires some form of abstraction from percepts. If you replace the cat's litter tray, it should be able to realise that the replacement is still a litter tray, even though it doesn't look exactly the same. Similarly, a cat is able to recognise other cats as being cats, and classify them in a different way from the way it classifies dogs and mice. This again requires abstraction and the ability to categorise objects.

My dog food example was admittedly bad, since this can be explained purely via conditioned responses to the smell as you pointed out.

Edited by Hal

Is being conscious necessary to have a mind? We often talk about unconscious mental processes, for example, and a lot of concepts are formed unconsciously. A computer is capable of integrating two or more similar units, and I see no basis for objecting to the claim that it is performing mental processes, regardless of whether it's conscious.

The way I use the term "mind," yes. My use means: the integrated relationship of both consciousness and the brain.


From my understanding in OPAR Peikoff says that our subconscious consists of those ideas that are not currently in our conscious attention, but were accepted at a certain point. I don't think there is a good basis for claiming that you somehow end up with all sorts of ideas in your subconscious that bypass your conscious mind altogether.

If this were true, then you could also argue that it should be possible to directly transfer "knowledge" (not sure if it could still be called that; perhaps information is a better word) into someone's mind and thus teach them things without them ever doing anything. This seems to contradict several things said about the way our mind functions...


If this were true, then you could also argue that it should be possible to directly transfer "knowledge" (not sure if it could still be called that; perhaps information is a better word) into someone's mind and thus teach them things without them ever doing anything. This seems to contradict several things said about the way our mind functions...

I don't see why this is necessarily impossible. When you learn something, I assume all that happens physically is that your brain moves into a different state, with changes in the connections between your neurons and the like (I don't know the details). If neuroscience were sufficiently advanced, it could be possible to induce these neuronal changes directly by physically pushing things about in the brain, without anything happening in consciousness. I'm speculating here, obviously; we don't know enough about how the brain works yet.


Well, I think that at least part of the Objectivist epistemology depends on the fact that objective knowledge is gained through an active, reality-based process (grasped by a human consciousness). If it were possible to transfer knowledge to someone else, then that would not constitute an active process for the receiver, which would either invalidate the concept of objectivity or disqualify the "knowledge" in question as objective knowledge, on the grounds that you did not receive it through sensory-perceptual means.

The last is basically the problem with this approach, even if it were possible. Because you didn't form the concepts you're now getting as input through your brain (or whatever method they use), that vastly increases the occurrence of stolen concepts.


Well, I think that at least part of the Objectivist epistemology depends on the fact that objective knowledge is gained through an active, reality-based process (grasped by a human consciousness). If it were possible to transfer knowledge to someone else, then that would not constitute an active process for the receiver, which would either invalidate the concept of objectivity or disqualify the "knowledge" in question as objective knowledge, on the grounds that you did not receive it through sensory-perceptual means.

It's pure science fiction, but presumably, in the process Hal was describing, the perceptual memory would have to be a part of the "data transfer." If (and that's a VERY big if) a scientific discovery were to be made which enabled knowledge transfers, however, I'm afraid we'd need some new philosophy to go along with it (although it would not invalidate the accuracy of Objectivism under the old context).

EDIT: I also want to add that I've read sci-fi which (sort of) talks about this. In Frank Herbert's Dune Chronicles (mostly the last two books), there are women who can transfer memories and knowledge to one another, although they do it by mystical, unscientific means.

Edited by dondigitalia

Since the dogfood and catbox examples are dead, I won't belabor those points, except to add that the closest relatives of domestic cats bury their scat: it's a widespread genetic trait. I dispute the claim that cats are at all good at identifying intended urinals. They are good at identifying smells and know the look of sand, which largely suffices for getting cats to pee in the right place, barring political protests. If you have some decent experimental evidence that shows that cats can form the concept "cat box", I'd like to see it.

It seems very obvious that animals do classify objects, and as such, they are abstracting from percepts.
Tell me about this -- I don't know of this obvious evidence. It is true that animals can form some kind of perceptual "similarity" judgment, but that doesn't mean there are categories. If you can show an example of an animal with actual categories, then I'll believe.
The animal is quite clearly abstrating and retaining something.
I think that is the crux of it: the ability to remember is not the same as forming a concept.
This is what happens with humans too though. A child doesnt invent concepts out of thin air - it learns its conceptual scheme when it learns its first language, and this scheme preexists its birth.
Well, I beg to differ. A child does invent concepts, although it's not out of thin air, and nobody has ever claimed that concept formation is random. The child will generally classify the world in bizarre ways, orthogonal to the concepts formed by the people around him. Usually they straighten out their conceptual system to conform to that of the rest of the world; if you'd like, I could point you to some work on child language acquisition and especially semantic acquisition (i.e. the concepts that children have).
I doubt that a child can just 'go learn' a language, nor can it form the concept 'table' on its own, independently of any kind of reenforcement learning or whatever.
Whaddaya mean by "whatever"? Children learn language without any of that stimulus-response reinforcement stuff, or lessons, or whatever. All they require is raw data to be the basis of the induction. That's not possible for neural nets.

Tell me about this -- I don't know of this obvious evidence. It is true that animals can form some kind of perceptual "similarity" judgment, but that doesn't mean there are categories.
I would say that similarity judgement necessarily involves categorisation - I think that seeing an object 'as a kind of something' is largely a similarity judgement (what else could it be?). I identify my television as a television partly because of certain resemblances it has to other televisions that I've seen.

If you can show an example of an animal with actual categories, then I'll believe.
I think that is the crux of it: the ability to remember is not the same as forming a concept.
Then what is the difference? What is being remembered here? If a concept is just a bunch of percepts that are retained, then it seems that remembering percepts for the purposes of similarity judgements is necessarily conceptual.

Well, I beg to differ. A child does invent concepts, although it's not out of thin air and nobody ever has claimed that concept formation is random. The child will generally classify the world in bizarre ways orthogonally related to the concepts formed by the people around him.
This is pretty much the same as neural networks (and other methods of supervised machine learning). The AI forms some classification scheme, and the supervisor checks to see if it's formed the right one (see my above example about tanks in forests).

Children learn language without any of that stimulus-response reinforcement stuff, or lessons, or whatever. All they require is raw data to be the basis of the induction. That's not possible for neural nets.
Yes it is. The technical term for this is 'unsupervised learning' (i.e., categorising observations without being explicitly guided as to what categorisations to make).

Edited by Hal
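As a rough illustration of what 'categorising observations without being told the categories' looks like, here is a toy k-means clustering sketch (the data, the choice of two clusters, and the initialisation are all invented for the example):

# Minimal sketch of unsupervised learning: k-means clustering groups
# observations without ever being told what the categories are.
points = [(1.0, 1.2), (0.8, 1.0), (1.1, 0.9),   # one natural cluster
          (5.0, 5.1), (5.2, 4.8), (4.9, 5.3)]   # another

def mean(cluster):
    n = len(cluster)
    return (sum(p[0] for p in cluster) / n, sum(p[1] for p in cluster) / n)

def dist2(a, b):
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

centroids = [points[0], points[3]]              # crude initialisation
for _ in range(10):
    clusters = [[], []]
    for p in points:
        i = 0 if dist2(p, centroids[0]) <= dist2(p, centroids[1]) else 1
        clusters[i].append(p)
    centroids = [mean(c) for c in clusters]

print(centroids)   # two "categories" discovered from raw data, no labels given

The two groups fall out of the data itself; nothing ever tells the algorithm which category any point belongs to.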
