Objectivism Online Forum
Edwin

Have there been any attempts to automate Concept Formation?


Have there been any attempts to automate Concept Formation or Measurement Omission or identification of Conceptual Common Denominators?

I understand that concept formation requires volition, which has not yet been automated. But surely someone has tried to automate some automatic element of concept formation, no?


What if you replaced volition with instincts? And then replaced instincts with design?

When I think about concept formation, I really do think about pattern recognition. Sometimes Venn diagrams, Bayesian networks, stochastics; sometimes genetic/evolutionary algorithms.

How does a computer form a concept? Perhaps like in voice recognition? Coming through a cell phone's microphone, there are millions of ways to say "call wife". The common elements are recognized and mapped to, say, speed dial #1; the other stuff is discarded.
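The "common elements" idea can be sketched in code: keep whatever attribute/value pairs every instance shares, and omit the measurements that vary. This is only a toy illustration; the utterance data and attribute names below are invented.

```python
# Toy sketch of "measurement omission": given several concrete
# instances (attribute -> measurement), keep only the attribute/value
# pairs they all share and discard the measurements that vary.

def common_elements(instances):
    """Return the attribute/value pairs shared by every instance."""
    shared = set(instances[0].items())
    for inst in instances[1:]:
        shared &= set(inst.items())
    return dict(shared)

# Three utterances of "call wife": same phoneme sequence, but the
# particular pitch and speed differ each time (invented data).
utterances = [
    {"phonemes": "k-ao-l w-ay-f", "pitch_hz": 180, "speed_wpm": 140},
    {"phonemes": "k-ao-l w-ay-f", "pitch_hz": 95,  "speed_wpm": 170},
    {"phonemes": "k-ao-l w-ay-f", "pitch_hz": 210, "speed_wpm": 110},
]

print(common_elements(utterances))  # -> {'phonemes': 'k-ao-l w-ay-f'}
```

Only the shared phoneme pattern survives; every utterance-specific measurement is dropped, which is the "other stuff is discarded" step described above.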

is this what you are talking about?


Have there been any attempts to automate Concept Formation or Measurement Omission or identification of Conceptual Common Denominators?

Probably not. I don't see a point in doing that before computers get better at recognizing patterns. After that, this seems like something that would be much easier to accomplish.


Concept formation is a volitional process, done by volitional beings, so in the sense that others have answered your question: computers do not and cannot form concepts. Computers do what their programmers program them to do, nothing more, nothing less.

If you are asking whether the process of concept formation can be identified, formulated and taught, the answer is yes: Ayn Rand did just that.


Concept formation is a volitional process, done by volitional beings, so in the sense that others have answered your question: computers do not and cannot form concepts.

"volitional beings" (humans) are a mechanism. That is all we are. There is nothing mystical about our brains. Why are you saying that there cannot be any other kind of mechanism that has the same attribute?

If you had said "computers do not form concepts", I would be right there with you. But they absolutely could, and I bet they will.

Edited by Nicky


"volitional beings" (humans) are a mechanism. That is all we are. There is nothing mystical about our brains. Why are you saying that there cannot be any other kind of mechanism that has the same attribute?

If you had said "computers do not form concepts", I would be right there with you. But they absolutely could, and I bet they will.

The implication of this is that, fundamentally, humans are no different from computers: that a computer is, fundamentally, just a more complicated mousetrap (which is true), and that a human is, fundamentally, just a more complicated computer (which is not true, and smacks of determinism).

There is a fundamental difference between computers and humans. Computers take input, algorithmically process it, and output the product. Humans have free will and can create new inputs, new algorithms and completely original outputs -- never thought of before. Computers do not and will not ever do that.

If someone ever develops a volitional, self-automated, self-regulated, self-motivated, thinking ... thing (a proposition I find highly dubious and am unwilling to discuss), it won't be called a computer anymore. It will be a being or entity, probably with rights, with a unique name designating how fundamentally different it is from everything that came before it.

There is no evidence that anyone ever will develop such a thing and so your bet that they will is much more mystical than the fact that we possess free will, whether that process is fully understood or not.


I made a mistake. The part of your post I should've addressed, instead of the one I addressed (because it's a much more relevant objection), was this:

Concept formation is a volitional process, done by volitional beings

Concept formation is done by volitional beings, on account that we're the only ones doing it, and we can choose to do it or not do it. We don't have to do it.

But it's not a volitional process. There is only one right way to do it: if you want to do it right, you don't have options about it. Just because something has no choice but to do it right, doesn't mean it can't:

That which [man’s] survival requires is set by his nature and is not open to his choice. What is open to his choice is only whether he will discover it or not, whether he will choose the right goals and values or not. (Ayn Rand, VoS)

That is the extent of our choices. There aren't multiple right ways to process our input. There is only a right way and a wrong way, and we get to choose. If a computer of equal ability to process reality doesn't get to choose not to do it, that makes it better at it, not worse.

I don't think volition belongs in this conversation. If the decision to fully focus and sustain that focus is made, this decision (this constant decision, made over and over again) determines the "output".

That which you call your soul or spirit is your consciousness, and that which you call “free will” is your mind’s freedom to think or not, the only will you have, your only freedom, the choice that controls all the choices you make and determines your life and your character. (Galt's Speech)

Humans...can create...new algorithms.

This is the part that's relevant to the issue, not volition. Our nature as volitional beings isn't what's causing us to be able to do this. The mechanism we use to do it (which starts with concept formation) does. There is no reason why that mechanism couldn't be reproduced, or why a new mechanism that does that couldn't be designed, without first creating a volitional being. The process of creating a new algorithm is just as "deterministic" (for lack of a better word) as the process of executing one. And by "deterministic" I mean that there is only one right way to do it, in any one context.

Such a mechanism wouldn't be an independent being with rights, it would be the same thing current computers are: an extension of our mind. Great as it is, our mind has a limited capacity. We could and should design computers which are better at this process than even we are, and use them. These computers would not take over the world, they would not "look down on us as inferior beings", they would simply be better at gathering input and applying reason to it to produce the right output, than we are.

Humans...can create new inputs

Not true. Our only input is reality. What we create based on that input is by definition not input.

Our thoughts are based in the reality we sense around us. Our lies are broken logic, applied to that same reality. Neither our creations, nor our lies (hallucinations, dreams etc.), are independent of our input (which is the reality we sense around us).

Our only starting point, for our thoughts, is reality. Our choice lies only in processing our input properly or improperly. We don't have the third option of ignoring our input and finding a new source to use as a starting point for our thoughts.

It's a long post, so I should sum it up:

Our input doesn't determine our "output". But it does determine what the right output is. We don't need to build a computer that isn't deterministic. We just need one to find the one right answer which is determined by its input. For that, we need to improve its ability to collect input (especially by pattern recognition), its ability to categorize and process it, and finally, we need to figure out a way to have it create the right algorithm for solving a specific problem it comes across, without human help at each turn. But that is all a deterministic process, not a volitional one.
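The "categorize and process" step in the summary above can be sketched as a purely deterministic rule: assign each new input to the nearest known category prototype. The prototype names and measurements here are invented for illustration.

```python
import math

def nearest_category(prototypes, point):
    """Assign `point` to the closest prototype -- a deterministic
    categorization rule with exactly one right answer per input."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(prototypes, key=lambda name: dist(prototypes[name], point))

# Invented prototypes: (number of legs, typical height in meters).
prototypes = {"chair": (4.0, 0.9), "table": (4.0, 1.8)}
print(nearest_category(prototypes, (4.0, 1.0)))  # -> chair
```

Given the same prototypes and the same input, the output is fixed: the input determines what the right output is, which is the point the post is making.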


Have there been any attempts to automate Concept Formation or Measurement Omission or identification of Conceptual Common Denominators?

I understand that concept formation requires volition, which has not yet been automated. But surely someone would have tried to automate some automatic element of concept formation no?

Measurement omission occurs in the formation and recognition of percepts, and there are attempts to make artificial perceptual systems. Go see this thread. The conceptual level is still far beyond the reach of present research.


Concept formation is done by volitional beings, on account that we're the only ones doing it, and we can choose to do it or not do it. We don't have to do it.

But it's not a volitional process. There is only one right way to do it: if you want to do it right, you don't have options about it. Just because something has no choice but to do it right, doesn't mean it can't:

Objectivism almost completely identifies volition with the conceptual faculty. The remainder is the issue of focus which activates thinking and awareness at the conceptual level.

A concept forming machine cannot take the form of a word-shuffling self-aware dictionary. There was a project (I forget the name or related university) which began back in the 1980s, during the first AI interest boom, that was precisely that, and it was a useless dead-end. Computerized rationalism is all that it was. There are computer programs that will prove (or disprove) mathematical hypotheses, but computerized deduction is not concept formation.
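The kind of computerized deduction mentioned above can be illustrated with a brute-force truth-table check of a propositional formula: mechanical proof with no concept formation involved. A minimal sketch:

```python
from itertools import product

def is_tautology(formula, num_vars):
    """True iff `formula` holds under every truth assignment --
    deduction by exhaustive checking, not concept formation."""
    return all(formula(*values)
               for values in product([True, False], repeat=num_vars))

implies = lambda a, b: (not a) or b

# Peirce's law, ((p -> q) -> p) -> p, is a classical tautology.
peirce = lambda p, q: implies(implies(implies(p, q), p), p)
print(is_tautology(peirce, 2))  # -> True
```

The program verifies the theorem by checking all four assignments; it never forms the concepts "implication" or "tautology", it only executes them.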


A concept forming machine cannot take the form of a word-shuffling self-aware dictionary. There was a project (I forget the name or related university) which began back in the 1980s, during the first AI interest boom, that was precisely that, and it was a useless dead-end.

You are probably thinking of this: http://en.wikipedia.org/wiki/Cyc

More information: http://aaaipress.org/ojs/index.php/aimagazine/article/download/510/446

For those who don't know about it, the project was an attempt to create a knowledge base that a computer can use to deduce from. In terms of AI it was useful, and great if you want to ask about what is already known, but in terms of concept-formation it didn't really lead anywhere. Expert systems using something like Cyc are used today for diagnostics that are easily codified.
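For those curious what "easily codified" diagnostics look like, here is a toy rule system in the spirit of an expert system: a conclusion fires when all of its conditions are among the known facts. The rules are invented for illustration, not taken from Cyc.

```python
# Toy expert system: each rule pairs a set of conditions with a
# conclusion; a conclusion fires when every condition is a known fact.
# The medical rules below are invented for illustration.

RULES = [
    ({"fever", "cough"}, "flu suspected"),
    ({"fever", "rash"}, "measles suspected"),
]

def diagnose(facts):
    """Return every conclusion whose conditions are a subset of `facts`."""
    return [conclusion for conditions, conclusion in RULES
            if conditions <= facts]

print(diagnose({"fever", "cough", "headache"}))  # -> ['flu suspected']
```

All the conceptual work, deciding which conditions matter and writing the rules, is done by the human ahead of time; the machine only matches sets.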

Edited by Eiuol


Objectivism:The Philosophy of Ayn Rand

Chapter 2—Sense Perception And Volition

pg. 55

The actions of consciousness required on the sensory-perceptual level are automatic. On the conceptual level, however, they are not automatic. This is the key to the locus of volition. Man's basic freedom of choice, according to Objectivism, is: to exercise his distinctively human cognitive machinery or not; i.e., to set his conceptual faculty in motion or not. In Ayn Rand's summarizing formula, the choice is: "to think or not to think."


You are probably thinking of this: http://en.wikipedia.org/wiki/Cyc

More information: http://aaaipress.org...ownload/510/446

For those who don't know about it, the project was an attempt to create a knowledge base that a computer can use to deduce from. In terms of AI it was useful, and great if you want to ask about what is already known, but in terms of concept-formation it didn't really lead anywhere. Expert systems using something like Cyc are used today for diagnostics that are easily codified.

I still don't recall, but that must be it. Thanks. The one good result was negative, demonstrating that an expert system, in order to be useful, should be focused on a narrow subject matter, just like a human expert.


Concept formation is done by volitional beings, on account that we're the only ones doing it, and we can choose to do it or not do it. We don't have to do it.

Right, we don't have to do it; we must choose to do it or not. We only have to do it if we want to survive. It is an interesting fact that we are the only animals that form concepts. Why do you think that is? If other animals could do it, don't you think they would, since it is such a good survival tool?

The volitional faculty is the conceptual faculty is the rational faculty.

But it's not a volitional process. There is only one right way to do it: if you want to do it right, you don't have options about it.

Concept formation is a completely volitional process. First, you must choose to do it, as you say above. Then, when looking out at the world, you must choose to focus your mind. Then you must focus on certain existents as apart from others. Then you must focus on how they are similar and different from other existents. You may choose to focus on certain attributes as apart from others.

That is the extent of our choices. There aren't multiple right ways to process our input. There is only a right way and a wrong way, and we get to choose.

There are many different ways to solve the same math problem (as an example).

The process of creating a new algorithm is just as "deterministic" (for lack of a better word) as the process of executing one. And by "deterministic" I mean that there is only one right way to do it, in any one context.

Again, just as an example, think of a computer programmer actually creating a new algorithm: there are many different ways to accomplish the same goal in computer programming.

Such a mechanism [...] would be the same thing current computers are: an extension of our mind. Great as it is, our mind has a limited capacity. [...]. These computers [...] would simply be better at gathering input and applying reason to it to produce the right output, than we are.

Computers are essentially calculators and libraries and are nothing like our minds. Our brains may have a limited capacity for storage (though that limit hasn't even been approached), which is one thing concepts help with, but they have an unlimited capacity to think of new solutions and create original products -- something computers can't do. Computers don't reason.

Humans [...] can create new inputs

Well, my point was that we create new outputs, which then become new inputs, like electricity or even just new ideas which we then can think about anew. The man-made, once it is made, is part of reality, that is true. But the man-made is a product of volition and didn't have to be, it could have been otherwise.

For that, we need to improve its ability to collect input (especially by pattern recognition), its ability to categorize and process it, and finally, we need to figure out a way to have it create the right algorithm for solving a specific problem it comes across, without human help at each turn. But that is all a deterministic process, not a volitional one.

Computers can catalog information into the programmer's categories, but they can't create new categories and they can't create new algorithms; creation is a volitional process.

If someone ever develops a volitional, self-automated, self-regulated, self-motivated, thinking ... thing (a proposition I find highly dubious and am unwilling to discuss), it won't be called a computer anymore. It will be a being or entity, probably with rights, with a unique name designating how fundamentally different it is from everything that came before it.

There is no evidence that anyone ever will develop such a thing and so your bet that they will is much more mystical than the fact that we possess free will, whether that process is fully understood or not.

Granted we're talking a highly speculative far-out future (given man's volition), but there is nothing in theory that contradicts the possibility that man could create such a thing. After all, we as men are just a gathering of pieces of the universe -- we can be figured out and replicated.


Granted we're talking a highly speculative far-out future (given man's volition), but there is nothing in theory that contradicts the possibility that man could create such a thing. After all, we as men are just a gathering of pieces of the universe -- we can be figured out and replicated.

Well, I suppose there is nothing in theory that contradicts the possibility of creating a teleportation device that disintegrates humans at one location and reintegrates them at another, but I find that highly dubious also. I'm not sure where the science stands at this point, but I'd be willing to bet that, given what we know, the possibility of a man-made, rational automaton should be relegated to the arbitrary for now -- highly speculative at the least.

Your last sentence, while perhaps true, sounds like an inaccurate description to me and frankly a little denigrating. We are greater than the sum of our parts. A human being as a whole possesses properties that none of its parts possess.


This is an interesting question, I will review the AI literature and post a response at a later time. My understanding is that "concepts" are a very, very rich sort of data, and any "automated concept formation" that has been created is only of very limited scope so far.


Well, I suppose there is nothing in theory that contradicts the possibility of creating a teleportation device that disintegrates humans at one location and reintegrates them at another, but I find that highly dubious also. I'm not sure where the science stands at this point, but I'd be willing to bet that, given what we know, the possibility of a man-made, rational automaton should be relegated to the arbitrary for now -- highly speculative at the least.

Your last sentence, while perhaps true, sounds like an inaccurate description to me and frankly a little denigrating. We are greater than the sum of our parts. A human being as a whole possesses properties that none of its parts possess.

I think of it as the sum being a direct reflection of the parts together. Nothing more nor less -- definitely not in a negative way. It's a simple fact that we are made up of elements of the universe.

And I am positive that the breaking of humans down and then rebuilding them is within the scope of mankind. We have one up on evolution in that we can observe and reason, whereas the universe just had gradual automatized morphing. I don't even scoff at your teleportation example. Why would you consider these things dubious? All of the technology that we have created would have been considered dubious by some people at one point in history.


I think of it as the sum being a direct reflection of the parts together. Nothing more nor less -- definitely not in a negative way. It's a simple fact that we are made up of elements of the universe.

So would you describe us or our consciousness as a "mechanism"? I don't think the facts are so simple, there is something more. Elements are not animated, self-replicating or volitional. There is some emergent property in the whole that does not exist in the parts.

And I am positive that the breaking of humans down and then rebuilding them is within the scope of mankind. We have one up on evolution in that we can observe and reason, whereas the universe just had gradual automatized morphing. I don't even scoff at your teleportation example. Why would you consider these things dubious?

I'm not sure how you can be "positive" (as in "certain") about either of these propositions; you called them "highly speculative" before. There is something in the arrangement of the parts, particularly after we have acquired knowledge, something new that didn't exist when we were born, that would be destroyed by such a process. But I'm not much interested in discussing it further; I'll just agree with your former statement and say that I find it highly speculative.

Do you agree that concept formation is a volitional process? And that that is something that will never be duplicated by a computer (knowing the definition of what a computer is and does)? That a volitional process is something a non-volitional mechanism cannot (by its nature) perform?


I don't know if a being forced to form perfect concepts would be better off than one that wasn't. After all, mankind has evolved because ancient humans were willing to prioritize what was important for short-term survival. Given some sort of instinctual imperative not to ignore facts, one might be paralyzed by the very first problem one came across, trying to glean the truth about it while more pressing matters were at hand. For instance, ancient man recognized weather patterns and came up with various good and bad theories about them, I am sure. But he could not sit around and be a meteorologist, or develop science, or even the discipline of admitting he did not know. He had to accept his prejudices and develop useful concepts, like how to hunt animals and which foods were good for him.

I believe the relationship between volition and rationality is one of resource management.


My position is that a strong general artificial intelligence is possible because natural intelligence exists. If the same causes can be brought about then the same effects will ensue, even the emergent effects. Such a creation will not be a computer as we define it, so the commercial and utilitarian justification for creating it in the first place won't apply (it won't be as reliable as a computer, or as transparently understandable and fixable when it makes errors).

Edited by Grames

Elements are not animated, self-replicating or volitional.

And yet, we are made up of elements!

That a volitional process is something a non-volitional mechanism cannot (by its nature) perform?

I would put it: volition is just another arrangement of the universe to be learned by man... the universe knowing itself through volition, again.

Such a creation will not be a computer as we define it, so the commercial and utilitarian justification for creating it in the first place won't apply (it won't be as reliable as a computer, or as transparently understandable and fixable when it makes errors).

Don't forget about the curious humans who do things just because they want to see and find out.


So would you describe us or our consciousness as a "mechanism"? I don't think the facts are so simple, there is something more. Elements are not animated, self-replicating or volitional. There is some emergent property in the whole that does not exist in the parts.

That's true for my laptop as well. A mechanism is more than the sum of its parts.

Right, we don't have to do it, we must choose to do it or not. We only have to do it if we want to survive. It is an interesting fact that we are the only animals that form concepts. Why do you think that is?

The other animals that could do it (Homo erectus, Homo neanderthalensis, etc.) went extinct, as a result of climate change and the superior adaptability of our species.

There are many different ways to solve the same math problem (as an example).

Again, just as an example, think of a computer programmer actually creating a new algorithm: there are many different ways to accomplish the same goal in computer programming.

In both cases, there is only one way that is the most efficient. That one way is the right way. Choosing the second best way is irrational.

I should of course add "within the context of one's knowledge" to all the sentences above. I didn't bother, because that's understood. A computer (or a piece of software, to cut to the chase - the computers that could think like we do already exist; what doesn't is the software) that has no volition would also operate within the context of its knowledge.

Computers can catalog information into the programmer's categories, but they can't create new categories and they can't create new algorithms; creation is a volitional process.

Computers can write algorithms. An algorithm is just a list of instructions that processes input a certain way. Writing software that spits out lists of instructions is a piece of cake. I can do it in ten minutes, and you'll have an endless stream of algorithms, none of which have ever been written by a human being before.

Moreover, give me a set of relatively simple mathematical functions, and I can write software that automatically writes one or many algorithms that calculate any one function of the set. I can make it as smart or as stupid as you'd like. I can have it instruct you to calculate any exponential function in a few steps, 100, or a random number between the two. So computers can also write algorithms that have a purpose.
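The claim that software can emit purposeful algorithms is easy to demonstrate. A minimal sketch, assuming the exponential-function example above: the generator emits a list of instructions computing x**n by repeated multiplication (the scheme and names are chosen arbitrarily for illustration).

```python
def write_power_algorithm(n):
    """Emit a list of instructions (an "algorithm") that computes
    x**n by repeated multiplication -- instructions no human wrote."""
    steps = ["result = 1"]
    for _ in range(n):
        steps.append("result = result * x")
    return steps

def run_algorithm(steps, x):
    """Execute the emitted instructions on a concrete input."""
    env = {"x": x}
    for step in steps:
        exec(step, {}, env)
    return env["result"]

algo = write_power_algorithm(3)
print(run_algorithm(algo, 2))  # -> 8
```

The number of emitted steps can be made as short or as long as you like, which is the "a few steps, 100, or a random number between the two" point in the post.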

The reason why computers can't do that for most real life problems is because it makes no sense to teach them: whenever there's a problem to be solved, it's much easier to just solve it, than to formalize every single element of the problem (and its context), so that software (a mathematical entity) can deal with it.

The problem isn't getting computers to be creative. The problem is translating a problem and its full context into a well defined mathematical language they can use as input (formalizing it). From that point on, having a computer come up with algorithms is nothing.

Such a creation will not be a computer as we define it, so the commercial and utilitarian justification for creating it in the first place won't apply (it won't be as reliable as a computer, or as transparently understandable and fixable when it makes errors).

So its usefulness depends on how complex a set of problems it can solve, and how many errors it makes solving them. Albert Einstein's mind wasn't as reliable as a computer, and it definitely wasn't transparently understandable or fixable when it made errors. But it was still pretty damn useful.

And unlike with Albert, we just need to get this right once, and then we can replicate the best version over and over again. All the other ones we can throw away.

Edited by Nicky


He isn't talking about a bunch of magnets aggregating into an algorithm that solves itself. He is talking about the creation of a consciousness that is made up of unusual materials. It would have to have all the properties of consciousness, which means it would have to be allowed to develop ideas on its own, with the ability to choose to make mistakes, take things for granted, and take shortcuts.
