Objectivism Online Forum
William O

Stolen Concept Fallacies Outside of Philosophy


Stolen concept fallacies show up frequently in philosophy, but they are less common elsewhere for some reason. I'd like to use this thread to collect examples of stolen concept fallacies that don't involve philosophy. I think that doing this might help to illuminate the concept more.

I have seen one example of the stolen concept fallacy that didn't involve philosophy. Two people on Reddit were discussing the concept of a superorganism — an interacting community of smaller organisms, such as a termite mound. One of the posters in the discussion reasoned that every organism is really a superorganism, because every organism is composed of cells. This steals the concept of a superorganism, which was originally intended to distinguish communities of organisms from individual organisms composed of cells. When the term is used this way, the concept of a superorganism loses its original context and meaning.

What are some examples of stolen concept fallacies that you've come across outside of philosophy?


Thanks for the topic and the example, William. Here are some related ruminations.

Learning is defined in my 1976 AMERICAN HERITAGE DICTIONARY, as a noun, as “acquired wisdom, knowledge, or skill” and, as a verb, as “gaining knowledge, comprehension, or mastery of through experience or study.”

In PRINCIPLES OF NEURAL SCIENCE (2013, 5th edition, Kandel et al., editors), we have a definition consistent with the dictionary, but going beyond it:

“Learning refers to a change in behavior that results from acquiring knowledge about the world, and memory is the process by which that knowledge is encoded, stored, and later retrieved.” (1441)

In this encyclopedic, authoritative reference, the types of memory and what is known of their neural bases are presented. There is a section "Long-Term Memory Can Be Classified as Explicit or Implicit."

Implicit memory sections following that one often sound like stolen-concept talk. “Implicit memory stores forms of information acquired without conscious effort and which guide behavior unconsciously. Priming is a type of implicit memory . . . . Two types of priming have been proposed [conceptual priming and perceptual priming, with much evidence]” (1452)

Implicit memory is detailed further throughout the next Chapter titled “Cellular Mechanisms of Implicit Memory Storage and the Biological Basis of Individuality.”

On the face of it, there appears to be a stolen concept fallacy in that these tremendous advances are talked of as implicit memory when one is reporting physical and chemical changes in neurons in the nervous systems of animals not possessing consciousness. Memory would seem to be something that entails consciousness in our first conception of memory, yet today we talk of memory in such a thing as a snail. (Rand assumed that even insects have consciousness, but that is incorrect by our present lights, and I set it aside.)

In the April 1968 issue of THE OBJECTIVIST the brain researcher Robert Efron wrote: "The concept 'memory' depends upon and presupposes the concept of consciousness, cannot be formed or grasped in the absence of this concept and represents, within wider or narrower limits, a specific type or state of conscious activity." (This paper was reprinted, with adaptations, from one Dr. Efron had presented at a conference in philosophy of science the preceding year at Univ. of Pitt.) Efron argued that in the preceding 50 years, experimental psychologists had destroyed the concept of memory. Similarly for the concept of learning. Many of the instances of talk of memory at the time of his paper remain junk talk today, or rather, junk if taken literally. However, since that time, it looks to me that the extension of the concept of memory down into the neural processes of even animals not featuring any consciousness is not really a stolen concept. The loop back to the concept with consciousness in it is very long, setting our conscious brain within its developmental story, its evolutionary story, and the dependencies of specific conscious processes on specific unconscious processes, all among the neuronal activities. It seems to me this best, fullest story can be told without slipping into eliminative reductionism, and it is not a stolen-concept fallacy regarding memory or learning.


I see this all the time when people discuss the possibility of an above-human-level artificial intelligence (ASI, or artificial superintelligence). People who are scared of this AI say that we, as puny humans, "could never hope to understand its motivations." Yet very often, these same people will begin to discuss the actions of this ASI with an implicit understanding of at least some of its motivations — e.g., that it would take actions to sustain its existence, that it has the motive of self-preservation — even though they said that an ASI's motivations "could not be understood." So they steal the concept of "understanding" and smuggle it back in.

On 11/5/2017 at 7:51 AM, Boydstun said:

On the face of it, there appears to be a stolen concept fallacy in that these tremendous advances are talked of as implicit memory when one is reporting physical and chemical changes in neurons in the nervous systems of animals not possessing consciousness. Memory would seem to be something that entails consciousness in our first conception of memory, yet today we talk of memory in such a thing as a snail. (Rand assumed that even insects have consciousness, but that is incorrect by our present lights, and I set it aside.)

Insects and snails do seem to "remember," though. And people can be subconsciously primed or suggested. If this is not memory, is there a better term for it?

Edited by CartsBeforeHorses

13 hours ago, CartsBeforeHorses said:

I see this all the time when people discuss the possibility of an above-human-level artificial intelligence (ASI, or artificial superintelligence). People who are scared of this AI say that we, as puny humans, "could never hope to understand its motivations." Yet very often, these same people will begin to discuss the actions of this ASI with an implicit understanding of at least some of its motivations — e.g., that it would take actions to sustain its existence, that it has the motive of self-preservation — even though they said that an ASI's motivations "could not be understood." So they steal the concept of "understanding" and smuggle it back in.

Isn't that just a contradiction? It doesn't sound like they're stealing the concept of understanding, it sounds more like they are implying that they both can and cannot understand the ASI's motivations.

On 11/11/2017 at 7:58 AM, William O said:

Isn't that just a contradiction? It doesn't sound like they're stealing the concept of understanding, it sounds more like they are implying that they both can and cannot understand the ASI's motivations.

I think that every stolen concept fallacy leads to a contradiction: denying a concept while implicitly affirming that same concept. This might be just a regular contradiction, but I think it's still worth pointing out because of its pervasiveness in the tech sphere and beyond — see Sam Harris's TED Talk on the issue.

While I think that there are certain precautions that it's wise to take when dealing with a new and poorly understood technology, denying man's understanding of what he has built essentially renders him powerless, like some Maximum Overdrive scenario.
