Objectivism Online Forum

Evolving Metaphors That Try To Explain Human Intelligence.



Aeon has an essay by Robert Epstein, edited by Pam Weintraub, titled The Empty Brain. Part of the article describes how humans have used metaphor to seek clarity and understanding about intelligence over the last two millennia.

In his book In Our Own Image (2015), the artificial intelligence expert George Zarkadakis describes six different metaphors people have employed over the past 2,000 years to try to explain human intelligence.

In the earliest one, eventually preserved in the Bible, humans were formed from clay or dirt, which an intelligent god then infused with its spirit. That spirit ‘explained’ our intelligence – grammatically, at least.

The invention of hydraulic engineering in the 3rd century BCE led to the popularity of a hydraulic model of human intelligence, the idea that the flow of different fluids in the body – the ‘humours’ – accounted for both our physical and mental functioning. The hydraulic metaphor persisted for more than 1,600 years, handicapping medical practice all the while.

By the 1500s, automata powered by springs and gears had been devised, eventually inspiring leading thinkers such as René Descartes to assert that humans are complex machines. In the 1600s, the British philosopher Thomas Hobbes suggested that thinking arose from small mechanical motions in the brain. By the 1700s, discoveries about electricity and chemistry led to new theories of human intelligence – again, largely metaphorical in nature. In the mid-1800s, inspired by recent advances in communications, the German physicist Hermann von Helmholtz compared the brain to a telegraph.

Each metaphor reflected the most advanced thinking of the era that spawned it. Predictably, just a few years after the dawn of computer technology in the 1940s, the brain was said to operate like a computer, with the role of physical hardware played by the brain itself and our thoughts serving as software. The landmark event that launched what is now broadly called ‘cognitive science’ was the publication of Language and Communication (1951) by the psychologist George Miller. Miller proposed that the mental world could be studied rigorously using concepts from information theory, computation and linguistics.

This kind of thinking was taken to its ultimate expression in the short book The Computer and the Brain (1958), in which the mathematician John von Neumann stated flatly that the function of the human nervous system is ‘prima facie digital’. Although he acknowledged that little was actually known about the role the brain played in human reasoning and memory, he drew parallel after parallel between the components of the computing machines of the day and the components of the human brain.

Propelled by subsequent advances in both computer technology and brain research, an ambitious multidisciplinary effort to understand human intelligence gradually developed, firmly rooted in the idea that humans are, like computers, information processors. This effort now involves thousands of researchers, consumes billions of dollars in funding, and has generated a vast literature consisting of both technical and mainstream articles and books. Ray Kurzweil’s book How to Create a Mind: The Secret of Human Thought Revealed (2013), exemplifies this perspective, speculating about the ‘algorithms’ of the brain, how the brain ‘processes data’, and even how it superficially resembles integrated circuits in its structure.

The article contains passages regarding the brain, as well as memory, drawn from neuroscience, cognitive science, psychology and more.

Edited by dream_weaver

Great article.  It dovetails with some of the points raised recently in the thread "Is this about right?"

From the Aeon article:

Two determined psychology professors at Leeds Beckett University in the UK – Andrew Wilson and Sabrina Golonka – include the baseball example among many others that can be looked at simply and sensibly outside the IP framework.

These are the two scientists whose website I provided a link to in the other post.


That was one of the more recent threads that came to mind when I stumbled across it. There have been a few dealing with artificial intelligence that came to mind as well.

Another parallel that has some resonance:

The information processing (IP) metaphor of human intelligence now dominates human thinking, both on the street and in the sciences. There is virtually no form of discourse about intelligent human behaviour that proceeds without employing this metaphor, just as no form of discourse about intelligent human behaviour could proceed in certain eras and cultures without reference to a spirit or deity. The validity of the IP metaphor in today’s world is generally assumed without question.

and from Chapter 2 of The Romantic Manifesto:

SINCE religion is a primitive form of philosophy—an attempt to offer a comprehensive view of reality—many of its myths are distorted, dramatized allegories based on some element of truth, some actual, if profoundly elusive, aspect of man's existence. One of such allegories, which men find particularly terrifying, is the myth of a supernatural recorder from whom nothing can be hidden, who lists all of a man's deeds—the good and the evil, the noble and the vile—and who confronts a man with that record on judgment day.

That myth is true, not existentially, but psychologically. The merciless recorder is the integrating mechanism of a man's subconscious; the record is his sense of life.

The parallel is drawn from another point suggested in the article:

Just over a year ago, on a visit to one of the world’s most prestigious research institutes, I challenged researchers there to account for intelligent human behaviour without reference to any aspect of the IP metaphor. They couldn’t do it, and when I politely raised the issue in subsequent email communications, they still had nothing to offer months later. They saw the problem. They didn’t dismiss the challenge as trivial. But they couldn’t offer an alternative. In other words, the IP metaphor is ‘sticky’. It encumbers our thinking with language and ideas that are so powerful we have trouble thinking around them.

From The Romantic Manifesto, one could draw that an allegory is so entrenched (one of the ways philosophy 'sets' the course of human history) that it is difficult to separate what the integrating mechanism of man's subconscious is doing from whatever metaphor(s) is(/are) guiding the researchers.

In my struggle to come to terms with the nuances of Objectivism, Rand's view on art/concepts has to be one of the most contentious. Epstein and Weintraub's article provides some indicators as to why this is so.

PS: I didn't note from reading the article that the two scientists you mentioned were the same. Until it was drawn to my attention, that fact had eluded me.

Edited by dream_weaver

2 hours ago, New Buddha said:

Great article.  It dovetails with some of the points raised recently on the post "Is this about right"?

I think there are some MAJOR errors, including implicit radical behaviorism, overly broad generalizations, and practically denying conceptual thought entirely. It's a terrible article, if for no other reason than it totally straw-mans what a representation is. Buddha, it's not proposing a theory of perception, it's proposing that the mind is merely reacting to the world and there are no concepts - period.

Even the part about metaphors is bad. Yes, the brain does process information, not metaphorically. There are in fact concepts we form that are in SOME sense retrieved. No, it's not LOSSLESS memory, but you wouldn't theorize that it would be. There are constraints on how much information can be there; we'd expect that. A good theory tries to explain why this is so, and one really good idea is that a concept is only some of the information. Your mind doesn't need to know ALL the details of a dollar bill for it to represent dollar bills in a useful way. Rectangular, 1's on the corners, George Washington's portrait; this is plenty. Also, this doesn't need to be a visual representation necessarily. Absurdly enough, the writer uses this as a reason to deny representations ENTIRELY.

 

 


Funny, I took the two dollar bill representations to be implicitly in support of conceptualization.

The first drawing of a dollar bill pictured was drawn from a conceptual referent. The second drawing was made with an actual, perceptual dollar bill present.

I found the overall article to be against the computer metaphor for human intelligence, in general, rather than affirming or denying representationalism.


There is a lot going on in the Article, and it is somewhat confusing.  Here's how I break it down.

1.  In a recent post I responded to Boydstun's mention of George Lakoff, and I said something along the lines that Lakoff is very much about "Language as Metaphor".  This means, in essence, to Lakoff and others who share his opinions, that all thought is in the form of metaphor, and more specifically, it is metaphor tied to our body's presence/movement in the environment.  Our language is replete with metaphors.  We say such things as, "You make a good point", or when we are discussing something with a person we might say, "I'm starting to see where you are coming from".

2.  However, Von Neumann, Turing, etc., were in no way being "metaphorical" at all in their proposal(s) that the Mind does exactly what computers do.  Or, to put it another way, that computers can be made to do what the mind does.  Their positions tie in to Analytic Philosophy, Formal Logic, the Vienna Circle, Logical Positivism, Gödel, etc., but they have nothing to do with "thinking as metaphor".  They do have to do with Representationalism and Computationalism (understanding that there are various definitions for these, of course).

3.  The Author of the article is an anti-Representationalist, or at least sympathetic to that view, because he states:

My favourite example of the dramatic difference between the IP perspective and what some now call the ‘anti-representational’ view of human functioning involves two different ways of explaining how a baseball player manages to catch a fly ball ....

And then there is the reference to Chemero, and the two scientists on the web site that I also linked to.

4.  I'm not certain why the Author of the article spent so much time discussing metaphors, when the only anti-representationalist people he mentioned are also anti-language-as-metaphor.

Edited by New Buddha

6 hours ago, dream_weaver said:

Ray Kurzweil’s book How to Create a Mind: The Secret of Human Thought Revealed (2013), exemplifies this perspective, speculating about the ‘algorithms’ of the brain, how the brain ‘processes data’, and even how it superficially resembles integrated circuits in its structure.

 

As an aside - does anyone know a good starting point for learning about Markov Models? I'm in the middle of reading that book; I won't be able to understand most of it until I understand those, and the only online information I've found has mostly consisted of unintelligible scribbles.
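Not a substitute for a proper tutorial, but the core idea is small enough to sketch in a few lines of Python. The states and probabilities below are made up purely for illustration: a Markov model just says the probability of the next state depends only on the current state, not on the whole history. (Kurzweil's book is about hidden Markov models, which add a layer of observable outputs on top of hidden states, but the chain below is the foundation.)

```python
import random

# Hypothetical weather model -- states and probabilities are invented
# for illustration only.
TRANSITIONS = {
    "sunny": {"sunny": 0.8, "rainy": 0.2},
    "rainy": {"sunny": 0.4, "rainy": 0.6},
}

def next_state(current, rng=random):
    """Sample the next state from the current state's transition row."""
    r = rng.random()
    cumulative = 0.0
    for state, p in TRANSITIONS[current].items():
        cumulative += p
        if r < cumulative:
            return state
    return state  # guard against floating-point rounding

def sequence_probability(states):
    """Probability of a sequence of states, given we start in states[0].
    The Markov property lets us just multiply one-step transitions."""
    p = 1.0
    for a, b in zip(states, states[1:]):
        p *= TRANSITIONS[a][b]
    return p

print(sequence_probability(["sunny", "sunny", "rainy"]))  # 0.8 * 0.2 = 0.16
```

Once that clicks, a hidden Markov model is the same machinery plus an emission table (each hidden state produces observable symbols with some probability), and the standard algorithms (forward, Viterbi) are just bookkeeping over these products.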

Edited by Harrison Danneskjold
Clarity

Quoting the article:

[Our brains do not have]: information, data, rules, software, knowledge, lexicons, representations, algorithms, programs, models, memories, images, processors, subroutines, encoders, decoders, symbols, or buffers – design elements that allow digital computers to behave somewhat intelligently.

 

My mind indisputably has information, data, knowledge, lexicons, representations, memories and images (and all the rest in a more restricted sense), and my mind is what my brain does.

If you took apart a calculator you'd find lots of wires, batteries and capacitors, but you would not find a single number. Analogously, when we dissect brains we can only see a bunch of 'wires' (in a sense); yet those wires do process information at least as much as a calculator does.

 

No, the "information processing" metaphor isn't very good. Precisely what "information" is, is left so far open that it might include anything at all (which makes it a lousy explanation). Some metaphor is necessary for grappling with this problem, though, and I have yet to see a better alternative. It's certainly light-years ahead of the "spirit of God" or hydraulic theories.

 

In any case, the statement that "there is no knowledge in any human brain" is extremely false.


9 hours ago, Harrison Danneskjold said:

 

My mind indisputably has information, data, knowledge, lexicons, representations, memories and images (and all the rest in a more restricted sense) and my mind is what my brain does

...

No, the "information processing" metaphor isn't very good. Precisely what "information" is, is left so far open that it might include anything at all (which makes it a lousy explanation). Some metaphor is necessary for grappling with this problem, though, and I have yet to see a better alternative. It's certainly light-years ahead of the "spirit of God" or hydraulic theories.

What makes it a metaphor? I would say a mind literally processes information. Just because the mind doesn't work like a Turing Machine doesn't mean information is not processed. You already seem to agree, so I don't see why you say information processing is a metaphor. Still, you're right that information has not been formally defined for the mind; it's not possible to measure how much information a mind has except vaguely, as "knowing a lot". But you don't need to be able to PRECISELY measure all aspects of something for it to be real. It's not as if you need to measure wavelengths to know that color is real. Likewise, you don't need a future discovery of "brainbytes" to say the mind has information.
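For contrast, it may help to see what a formal definition of information looks like where one does exist. Shannon's entropy measures the average information carried by a source of outcomes with known probabilities; this is a minimal sketch of that formula, not a claim that anything analogous exists for mental content:

```python
import math

def shannon_entropy(probs):
    """Average information per outcome, in bits: H = -sum(p * log2(p))."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair coin toss carries exactly 1 bit per outcome.
print(shannon_entropy([0.5, 0.5]))               # 1.0
# A heavily biased coin carries less: its outcomes are more predictable.
print(round(shannon_entropy([0.9, 0.1]), 3))     # 0.469
```

The measure applies to signals and codes with well-defined outcome probabilities; nobody has produced an analogous measure for concepts, which is exactly the gap noted above.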

Information is abstract; it's not supposed to be a "thing". The author of this article is making an empiricist error, where anything conceptual is unreal or meaningless if it isn't concrete - ALL concepts are floating abstractions to this person; NOTHING is real unless you can measure it exactly. The author labels it all a metaphor because you can't see 1s and 0s, then safely rejects all use of information in discussing the mind. "Metaphor" here is a kind way to say "illusion".

There's a good reason Logical Positivism was big at the same time Behaviorism was big - they feed off each other. The only real psychological phenomena are behaviors, because you can precisely measure them. Mental states? Totally imaginary! Meaningless! This article is an attempt to bring back Behaviorism. In the process, it makes claims as a Logical Positivist might.

" As we navigate through the world, we are changed by a variety of experiences. Of special note are experiences of three types: (1) we observe what is happening around us (other people behaving, sounds of music, instructions directed at us, words on pages, images on screens); (2) we are exposed to the pairing of unimportant stimuli (such as sirens) with important stimuli (such as the appearance of police cars); (3) we are punished or rewarded for behaving in certain ways. "

Just read that. It's not saying this is SOME of the story. The claim is that this is the WHOLE story essentialized.


16 hours ago, Eiuol said:

What makes it a metaphor? I would say a mind literally processes information. Just because the mind doesn't work like a Turing Machine doesn't mean information is not processed.

Sorry; I was using "the IP metaphor" as shorthand for "the computer/brain metaphor". My bad.

It is a vacuous truth that the brain "processes information", because the claim is composed of vacuous concepts.

16 hours ago, Eiuol said:

Just read that.

No arguments here.

 

The article made one very good point: that our computing metaphor is flawed. It then went on to declare that 'none of us are born with, nor can we ever develop (among other things) knowledge', which was precisely where I stopped reading.

"We know that we can know nothing" has already been done more thoroughly than the Kardashians.


18 hours ago, Harrison Danneskjold said:

...which was precisely where I stopped reading.

 

Which was the 6th paragraph of the Article.  So, you felt compelled to comment on an article of which you read about 10%?

You should have read the next paragraph:

"We don’t store words or the rules that tell us how to manipulate them. We don’t create representations of visual stimuli, store them in a short-term memory buffer, and then transfer the representation into a long-term memory device. We don’t retrieve information or images or words from memory registers. Computers do all of these things, but organisms do not."


On 8/15/2016 at 9:16 AM, Eiuol said:

" As we navigate through the world, we are changed by a variety of experiences. Of special note are experiences of three types: (1) we observe what is happening around us (other people behaving, sounds of music, instructions directed at us, words on pages, images on screens); (2) we are exposed to the pairing of unimportant stimuli (such as sirens) with important stimuli (such as the appearance of police cars); (3) we are punished or rewarded for behaving in certain ways. "

Just read that. It's not saying this is SOME of the story. The claim is that this is the WHOLE story essentialized.

You are looking for - and hoping to find - "Mystics of the Spirit and/or Mystics of Muscle", just as Harrison stopped reading after 6 paragraphs.  Everyone not Ayn Rand or Leonard Peikoff must be anti-mind and anti-life, huh?

Sigh...

 

Here is a good paper that addresses some of the issues raised in the above quote:  Movement to Thought.  It would help to do a quick Wikipedia search on the brain regions in bold text.

Extracts:

This paper will consider the neuroanatomic underpinnings of the ways in which sensorimotor interaction supports procedural and semantic, declarative knowledge.

[....]

It has been estimated that much of what we do, perhaps 95% of an adult human's activity or behavior is routine and automatic, outside of voluntary conscious cognitive awareness.

[....]

Such automatic behaviors cannot possibly be accounted for by a model of cognitive control that depends upon serial-order processing. Within this traditional (and still popular and clinically applied) model, three processes are central: first, we perceive; next, we “think” to formulate or choose a response; last, we act.

[....]

The ventral pathway provides information for action or behavioral selection by biasing potential actions with information about reward value associated with object identity. This behavioral biasing (which is essentially a form of anticipating or predicting the outcome of a behavior) includes information from basal ganglia reward centers and regions of the prefrontal cortex that predict reward outcomes [56–59]. Several potential actions are available in most situations. Accordingly, these potential activities, choices or decisions are reflected over large portions of the cerebral cortex. Decision making is thus not strictly localized within the prefrontal cortex. Instead, it is found within the same sensorimotor circuits that are responsible for processing sensory information, associating information with reward value, and programming and executing the associated actions. This organizational profile allows individuals to engage in a level of adaptive functioning that is characterized by automatic behaviors alternating with episodes of higher-order control. In an unpredictable environment, the adaptive value of such flexibility cannot be overemphasized.

Within this model, “cognition” is not separate from sensorimotor control [34, 60]. In fact, sensorimotor control is primary and paramount, while conscious cognitive control becomes subordinate. The final “decision,” or the selected action/behavior, is an outgrowth of cortical-basal ganglia interactions. While behaviors result in overt feedback from the environment, action is undertaken in concert with anticipated or predicted feedback through the cerebellum, which appropriately adjusts the behavior through context-response linkage [61]. This model emphasizes sensorimotor interaction and the brain's inherent capacities to predict and anticipate. This allows behavior to occur in “real time” in a manner that is not easily explained through a static “perception-cognition-action” model. As Kinsbourne and Jordan note, anticipation is continuous and ongoing; the ability to anticipate is an inherent design characteristic of the brain.

 

Edited by New Buddha

Buddha, I read the entire article. As I said, it is denying representations wholesale. It's not saying "a lot of behavior is routine"; it's saying there's NO such thing as a representation in the mind. The paper you linked isn't about that. Representations are not incompatible with embodied cognition, but the paper isn't about representations anyway...

What does your first bit matter? I was pointing out, with the part I quoted, that the article is based on radical behaviorist ideas.

The only thing it really got -right- is that predicting movements of objects doesn't use a visual representation. It's probably wholly non-representational, related to eye movements; I studied it myself at a lab I worked at. But this doesn't offer evidence against representations in general.

How would you define a representation? And what are concepts if they're not representations? I don't mean the neuroanatomic underpinnings, I mean their content.


If the Author were pushing a Behaviourist approach, then he would hardly reference the scientists that he did (Chemero, Wilson and Golonka).  I, too, read the article, and it never even crossed my mind to think "behaviorism".

Be that as it may....

Regarding anti-Representationalism.  Here's where my thinking is at.  I'm bulleting my points, but not necessarily in any precise order.

1. We engaged in a post some time back where it was explored how little behavior is actually "instinct".  If I recall, you and I both pretty much agreed on that.  The vast majority of an animal's repertoire of behaviors are learned (through a combination of observation, play and nurturing).  This includes, not only hunting and migration, but even such mundane things as sitting in a chair, going up and down stairs, skipping rocks across the surface of a pond, shooting a basketball, etc.

2.  These behaviors are "stored" (per my understanding, in the basal ganglia/cerebellum structure), and are inhibited from acting.  Meaning that they are "primed" to act at a moment's notice without conscious top-down command.  Through a combination of attention and automated behavior (per the Paper that I linked to), we "act" in a way that is both intentional and automatic.  People with Parkinson's and Tourette's have damaged basal ganglia and have trouble controlling behavior.

3.  One of the most insightful points that Rand makes in ITOE is that to complete the conceptualization process, a concept must be given a word.  A word is a concrete, perceptible existent.  Obviously this includes sound and sight, but in order to form words, we must also learn to exercise extremely fine muscle control of the anatomical parts that go into making speech.

4.  Words - as motor memory - are stored in the basal ganglia/cerebellum region -- just like learning to shoot a basketball, climb stairs, etc.  This I took from The I of the Vortex, and other various papers.

5.  When we hear someone speak, a cascade of memory and meaning automatically floods into our attentive sphere of awareness, in much the same way that Steph Curry can, in an instant, pull up and shoot a 3-pointer.  Even when we read silently to ourselves, or are just subvocally thinking, we actually make micro muscle movements, and we "hear" words.

6.  Rand also touches on Subitizing, with the "crow" story.  Our attentive mind can never attend to more than a few things at any one time.  We are always "crows".  This is true even for Einstein standing in front of a blackboard writing down equations.

7.  What humans learn to do, however, is not just automatically subitize 3 pennies when we see them; we also learn to "subitize" extremely complex concepts - in their perceptible form (written words, sketches, computer code, graphs, etc.).  And this takes years of training.

8.  We learn to "externalize" our thought process via writing, mathematics, code, sketches, etc.  These formal processes, which we learn over many years and hand down generationally, are what algorithms are.

9.  The reason we memorize our multiplication tables is so that we don't have to "think" when we see 7 x 7 =  .  The answer (via motor memory) floods our attention.  The same goes, after 5 years of college, for an engineer seeing a beam diagram.  This is a combination of both intentional and automatized behavior (again, the Paper is a good start to understanding this interplay).

10.  We are always, and at all times, "crows".  I'm being deliberately provocative with that statement, so please understand the context.  It is meant to emphasize the procedural and temporal nature of thought.  I may "know" that I know a lot about a subject like football, but it's a different thing entirely to speak of all that I know.  The knowledge of "football" is not stored on a hard drive in my brain, and no search algorithms are run to access it.  It takes time to speak, and there are many ways of saying roughly the same thing.  If you recount an experience to a friend, and then at a later time to another friend, the gist of what you say will be pretty much the same, but the actual words used, and their sequence, will not.

11.  Two computer programmers, tasked with writing a CAD program, will write different lines of code in different sequences, etc., because thinking is a procedural process.  Same for two Architects designing a building.

 

 

Edited by New Buddha

True, the author isn't literally espousing the radical behaviorism of B.F. Skinner. But my claim is that this radical embodied cognition is so externalist that it is still effectively behaviorist: an individual is entirely formed by his environment, and mental states lack causal efficacy. No, this exact claim isn't made; the issue is that there is explicit DENIAL of factors like concepts as stores of knowledge that one has distinct access to and that are non-perceptual (this would be a representation). This would be fine for biology, where there is no mental work going on.

1. Yes, learned, that's why I deny instinct per se. These behaviors need not be representational, particularly as motor behaviors.

2. Yes, SOME behaviors or even just parts of behavior are bottom-up. Some behaviors are not. So, this isn't against representations.

3. So what that muscle control helps to make speech? Besides, it doesn't mean words lack representation.

4. But words aren't motor memory PER SE as there's content within a concept. A word is not only your outward external speech, it's also mental content. Unless of course one denies mental content.

5. A cascade of memory and meaning? Sure, you don't necessarily SELECT meaning at all times, but mental life isn't all a "cascade" that "floods". There is plenty to say about the organizational aspect of which words we think of, relation to other words, and there's even recall of meaning. It's not some sudden "release" of behavior from motor memory and nothing else.

6. Subitizing is non-conceptual; it is at best a foundation for chunking. Subitizing is an immediate intuition about numerosity - not intuition so much as automatic, requiring no use of concepts. Even babies subitize; no concept of number required. For remembering items of a sequence, subitizing doesn't apply - that works a little differently, since subitizing is about perceptual content in the moment.

7. Right, "subitizing" isn't subitizing. You took the metaphor too far.

8. Yes, externalizing is cognitively important. This isn't against representations.

9. Rote memory (e.g. motor memory from repetition) does not work well for creative thought. Besides, representations work just as well for what we do in creative thought.

10. Procedural knowledge AND abstract knowledge are both needed. Both exist.

11. I don't get this point.

Basically, none of your points makes representations impossible. Embodied cognition is great, for sure. The problem is the RADICAL embodied cognition of this article, in which representations aren't real - they're imaginary. I plan to read Chemero's book, but I am skeptical that it's a good theory.

 

Edited by Eiuol

I'll post a little more later, but in the meantime, I can't remember if I provided this link to you or not.

Edit:  In my bulleted post, I should also have emphasized anti-Computationalism.

Edit 2:  The way I'm using the terms, it's possible to have Representationalism without Computationalism, but not the other way around.

Edit 3:  What computations we do are externalized - since our working/attentive memory is no different than a crow's.

Edited by New Buddha

Parts of What is Consciousness For brought back recollections of Binswanger's latest book with regard to consciousness' role, enabling its possessor to move about in its environment.

The usage of the term volitional takes on connotations different from Rand's delineation of volitional consciousness, as she writes about how it pertains to a conceptual consciousness.

I think of how goal-oriented action gets refined, from a junkie in pursuit of a high to why destructive behavior is not a life-promoting virtue, i.e., not in one's rational self-interest. Applying volition to the course of flight taken by a bird, or a fox meandering through the woods, seems a disservice to the benefits of using it to describe 'throwing the mental switch' to focus.

 

Reviewing the article referenced in the OP, it was the metaphors cited that caught my attention. The hydraulic one seemed familiar, while the others shed some insight on why other hypotheses "grow legs", so to speak. Not lost on me was the irony computed by the article's processor to be used as an Esc key in its closing paragraph:

We are organisms, not computers. Get over it. Let’s get on with the business of trying to understand ourselves, but without being encumbered by unnecessary intellectual baggage. The IP metaphor has had a half-century run, producing few, if any, insights along the way. The time has come to hit the DELETE key.

 


47 minutes ago, dream_weaver said:

Parts of What is Consciousness For brought back recollections of Binswanger's latest book with regard to consciousness' role, enabling its possessor to move about in its environment.

If you mean Binswanger's book How We Know, he directly references Pierson's paper in the book.  And, as an aside, as I mentioned in a previous post, after having read a good deal of William James, I became convinced that Rand must have been influenced by him in some way.  This led me to Pierson.  It was one of those happy affirmations that happen every now and then in life.

Edit:  James's book The Principles of Psychology can be separated from his essays on Pragmatism and Radical Empiricism.  The PoP is a compendium of information that he collected as a professor at Harvard (and his own ideas, of course) and represents the "state of knowledge" in psychology prior to Freud and Watson/Skinner - i.e. before introspection became a four-letter word, and before the belief that we are driven by hidden subconscious urges.  It's a very scientific account of how the brain/senses were understood to work in the late 1800s.  It's still very valuable and insightful.

Edit 2:  The point of the paper is that the only job of the Central and Peripheral Nervous System (which, of course, includes the brain) is to coordinate muscles for movement.  That is the only thing that it can do.  Our thoughts and ideas, concepts and words, etc. are not "dis-embodied".  The sympathetic nervous system actually has more neurons than the brain, and is largely involved in control of internal organ functions that happen whether we are awake or not (we sleep 1/3 of our lives).

Edited by New Buddha

1 hour ago, New Buddha said:

Edit 3:  What computations we do are externalized - since our working/attentive memory is no different from a crow's.

I don't see why this implies anti-computationalism. There are birds that remember a great many places for hiding food, and while it is possible to externalize behaviors, motor memory on its own doesn't account for this. A number of animal studies have been done to demonstrate computationalism by measuring the errors animals make in their own measurements. The way a flea jumps, for example, can be manipulated by altering how it perceives its environment. That is, fleas use parallax to determine how far to jump, but interfering with that makes a flea attempt a jump and fail in a predictable way. It's not some rote memorization here, brought about by the external world failing to adapt. Rather, a computation works as intended but goes wrong only in the sense that it had no way to measure the fancy manipulations scientists use to confuse fleas. I forget if it was a flea, but it was a similar bug. See this book: https://mitpress.mit.edu/books/organization-learning
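To make the parallax point concrete, here's a minimal sketch (mine, not taken from the studies mentioned; the function name and numbers are purely illustrative) of how a parallax-based distance estimate fails predictably when the perceived angle is manipulated:

```python
import math

def parallax_distance(baseline, angle_rad):
    # Motion parallax: shift the viewpoint by `baseline` and
    # measure the angle the target appears to sweep; distance
    # falls out of the triangle's geometry.
    return baseline / math.tan(angle_rad)

# True geometry: target 10 cm away, viewpoint shift of 1 cm.
true_distance = 10.0
baseline = 1.0
angle = math.atan(baseline / true_distance)

# With honest input, the computation recovers the distance.
estimate = parallax_distance(baseline, angle)   # ~10.0

# If an experimenter optically halves the apparent sweep, the
# same computation runs correctly on bad input and overshoots
# by a predictable factor (~2x here) - an error pattern that
# rote motor memory alone would not produce.
fooled = parallax_distance(baseline, angle / 2)  # ~20.05
```

The point is that the error is systematic: exactly what you would expect if a fixed computation were being fed a distorted measurement.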

But since you are only saying computationalism is wrong while representationalism is fine, this is at least plausible. I don't think it bears out, though, and it isn't the same claim the article makes.


38 minutes ago, New Buddha said:

If you mean Binswanger's book


Yes, How We Know is the book to which I referred.

You keep bringing up William James. I've recently listened to the History of Philosophy course again; Peikoff clearly paints William James squarely in the pragmatist camp. I also have W.T. Jones's A History of Western Philosophy series here. It has been over a decade since I went through it.

Plato influenced Rand too. In a letter to Isabelle Patterson she wrote:

I am reading a long, detailed history of philosophy [by B. A. G. Fuller]. I'm reading Aristotle in person and a lot of other things. At times it makes my hair stand on end—to read the sort of thing those [non-Aristotelian] "sages of the ages" perpetrated. And I think of you all the time—of what you used to say about them. It's actually painful for me to read Plato, for instance. But I must do it. I don't care what the damn fools said—I want to know what made them say it. There is a frightening kind of rationality about the reasons for the mistakes they made, the purposes they wanted to achieve and the practical results that followed in history. When I'm in New York, I would like to talk to you about philosophers and help you to curse them.

Regarding William James, she clearly states in Vol. III, No. 16, May 6, 1974 of The Ayn Rand Letter:

This philosophy was pragmatism, its leading exponents were William James and John Dewey, and its message to a nation on the threshold of abandoning the fundamental principles of the Founding Fathers, was: There are no principles.

Or in Vol. III, No. 16, May 6, 1974 of the same:

William James—characteristically, although not consistently—adopts the personal version of subjectivism. Human actions and purposes, he observes, vary from individual to individual—and, therefore, so does truth. To be true, states James, "means for that individual to work satisfactorily for him; and the working and the satisfaction, since they vary from case to case, admit of no universal description." "...the 'same' predication," writes the pragmatist F.C.S. Schiller, "may be 'true' for me and 'false' for you if our purposes are different."

The "may be 'true' for me and 'false' for you" is the citation she uses for him a couple of times in Philosophy: Who Needs It.

Her last reference to James is in The Art of Nonfiction, appropriately titled: Applying Philosophy Without Preaching It, where she leads with: "If you want to know how a pragmatist would properly propagandize . . ."

 

Edited by dream_weaver
citation added

You aren't telling me anything about Rand's opinion of James that I don't already know.  I just don't care about Rand's opinion.  I think for myself.  And she is wrong.  (Gasp, did I just say that?  I hope I'm not struck by lightning!)

I've been reading a good deal of Engels lately.  Since Rand doesn't like him, should I not read him as well?  Is it impossible to learn something from Engels?  Does understanding Engels, and his time, not help me place Rand and Objectivism in a broader perspective?  Can I read James and understand his influence on Phenomenology, Logical Positivism and Existentialism and thus gain a better understanding of 20th Century philosophy?

I read everybody.


1 hour ago, dream_weaver said:

You keep bringing up William James. I've recently listened to the History of Philosophy course again. Peikoff clearly paints William James squarely in the pragmatist camp.

I'm not trying to nit here, but....

The term Pragmatism was first used by Peirce in an 1878 article (published in Popular Science, no less) titled How to Make Our Ideas Clear, but it did not attract much attention.  James, in a lecture at Berkeley in 1898, made mention of it and continued to develop the central idea in a series of essays.  Due to James's standing in the intellectual community, it attracted attention.

James was not "squarely in the pragmatist camp".  James WAS the pragmatist camp.

But you cannot just look up the word "pragmatic" in the dictionary and believe that you understand the central ideas of Pragmatism (and James and Peirce did not even share the same philosophy).  What does looking up the word "objective" in the dictionary tell you about Objectivism?

Edited by New Buddha

2 hours ago, Eiuol said:

But since you are only saying computationalism is wrong but representationalism is fine, this is at least plausible. I don't think this bears out, though. It's not the same as the article, though.

That's not what I'm saying.  Both are wrong.  Computationalism implies Representationalism, but Representationalism (which is wrong) does not imply Computationalism. (I wish spell-check worked on these words, lol)

Computationalism means that there is a body of data, stored within the mind (Representationalism), upon which the "mind" works (Computationalism).

The point that I'm trying to make is that "computation" is done external to the mind, by the way of graphs, writing, equations, etc.

If I were to verbally SAY to you: tell me the sum of seventyfiveplusfourplusnineplussixplusfiftytwoplussevenplustwo - what is the answer? This would swamp your limited working memory. The way you would solve the problem is to externalize it, i.e., write down the numbers as I say them:

75
 4
 9
 6
52
 7
 2

Introspect on how you solve the problem. It entails a series of perceptual-based skills, including subitizing, chunking, rotation, etc. You don't "run a program." You solve the problem by a series of perceptual-based skills. Crow-like.
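For what it's worth, the contrast can be sketched in code. A toy illustration (mine, and not a cognitive model) of the difference between holding a running total in your head and the column-wise chunking that the written layout makes available:

```python
numbers = [75, 4, 9, 6, 52, 7, 2]

# Serial running total - the strategy that swamps working
# memory when the numbers arrive one at a time by ear.
serial_total = 0
for n in numbers:
    serial_total += n

# Column-wise chunking - what the written column externalizes:
# sum the tens place and the ones place separately, then join.
tens = 10 * sum(n // 10 for n in numbers)   # 70 + 50 = 120
ones = sum(n % 10 for n in numbers)         # 5+4+9+6+2+7+2 = 35
chunked_total = tens + ones                 # 155
```

Both routes give 155; the written column simply trades memory load for perceptual scanning.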

 

The way that the human brain sums the above numbers has nothing to do with how computers do so.  I'm not saying that YOU believe that the brain is just like a computer, so please don't respond as such.  I'm leading the discussion to a bigger point.  What I'm trying to demonstrate is that all problem solving is perceptual-based and automatized (by mature individuals), and that we graphically construct - externally - the means to solve problems that exceed the capacity of our working memory.

There is a reason why architects sketch; why composers score; why physicists write on blackboards.

Edited by New Buddha

On 8/16/2016 at 11:06 PM, New Buddha said:

Which was the 6th paragraph of the article.  So, you felt compelled to comment on an article of which you read about 10%?

Yes.

I read six paragraphs of the article, figured I got the gist of it and wanted to clarify the one good point it had made (which was surrounded with garbage).

What do I mean by "garbage"?

 

On 8/16/2016 at 11:06 PM, New Buddha said:

"We don’t store words or the rules that tell us how to manipulate them."

What is a "vocabulary" or a "grammar"? These things are indisputably stored in the brain somehow; if they were not, then brain damage could not cause us to lose them.

These statements are both false and self-contradictory (since they all boil down to "we know that we have no knowledge").

 

Now, I am not defending the "computing metaphor" except to say that I have yet to see a better one. That was an important point to make (which is why I felt compelled to dig it out). However, in all the (seven, now) paragraphs I know of, I have yet to see anything that would contradict my initial conclusion: one very good idea beneath a pile of garbage.

Or have I missed something?


On 8/17/2016 at 1:19 AM, New Buddha said:

You are looking for - and hoping to find - "Mystics of the Spirit and/or Mystics of Muscle".   Just like Harrison stops reading after 6 paragraphs.  Everyone not Ayn Rand or Leonard Peikoff must be anti-mind and anti-life, huh?

Sigh...

No; anyone who denies the fact of their own consciousness is literally being anti-mind. And I'm not looking for any kind of mystics; in fact, I had initially intended to comment on the article's one worthwhile point, and nothing else.

But sure. If that's what you'd prefer to think then rock on. :thumbsup:

Edited by Harrison Danneskjold
Grrr.
