Objectivism Online Forum

Can one be honestly, genuinely certain but wrong?



DavidV


Here is a brief answer for now, and I'll get to your post in my intro a bit later (that will not be brief). It's finals for me, so my responses might be a bit slow over the next 2 weeks. Consider this:

Note that in my quote I used the words "repeatedly, purely and overwhelmingly"

Case 1- I have only seen one white swan in my life.

Case 2- I have seen too many white swans to count, over the course of my life, and never a black one, and have no reason to believe one would be anything but white, since I have observed many other similar species (repeatedly and overwhelmingly) that only appear in one color.

Are you saying that I have no more rational basis to be certain that all swans are white in case 1 than in case 2? If so, we can debate this.

I think I can be clearer about something, though. Specifically, the probability in question is the probability that you are wrong.

Assume that swans can be black or white, and have equal probability of being one or the other. Then from this fact I can conclude that if I observe a sufficiently large number of swans, I will observe a black one and a white one (there is a mathematical theorem that supports this, the law of large numbers). This is the same as saying: if I toss a coin long enough, I will see a head and a tail. It is possible that I see all heads, but the odds are astronomically against it (99.999999...% that I see both). In case 1 I don't have sufficient reason, by virtue of a statistical argument, to argue that all swans are white. Only with the overwhelming case 2 do I have sufficient reason.
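The coin-toss figure above can be checked directly. A minimal sketch (my own illustration, not from the post; the function name `p_all_heads` is invented): the probability that n fair tosses are all heads is (1/2)^n, so the probability of seeing both outcomes at least once is its complement and races toward 1.

```python
def p_all_heads(n: int) -> float:
    """Probability that n fair coin tosses come up all heads."""
    return 0.5 ** n

# After 50 tosses the chance of a mixed result (at least one head
# and at least one tail, ignoring the symmetric all-tails case here)
# is already astronomically close to 1.
for n in (1, 10, 50):
    print(n, p_all_heads(n), 1 - p_all_heads(n))
```

This only quantifies the coin case; whether the swan case licenses the same arithmetic is exactly what the replies below dispute.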


I hate the swan example, but if you must use it:

You see a handful of white swans, observe their mating habits, integrate this with your knowledge about other birds, natural selection, etc., etc., etc., and then you are certain that all swans are white.

Certainty has little or nothing to do with enumeration (as has been conclusively proved by skeptics again and again) and everything to do with integration and reduction.


Certainty has little or nothing to do with enumeration (as has been conclusively proved by skeptics again and again) and everything to do with integration and reduction.

That about sums it up.

But to point out a few of your specific errors, Meade:

Assume that swans can be black or white, and have equal probability of being one or the other.
On what basis do we assume this? Presumably from observation, but that still doesn't make it necessarily true.

Then from this fact I can conclude that if I observe a sufficiently large number of swans, I will observe a black one and a white one (there is a mathematical theorem that supports this, the law of large numbers). This is the same as saying: if I toss a coin long enough, I will see a head and a tail. It is possible that I see all heads, but the odds are astronomically against it (99.999999...% that I see both).

No, you can't conclude that with certainty. The swan example and the coin example are disanalogous: with the coin, you know there are two basic possibilities with roughly equal chances of occurring (in normal circumstances), but with the swans, you can't make legitimate claims about probability based on your own observation (unless every single swan on earth has been inventoried, categorized, and recorded, and you are aware of the exact figures). If you aren't a swan expert (or maybe even if you are), there could be types of swans of which you aren't aware, the actually existing proportions of each type may be radically different from the proportions you have personally observed, and so on.

In case 1 I don't have sufficient reason, by virtue of a statistical argument, to argue that all swans are white. Only with the overwhelming case 2 do I have sufficient reason.

In neither case do you have sufficient reason to make such an argument, because in inductive cases like this, statistical arguments can't get you to certainty. You see 1,000 white swans and declare, "All swans are white." But then black swans are discovered, and based on your new observations you declare, "80% of swans are white, and 20% are black." But then, say, mottled swans are discovered, and you again have to revise your position to "75% are white, 15% are black, and 10% are mottled." This could conceivably go on indefinitely; or rather, there is no way for you to know when you have reached a final, correct answer by this method. The problem is that your enumerative approach is fundamentally flawed.
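The revision pattern described in that paragraph can be sketched with invented tallies (the counts below are hypothetical, chosen only to reproduce the percentages in the post): each newly discovered kind of swan forces the enumerator to throw out the old proportions and recompute.

```python
from collections import Counter

def proportions(observations):
    """Observed frequency of each color in a list of sightings."""
    counts = Counter(observations)
    total = len(observations)
    return {color: n / total for color, n in counts.items()}

# Stage 1: only white swans ever seen -> "all swans are white"
print(proportions(["white"] * 1000))
# Stage 2: black swans discovered -> 80% white, 20% black
print(proportions(["white"] * 800 + ["black"] * 200))
# Stage 3: mottled swans discovered -> 75% / 15% / 10%
print(proportions(["white"] * 750 + ["black"] * 150 + ["mottled"] * 100))
```

The point being illustrated is that nothing in this procedure tells you when the list of categories is complete, so no stage delivers certainty.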

P.S. Matt, feel free to offer your own example. It was just the first thing that came to mind; I didn't think it would be carried on this far.

Edited by AshRyan

The swan example is so unbelievably pervasive that it's hardly surprising your subconscious handed it to you when you asked it for an example. I'll try to give a better one. Actually, it's really hard to give an example from induction without giving a lecture on science. So... I'll just steal someone else's.

My favorite example is Dr. Peikoff's from his lectures on induction this summer. He was making the point that certainty can--and often does, especially for particularly important theories--come from one observed instance. The example was of Benjamin Franklin's lightning experiment.

Franklin, pre-experiment, sat down and over a long period of time compared everything he knew about lightning (its smell, the circumstances under which it arises, its effect on what it strikes, and so on) to everything else he knew. He came up with 12 essential similarities of lightning to 12 completely different categories of knowledge, and then decided to make the hypothesis that lightning was electricity.

Then he set up his experiment with the kite and the key contained in the jar with copper coils. When lightning hit it, he observed the heat of the jar, the sparks flying everywhere, and so on, and from this one observation conclusively established that lightning was an electrical charge.

Now, you might say, didn't all the comparisons and observations he made before the experiment count as instances, which added to the probability, which finally neared 100% when he did his experiment?

No--not in the sense you mean it. Nothing about the number of instances mattered. What mattered is that he related and integrated everything he knew about lightning to everything he knew about everything else, thus making sure that any hypothesis he might form did not contradict any of his previously validated knowledge. He didn't simply associate lightning with electricity by enumerating examples, the way a behaviorist's rat associates pressing a pedal with getting a food pellet. He abstracted from what he knew to form a hypothesis. And then he set up the experiment, which, in one instance, reduced his hypothesis to the perceptual level.

This is dramatically and fundamentally different from enumeration, which goes:

swan1=white

swan2=white

...

swanN=white

---------------

swanX=white

I think it was Amy Peikoff who called those who advocate this description of induction/inductive certainty "the swine with the swan." This is not a description of induction. It bears no resemblance to what Franklin did.

The swan example is much closer to a description of faulty deduction, where the conclusion follows from a small number of premises after applying the Law of Contradiction. In induction, the conclusion also follows from the premises, but in nothing like the deductive way. "Premise 1" is the entirety of your validated conceptual knowledge. "Premise 2" is the new observed phenomenon (i.e., the lightning electrifying the jar).

OPAR has an excellent discussion of reduction and integration as the prerequisites for certainty. Also, I highly recommend Dr. Peikoff's lectures on induction when they come out on tape.

Note: this is from memory and from notes I took this summer. Any mistakes in the above account are mine and not Dr. Peikoff's. Ash, you heard the lectures with me... anything you think needs to be added?


Matt,

I used the swan example because I wanted an example of why the enumerative view of induction doesn't hold up. There are countless other examples I could have used for that, but you're probably right about the reason it was the first to come to mind. Sorry it irritates you so much. ;) For an example of proper induction, the Benjamin Franklin one works well.

I can't think of anything off the top of my head that I would want to add to your last post, but I'll have to review my notes and see if further questions about it come up in this thread.


Yeah. I think the swan example works as an example of why enumeration isn't induction, in a certain respect: no one really induces that way. It's an example that David Hume types made up, called induction, and then refuted.

Try giving an account of enumeration that would validate: "A free mind and a free market are corollaries", "Evasion is the root of all evil", "There is a universal gravitational constant", etc.

I guess my real point is that saying...

Market1 works and men are free

Market2 works and men are free

...

MarketN works and men are free

--------------------------------------

A free mind and a free market are corollaries

...is an asinine account of the induction of that principle.

