Objectivism Online Forum

curi


Everything posted by curi

  1. Find which correlations? How can correlations be found without thinking? Or does your approach to induction presuppose one is already able to think, so it cannot explain thinking?
  2. There are two particularly hard parts of explaining why induction is false. First, there are many refutations. Where do you start? Second, most refutations are targeted at professional philosophers. What most people mean by "induction" varies a great deal. Most professional philosophers are strongly attached to the concept of induction and know what it is. Most people are strongly attached to the word "induction" and will redefine it in response to criticism. In *The World of Parmenides*, Popper gives a short refutation of induction. It's updated from an article in Nature. It involves what most people would consider a bunch of tricky math. To seriously defend induction, doesn't one need to understand arguments like this and address them? Some professional philosophers do read and respond to this kind of thing. You can argue with them. You can point out a mistake in their response. But what do you do with people who aren't familiar with the material and think it's above their head? If you aren't familiar with this argument against induction, how do you know induction is any good? If you don't have a first hand understanding of both the argument and a mistake in it, then why take sides in favor of induction? Actually, inductivists have more responses open to them than pointing out a mistake in the argument or rejecting induction (or evading, or pleading ignorance). Do you know what the other important option is? Or will you hear it for the first time from me in the next paragraph, and then adopt it as your position? I don't recommend getting your position on induction from someone who thinks induction is a mistake – all the defenses I bring up are things I already know about and I *still* consider induction to be mistaken. Another option is to correctly point out that Popper's refutation only applies to some meanings of "induction", not all. It's possible to have a position on induction which is only refuted by other arguments, not by this particular one. 
I won't help you too much though. What do you have to mean by "induction" to not be refuted by this particular argument? What can't you mean? You figure it out. Popper argues against induction in books like LScD, C&R, OK, RASc. Deutsch does in FoR and BoI. Should I repeat points which are already published? What for? If some inductivist doesn't care to read the literature, will my essay do any good? Why would it? I recently spoke with some Objectivists who said they weren't in favor of enumerative induction. They were in favor of the other kind. What other kind? How does it work? Where are the details? They wouldn't say. How do you argue with that? Someone told me that OPAR solves the problem of induction. OPAR, like ITOE, actually barely mentions induction. Some other Objectivists were Bayesians. Never mind that Bayesian epistemology contradicts Objectivist epistemology. In any case, dealing with Bayesians is *different*. One strategy is to elicit from people *their* ideas about induction, then address those. That poses several problems. For one thing, it means you have to write a personalized response to each person, not a single essay. (But we already have general purpose answers by Popper and Deutsch published, anyway.) Another problem is that most people's ideas about induction are vague. And they only successfully communicate a fraction of their ideas about it. How do you argue with people who have only a vague notion of what "induction" is, but who are strongly attached to defending "induction"? They shouldn't be advocating induction at all without a better idea of what it means, let alone strongly. There are many other difficulties as well. For example, no one has ever written a set of precise instructions for how to do induction. They will tell me that I do it every day, but they never give me any instructions so how am I supposed to do it even once? Well I do it without knowing it, they say. Well how do they know that? 
To decide I did induction, you'd have to first say what induction is (and how it works, and what actions do and don't constitute doing induction) and then compare what I did against induction. But they make no such comparison – or won't share it. Often one runs into the idea that if you get some general theories, then you did induction. Period, the end. Induction means ANY method of getting general theories whatsoever. This vacuous definition helps explain why some people are so attached to "induction". But it is not the actual meaning of "induction" in philosophy which people have debated. Of course there is SOME way to get general theories – we know that because we have them – the issue is how do you do it? Induction is an attempt to give an answer to that, not a term to be attached to any answer to it. And yet I will try. Again. But I would like suggestions about methods. Induction says that we learn FROM observation data. Or at least from actively interpreted ideas about observation data. The induced ideas are either INFALLIBLE or SUPPORTED. The infallible version was refuted by Hume among others. As a matter of logic, inductive conclusions aren't infallibly proven. It doesn't work. Even if you think deduction or math is infallible (it's not), induction STILL wouldn't be infallible. Infallible means error is ABSOLUTELY 100% IMPOSSIBLE. It means we'll never improve our idea about this. This is it, this is the final answer, the end, nothing more to learn. It's the end of thinking. Although most Objectivists (and most people in general) are infallibilists, Objectivism rejects infallibilism. Many people are skeptical of this and often deny being infallibilists. Why? Because they are only infallibilists 1% of the time; most of their thinking, most of the time, doesn't involve infallibilism. But that still makes them infallibilists. It's just like thinking that only 1% of haunted houses really have a ghost: that still makes you superstitious.
So suppose induction grants fallible support. We still haven't said how you do induction, btw. But, OK, what does fallible support mean? What does it do? What do you do with it? What good is it? Support is only meaningful and useful if it helps you differentiate between different ideas. It has to tell you that idea X is better than idea Y which is better than idea Z. Each idea has an amount of support on a continuum and the ones with more support are better. Apart from this not working in the first place (how much support is assigned to which idea by which induction? there's no answer), it's also irrational. You have these various ideas which contradict each other, and you declare one "better" in some sense without resolving the contradiction. You must deal with the contradiction. If you don't know how to address the contradiction then you don't know which is right. Picking one is arbitrary and irrational. Maybe X is false and Y is true. You don't know. What does it matter that X has more support? Why does X have more support anyway? Every single piece of data you have to induce from does not contradict Y. If it did contradict Y, Y would be refuted instead of having some lesser amount of support. Every single piece of data is consistent with both X and Y. It has the same relationship with X and with Y. So why does Y have more support? So what really happens if you approach this rationally is everything that isn't refuted has exactly the same amount of support. Because it is compatible with exactly the same data set. So really there are only two categories of ideas: refuted and non-refuted. And that isn't induction. I shouldn't have to say this, but I do. That is not induction. That is Popper. That is a rejection of induction. That is something different. If you want to call that "induction" then the word "induction" loses all meaning and there's no word left to refer to the wrong ideas about epistemology. 
Why would some piece of data that is consistent with both X and Y support X over Y? There is no answer and never has been. (Unless X and Y are themselves probabilistic theories. If X says that a piece of data is 90% likely and Y says it's 20% likely, then if that data is observed the Bayesians will start gloating. They'd be wrong. That's another story. But why should I tell it? You wouldn't have thought of this objection yourself. You only know about it because I told you, and I'm telling you it's wrong. Anyway, for now just accept that what I'm talking about works with all regular ideas that actually assert things about reality instead of having built-in maybes.) Also, the idea of support really means AUTHORITY. Induction is one of the many attempts to introduce authority into epistemology. Authority in epistemology is abused in many ways. For example, some people think their idea has so much authority that if there is a criticism of it, that doesn't matter. It'd take like 5 criticisms to reduce its authority to the point where they might reject it. This is blatantly irrational. If there is a mistake in your idea it's wrong. You can't accept or evade any contradictions, any mistakes. None. Period. Just the other day a purported Objectivist said he was uncomfortable that if there is one criticism of an idea then that's decisive. He didn't say why. I know why. Because that leaves no room for authority. But I've seen this a hundred times. It's really common. If no criticism is ever ignored, the authority never actually gets to do anything. Irrationally ignoring criticism is the main purpose of authority in epistemology. Secondary purposes include things like intimidating people into accepting your idea. But wait, you say, induction is a method of MAKING theories. We still need it for that even if it doesn't grant them support/authority. Well, is it really a method of making theories? 
There's a big BLANK OUT in the part of induction where it's supposed to actually tell you what to do to make some theories. What is step one? What is step two? What always fills in this gap is intuition, common sense, and sometimes, for good measure, some fallacies (like that correlation implies or hints at causation). In other words, induction means think of theories however (varies from person to person), call it "induction", and never consider or examine or criticize or improve your methods of thinking (since you claim to be using a standard method, no introspection is necessary). For any set of data, infinitely many general conclusions are logically compatible. Many people try to deny this. As a matter of logic they are just wrong. (Some then start attacking logic itself and have the audacity to call themselves Objectivists). Should I go into this? Should I give an example? If I give an example, everyone will think the example is STUPID. It will be. So what? Logic doesn't care what sounds dumb. And I said infinitely many general conclusions, not infinitely many general conclusions that are wise. Of course most of them are dumb ideas. So now a lot of people are thinking: induce whichever one isn't dumb. Not the dumb ones. That's how you pick. Well, OK, and how do you decide what's dumb? That takes thinking. So in order to do induction (as it's just been redefined), in one of the steps, you have to think. That means we don't think by induction. Thinking is a prerequisite for induction (as just redefined), so induction can't be part of thinking. What happens here is the entirety of non-inductivist epistemology is inserted as one of the steps of induction and is the only reason it works. All the induction stuff is unnecessary and unhelpful. Pick good ideas instead of dumb ones? We could have figured that out without induction, it's not really helping. Some people will persevere. They will claim that it's OBVIOUS which ideas are dumb or not – no thinking required. 
What does that mean? It means they can figure it out in under 3 seconds. This is silly. Under 3 seconds of thinking is still thinking. Do you see what I mean about there are so many things wrong with induction it's hard to figure out where to start? And it's hard to go through them in an orderly progression because you start talking about something and there's two more things wrong in the middle. And here I am on this digression because most defenses of induction – seriously this is the standard among non-professionals – involve a denial of logic. So backing up, supposedly induction helps us make theories. How? Which ones? By what steps do we do it? No answers. And how am I supposed to prove a negative? How do I write an essay saying "induction has no answers"? People will say I'm ignorant and if only I read the right book I'd see the answer. People will say that just because we don't know the answer doesn't mean there isn't one. (And remember that refutation of induction I mentioned up top? Remember Popper's arguments that induction is impossible? They won't have read any of that, let alone refuted it.) And I haven't even mentioned some of the severe flaws in induction. Induction as originally intended – and it's still there but it varies, some people don't do this or aren't attached to it – meant you actually read the book of nature. You get rid of all your prejudices and biases and empty your mind and then you read the answers straight FROM the observation data. Sound like a bad joke? Well, OK, but it's an actual method of how to do induction. It has instructions and steps you could follow, rather than evasion. If you think it's a bad joke, how much better is it to replace those concrete steps with vagueness and evasion? Many more subtle versions of this way of thinking are still popular today. The idea of emptying your mind and then surely you'll see the truth isn't so popular. But the idea that data can hint or lead or point is still popular. 
But completely false. Observation data is inactive and passive. Further, there's so much of it. Human thinking is always selective and active. You decide which data to focus on, and which ways to approach the issue, and what issues to care about, and so on. Data has to be interpreted, by you, and then it is your interpretations, not the data itself, which may give you hints or leads. To the extent data seems to guide you, it's always because you added guidance into the data first. It isn't there in the raw data. Popper was giving a lecture and at the start he said, "Observe!" People said, "Observe what?" There is no such thing as emptying your mind and just observing and being guided by the data. First you must think, first you must have ideas about what you're looking for. You need interests, problems, expectations, ideas. Then you can observe and look for relevant data. The idea that we learn FROM observation is flawed in another way. It's not just that thinking comes first (which btw again means we can't think by induction since we have to think BEFORE we have useful data). It also misstates the role of data in thinking. Observations can contradict things (via arguments, not actually directly). They can rule things out. If the role of data is to rule things out, then whatever positive ideas we have we didn't learn from the data. What we learned from the data, in any sense, is which things to reject, not which to accept. Final point. Imagine a graph with a bunch of dots on it. Those are data points. And imagine a line connecting the dots would be a theory that explained them. This is a metaphor. Say there are a hundred points. How many ways can you draw a line connecting them? Answer: infinitely many. If you don't get that, think about it. You could take a detour anywhere on the coordinate plane between any two connections. So we have this graph and we're connecting the dots. Induction says: connect the dots and what you get is supported, it's a good theory.
How do I connect them? It doesn't say. How do people do it? They will draw a straight line, or something close to that, or make it so you get a picture of a cow, or whatever else seems intuitive or obvious to them. They will use common sense or something – and never figure out the details of how that works and whether they are philosophically defensible and so on. People will just draw using unstated theories about which types of lines to prefer. That's not a method of thinking, it's a method of not thinking. They will rationalize it. They may say they drew the most "simple" line and that's Occam's razor. When confronted with the fact that other people have different intuitions about what lines look simple, they will evade or attack those people. But they've forgotten that we're trying to explain how to think in the first place. If understanding Occam's razor and simplicity and stuff is a part of induction and thinking, then it has to be done without induction. So all this understanding and stuff has to come prior to induction. So really the conclusion is we don't think by induction, we have a whole method of thinking which works and is a prerequisite for induction. Induction wouldn't solve epistemology, it'd presuppose epistemology. What we really know, from the graph with the data points, is that all lines which don't go through every point are wrong. We rule out a lot. (Yes, there's always the possibility of our data having errors. That's a big topic I'm not going to go into. Regardless, the possibility of data errors does not help induction's case!) And what about the many lines which aren't ruled out by the data? That's where philosophy comes in! We don't and can't learn everything from the data. Data is useful but isn't the answer. We always have to think and do philosophy to learn. We need criticisms. Yes, lots of those lines are "dumb". There are things wrong with them. We can use criticism to rule them out. 
And then people will start telling me how inconvenient and roundabout that is. But it's the only way that works. And it's not inconvenient. Since it's the only way that works, it's what you do when you think successfully. Do you find thinking inconvenient? No? Then apparently you can do critical thinking in a convenient, intuitive, fast way. At least you can do critical thinking when you're not irrationally defending "induction" because in your mind it has authority.
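The connect-the-dots point above – that infinitely many general theories fit any finite data set – can be made concrete with a short sketch. This is my own illustration, not from the post; the function names and data are made up:

```python
# Sketch: any finite data set is consistent with infinitely many "theories".
# Given points that the line y = 2*x fits exactly, we can build endlessly
# many rival curves that also pass through every single point, by adding a
# term that vanishes at each observed x.

def make_theory(c):
    """Return a curve that agrees with y = 2*x at x = 0..4, for any c."""
    xs = [0, 1, 2, 3, 4]
    def curve(x):
        bump = c
        for xi in xs:
            bump *= (x - xi)          # this factor is zero at every data point
        return 2 * x + bump
    return curve

data = [(x, 2 * x) for x in [0, 1, 2, 3, 4]]

for c in [0, 1, -3, 0.5]:             # four of infinitely many rival theories
    theory = make_theory(c)
    assert all(theory(x) == y for x, y in data)  # each fits the data perfectly
    print(c, theory(5))               # yet they disagree everywhere off the data
```

Every candidate curve has exactly the same relationship to the data – perfect agreement – so the data alone cannot "support" one over the others; only criticism of the rival curves can narrow the field.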
  3. DonAthos,

If I asked someone to guess at the speed of light, and they came back with 3mph, I would not say to them, "good guess!"

Well that would be a bad guess. It’s totally different. You have immediate criticism of 3mph speed of light. I didn’t have immediate criticism of mustard being the culprit.

I don't wish to make too much of this small use of language, but I also must observe that I think it speaks to the point I'm attempting to make, regarding how "evidence" does seem to provide support of a given hypothesis.

Which hypothesis does which evidence support? Does it equally support all hypotheses which it doesn’t contradict? (If so, that’s actually binary.) Or something else?

I think relating fingerprints to a particular murder suspect makes the hypothesis that this murder suspect is guilty a better hypothesis than others -- a "good guess". And I think this is fairly described as being positive support.

I understand what you’re saying, but that is one of the main epistemological mistakes Popper refuted and corrected. There is no relationship between the evidence and the idea you claim it positively supports other than logical compatibility. It has exactly the same relationship with all other ideas that it doesn’t contradict. You disagree. You think there is a relationship. OK, what is it?

Because it seems to me like you're implying here that Mustard is a good guess because I found evidence consonant with that hypothesis

No that is not what I mean. It’s a good guess to start with because 1) no immediate criticisms 2) no immediate non-criticized rival ideas. if you quickly get to a situation where you only have one candidate idea, that’s a good start.

I do not agree that this is how we actually operate epistemologically, or use fingerprints in point of fact. I believe that detectives use fingerprints to find a "match."
If a lawyer were presenting fingerprints (as "evidence") to a jury, I believe he would phrase it as indicating the presence of a particular suspect, not disqualifying the presence of others.

You’re treating evidence a bit strangely (from my point of view). The only good explanation of unique fingerprints being there is that the guy with those fingers was there. That’s an idea I don’t have a criticism of, and which has no uncriticized rivals. So, in this way, it does indicate the particular suspect was there, because he is the prevailing idea that explains the evidence and isn’t contradicted. (I’m trying to play along with the example btw. IRL i don’t think fingerprint analysis is as reliable as is commonly believed.)

I agree that the process you describe here has an important function in critical thinking and assessment. I just think that I disagree that such a process is the only one we employ; that we use evidence in this strictly negative fashion, as claimed.

I think it has to be the only one because it’s the only known one that actually could ever work at all. All the others are refuted. (For purposes of our discussion, this is pending some questions I raised above. But I thought giving my answer on this topic -- the conclusion if you agree with lots of my stuff -- would help clarify.)

cannot agree that this is how people work or ought to work. If I see three apples on a table, my process (insofar as I am able to describe it) is emphatically not to consider every possibility (in the world...? five plums? eleven and a half pears? twenty rodents?) and rule them out. It is the direct assessment that the three apples I see constitutes ("positive") evidence of three apples.

I don’t think you’re adequately taking into account how much people automate their thinking and do it lightning fast (like Rand explains). i’m largely talking about how people think unconsciously, the underlying way people can figure anything out.
people’s accompanying conscious thinking is often largely irrelevant and silly, or highly incomplete (relying on the underlying unconscious analysis), or many many other possibilities. what you think you’re doing is not my focus. also your statement of the critical method is not accurate. you don’t have to consider every possibility. you may consider whatever possibilities you want. (what if you choose by whim? arbitrarily? what’s to save reason? answer: criticism of your method of choosing possibilities to consider.) you CANNOT consider every possibility. if you are not interested in a possibility, ok don’t consider it. shrug. in some specific case i might think that was a mistake, but in many many cases it’s fine. i think you’re somewhat wrong about what is intuitive or common sense too. the reason i think you don’t care to consider the other possibilities is you know you could rule them out if you considered them. you’re able to immediately estimate what the result of considering them would be. (you lightning fast realize the whole category is refuted. fallibly as always, but that’s fine). if you thought that if you took the time to go through all the numbers and fruits you’d be able to rule out most but not all, you’d be in trouble! that’s no good. if there’s even one other possibility you don’t know how to rule out – have no criticism of – then you must not ignore it and it’s irrational to claim the evidence supports the one you chose to consider over the one you can’t rule out but arbitrarily ignore. if there is something you couldn’t rule out but you never thought of it, had no inkling of it, fine, no problem, your knowledge is limited. as long as you made a reasonable effort to think, appropriate to the situation, then your ignorance is forgivable. however if you had any idea that there was any possibility you didn’t know how to rule out, then you’d better stop and consider it, not ignore it. whether you could rule everything out is the crucial factor.
i think this is totally intuitive. if you thought there were 3 apples, and could rule out all amounts of apples except 3 or 8, then i think you immediately know that concluding “there are 3 apples” is a big error. that 8 – that one alternative you don’t know how to rule out – deserves attention. in the real scenario you do know how to rule out 8 too so it’s ok and that is why it’s ok.

This "ruling out" needs to be delved into more, perhaps. I don't think it's my primary aim, in any event, to "rule things out" when I see three apples on a table and describe what I've seen as "three apples on a table."

But isn’t it? If you had no idea how to rule out there being 8 apples not 3, wouldn’t that be a huge problem? no matter how many positive supporting reasons you could give for claiming it’s 3, if none of your arguments rule out 8 then it’d be dumb to conclude it’s 3, wouldn’t it? (my takeaway: positive arguments don’t matter, they are useless (as in this example where they make no difference), and to the extent anyone actually uses positive supporting arguments they are thinking irrationally) (but often when people think they are using positive supporting arguments, or claim to, they aren’t actually.)

But the character of the evidence, initially, I believe is positive: I am led

Both Popper and Objectivism strongly disagree that you are led. Evidence does not and cannot lead you. You must lead yourself, you must have an active mind, etc (There’s also the issue of: where does any given evidence lead one? Why there and not somewhere else? basically any piece of evidence contradicts some possibilities and doesn’t lead there, and does not contradict some other possibilities and equally well leads to all of them, so focussing on one in particular is arbitrary. also you have to figure this out, it’s not really leading.)

I was asking whether it is improper for me to use a process when I'm unable to explain/describe that process in full?

That is not improper.
However, some processes are improper (like induction and support). And it’s harder to explain that when you don’t understand what you’re even doing. The vagueness helps partially immunize your position from criticism. But also, in any case, the more you don’t fully understand what you’re doing, the more you should consider that it might not actually have anything to do with induction or positive support!

Yes, absolutely! [go into refutations of induction and support]

OK. Can you tell me a little about your familiarity with the subject? FYI many books by inductivists concede this. it’s common knowledge. e.g. i read some bayesian stuff not too long ago and they were happy to concede it and were able to correctly state some of the unsolved problems with induction and support. in the fabric of reality, david deutsch argues that actually most inductivists today are characterized by thinking that induction not working is a big problem. not by claiming to be able to actually rationally defend induction. so far what i’ve run into with objectivists is they don’t seem to be familiar with any of this (they only read objectivist material maybe), and they make claims that i think are ridiculous like that peikoff solved the problem of induction in OPAR. (the word “induction” is in OPAR a total of 8 times btw... and on a quick skim through them i don’t see any explanation of the problem of induction, let alone a solution) anyway, can you indicate a bit about where you’re coming from, where you stand? also are you willing to read things or do you just want to discuss? also above i started raising some of the problems.

But it seems to me that saying that "all numbers apart from 5 have something wrong with it" in the given example is a roundabout way of admitting to the positive correctness of both 5 and the Pythagorean theorem generally.

i don’t think it’s roundabout. i think it’s necessary. if there wasn’t something wrong with 6, and you concluded 5, you’d be in big trouble!
Out of curiosity, does Popperian epistemology allow for the original construction of a geometrical proof, or the Pythagorean theorem, or similar? Because those seem to be arguments in themselves, and to rely upon evidence (as being "for" a thing), and to be "positive" in character in that they demonstrate how to come to one answer, rather than disqualifying/"criticizing" others.

We don’t have a problem with the Pythagorean theorem or any geometry “proof” or anything like that. (I don’t like the word “proof” and in the fabric of reality, david deutsch, who is a popperian, has a chapter explaining that math “proofs” are fallible. many people seem to think they are infallible.) but anyway the ideas themselves are fine. lots of mathematical methods are great. use them. and then make criticisms like “all other numbers are incompatible with this method. so unless you can point out something wrong with this method, there’s only one viable non-refuted conclusion”. (i’m being particularly explicit to be clear. so sure it sounds a little weird. it doesn’t necessarily matter if you use other terminology, even positive terminology, what really matters is the actual thought processes and how they work.)
  4. "This is how Bayesian inference works roughly where you use likelihood to make some reasonable hypotheses based upon likelihood" -- this is epistemology, not math. the math formulas do not generate hypotheses, they only adjust the probabilities of hypotheses you already have given a bunch of other information (like prior probabilities, and more) that you already have. (and the math adjustments only apply to stuff where probability literally applies. extending it where it metaphorically applies isn't math)
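The point above – that the Bayesian math only re-weights hypotheses and priors you already supplied, and never generates a hypothesis – can be sketched in a few lines. A minimal illustration of my own, with made-up numbers:

```python
# Sketch: Bayes' theorem only re-weights hypotheses you already supplied,
# given priors you already chose. Nothing in the formula proposes a new one.

def bayes_update(priors, likelihoods):
    """priors: {hypothesis: P(H)}; likelihoods: {hypothesis: P(data | H)}.
    Returns posteriors P(H | data) over the SAME fixed set of hypotheses."""
    joint = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(joint.values())       # P(data), the normalizing constant
    return {h: joint[h] / total for h in joint}

# Two rival hypotheses, chosen by us before any math happens:
priors = {"X": 0.5, "Y": 0.5}
likelihoods = {"X": 0.9, "Y": 0.2}    # P(observed data | each hypothesis)

posteriors = bayes_update(priors, likelihoods)
print(posteriors)   # X rises to ~0.82, Y falls to ~0.18; no third option appears
```

Note that the hypothesis set going in is identical to the set coming out; supplying the hypotheses and priors in the first place is the epistemological (not mathematical) part of the job.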
  5. I know Bayesian math. Bayesian epistemology is false and refuted by Popper though. I'm certainly not agreeing with it. Objectivism also disagrees with Bayesian epistemology so I'm not sure why you bring it up.
  6. Eiuol, you said "It looks to me like you misunderstood what Peikoff was saying." and said that the mistake I identified in Objectivism is not a mistake made by Objectivism. I asked for a source where Objectivism explains the correct view of the issue. I don't think it's something Objectivism understands correctly; you claimed it is. OK, source? Rather than giving a source, you then backtracked and said a bunch of other stuff and didn't address the point.
  7. If you think that Objectivism already has this position (only ever act on conclusive certain knowledge), can you please find me any sources where Objectivism explains it?
  8. DonAthos,

When you describe my guess as being "good," I take that as implying that some guesses are better than others... And on what basis are we able to make this distinction?

I meant a good guess like worth trying, a good guess to make and consider. I didn’t mean good like true or partly true. It’s also good in the sense of being plausible to me. Maybe you have the answer. I don’t see any reason it’s false in the first minute of thought, which is a good start. Also, the important thing is that this sense of “good” has no authority. It is a casual, loose usage, not an important epistemological pronouncement. It has no solid meaning or impact, it was just meant to communicate. It has no particular consequences or implications. When it comes down to it, being a “good guess” doesn’t matter. What really matters – what is decisive – is whether we have any criticisms or not. (In my view.)

"good" (as opposed to a guess that it was Miss Scarlet, on the HMS Bounty, with a bronzed pineapple)

But I have criticisms of those guesses. Why guess it was Scarlet when you found mustard? Why guess it was a location other than where the body was found? These issues might be answerable. But the simple version where you just suspect Scarlet without giving any answer to these issues is wrong, criticized, refuted.

as evidence (i.e. that which provides a "positive" basis for some particular "guess" as to what has happened)

Evidence is the observation data we have to work with. The fingerprints are evidence. But the way to use evidence is to look at what it contradicts. Evidence can be combined with some ideas to form a criticism and rule something out. Like, “It wouldn’t be Scarlet because the fingerprints have a special type of mustard that isn’t sold in stores, and that she had no access to.” That combines the evidence (mustard found at scene) with some ideas to criticize and rule out a possibility. (Note: Criticisms are open to counter-criticisms.
None may succeed but the attempt is always allowed. So you might point out that actually a mustard depot was broken into recently, so maybe Scarlet both broke in there and did the murder, and was trying to frame Mustard. Then she becomes a suspect again. But then you catch the guy who broke in and find all the stolen mustard and none is missing, and thanks to this new evidence she’s ruled out again.) To take a simpler example, if you see 3 apples on a table, that rules out the table being empty. The evidence can be used (via some thinking and ideas) to rule something out. Not only that, it rules out all numbers of apples besides 3 being on the table. So you can conclude the one remaining possibility: there are 3 apples on the table. If it didn’t rule everything out -- 7 apples wasn’t ruled out for some reason -- then you’d absolutely better not conclude it’s 3 apples, no matter how much evidence and “support” you have. If all your evidence and support doesn’t rule out the 7 apples possibility, then what good is it? (Answer: well it’s good for ruling out 4 apples, 5 apples, etc, everything but 3 and 7. But it provides no legitimate support/authority/status/etc for 3 over 7, since it leaves them both as open possibilities.) If even one other thing isn’t ruled out, that’s a really big deal. Why isn’t it ruled out? Why is option A so great when B isn’t ruled out? Etc. it seems to me that I am operating more on the basis of drawing conclusions from the mustard fingerprints (in that they are presumed to point to the murderer) rather than contemplating Mrs. Peacock-as-murderer and finding some criticism of that theory. Because you know the fingerprints basically rule out everyone else. No one else has mustard prints. How else would you even know what was “supported”? What conclusion would you draw, if your evidence didn’t rule anything out? Or if it only ruled out everything but ten things, then what? I think how much is ruled out is really the key factor. 
Each case is dramatically different by how much the evidence rules out. Whether I'm able to explain to you "what constitutes how much of a basis for what, for all cases," to your satisfaction, or etc., does that make it improper for me to draw conclusions in the sorts of scenarios we're discussing, according to a positive approach? This is a very common view. Induction and support are an unsolved problem, but one day we will solve them. That is, indeed, Rand’s view. I don’t expect it. For one thing, your examples where we do stuff (like walk) without a full explicit understanding can be explained without induction or support. We could have a partial critical understanding, largely unconscious, and walk. I don’t think the walking example really helps one side over the other. Another thing is that the arguments against induction and support are not like “there’s a few gaps to work out”. They are more along the lines of “here are 5 reasons it’s impossible and misconceived root and branch, which no one has any answer to”. Maybe we should go into that? It remains a bit of an open question for me, whether Popperian epistemology seeks to describe what we already do, or what we ought to do Both. Popper says no one has ever done induction. No one, ever; that is a myth. Because it’s impossible. Some people thought they did it, but they were mistaken and didn’t understand what they were actually doing. Understanding the right approach could help people do it better. People decide which approach to try to do. Trying to do the wrong thing that doesn’t work can lead to a lot of wasted effort and mistakes. I don't yet see any way around drawing a connection between mustard fingerprints and Col. Mustard -- which I would describe as being "positive" What is the connection, exactly? I see that the fingerprints are compatible with Mustard being guilty and are (via a few arguments) incompatible with others. 
What kind of connection is there other than compatibility with one option and incompatibility with the other options? (As always, this is fallible contextual knowledge, open to revision and criticism when it’s discovered that Scarlet has a jar of mustard in her purse or whatever.) I'm trying to approach this literally, and imagine the actual scenario playing out. My daughter, when she is at the proper age, will sit down to a right triangle with smaller sides of 3 and 4, and she'll be asked to find hypotenuse x. Now, on the one hand, she could "guess" a number and then see whether it "contradicts her math knowledge" (though this would seem to me to possibly beg the question of how that "math knowledge" is acquired in the first place, if not in some positive manner)... maybe her first guess is 5. Or maybe it isn't -- maybe it is 6 or 7 or 8 or 5.1 or 5.2 or 5.3, and she rules them out, one by one, until (hopefully) she guesses 5. Criticism does not mean one by one. Criticisms often rule out categories or sets of things. We can rule out the set of everything except 5 with some mathematical arguments. And guessing doesn’t mean guessing at random. Do the math and work out 5 (you are allowed to get your guesses/ideas in any manner whatsoever, no problem using the Pythagorean theorem). Now 5 is one of the suggested possibilities (call it a “guess” or “idea” or whatever else you like). And if you try to say anything but 5 that’s easy to criticize. But 5 hasn’t got anything known to be wrong with it. It’s like the apples on a table example earlier. If 7 wasn’t ruled out, you’d be in trouble saying it was 5. No matter how much positive basis 5 supposedly had, it wouldn’t matter at all if 7 wasn’t ruled out. If 7 isn’t ruled out, you don’t know the answer, you have a contradiction, you better look over things again. But if 7 is ruled out, then 5 is wonderful, you’re golden, no problem. 
For the status of 5, everything depends on whether or not 7 (or any other number) is ruled out. Either 5 is the one idea we have that isn’t ruled out, or it isn’t. That's right -- there is a particular context to my claim that I'm typing on my keyboard, and my claim is (only) certain within that context. If tomorrow I found that I had... I don't know... lost my mind utterly a few weeks back, I might have to revisit this, insofar as I were able. But until I have good reason to entertain such a notion, I most likely won't. (Which might be a third sticking point? For how could I have "good reason" for anything?) It suffices for me to understand that knowledge and certainty are contextual. I agree. No sticking point. Revisit it when you think it’s worth revisiting and have no criticism of doing so. Again, I fear extending myself too much when I don't believe I quite grasp all of the matters at play, but on "imagination" and "creativity," I don't see these as being bad things, or incompatible with my views on epistemology, or how I approach the actual matters of my life. I don’t think imagination and creativity are at all incompatible with you or Objectivism. I bring them up sometimes because they are important to my approach. at some point, I don't believe that "guessing" continues to be an appropriate description of how I come to a certain conclusion. Given a right triangle, with sides (in order of length) of 3, 4, and x, I am not "guessing" that x = 5. The reason I prefer the word “guess” (and also “idea”) is because they are words with no status or authority. I don’t believe that, at any point, one’s guesses gain any authority from their methods. The results must always be evaluated by a critical consideration of the content of the guess/idea/whatever, and nothing else. The source is not relevant to our critical evaluation and bestows zero special privileges. 
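The triangle discussion above can be sketched in a few lines of code. This is my own toy illustration (the candidate list and function names are made up for the example, not anything from the thread): a single criticism, contradiction with the Pythagorean theorem, rules out a whole set of candidates at once rather than checking them one by one.

```python
# Candidate answers for the hypotenuse of a right triangle with legs 3 and 4.
# The list is arbitrary -- guesses can come from anywhere.
candidates = [5, 6, 7, 8, 5.1, 5.2, 5.3]

def criticized(x, a=3, b=4):
    """One criticism: a candidate contradicting the Pythagorean theorem is ruled out."""
    return x * x != a * a + b * b

# A criticism rules out every candidate in the set it applies to, in one step.
survivors = [x for x in candidates if not criticized(x)]
print(survivors)  # only 5 survives this criticism
```

If two candidates survived (say both 5 and 7), the sketch would mirror the point in the text: you couldn't conclude anything yet, no matter how much "support" either one had.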
I experience these two processes differently, and that seems to be an important distinction to maintain in our concepts and language. One is "guessing," the other is not. The difference is that in one case you had knowledge and in the other you didn’t. (More precisely, in the second case where you guess what number I’m thinking of, you have knowledge that the right answer is a number, and some things like that, but that leaves open a very big set of possibilities that aren’t ruled out and you do not have knowledge of which of those it is.) Suppose we are discussing an idea that "the planet Neptune exists." When you say that an idea "must be judged on its content and not its source," what does that mean here? If I were to see Neptune and lay claim to its existence on that basis, am I in error? So you judge by whether you can find anything wrong with the idea that Neptune exists. (Including if there is a contradictory rival idea, such as “Neptune does not exist”, it would have to rule that out. If it doesn’t, that’s something wrong.) You do not judge by whether you first came up with the idea in a dream. That has no bearing on whether it’s true. It’s not a criticism of it; it doesn’t rule it out. I may not be interested in considering what you dream. You may not be either. You need not consider everything you could consider. There’s a million things in life and we have limited time and attention and focus, so we have to be selective. We have to try to understand what is problematic and focus attention there. But if you dream an idea, and then you consider it, and you don’t see anything wrong with it, then who cares that it came from a dream? It survived 5 minutes of you trying to criticize it. So did some other idea you got in another way. They have the same status now: they are ideas which you don’t see anything wrong with. That’s it, nothing else matters. (Ideas you haven’t yet critically considered at all, I have no interest in, again regardless of the source. 
If I wanted ideas like that, I could create plenty of my own.) That I cannot criticize it? But there are a million possible planets, all fictional, for which I would have equal "criticism" as the proposed Neptune. If a claim is arbitrary, say “it is arbitrary” and that is a criticism of it. Yes you have equal criticism of all of them, but you do have a criticism and have just given it. But what "content" can a planet (or ultimately, anything) have apart from my experience of it? For example, the currently prevailing idea of Neptune implies that if telescopes built to certain specifications are pointed at certain places at certain times, they will detect certain wavelengths of light that are rather different from what they detect when pointed in most directions at most times. In other words, they will detect something different than empty space with the occasional distant star. The telescopes could easily be computer controlled and programmed to turn on a light if Neptune is there. We could then look at the light and watch it turn on. That light turning on is part of the “content” of the Neptune idea – if it didn’t turn on we would have a criticism of Neptune. This is something other than your experience of seeing Neptune. (But, as always when dealing with reality, some kind of perception has to be involved somewhere.)
  9. This is a false alternative where you assume as a premise that there are only two possibilities. If you don't accept X, your alternative is Y. Never mind that I said I choose Z. That's not a reasonable way to respond to someone trying to explain a third alternative, someone who preemptively wrote posts about false dichotomies and package deals precisely to avoid getting replies like this. This is a standard fallacy. One thing you're ignoring is that there are always very large numbers of correlations and you think most of them do not imply causation. For example, does this correlation imply causation? http://www.curi.us/1436-aspergers-syndrome There's also a very large number of correlations just involving pirates.
  10. You don't have to know about everything. You do need rational knowledge relevant to your life, actions, choices, in order to live rationally. To live without that would be a flaw. (I'm not really sure what you're advocating though.)
  11. Here are OPAR quotes. It's also in http://www.peikoff.com/courses_and_lectures/philosophy-of-objectivism/
  12. Hey it looks like my title got cut off (should end with "Critical Rationalism Both Made"). I can't seem to edit it. Can any admin fix it? If it has to be short, just "Epistemology Without Weights and an Objectivist Mistake" is better than the current cut off version.
  13. Objectivists accuse Popperians of being skeptics. Popperians accuse Objectivists of being infallibilists. Actually, both philosophies are valuable and largely compatible. I present here some integrating ideas and then a mistake that both philosophies share. Knowledge is contextual, absolute, certain, conclusive and progressive. The standard of knowledge is conclusiveness not infallibility, perfection or omniscience. Certain means we should act on it instead of hesitating. We should follow its implications and use it, rather than sitting around doubting, wondering, scared it might be wrong. Certain also means that it is knowledge, as opposed to non-knowledge; it denies skepticism. Absolute means no contradictions, compromises or exceptions are allowed. Contextual means that knowledge must be considered in context. A good idea in one context may not be a good idea when transplanted into another context. No knowledge could hold up against arbitrary context switches and context dropping. Further, knowledge is problem oriented. Knowledge needs some problem(s) or question(s) for context, which it addresses or solves. Knowledge has to be knowledge about something, with some purpose. This implies: if you have an answer to a question, and then in the future you learn more, the old answer still answers the old question. It's still knowledge in its original, intended context. Consider blood types. People wanted to know which blood transfusions were safe (among other questions) and they created some knowledge of A, B, AB and O blood types. Later they found out more. Actually there is A+, A-, B+, B-, AB+, AB-, O+ and O-. It was proper to act on the earlier knowledge in its context. It would not be proper to act on it today; now we know that some B type blood is incompatible with some other B type blood. Today's superior knowledge of blood types is also contextual. Maybe there will be a new medical breakthrough next year. 
But it's still knowledge in today's context, and it's proper to act on it. One thing to learn here is that a false idea can be knowledge. The idea that all B type blood is compatible is contextual knowledge. It was always false, as a matter of fact, and the mistake got some people killed. Yet it was still knowledge. How can that be? Perfection is not the standard of knowledge. And not all false ideas are equally good. What matters is the early idea about blood types had value, it had useful information, it helped make many correct decisions, and no better idea was available at the time. That value never goes away even when we learn about a mistake. That original value is still knowledge, considered contextually, even though the idea as a whole is now known to be false. Conclusive means the current context only allows for one rational conclusion. This conclusion is not infallible, but it's the only reasonable option available. All the alternative ideas have known flaws; they are refuted. There's only one idea left which is not refuted, which could be true, is true as far as we know (no known flaws), and which we should therefore accept. And that is knowledge. None of this contradicts the progressive character of knowledge. Our knowledge is not frozen and final. We can learn more and better – without limit. We can keep identifying and correcting errors in our ideas and thereby achieve better and better knowledge. (One way knowledge can be better is that it is correct in more contexts and successfully addresses more problems and questions.) The Mistake Peikoff says that certainty (meaning conclusive knowledge) is when you get to the point that nothing else is possible. He means that, in the current context, there are no other options. There's just one option, and we should accept it. All the other ideas have something wrong with them, they can't be accepted. This is fine. 
Peikoff also says that before you have certainty you have a different situation where there are multiple competing ideas. Fine. And that's not certainty, that's not conclusive knowledge, it's a precursor stage where you're considering the ideas. Fine. But then Peikoff makes what I think is an important mistake. He says that if you don't have knowledge or certainty, you can still judge by the weight of the evidence. This is a standard view held by many non-Objectivists too. I think this is too compromising. I think the choices are knowledge or irrationality. We need knowledge; nothing less will suffice. The weight of the evidence is no good. Either you have knowledge or you don't. If it's not knowledge, it's not worth anything. You need to come up with a good idea – no compromises, no contradictions, no known problems – and use that. If you can't or won't do that, all you have left is the irrationality of acting on and believing arbitrary non-knowledge. I think we can always act on knowledge without contradictions. Knowledge is always possible to man. Not all knowledge instantly, but enough knowledge to act, in time to act. We may not know everything – but we don't need to. We can always know enough to continue life rationally. Living and acting by reason and knowledge is always possible. (How can we always do this? That will be the subject of another essay. I'm not including any summary or hints because I think it's too confusing and misleading without a full explanation.) Knowledge doesn't allow contradictions. Suppose you're considering two ideas that contradict each other. And you don't have a conclusive answer, you don't have knowledge of which is right. Then using or believing either one is irrational. No "weight of the evidence" or anything else can change this. Don't pick a side when you know there is a contradiction but have not rationally resolved it. Resolve it; create knowledge; learn; think; figure it out. 
Neither idea being considered is good enough to address the contradiction or refute the other idea – so you know they are both flawed. Don't hope or pray that acting on a known-to-be-flawed idea will work out anyway. Irrationality doesn't work. That's not good enough. If you discover a contradiction, you should resolve it rationally. If you fail at that – fail at the use of reason – then that's bad, that's a disaster, that's not OK. Karl Popper made the same mistake in a different form. He said that we critically analyze competing ideas and the one that best survives criticism should be acted on. Again this is too compromising. Either exactly one idea survives criticism, or else there is still a contradiction. "Best survives criticism", and "weight of the evidence", are irrational ways of arbitrarily elevating one flawed idea over another, instead of using reason to come up with a correct idea. (For some further discussion about weighing ideas, see also the choices chapter of The Beginning of Infinity by David Deutsch.)
  14. No you should not assume that correlation implies causation!! Or even hints at it. It doesn't. To think well, you have to come up with explanations about what is going on.
  15. You're being ambiguous about whether induction refers to any type of way of getting general ideas whatsoever (removing the substance to evade criticism), or only certain types such as generalizing from a finite number of observations (which is refuted).
  16. DonAthos, All right. I'm doing my best to understand, but I don't believe I'm there yet, so my replies might not be completely on point (or maybe even at all). I'll ask your patience. No worries. It is difficult to understand complicated philosophical ideas. Culture clash is difficult too. We have different perspectives. Thanks for trying. Actually this kind of humble attitude and effort to learn is just the sort of thing Popperians appreciate. Popper strongly recommended it. Would it be consistent with the approach we're discussing to say that I as yet have no basis for claiming that the murderer was "Col. Mustard, with the candlestick, in the library." Popperians don't approach things in terms of having a (positive, supporting) "basis" for anything. But given what you say, you make a good guess at what happened. IRL I'd want to investigate more because murder accusations are very serious. But basically I agree with you: I haven't got any criticisms of suspecting Mustard, and I do have criticisms of rival ideas. What rival ideas? Stuff like proclaiming "I don't know, how can anyone know, why are you so certain all the time?" or "Anything could be true, let's investigate 10 arbitrary things" or "I don't want the responsibility of making a judgment, let's ask a bunch of other people their opinion to share the responsibility". Those would all be awful. This may sound convoluted to you. However, if you get used to it, you may find the other (positive) way is what seems convoluted. How does one decide between approaches objectively? I think what really matters is that this approach works at all, while the positive approach doesn't. (Because of serious flaws. There are a lot. Example: being unable to address the issue of actually defining what things support what other things, and how much, for all things. Or put another way: what constitutes how much of a basis for what, for all cases? And what exactly does having more basis do/mean anyway?) 
That, instead, we should go around to every other detective who can swear to knowledge of some suspect, weapon, or room, so that we can eliminate other possible scenarios/ideas ("criticize") until we are left with only one? It sounds to me like you don't think we should go around and check other possibilities. In other words ... you see something wrong with doing so. In other words, you have a criticism of doing that? And what about something like a geometrical proof? Are the arguments of a proof for, I don't know... the Pythagorean theorem, insufficient to demonstrate the truth of that theorem? If I were presented the smaller two sides of a right triangle and asked to solve for the hypotenuse, should my preferred method be to guess what the answer might be, and then start ruling out numbers one at a time (in some unspecified way)? Rule out all other numbers/possibilities because they would contradict your math knowledge and you don't have any criticisms of your (relevant) math knowledge. (though I don't know how Objectivism is soft, or compromises, with respect to contradiction, and I would like to see that demonstrated). I'm currently writing an essay which covers this, so let's just wait for me to finish that. So obviously you're being careful in saying that "the remaining idea may be true" as opposed to "the remaining idea is true." But what do you think this difference amounts to, practically? Right, that phrasing was careful. The difference amounts to being careful not to claim infallibility or omniscience, even ambiguously. Popperians are careful with that, it's our way, our emphasis. Objectivist epistemology always emphasizes that skepticism is false, and uses terminology and phrasings that are good at that. Popperians emphasize the other way -- we're fallible, error is common, people who think they have the truth are often mistaken, feeling sure and confident does not mean you are likely to be right, etc... 
Because of the clash of these choices of emphasis, many people on both sides think there is a much larger gap between the two epistemologies than there actually is. (So the result is Popperians frequently accuse Objectivists of infallibilism, and Objectivists frequently accuse Popperians of skepticism, but neither claim is accurate. At least the accusations are not accurate about the better people on each side. Rand is not an infallibilist. I think some of the less good Objectivists are infallibilist sometimes. Maybe some less good Popperians are skeptics, I'm not really sure about that, but I do have other criticisms of them.) If you're worried about acting in life, there isn't any difference. To the best of my knowledge, it is true. I will use it and act on it. I think saying it is "true" is defensible, because infallibility shouldn't be the standard of knowledge claims or truth claims. Truth is too useful and common a word to use to refer to infallibility. When I want to speak about infallible truth I use phrases like "perfect truth", "final truth", "infallible truth", "The Truth" with caps, etc... You can qualify "truth" if you want infallibility, and use the fallible meaning as the default. (However it depends on your audience, some people are confused about this.) If I were to conclude that "it is true that I am typing this message on a keyboard," and you were to correct me, saying "it may be true that you're typing this message on a keyboard," what would that correction signify? I think your phrasing is ok, as long as you understand that there are some possible ways you could turn out to be mistaken and have to reconsider. (In other words, it’s a fallible truth claim. Or a contextual claim that doesn’t have omniscience as the standard, as Objectivism would call it). I think Rand would understand this, it’s no problem. I think a lot of people wouldn’t understand it very well, so you would want to be cautious with some audiences. 
Do you think there's more here to explore? Do you believe that, even on the issue of what constitutes a "guess" or "positive" or "negative," that you have no tools other than making guesses and then criticizing them? On the subject of "guessing," are there any further distinctions you would make to describe your process? First, there is much more to explore. Popper wrote a bunch of long books! There's way more to say than I've posted. And he didn't know everything. For the issue of how to guess, specifically, Popper did not say a lot. I think Rand's ideas about measurement omission and concept formation offer some help here. And there are many other ideas which offer some help here. Popper is focussed more on a higher level of abstraction! Another good example of useful information about how to guess is scientific method. A lot is known about how to come up with good hypotheses, what sort of approaches work well in science. As long as you treat these as fallible guidelines they are valuable. Different ways of guessing are better and worse. It's an important issue to learn about and improve one's methods. But whatever you come up with is guidelines, not a required method. There's always some scope for some imagination, creativity, and varied methods of coming up with ideas. What Popper says is things like: induction is a method that doesn't work, anything with authority doesn't work (and there's a lot of authority hidden in many places, some of which he points out), and no matter what methods you use it doesn't make the results true or probably true. It doesn't give them support or status or authority or justification. You have to evaluate them by whether you see anything wrong with them -- have a criticism -- not by their methods of creation. Once an idea is created and suggested, it must be judged on its content and not its source (method of creation being an issue of the source of the idea). 
Judging ideas by their source instead of content always actually means going by authority, whether people admit it or not. And no mixed approach is any good either (mixed like judge partly on the merits of the content, and partly on source). As long as an "educated guess" is not deemed to have any special authority, and is judged on its merits just the same as any other idea, it's ok. You may well be right that your idea is pretty good, and that the wisdom of your method of creating it helped out. But your guessing methods don't ensure the idea is good, they don't provide anything solid. The ultimate test is criticism. PS if you’re looking for more info about this, you might find the discussions here interesting: http://rebirthofreason.com/Forum/Dissent/
  17. Reading some books from a philosopher, and understanding them, are very different. Is Popper-1 true or false? If false, why? Is attacking Popper-2 a good reply to Popper-1? Since we both agree that Popper-2 is false, why are you so interested in it?
  18. So basically you reject Popper-2, as defined by you, and have nothing to say about Popper-1 as defined by me (who has studied Popper extensively, has had discussions with many of the best Popperians, etc). What's the point? Even if Popper was wrong and meant Popper-2, Popper-1, the thing I defined, could still be right and valuable. Since we both seem to agree Popper-1 is a better set of ideas than Popper-2, why don't you want to talk about it?
  19. "Conjectural knowledge" means non-omniscient knowledge. Objectivism uses the term "certain knowledge" and it also is non-omniscient knowledge. You tell me there is a big difference between the two. What is it?
  20. Regarding David Stove: http://www.the-rathouse.com/AnythingGoes.html
  21. BTW there are many stories in the history of science, and if you research them they always turn out to be along Popperian lines. Einstein, Newton, Kepler, whoever. A notable one is Mendel. http://en.wikipedia.org/wiki/Gregor_Mendel#Controversy In short, Mendel doctored his evidence. How could someone who does the experiment wrong possibly make a great discovery? He wasn't learning from the experiment. He already had some ideas about genes.
  22. Not using evidence to support, and not using evidence, are different things. Evidence can be used in a critical role. If something contradicts the evidence, that is a problem with it. This is one of the package deals of standard epistemology (packaging evidence with support). Some of these things are so prevalent that it's hard to communicate anything contrary to them and be understood. I don't know a lot about the history of Darwin's discovery offhand. First of all, I found an interesting quote by Popper in Objective Knowledge, p. 257. But OK. Charles Darwin's grandfather had an idea along the lines of evolution. He did not induce this from observations at the Galapagos. Those came much later to test and potentially refute the idea of evolution. http://en.wikipedia.org/wiki/Erasmus_Darwin http://anthro.palomar.edu/evolve/evolve_2.htm This second essay says the idea of evolution was invented long before Charles Darwin was born. Rather -- and this isn't just me but it's basically what the essay says -- he did research to test whether it was right or not. So one thing going on is idea first, evidence second. This is a major Popperian point which is contrary to inductive concepts of learning where data comes first (including Objectivism which has percepts before concepts). One reason ideas must always come before evidence is that there are infinitely many ways to (logically possibly) interpret evidence. For evidence to be useful, it must be interpreted with ideas about what is important and what is not important. Popper dramatized this in a lecture by telling the audience, "Observe!" and waiting. They were confused, they didn't know what to observe. You have to have ideas about what to observe before you can usefully observe. You also need some sort of problem situation that helps determine which things are relevant. A problem situation is a term for a context with an extra emphasis on it having some problems one is interested in addressing. 
If you have some problems in mind, you can look for ways some evidence may be relevant to them. Without any problems to use evidence to help with, evidence is not useful. The account of what happened at the link is pretty ambiguous and this is no surprise. The author isn't trying to provide evidence about the particular question we're interested in. So his evidence isn't too good for it. It doesn't clarify the key points, which are relevant to us but not to him. But you can still get a general idea of what happened. Darwin saw many many many things. He focused selectively on only a few particular things, such as finches. Why those? Because they posed some problems. What does it mean that they posed some problems? It means there was some incompatibility between them and some pre-existing ideas on the topic. That's why they stood out to Darwin: they had relevance to some issues he was already interested in. And that relevance is critical: it posed problems for some ideas, such as the idea that any given species (like "swan" or "finch") is just one type of thing, the same everywhere. By helping to criticize some common sense ideas, Darwin's observations helped make progress. Another interesting note: In other words, Darwin's evidence was inadequate to rule out mystical-religious views on the topic. It had limited power. It took critical, philosophical thinking to address those views (they have flaws, but incompatibility with the evidence is a flaw they can evade, so other types of criticism are needed). BTW the note there that later research proved Darwin correct is nonsense. No matter what research you do, mystic-religious views can be designed to be compatible with it, and have to be criticized as bad philosophy. OK highlights: Darwin was interested in certain problems and ideas first. Then he selectively gathered some evidence relevant to them. And interpreted the evidence according to his ideas and context. 
And used it to help criticize and rule out some ideas (like that members of one species in different places are the same). This helped open the door for more bold conjectures to replace the refuted ideas.
  23. Darwin used the critical method. You're only assuming otherwise because you disagree about epistemology. We each interpret it according to our epistemology. It's not evidence against either one. Criticism does use evidence and argument, I don't know why you're trying to throw them out.
  24. Stuff like this really convinces me not to look at the book. This is ridiculous. Whether you agree with what Popper wrote about scientific discovery, or not, he still wrote it! Taking up another issue from the quote: If someone would tell me the difference between conjectural knowledge and objectivist knowledge, that'd be great. Objectivism says that knowledge is fallible and contextual (omniscience is not the standard of knowledge). Popper's point with the qualifier "conjectural" is that it is fallible (and he also knows it is contextual). So why so much complaining about Popper's approach to knowledge? It doesn't look so terribly incompatible to me.
  25. Which concept is stolen? Which higher level concept uses but denies which lower level concept? ITOE does not even attempt to answer Popper's criticisms of positive approaches, refute Popper's negative approach, or solve the problem of induction (or related problems like how positive support could possibly work). We must be miscommunicating somehow.