Objectivism Online Forum

ReasonFirst

Regulars
  • Posts

    34
  • Joined

  • Last visited

Previous Fields

  • Relationship status
    No Answer
  • State (US/Canadian)
    Not Specified
  • Country
    United States
  • Copyright
    Copyrighted

Recent Profile Visitors

1134 profile views

ReasonFirst's Achievements

Junior Member

Junior Member (3/7)

0

Reputation

  1. @DavidOdden I agree with this completely. Whenever it comes to safety, the best we can be is contextually certain of safety in whatever situation we are in. But this is also why I would say we should not "test" our contextual certainty of safety unnecessarily. If I play with a deadly weapon that I am contextually certain is safe, that would be an unnecessary action that could have devastating consequences if something goes wrong. It would be different from getting in a car or flying in an airplane, even though the best I can be is contextually certain of safety in those situations as well, because those are actions I take to pursue my values. So it's not that I forgot that existence unavoidably comes with some finite risk; I just want to take rational risks, which exclude playing with weapons. Maybe I am thinking of it as an unnecessary negligible risk that could come with an infinite cost (my life). I think that may be the reasoning behind the gun safety hyperbole. It's based on the very general fact that humans can only be contextually certain of safety and humans are fallible, so deadly weapons shouldn't be played with. It is almost like a principle. If someone wants to play with something, they should play with toys. I agree with you that this should not be interpreted to mean that guns are an "absolute threat." And in a situation in which I am enslaved through force and I am presented with a bizarre choice to gain my freedom which involves a deadly weapon, I think I would be too suspicious that something is wrong to go through with it. Like you stated, it could be "contextually rational to find the gun to be very risky." I am sure that there are plenty of people who would go through with it, though. I just do not think I would. I also didn't know kombucha was poisonous; I thought it was just a normal drink.
  2. @DavidOdden Ok so I think we are all in agreement that the actions of the slave cannot be morally judged. Now I would like to turn all attention to whether or not the slave can still be judged as irrational. You mentioned "what would you do in the situation?" I would never play with a deadly weapon in real life because I can only be contextually certain that it is safe. I can never be absolutely certain that it is safe. I would say that being in such circumstances and then being mandated to do something I would never do with a weapon in order to gain freedom is enough to make me uncertain about what the master's true intentions are, and even about the safety of the weapon he gives me to do the act. So I would not do it. And that's why I would also not say that the kombucha scenario is equally applicable. One scenario involves playing with a deadly weapon and the other involves having a drink. Is it irrational of me to come to this conclusion?
  3. Ok so I read a quote on ARL that force "paralyzes a man's judgment" and "renders him morally impotent." So I was wondering if a VICTIM of force can be properly judged as irrational, and therefore immoral, in a specific situation and also in general. Let's say that someone is enslaved by force but that the victim doesn't have to worry about being killed, only enslaved (and maybe eventually dying from hard labor). Freedom is a huge value of his. And let's say the master tells the slave he will set the slave free if the slave takes an unloaded gun, points it at his own head, and pulls the trigger. Let's also say the slave is allowed to check to see if it is loaded or not. The slave would never partake in such an activity if he were free to exercise his own judgment. Can we judge the slave as irrational, and therefore immoral, if he cannot make a decision or if he refuses? Because by not making a decision or refusing, he is not pursuing his freedom, which is a huge rational value. Can we say that his judgment is too paralyzed for us to hold him to certain standards of rationality? The situation is bizarre, though, and I think that could even be evidence in favor of not trusting the master. And in general, can a person be judged as irrational or immoral if they choose not to pursue a value if it means doing something that goes against their better judgment while they are under threat of force?
  4. @Boydstun I apologize if I used unclear verbiage. I don't think LP described error as metaphysical and I did not intend to convey that. He described the POSSIBILITY of error as metaphysical. And it just means that a man has the capacity to make a mistake under certain circumstances due merely to his own nature. Like what MisterSwig stated. What I find interesting is that the metaphysical possibility of error is a valid piece of evidence in favor of someone having made a mistake if that someone is not methodologically conscious and has tried to obtain knowledge. In other words, if somebody does NOT know a proper epistemology, they can actually validly claim that it is possible they made a mistake after obtaining knowledge because of the human capacity to make a mistake. It would not be arbitrary to make that claim in that context. That claim only becomes arbitrary when somebody knows a proper epistemology and obtains certainty in the way described by LP.
  5. Ok, thanks for those Peikoff references. That does make sense how certainty is limited to each proposition. I also wanted to respond to what you mentioned about conjuring up "fancy scenarios" about something like this. And I think this may also clarify how this is relevant to my OP on this thread (about gambling on a certain proposition). My friend and I were having a discussion about certainty and he was saying that the only way you can prove that you're truly certain about a proposition is if you're willing to bet your life on it. That idea led him to propose the gun scenario that we are currently discussing. He said that after doing all your due diligence, if you truly know that you're certain, you are rationally obligated to point the gun at your head and pull the trigger. If you refuse, he argued, it's because you are guilty of harboring arbitrary doubt. Because if you were certain, you would put that certainty to the test. Because I'm not absolutely certain, I think of that as a gamble and I would refuse to do it. Let's take a fantastic scenario, for example: let's say you do all your due diligence and learn all the facts about how to render a gun safe, but during the moments that you point it at yourself someone teleports a bullet into the chamber. Or it doesn't have to be done by teleportation; maybe someone figures out how to convert energy into matter and localizes that matter in the chamber. These are a couple of arbitrary scenarios, but they still do take advantage of the fact that a gun can discharge a projectile at some point in time under the right circumstances, like you stated. They take advantage of that nature that a gun has and other objects don't. It just has to get in the chamber; it doesn't necessarily have to even be shaped like a bullet. So I can't have certainty about that, and I refuse to test that out, because if I'm wrong I would find out by getting high velocity lead poisoning.
And the gun could even have other enhancements that I failed to identify. There was another example that I think may also be related to this idea, which was discussed at length on this forum: the arbitrary assertion that a whole bunch of invisible gremlins are having a party on the far side of the moon: The Gremlin Convention. We know that the Gremlin Convention assertion is an arbitrary assertion, but would you bet your life that Gremlins don't exist? I wouldn't. And if you say "no I wouldn't," is that because you're harboring arbitrary doubt? Like if someone came to you and said do you want to bet your life that Gremlins don't exist, and you take the bet, and he ends up being the first person in history to bring forth evidence of the existence of Gremlins? You're screwed in that case. I wouldn't take that bet in principle, because my life would be at stake and I'm not absolutely certain, I'm only contextually certain. But am I guilty of being arbitrary by refusing to take the bet, or am I just saying I refuse to put my contextual certainty to the test such that my life depends on it? Does that amount to an arbitrary assertion? Or is it also possible that I'm fearing the unknown?
  6. Ok so just to clarify something about omniscience, let's say that the following situations apply: 1) I am contextually certain that a given gun is safe. 2) I am contextually certain that a given toy gun is safe. 3) I am contextually certain that a given book is safe. 4) I am contextually certain that a given cell phone is safe. Let's say that I did all the due diligence necessary to arrive at contextual certainty that each of the items mentioned above is safe. Am I then rationally obligated to handle all of these items in the same manner? Can a rational argument be made for Item #1 that, even though I am contextually certain that it's safe, I still know that it is a weapon? And I'm aware that I'm not omniscient, so I don't want to test that contextual certainty unnecessarily by treating it as if it were a toy or an everyday normal item. Because if it is dangerous, the consequences could be disastrous. Is this argument rational? It seems rational to me, but it also seems like someone could say that this argument is entertaining the arbitrary idea that Item #1 is dangerous, since it is contextually certain that it is safe. We're not even supposed to think that a safe item could be dangerous if it is contextually certain that it is safe. Because, according to Objectivism, it is irrational to make an arbitrary assertion. The arbitrary must be totally disqualified from cognition. So does this argument make an arbitrary assertion?
  7. @Boydstun Regarding my first paragraph, I was mostly thinking about LP's lectures about certainty and fallibility. I think their main points are: 1) There is a metaphysical possibility of error that is always there, but that does not mean that we are condemned to being uncertain about everything. 2) Metaphysical possibility of error is more general and is different from epistemological possibility of error. 3) Epistemological possibility of error pertains to someone actually having specific evidence to think they did something wrong when they tried to obtain certain knowledge. 4) We can eliminate epistemological uncertainty by applying a proper methodology to obtain knowledge, and when we do eliminate epistemological uncertainty, claims that we did something wrong become arbitrary (because such claims would have no specific evidence to point to in such a situation, kind of like what MisterSwig stated). @Eiuol I think your first sentence in your reply almost hits at exactly what I meant. I wasn't necessarily referring to the nature of the method that you use to obtain knowledge. I was referring more to someone's ability to apply that method. I was basically wondering what if there was something about the nature of a being that made them prone to doing something wrong when engaging in specific activities. Almost like a mental kryptonite. But that same nature enables that being to do other activities completely infallibly. And that led me to wonder: if a being like that does engage with an epistemological domain that he is error-prone in just because of his very nature, can he ever obtain certainty in that domain?
  8. I have a hypothetical question that I am thinking about and I wanted to see what other people think about it. We know that human beings are not omniscient AND that human beings are fallible. With regard to fallibility, I think Objectivism’s position is that there exists a general possibility of error that can impede the human ability to acquire knowledge that is certain. I think I read somewhere that Objectivism holds that the possibility of error is abstract and metaphysical, and specific errors are more concrete and epistemological. Is this correct? Skeptics exploit the metaphysical possibility of error to claim that humans can never know anything for certain. And I think Objectivism’s answer to that claim is that we can’t get rid of the general metaphysical possibility of error, but we don’t have to, because we can apply Objectivist epistemology to acquire knowledge with epistemological certainty in a specific context. So the metaphysical possibility of error is very abstract and it applies to ALL ERRORS that humans can possibly commit. And the certainty that Objectivism claims we can obtain is an epistemological certainty that exists in specific situations. So my hypothetical what-if question about fallibility is: what would the consequences be to our ability to obtain certainty if the general metaphysical possibility of error wasn’t so general, and it only applied to certain specific mistakes but not other mistakes? For example, let’s consider three specific activities humans do in normal life: driving a car, tying shoes, and playing chess. Each one of these activities has its own specific, concrete errors that can be committed. What if, for example, there was something specific about the activity of driving that makes you commit driving errors, and you are infallible when you are doing everything else, like tying shoes, playing chess, etc.? Let’s say you are capable of making some mistakes, like driving errors, but not other mistakes.
If this scenario were real, could you ever know with certainty that you are driving a car properly, without committing any traffic infractions or other driving errors? In such a scenario, even if you applied Objectivist epistemology to determine that you are driving correctly, just knowing that the activity of driving itself causes you to commit errors would qualify as at least one specific piece of evidence that you are doing something wrong. Am I right about this? So now the claim that you are making a mistake would not be arbitrary, since the activity of driving itself would give you reason to suspect that you are committing a driving error. I think the philosophic significance of this scenario is that it extends the possibility of error from the metaphysical layer of your worldview into the epistemological layer of your worldview and thereby destroys your ability to obtain certainty. So I am wondering: does fallibility have to be so general? What would the philosophic consequences be if it only applied to certain errors but not other errors?
  9. @Boydstun I think I'm inclined to agree with what you stated about fallibility/infallibility. Thanks for those links too. I'm still thinking about it though, so I welcome any other thoughts on the matter. I also wanted to pose a couple of additional scenarios that are more closely related to my OP regarding testing the validity of certainty. Let's say that someone dares me to playfully point a real gun at myself and pull the trigger in a situation in which I am certain the gun is not loaded. Gun safety rules state that we are ALWAYS supposed to treat a gun as if it is loaded. But is that rational according to Objectivist epistemology? If I refuse to take the dare because I'm worried something may go wrong, does that make me guilty of possessing arbitrary doubt or arbitrary uncertainty, since I am supposedly certain the gun is safe? If I am certain the gun is safe, I should have no problem taking the dare, right? Or let's say I see a package on the road and it looks harmless. But I refuse to go near it because I think it could be a bomb planted by a Unabomber copycat or something. Let's say I acknowledge I have no evidence that the package is actually harmful, but because of the metaphysical possibility of it being harmful, I choose not to go near it. Am I in that situation guilty of possessing arbitrary doubt or arbitrary uncertainty? I'd like to think that in both of the foregoing situations there is a sufficient rational basis to avoid engaging in risky behavior, but I don't know. Objectivist epistemology takes a very hard-line position against arbitrary ideas having any legitimate cognitive status, and it seems like, in the situations I described, any idea that would steer someone toward a safe behavior would be arbitrary. Does Objectivist epistemology require clear evidence of danger to be present prior to the rational exercise of caution?
  10. Ok, yeah I'm pretty much in agreement with what David stated about omniscience, except I would also add that omniscient knowledge of the present is also impossible (not just the future and the past), because you would have to be able to observe the entire universe to have access to it (which is impossible). Regarding infallibility, I'm not really sure. David mentioned some examples of the limits of human perception, but I don't think they precisely fall under the topic of infallibility. Infallibility is the incapacity to make mistakes or to err even with conceptual knowledge, am I right about this? I'm just wondering if some conscious being can be so epistemologically skilled that he can't make a mistake. But I also think that Ayn Rand's idea of all consciousnesses being born "tabula rasa" or "blank slates" would imply that infallibility is equally as impossible as omniscience. My thinking is that when a being is born, he doesn't know anything at all, since he just started existing and therefore had no prior opportunity to acquire knowledge. That means that he also doesn't know Objectivist epistemology. Therefore, he is very vulnerable to erring or making mistakes in his thinking. We can observe this with kids as their minds develop. If someone doesn't have a proper epistemology, we can observe him making a ton of mistakes. Once someone learns a proper epistemology, he minimizes the number of mistakes he makes, but he is still vulnerable to making mistakes. I just want to make sure I am keeping straight what we know in principle and what applies just to human consciousnesses. I think I remember a lecture in which LP mentioned that Ayn Rand's Theory of Concepts applies to human minds, when he said that he had a discussion with Ayn Rand about some other way to form concepts, and she responded that if some other way comes up it can be explored then.
So it seems like at least the Theory of Concepts is something that is not true in principle, just true for human beings.
  11. @DavidOdden Interesting point. This is making me wonder about omniscience and infallibility. Would this apply to a person if they possessed omniscience and infallibility? Although, this question may not be valid, because I think omniscience and infallibility are impossible for a human being. Could you guys tell me: is omniscience impossible only for a human being, or is it impossible, period? What about infallibility: is it impossible only for a human being, or impossible, period?
  12. I have some questions about a situation that can arise in everyday life. People commonly "bet" or "gamble" with each other about a variety of propositions to show their certainty about those propositions. They may say "I'll bet you a million dollars that X is true" or "I would bet my life that Y is true." The idea behind this is that someone's willingness to enter a high-stakes "bet" (in which the stake is his life, which is the highest possible stake) that his proposition is true shows that he is certain about his proposition. I would argue that entering into a real high-stakes "bet" or a "gamble" (like one in which the stake is a life) about a proposition that is certainly true (like, for example, that a cup is on a table) should rationally never be done, for the following reasons: 1) A "bet" is by definition an uncertain game which has not concluded when it is entered into, so "betting" about a proposition that is already certainly true implies a contradiction at the outset of the "bet" (it's like playing a dice game and then "betting" after a die has already been rolled and a number has already been revealed). 2) If a proposition is already certainly true based on evidence and a counter-party is willing to enter a high-stakes "bet" against that proposition, I think that opens the situation up to arbitrary uncertainty, which is something that should not rationally be involved. What I mean by this is that if I have conclusive evidence that a proposition is true, and a counter-party is willing to "bet" against that proposition, it's reasonable to think that that counter-party is at least to some extent disconnected from reality, which means it's possible that that counter-party may not be in his right mind and is putting forth an arbitrary argument against that proposition. And that's a situation that should be avoided.
I know it could be possible that the counter-party in this hypothetical situation may simply be mistaken, or that he may not have evidence that someone else does have, but I would say it's still irrational to enter into a high-stakes bet (like one in which the stake is a life) with that counter-party, even if someone is certain their proposition is true, because the stakes are too high. Does anyone have a reaction they'd be willing to share to the two thoughts I described above? And more concretely, would you enter a real high-stakes "bet" (in which the stake is your life) about a proposition which is clearly certainly true? What about a low-stakes "bet": does that change your answer?
  13. @Doug Morris I think, under certain circumstances, that reason would qualify as evidence that something potentially dangerous is going on. I mean as long as you weren't on any drugs or hallucinating and you have every reason to believe that you should have seen it because you looked for it again very quickly and you didn't see it, then yes I would think that would count as evidence. @MisterSwig That is very interesting. I guess I was not distinguishing between knowledge itself and the study or theory of knowledge in my arguments. I agree with your statement about what epistemology is. But I was working off the assumption that epistemology ONLY involves using your five senses and rational inference therefrom. I was not including any element of "faith" or "revelation" in my argument. I guess I might have presumed that everyone else on this forum thought that way as well about epistemology. @Easy Truth Thanks for that link, I'll definitely check it out. I know I keep hammering at this but I was hoping you would help me understand specifically what you meant when you presented those two matrix claims and you placed the first one in a "permanently arbitrary" subcategory and the second one (which you mentioned was a variation of the first) in a "tentatively arbitrary" subcategory? I'm hoping to understand what your thought process was when you presented that second claim and what you meant by that second claim: "Everything we know COULD SIMPLY BE a simulation." What would you say is the difference between your second matrix claim and your first matrix claim? I know you placed that second claim together with other claims for which there is no evidence like the one about 9/11 being caused by the US. So in trying to understand what your thought process was, I was thinking about that 9/11 claim and I came up with this: Right now we have NO EVIDENCE that 9/11 was caused by the US but IF one day evidence for that emerges, we can rightfully entertain that possibility. 
And so, returning attention to your second matrix claim, following the same thought format, "Right now we have NO EVIDENCE that everything we know could simply be a simulation but IF one day evidence for that emerges, we can rightfully entertain that possibility." It seems like that second matrix claim leaves open the scenario of us one day obtaining evidence that we could be living in a simulation and then we would have to accept something like this: "we can no longer be certain that we live in the real world because NOW we have evidence that we could be in a simulation." But accepting that would put us in the same epistemological position as the first matrix claim you mentioned, which is a position that in your words is "unverifiable" and "to be permanently ignored" instead of in your words "Imaginable with no indication but verifiable (to be true or false) (given time)." In the 9/11 scenario, you could follow whatever evidence emerged and look for more evidence to verify it to be true or false. But how would the second matrix claim that you presented be verified to be true or false?
  14. @MisterSwig You could be a sort of deist of the simulated world. You would believe that if the world's simulated, then the Programmer created it but leaves it alone, so no changes to the simulated laws of nature. He doesn't interfere with anything. Once you accept the arbitrary, you might as well make the most of it. The Programmer also has a backup generator in case the power goes off and he has to keep the computers running. Agreed. Just want to say a couple of things about that. 1) Even if the Programmer or the creator of the simulation doesn't interfere after he creates the simulation, he still might have created it in the first place in some way that would eventually lead to the inhabitants of the simulation obtaining contradictory knowledge. What I mean is, even from the beginning, he could have defined humanoid inhabitants that possess all the abilities that we do and ONLY the abilities we do, but he could have defined one humanoid inhabitant who could walk on water or walk through walls or fly. So he doesn't have to interfere after the simulation's creation to destroy our epistemology, because we know simulations are programs, and at any time, even before a program is run, all of the natural or physical rules are set by the Programmer's choices or whims, i.e. no generalizations that we can make can be valid. 2) If you're a deist in the simulation and you believe that the Programmer leaves it alone and doesn't interfere, you are still injecting belief without evidence into your epistemology, so your epistemology is still destroyed, i.e. what you have is not epistemology at all, it's a belief system. I agree that this is arbitrary and this leads us to an infinitely regressive back-and-forth argument about us existing or not existing in a matrix/simulation. And I agree it should be rejected as arbitrary.
My biggest question though is can the simulation/matrix claim be something that belongs to a "tentatively arbitrary" subcategory or should it be something that forever belongs to the "permanently arbitrary" subcategory?
  15. @Doug Morris Pardon my ignorance, but why is this a claim about "epistemological possibility" and not simply a claim about the future? How is it different from "That plane must have crashed."? Well the claim “That plane must have crashed” is just a past tense claim instead of a claim about the future. And I would say that both of those claims are claims of “epistemological possibility.” The reason I brought up the two senses of possibility is because it is relevant when you’re trying to make a determination about where on the epistemological spectrum a claim belongs: arbitrary, possible, probable, certain. If you claim for example, “The plane can crash,” you can validly make that claim without having to provide any specific evidence of your own because that claim does not make any assertions about a particular plane in a particular set of circumstances. That claim only asserts that an entity has a potentiality. And we already have all the evidence we need to know that airplanes have the potential to crash. So claiming that an airplane has the capability to crash is the metaphysical sense of possibility. But this is very different from the epistemological sense of possibility in which you’re trying to advance a hypothesis about a particular situation. So your claim “That plane must have crashed” is an assertion about a PARTICULAR plane in a PARTICULAR SITUATION. And so are my examples “This plane is going to crash” or “This plane will crash.” Both of those claims are advancing a hypothesis about a particular airplane under a particular set of circumstances (in a particular situation). And these claims CANNOT be validly made if the person who makes the claims doesn’t present specific evidence for them. 
You would have to present something specific about the plane that you are making a claim about that would cause or contribute to a crash, like that the specific plane in question was damaged, or that the specific plane in question encountered bad weather, or something else that is specific to the airplane in question. Otherwise, in the absence of specific evidence, those kinds of claims about specific entities have to be classified as arbitrary and thrown out. I was contemplating the two matrix claims that Easy Truth made and why he put each one into different subcategories of “arbitrary.” I couldn’t understand why the second claim belongs in his “tentative” category of arbitrary. He placed his first claim “Everything we know is simply a simulation” into a category of arbitrary that he stated is “unverifiable” and that should “be permanently ignored.” And I completely agree with this. But I don’t understand what makes it appropriate to place the second claim “Everything we know COULD SIMPLY BE a simulation” into a “tentative” subcategory of arbitrary. And that’s when I started asking myself: “What does Easy Truth mean by this second matrix claim? Is he expressing a claim about metaphysical possibility? Is he saying that we CAN create simulations of reality in general?” I was thinking that if that’s what he is saying, we already know at this point that that claim is true, because we know we can create simulations of reality, as I’ve already mentioned. But if his second claim is claiming “The specific reality that we exist in COULD ITSELF be a simulation,” I just don’t see how that differs at all from his first matrix claim, and I would say that both of the claims should be placed into his first subcategory of arbitrary: “Unverifiable” and “To be permanently ignored.” If we assume a simulation that perfectly fakes vision for us, why not assume that it perfectly fakes all of our other senses, including our sense of touch and our kinesthetic sense?
That seemed to be the case in The Matrix. Rather than go down a rabbit hole debating what the simulation does or does not do, we should just reject the whole idea as arbitrary. Yes, we can go ahead and assume that for the purposes of being clear about what exactly the simulation claims are claiming. But I would say that it is a mistake on our part or anybody's part to make an assumption like that, not just because that assumption is arbitrary, but also because it implies that the senses or our perceptions are not good enough to differentiate between "fake vision" and "real vision," along with all of our other senses. And I would base my argument on a course lectured by Binswanger called "The Foundations of Knowledge." But I agree with you that debating against an arbitrary claim does make us go down a rabbit hole, and I would agree that we should reject the whole idea as arbitrary. You are correct that they have been spotted in Australia, but I think that that example still does have some value, because it lends itself well to understanding Easy Truth's two subcategories of "arbitrary": permanently arbitrary and tentatively arbitrary (I'm paraphrasing his categories a little). He distinguished some arbitrary claims that would become true if sufficient evidence in favor of them ever emerged from arbitrary claims that would remain arbitrary forever. I think the "black swan" example is a good example of a claim that in a past context of knowledge had no evidence and then in a future context did have evidence, and so it's a great example of an arbitrary claim that became true over time.