Objectivism Online Forum

Question About the Epistemology of "Betting" or "Gambling" on a Certainly True Proposition


I have some questions about a situation that can arise in normal everyday life.  People commonly "bet" or "gamble" with each other about a variety of propositions to show their certainty about those propositions.  They may say "I'll bet you a million dollars that X is true" or "I would bet my life that Y is true."  The idea behind this is that telling someone you would enter a high-stakes "bet" (in which the stake is your life, the highest possible stake) that your proposition is true shows that you are certain of that proposition.

I would argue that entering into a real high-stakes "bet" or "gamble" (like one in which the stake is a life) on a proposition that is certainly true (for example, that a cup is on a table) should rationally never be done, for the following reasons:

1)  A "bet" is by definition is an uncertain game which has not concluded when it is entered into, so "betting" about a proposition that is already certainly true implies a contradiction at the outset of the "bet" (it's like playing a dice game and then "betting" after a dice has already been rolled and a number has already been revealed).

2) If a proposition is already certainly true based on evidence and a counter-party is willing to enter a high-stakes "bet" against it, I think that opens the situation up to arbitrary uncertainty, which is something that should not rationally be involved.  What I mean is that if I have conclusive evidence that a proposition is true, and a counter-party is willing to "bet" against it, it's reasonable to think that the counter-party is at least to some extent disconnected from reality, which means he may not be in his right mind and may be putting forth an arbitrary argument against the proposition.  And that's a situation that should be avoided.

I know the counter-party in this hypothetical situation may simply be mistaken, or may lack evidence that someone else has, but I would say it's still irrational to enter into a high-stakes bet (like one in which the stake is a life) with that counter-party, even if you are certain your proposition is true, because the stakes are too high.  Does anyone have a reaction they'd be willing to share to the two thoughts described above?  And more concretely, would you enter a real high-stakes "bet" (in which the stake is your life) on a proposition that is clearly certainly true?  What about a low-stakes "bet": does that change your answer?

Edited by ReasonFirst

10 minutes ago, ReasonFirst said:

I know the counter-party in this hypothetical situation may simply be mistaken, or may lack evidence that someone else has, but I would say it's still irrational to enter into a high-stakes bet (like one in which the stake is a life) with that counter-party, even if you are certain your proposition is true, because the stakes are too high.

Sure, they could be mistaken, but the issue is why they are so confident in their knowledge. If it's about the high stakes, meaning "I can't afford to be wrong," then the heightened emotions are causing the irrationality. At that point anything goes, and if you have confidence in them, then you are in trouble. Otherwise, if they know they are being arbitrary, I don't know what the motive would be, other than maybe that they want to hurt or misguide you.


If there is a dispute about who won the bet, how will it be adjudicated?

Is it possible that an irrational person who thinks they've won will try to take those winnings by force?

Is it possible that someone has gone to a lot of trouble to trick you with a fantastic setup?  For a million dollars, they might consider it worth it.

***************

1 hour ago, ReasonFirst said:

They may say "I'll bet you a million dollars that X is true" or "I would bet my life that Y is true." 

Probably not meant literally.  When we were kids, a friend of my brother's would say "I'll bet you a nickel and a doughnut."  On one episode of classic Star Trek, a non-regular character said "I'll bet you credits to Navy beans we can make a dent in it."

***************

I once read that if you get an extremely good poker hand, you should fold, because "a gentleman never bets on a sure thing."


People use that phrase when they have some doubt, usually about the future. Or it's a way of expressing "I have no doubt about this, so it's not like I would actually lose the bet." You seem to be saying "well, it's not actually a bet if you are certain!" That's the point. Otherwise, you are just asking about the morality of gambling. Pretty much a nonissue, although it depends on why you gamble.

Either that, or how to deal with risk seems to be the real question. The answer to that depends on the values in question and how to compare them.

Edited by Eiuol

On 11/12/2022 at 1:52 PM, ReasonFirst said:

What I mean is that if I have conclusive evidence that a proposition is true, and a counter-party is willing to "bet" against it, it's reasonable to think that the counter-party is at least to some extent disconnected from reality, which means he may not be in his right mind and may be putting forth an arbitrary argument against the proposition.

And the counter-party should be deprived of his money so he can do less harm to himself and others, and I can do more good.


I disagree with the claim that a bet is by definition an uncertain game when entered into, and that it is therefore contradictory to bet on a proposition that is “already certainly true”. Certainty is a relationship between facts, logic, and a conclusion. If two people do not share the same facts and logic, they need not reach the same conclusion. We may assume that rational people always follow the “same logic”, provided that we narrow the scope of logic to the law of non-contradiction, but in fact people impute to logic a lot of facts that have to be independently validated (and who actually does that?).

Abstract hypotheticals have no epistemological value (they lack a relation to existence), so let’s concretize the question with invented “facts”. Jane saw “Bill’s wallet” on Jane’s kitchen counter. Bill claims that his wallet is in his car, because he remembers putting it under the driver’s seat. The bet in question (proffered by Bill) is that Bill’s wallet is in his car. Bill does not have prior access to Jane’s mind, and vice versa. Jane implicitly holds a contradictory proposition (the bet is not about the “counter” claim, it is about the car location: Jane could also be wrong in her beliefs and win the bet). Jane and Bill both have good reasons to hold their positions, and they do not share those reasons. A bet, from the rational perspective, is either about knowledge-context or differences in rationality. Either I am betting that I know something material that you don’t know, or I am betting that you use irrational methods to reach your position.

We can refine the question, though, to say that an “almost certain bet” is one where both parties know that they have “almost identical knowledge contexts” and both adhere strictly to logic. I have spent my life looking for another person with a knowledge context that is almost identical to mine, and so far, I’ve come up empty.

The threat of arbitrary uncertainty is ubiquitous, and applies to everything, not just bets. It is a problem with contracts, as well. With contracts, there is a method of deciding who is right – the lawsuit. Most contractual disagreements are not resolved by the courts – you re-negotiate the agreement after the fact. Not being a bettor, I don’t know how people resolve surprise claims like “I didn’t mean the wallet I usually carry, I meant the other wallet that is my property”. Bets can in principle be submitted to a court of law, where there are well-defined procedures for dealing with the arbitrary. That is, you need an independent arbiter, and not just the refusal of a party to perform.

The rationality of entering into a bet should really be judged in terms of the benefit of being right and the detriment of being wrong. It’s not rational to induce a person to kill themselves if they foolishly bet that the second month is spelled “Febyuary”. I could cook up a scenario where the other party might actually gain a better epistemological framework from some bet, but I doubt that betting is actually a rational way to teach somebody a lesson.


@DavidOdden

Interesting point.

Quote

The threat of arbitrary uncertainty is ubiquitous, and applies to everything, not just bets. 

This is making me wonder about omniscience and infallibility.  Would this apply to a person if he possessed omniscience and infallibility?  Although this question may not be valid, because I think omniscience and infallibility are impossible for a human being.  Could you guys tell me: is omniscience impossible only for a human being, or is it impossible, period?  What about infallibility: is it impossible only for a human being, or impossible, period?


Omniscience is universally impossible in principle, insofar as the scope of “all” in all-knowing includes experiential knowledge of events that have not happened (you cannot experience a thing before it happens), or of events that preceded the existence of the particular consciousness. Infallibility on the other hand is meaningless (impossible for a different reason). I cannot hear microwave radiation, but that is not a failure, that is because of my nature (or, the nature of humans, or mammals). If you switch to omnipotence, that just leads to a different kind of incoherence. For example, humans can see light in a particular range, using their eyes, and can hear sound in a different range, with their ears. You can’t hear light or see sound, and you can’t digest light or sound either. Plants can “digest” light, but then we are metaphorically toying with the word digest. There are many things that humans are incapable of doing, including a whopping load of meaningless “things”. Omnipotence is also conceptually incoherent.

Our solution to the problem of certainty is to understand what it is. Certainty is contextual – a proposition is certain if all actual evidence in a knowledge context points to the conclusion and alternatives are also disproven. Arbitrary uncertainty is a fiat declaration that “one can imagine”, that is, reifying imagination into being a “fact”.


Another reason omniscience is impossible is that knowledge is gained by a process, and no one can process everything.

If we define infallibility as immunity from making mistakes, we have a different question from the one DavidOdden answered.  I still think the answer is no, but I'm not ready right now to give a good explanation.


Ok, yeah I'm pretty much in agreement with what David stated about omniscience, except I would also add that omniscient knowledge of the present is also impossible (not just the future and the past), because you would have to be able to observe the entire universe to have access to it (which is impossible).

Regarding infallibility, I'm not really sure.  David mentioned some examples of the limits of human perception, but I don't think they precisely fall under the topic of infallibility.  Infallibility is the incapacity to make mistakes or to err even with conceptual knowledge, am I right about this?  I'm just wondering: can some conscious being be so epistemologically skilled that he can't make a mistake?  But I also think that Ayn Rand's idea of all consciousnesses being born "tabula rasa," as "blank slates," would imply that infallibility is just as impossible as omniscience.  My thinking is that when a being is born, he doesn't know anything at all, since he just started existing and therefore had no prior opportunity to acquire knowledge.  That means he also doesn't know Objectivist epistemology.  Therefore, he is very vulnerable to erring or making mistakes in his thinking.  We can observe this with kids as their minds develop.  If someone doesn't have a proper epistemology, we can observe him making a ton of mistakes.  Once someone learns a proper epistemology, he minimizes the number of mistakes he makes, but he is still vulnerable to making them.

I just want to make sure I am keeping straight what we know in principle and what applies only to human consciousnesses.  I think I remember a lecture in which LP said that Ayn Rand's Theory of Concepts applies to human minds: he recounted a discussion with Ayn Rand about some other way to form concepts, and she responded that if some other way comes up, it can be explored then.  So it seems that at least the Theory of Concepts is not true in principle, just true for human beings.

Edited by ReasonFirst

ReasonFirst,

Descartes thought the only reason we humans err is that we let our will outrun our understanding. He and many others thought that God could not err, because they thought error would be an imperfection. That is foolishness, I say. Where there is no error, there is no intelligence. God was traditionally thought of as having a will (there was the choice to make the world and to make humans) and as having understanding, or intellect. Although Descartes would emphasize the extent of the divine will, whereas Leibniz would emphasize the extent of the divine understanding, all could agree that for God, Its will cannot outrun Its understanding. Its understanding, Its intellect, may be pure act, but it is not a process requiring time to obtain knowledge.

This idea of divine infallibility (and omniscience) in comparison to human fallibility (and partial ignorance) might be thought analogous to a real refrigerator and a perfect refrigerator, as in thermodynamics: the Second Law allows the perfect refrigerator to be compared to real refrigerators, but no real one can attain coincidence with the perfect one. I think that would be an inappropriate analogy. Although we can get better at avoiding errors (and I would say that the best outside help on that is elementary logic texts which include informal fallacies as well as formal ones; the former can be supplemented by the informal fallacies Rand formulated, or anyway rediscovered and renamed, such as the Stolen Concept Fallacy; see The Art of Reasoning), we would rationally expect to make errors even when proceeding with the greatest care and conformance to logic. We must not suppose it is possible to make no innocent errors, even as we get more skilled in avoiding them and even with the self-correcting methods of the hard sciences. That supposition would itself be an error.

For comparisons of human intelligence with other intelligence, I suggest comparing our cognition with the cognitive powers of the great apes, not with an imagined chimera such as God.

A Natural History of Human Thinking

Edited by Boydstun

@Boydstun I think I'm inclined to agree with what you stated about fallibility/infallibility.  Thanks for those links too.  I'm still thinking about it though so I welcome any other thoughts on the matter.

I also wanted to pose a couple of additional scenarios that are more closely related to my OP regarding testing the validity of certainty.  Let's say that someone dares me to playfully point a real gun at myself and pull the trigger in a situation in which I am certain the gun is not loaded.  Gun safety laws state that we are ALWAYS supposed to treat a gun as if it is loaded.  But is that rational according to Objectivist epistemology?  If I refuse to take the dare because I'm worried something may go wrong, does that make me guilty of possessing arbitrary doubt or arbitrary uncertainty since I am supposedly certain the gun is safe?  If I am certain the gun is safe, I should have no problem taking the dare right?

Or let's say I see a package on the road and it looks harmless.  But I refuse to go near it because I think it could be a bomb planted by a Unabomber copycat or something.  Let's say I acknowledge I have no evidence that the package is actually harmful but because of the metaphysical possibility of it being harmful, I choose not to go near it.  Am I in that situation guilty of possessing arbitrary doubt or arbitrary uncertainty?

I'd like to think that in both of the foregoing situations there is a sufficient rational basis to avoid engaging in risky behavior, but I don't know.  Objectivist epistemology takes a very hard-line position against arbitrary ideas having any legitimate cognitive status, and it seems as if, in the situations I described, any idea that would steer someone toward safe behavior would be arbitrary.  Does Objectivist epistemology require clear evidence of danger to be present prior to the rational exercise of caution?

Edited by ReasonFirst

First, there is no “gun safety law” saying that you must always treat a gun as if it is loaded. That is a mind-deadening slogan that shortcuts reason. It does not reflect the metaphysically given, as the laws of physics do, and it was not enacted by the legislature as a rule whose violation brings a fine or imprisonment. The most generous label that can be assigned to it is “best practice”.

It is like “the building code”, a set of locally-set rules that “must” be followed in building or repairing structures – let’s set aside the enforcement aspect (the legislature did pass a law mandating those practices). Those rules are generally sensible, and they encode knowledge that is not otherwise accessible to most people. For example, it is not practical for me to conduct the relevant research in order to be certain how deep I should dig to build a solid retaining wall where I live. I live near a jurisdictional boundary, so I know that it is somewhere between 12” here and 18” two blocks away, even though the weather is the same. In Duluth, the government says 60”. But this is not arbitrary government meddling; it reflects a fundamental difference between Seattle and Duluth (Duluth is colder). The thing that I can and should do, as a rational person, is determine why there is such a concern (details of the freeze-thaw cycle omitted here). The premise behind the local rule of thumb is that the ground doesn’t freeze very deep, certainly not more than 6”.

In considering alternatives (remember that certainty is based on there not being evidence to support alternatives, so you have to consider alternatives), I have to address questions like “what if the frost line changes to 12 inches?” (ergo a 24-inch trench). There is conceptual and empirical evidence showing that this is possible (in my lifetime), therefore I am not certain what the safe number is. My choice of action is thus not based on certainty; it is based on comparison of probability of propositions, and comparison of values. A deeper trench costs more in terms of material and labor, but there is potential benefit compared to the shallower trench. The consequences of being wrong about changing weather are less severe than being wrong about a gun still having a round in the chamber. Still, one can be certain that the gun is unloaded, although I personally cannot be certain, given my (limited) knowledge of guns. I’m not objecting to the premise that a person can have the knowledge that makes the proposition certain, I am objecting to the stipulation that “I am certain” – suggesting that the emotional feeling of “certainty” is evidence that the proposition has been logically validated. I’m just asking you to “check your premises”, which of course entails being aware of them. Why are you certain?

One of my premises is that I am not omniscient. I know just enough physics and chemistry to know that Vladimir Putin could enshroud the planet in a thick blanket of particulate matter that would substantially lower the temperature. I don’t know exactly how significant this event would be, and I also don’t know just how crazy Putin is or whether anyone can stop him. My decision not to dig a 5 foot trench just in case Putin causes massive nuclear winter is based on my admittedly slap-dash estimation that this is a highly unlikely outcome (his recent conduct in Ukraine has caused me to reconsider the logic of my estimate).

Since I am aware that I am not omniscient, I always have to evaluate conceptual reasons that might steer me away from an action that I favor. I rarely base choices of actions on certainty; I base them on comparative probability, focusing on things that I know that I don’t know. This includes a comparative view of value, such as whether another day’s worth of digging is a sufficient disvalue compared to the increased probability that my wall will remain stable in case an unlikely event is realized. Therefore, I may conclude that the added effort does not add sufficient value. The disvalue of blowing my brains out is infinitely high, so I must rely very heavily on my probability-of-outcome estimates.
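To make the comparative-probability method concrete, here is a minimal sketch in Python of the trench decision. All of the numbers (the freeze probability, the failure cost, the extra labor cost) are invented for illustration; only the method of weighing probability of outcomes against the values at stake is the point.

    # Expected-value comparison of two actions, using invented numbers.
    def expected_value(outcomes):
        """Sum of probability-weighted values over mutually exclusive outcomes."""
        return sum(p * v for p, v in outcomes)

    # Assumed for illustration: a deep freeze that undermines the shallow
    # trench has probability 0.1; wall failure costs 5000; the deeper
    # trench costs an extra 300 of labor regardless of the weather.
    p_freeze = 0.1

    shallow = expected_value([(p_freeze, -5000), (1 - p_freeze, 0)])
    deep = expected_value([(p_freeze, -300), (1 - p_freeze, -300)])

    print(f"shallow: {shallow:.0f}, deep: {deep:.0f}")  # shallow: -500, deep: -300

With these invented numbers the deeper trench wins; halve the freeze probability and the ranking flips. The bet-your-life case is the degenerate instance of the same arithmetic: if the disvalue of the bad outcome is unbounded, no finite payoff can outweigh any nonzero probability estimate.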


(6 months later...)

Ok, so just to clarify something about omniscience, let's say that the following situations apply:

1)  I am contextually certain that a given gun is safe.

2) I am contextually certain that a given toy gun is safe.

3) I am contextually certain that a given book is safe.

4) I am contextually certain that a given cell phone is safe.

Let's say that I did all the due diligence necessary to arrive at contextual certainty that each of the items mentioned above is safe.  Am I then rationally obligated to handle all of these items in the same manner?

Can a rational argument be made for Item #1 along these lines: even though I am contextually certain that it's safe, I still know that it is a weapon, and since I'm aware that I'm not omniscient, I don't want to test that contextual certainty unnecessarily by treating it as if it were a toy or an everyday item, because if it is dangerous, the consequences could be disastrous.  Is this argument rational?  It seems rational to me, but it also seems like someone could say that this argument entertains the arbitrary idea that Item #1 is dangerous, since it is contextually certain that it is safe.  We're not even supposed to think that a safe item could be dangerous if it is contextually certain that it is safe, because, according to Objectivism, it is irrational to make an arbitrary assertion: the arbitrary must be totally disqualified from cognition.  So does this argument make an arbitrary assertion?

Edited by ReasonFirst

In Peikoff’s exposition of certainty in OPAR, you will note that (at least in the parts where he speaks approvingly of the concept) propositions are certain, not people. “Contextual certainty” is about what is objectively known, not what you are subjectively aware of or focused on. What would it then mean to be certain that a given toy gun is safe? “Safe” is a floating abstraction, but I assume you mean “the object is incapable of damaging another human” (dogs and rats are another matter). This is the point at which hypotheticals in philosophy, especially ethics, tend to go way off the rails, with people conjuring up fancy scenarios about dropping the gun, thereby causing a victim to stumble over it and conk their head. A straightforward application of “unsafety” is that a person brandishes the toy at a police encounter, causing themselves to suffer high-velocity lead poisoning – unsafe!

The conclusion that the toy gun is “safe” is a consequence of context-dropping, insofar as such incidents actually exist and are in the news every so often. Even if this were not a well-known kind of event, part of due diligence is considering the conceptual evidence that the gun is not always “safe”.

Part of the problem is the vague predicate “safe”, so instead let us substitute a more restrictive proposition: “is incapable of discharging a bullet at the moment”. We would still go through the same mental process in terms of conceptual evidence; for example, an individual may delude themselves about the gun because they are ignorant of the concept “one in the chamber”, but again, “due diligence” implies learning these facts, therefore the person has something else to check before concluding that the proposition is certain. This is doable; certainty as to that proposition is possible. It may be easier with a toy gun (I haven’t seen a modern toy gun, so this is just a guess). If in fact the real and toy guns are equally incapable of discharging a bullet at the moment, then at the moment you are rationally entitled to any other conclusions that flow from that. One difference that you must reach, though, is that the toy gun is always incapable of discharging a bullet, whereas a real gun is by nature capable of discharging a bullet, since it is easy to change the context.

The only way to evade the relevant second-order conclusion (that it is trivially possible to enable a real gun to fire a bullet, unless you obstruct the barrel at your peril) is to be stunningly ignorant, again highlighting the fact that “contextual knowledge” crucially depends on due epistemological diligence. Therefore, you cannot treat the two guns the same way, because you are not stunningly ignorant and irrational. I’m not making arbitrary assertions about the guns; I am calling on well-known conceptual evidence that simply involves thinking beyond the range of the moment.


Ok, thanks for those Peikoff references.  It does make sense that certainty is limited to each proposition.  I also wanted to respond to what you mentioned about conjuring up "fancy scenarios" about something like this.  I think this may also clarify how this is relevant to my OP on this thread (about gambling on a certain proposition).

My friend and I were having a discussion about certainty, and he was saying that the only way you can prove that you're truly certain about a proposition is if you're willing to bet your life on it.  That idea led him to propose the gun scenario that we are currently discussing.  He said that, after doing all your due diligence, if you truly know that you're certain, you are rationally obligated to point the gun at your head and pull the trigger.  If you refuse, he argued, it's because you are guilty of harboring arbitrary doubt.  Because if you were certain, you would put that certainty to the test.

Because I'm not absolutely certain, I think of that as a gamble, and I would refuse to do it.  Take a fantastic scenario, for example: let's say you do all your due diligence and learn all the facts about how to render a gun safe, but during the moments that you point it at yourself, someone teleports a bullet into the chamber.  Or it doesn't have to be done by teleportation; maybe someone figures out how to convert energy into matter and localizes that matter in the chamber.  These are a couple of arbitrary scenarios, but they still take advantage of the fact that a gun can discharge a projectile at some point in time under the right circumstances, like you stated.  They take advantage of a nature that a gun has and other objects don't.  Something just has to get into the chamber; it doesn't even have to be shaped like a bullet.  So I can't have certainty about that, and I refuse to test it out, because if I'm wrong, I would find out by getting high-velocity lead poisoning.  And the gun could even have other enhancements that I failed to identify.

There was another example that I think may be related to this idea, which was discussed at length on this forum: the arbitrary assertion that a whole bunch of invisible gremlins are having a party on the far side of the moon: The Gremlin Convention.  We know that the Gremlin Convention assertion is an arbitrary assertion, but would you bet your life that gremlins don't exist?  I wouldn't.  And if you say "no, I wouldn't," is that because you're harboring arbitrary doubt?  Suppose someone came to you and asked whether you want to bet your life that gremlins don't exist, you take the bet, and he ends up being the first person in history to bring forth evidence of the existence of gremlins.  You're screwed in that case.  I wouldn't take that bet on principle, because my life would be at stake and I'm not absolutely certain, only contextually certain.  But am I guilty of being arbitrary by refusing to take the bet, or am I just saying I refuse to put my contextual certainty to a test on which my life depends?  Does that amount to an arbitrary assertion?  Or is it also possible that I'm just fearing the unknown?

Edited by ReasonFirst