Pascal's Mugging (2009) [pdf]

55 points by srimukh 5 years ago | 69 comments
  • roenxi 5 years ago
    This can be easily resolved by considering that the victim of the mugging has finite resources. This is the usual remedy for problems where expected value alone gives stupid results (such as Pascal's Mugging). Something similar happens in lotteries: even if the expected value of buying a ticket is positive, it is still not rational for an ordinary person to buy one.

    If I have $400 I can't afford to take 1:1,000,000 risks that cost $200 each. I will go bankrupt with overwhelming likelihood whatever the payoff. There is a minimum cutoff, involving cost and probability, below which it does not make sense to take up the opportunity.

    There are links to similar theoretical ideas from the Pascal's Mugging wiki page - although from the casino's perspective rather than the gambler's - https://en.wikipedia.org/wiki/St._Petersburg_paradox#Finite_... and https://en.wikipedia.org/wiki/Gambler%27s_ruin for example.

    Most people will not take a 99% risk of going bankrupt in a game that will consume all their resource reserves; expected value as a statistic does not meaningfully capture the risk. Positive expectation, losing strategy.
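    A quick simulation of this (the $400 bankroll, $200 cost and 1-in-a-million odds are from the comment above; the $400M payout is an illustrative figure chosen only to make the expected value positive):

```python
import random

def play_until_ruin(seed, bankroll=400, cost=200, p_win=1e-6, payout=400_000_000):
    """Keep taking a positive-EV gamble (EV = 1e-6 * 4e8 - 200 = +$200 per play)
    until broke or until the first win."""
    rng = random.Random(seed)
    while bankroll >= cost:
        bankroll -= cost
        if rng.random() < p_win:
            return bankroll + payout   # won: stop playing
    return bankroll                    # broke: can no longer afford to play

# Run many independent players; nearly all go broke despite positive EV,
# because the $400 bankroll only buys two attempts.
results = [play_until_ruin(seed) for seed in range(10_000)]
broke = sum(1 for r in results if r == 0)
print(f"{broke} of 10000 positive-EV players went broke")
```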

    • joe_the_user 5 years ago
      That's a good argument against seemingly plausible but risky approaches.

      This particular situation, of someone just pulling "it could be true!" out of their arse, can also be solved by framing things as "the more utility you claim, the less likely it seems and disproportionately so".

      I.e., if the chance of getting X from the scoundrel is less than 1/(X^2*C), then even the integral of all the offers taken together winds up very small.

      • petters 5 years ago
        Yes, but it is still not trivial to formalize mathematically without running into trouble: https://www.lesswrong.com/posts/a5JAiTdytou3Jg749/pascal-s-m...
        • roenxi 5 years ago
          It is pretty simple to formalise - if there is a pool of people who routinely accept positive-expected-value gambles where there is a P = 99.999999999% chance of going broke then we expect everyone in that pool will be broke unless the size of the pool is comparable to 1/(1-P). Tighten up the bounds on 'comparable' a bit and that is formalised.

          The mistake is accepting uncritically that expected value is the best metric to optimise. Nobody ever proved that expected value is a strategically superior metric. In fact it would be quite hard to prove, since it is not true. It leaves people vulnerable to making very stupid decisions, as illustrated by Pascal's Mugging.

          An optimal strategy involves, at a minimum, considering your available opportunities and your available resources. Opportunity alone is not enough.
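          A back-of-envelope sketch of that formalisation (the ruin probability is the hypothetical figure from the comment above; the pool sizes are illustrative):

```python
# If each member of a pool independently goes broke with probability P after
# accepting the gamble, the expected number of solvent members left is
# pool_size * (1 - P) -- negligible until the pool size approaches 1/(1 - P).
P = 0.99999999999                    # ruin probability, so 1 - P = 1e-11
for pool_size in (10**6, 10**9, 10**11, 10**13):
    survivors = pool_size * (1 - P)
    print(f"pool of {pool_size}: ~{survivors:g} expected survivors")
```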

          • ralfd 5 years ago
            > Tighten up the bounds on 'comparable' a bit and that is formalised.

            Is this like „draw the rest of the owl“?

            • ikeboy 5 years ago
              You need to maximize expected utility. In order to avoid this, you need utility to be bounded: there must be some multiple of your current utility that's impossible to ever have regardless of what happens.
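              One way to sketch a bounded utility (the exponential form and its scale are illustrative choices, not anything from the thread):

```python
import math

# A bounded utility function: u(w) = 1 - exp(-w/scale) approaches 1 no
# matter how large wealth w gets, so no promised payout can buy more than
# one unit of utility.
def u(wealth, scale=1e4):
    return 1 - math.exp(-wealth / scale)

# However astronomical the promised payout, the utility gain is capped at 1,
# so a tiny probability times that bounded gain stays tiny.
for payout in (1e6, 1e12, 1e100):
    print(payout, u(payout))         # climbs toward, but never past, 1.0

p = 1e-15                            # the mugger's (tiny) credibility
print(p * (u(1e100) - u(0)))         # expected utility gain is at most p
```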
          • maest 5 years ago
            I thought the "solution" to the St Petersburg paradox was to consider utility as a non-linear function of wealth, which makes the series converge.

            Also, I don't find the Pascal's Mugger example convincing, as the probability that the mugger will return with the money is inversely proportional to the multiple they are promising (for very large multiples this is because they have finite resources, but even at lower multiples it intuitively feels true).
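            A sketch of that convergence, using log utility as the non-linear function (the 60-term truncation is just to keep the sums finite):

```python
import math

# St Petersburg game: payoff 2^k with probability 2^-k for k = 1, 2, 3, ...
# Expected *value* adds one unit per term and so diverges; expected
# *log-utility* converges to 2*ln(2) ~= 1.386.
expected_value = sum(2**-k * 2**k for k in range(1, 61))
expected_log_utility = sum(2**-k * math.log(2**k) for k in range(1, 61))
print(expected_value)            # 60.0 -- grows without bound as terms are added
print(expected_log_utility)      # ~1.386, essentially converged
```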

            • roenxi 5 years ago
              > as the probability that the mugger will return with the money is...

              That can't be reasonably estimated though. Putting aside the fact that we can't really assert the relation you posit, there is also a tiny-but-positive probability that the mugger is some sort of illuminati member with the ability to create an arbitrary amount of money.

              At that point, the expected return can be made large compared to the probability that the mugger is lying.

              • Dylan16807 5 years ago
                They can't really offer you more than the amount of resources in the entire world, though. Money past some point in the trillions stops being money, because you can't exchange it for anything. So even for an illuminati member, the expected value can only go so high, and I don't know if that value is higher or lower than a dollar.
                • maest 5 years ago
                  It seems to me that the set of worlds where the mugger can return $X is strictly a subset of the set of worlds where the mugger can return $X+epsilon.

                  So the probability won't be "inversely proportional" in the strict sense, but it will be decreasing with X increasing.

              • simonh 5 years ago
                Oh it’s a completely ridiculous, thoroughly flawed and easily refuted argument, for exactly the same reasons Pascal’s Wager is flawed. I think that’s the point.
              • samspenc 5 years ago
                This is likely trending because John Carmack's recent post saying he was moving to AI references Pascal's Mugging: https://www.facebook.com/100006735798590/posts/2547632585471...

                His Facebook post was also discussed in detail on Hacker News here: https://news.ycombinator.com/item?id=21530860

                • Erlich_Bachman 5 years ago
                  I really love how this could be seen as a demonstration that HN (or any other similar interest community, or probably the internet in general) functions like a hive mind, a collective consciousness. I'd read that post by Carmack and wondered exactly that: "wonder what Pascal's Mugging is, interesting".

                  Sometimes you go around wondering about various things but don't look them up in the moment, and then sometime later your subconscious mind serves up an answer: perhaps when you are more relaxed you simply think of it, or it happens to come up in a certain context and the subconscious lights it up there. It might be a word you see randomly in a newspaper, or a person related to it who comes to mind, etc.

                  And here the subconscious did the same wondrous thing, except it wasn't even strictly my personal subconscious: it was the group subconscious that found the information and presented it.

                  • pennaMan 5 years ago
                    I have noticed this pattern before: Someone mentions a topic deep inside a thread on HN and next thing you know that topic is on the front page.
                    • maest 5 years ago
                      It could also be confirmation bias, though. The Baader-Meinhof thing.
                    • peterwwillis 5 years ago
                      But how useful is it? Pascal's Mugging was submitted to HN and discussed 8 years ago. If the collective consciousness keeps needing reminders of what it once knew, this is probably still inferior to a single intelligent person who reads a lot and remembers it all.
                      • ncmncm 5 years ago
                        The population using HN takes in new members continuously, and the fraction who read everything posted eight years ago is very small.
                        • Erlich_Bachman 5 years ago
                          As a somewhat intelligent person, I still also do need reminders often. The trick is to forget mostly the unneeded stuff and maximally retain the useful stuff, and accurately discern between the two. The goal is not to remember everything.

                          Also it would be strange to expect this group consciousness to never need reminders when it continuously has new people added to it, who are unfamiliar with old things.

                          I think consciousness is more about active living rehashing of information and reupdating it, reupdating the worldview to adjust to constantly changing environment - not so much about building one single model that would somehow know everything. Such models tend to be stale or abstract and philosophical to the point of uselessness.

                          • brian_herman__ 5 years ago
                            It’s new to me :)
                      • zenon 5 years ago
                        I hereby declare that I will expose everyone who gives in to Pascal's mugger to a negative utility so great, compared to whatever the mugger promises, that it is always best to keep the wallet. I could be telling the truth. You're welcome.
                        • YeGoblynQueenne 5 years ago
                          I don't know the background to this but if I understand correctly, it kind of pivots on the probability that the mugger is indeed an Operator from the Seventh Dimension, that Pascal places at 1 in a quadrillion.

                          In that case, I have to wonder where this estimate comes from? I get that it's just an arbitrary number and that any number would do, as long as it wasn't zero, but that's exactly the point: why can't Pascal place the probability of his mugger being an Operator from the Seventh Dimension at zero?

                          Is there any evidence at all to support the mugger's claim? Is there any evidence at all that there is such a thing as a "Seventh Dimension" for which the only thing we know is that its "Operators" have magickal, utility-maximising powers?

                          And does the whole thing only work if we assume that the probability that there is such a place and such people is more than 0?

                          • Strilanc 5 years ago
                            If you set the probability at zero, you won't be convinced when they actually are an operator from the seventh dimension. That is to say, you run into the opposite problem of being Pascal's Muggle [1].

                            1: https://www.lesswrong.com/posts/Ap4KfkHyxjYPDiqh2/pascal-s-m...

                            > A wind begins to blow about the alley, whipping the Mugger's loose clothes about him as they shift from ill-fitting shirt and jeans into robes of infinite blackness, within whose depths tiny galaxies and stranger things seem to twinkle. In the sky above, a gap edged by blue fire opens with a horrendous tearing sound - you can hear people on the nearby street yelling in sudden shock and terror, implying that they can see it too - and displays the image of the Mugger himself, wearing the same robes that now adorn his body, seated before a keyboard and a monitor.

                            > [...] "Unfortunately, you haven't offered me enough evidence," you explain.

                            • lidHanteyk 5 years ago
                              In no particular order:

                              * Helping a googolplex of people immediately vs. over a period of time are two different complexities of action.

                              * Recall that hypotheses are selected from an ambient pool of possibilities. Then we might imagine that some hypotheses dominate others, so that regardless of how much evidence is offered, we always insist that the evidence supports a simpler alternative. To wit:

                              "Well, if I'm not a Matrix Lord, then how do you explain my amazing powers?" asks the Mugger.

                              "Street magic," you say. "Very impressive sleight of hand. Perhaps some smoke, mirrors, lasers, assistants."

                              * A Matrix Lord asking $5 of a person on the street in order to commit miracles is inherently irrational. If they just wanted $5, or wanted to deprive the person of $5, or wanted to humiliate and embarrass the person, or force them to accept certain philosophical truths, then those all could be achieved via Matrix Lordery. Therefore the Lord in this story is being a pointless dick, and it's silly to expect rational arguments to be part of the conversation. To wit:

                              "Just give yourself $5. Give yourself any reward you like, for helping people; it's not my place to set or fulfill the price of such powerful entities, is it?" you ask.

                              "But...but don't you want the feeling of doing good?" asks the Mugger.

                              "Not really, no," you reply. "I have investments and equity already, and those dollars already have ripples that affect people far beyond my direct control. I don't feel much of anything about those investments. And it would be irrational for me to value a $5 investment more than $5. Really, if you can do all of this good, then you should turn yourself into an exchange-traded fund, and let people buy your time to do good in the world," you muse.

                              "But...but this offer is for you, and you alone," the Mugger insists.

                              "Okay, but why me? Let's talk about the Self-Sampling Assumption!" you say. The Mugger groans.

                              • YeGoblynQueenne 5 years ago
                                That's a good point: if the Mugger is an Operator from the Seventh Dimension with such great magickal powers, why does he need the 10lb in Pascal's wallet? Or, if he does need it, why can't he just take it?

                                Like, we are asked to believe that, given that the Mugger is an Operator from the Seventh Dimension, he has the power to offer 10 quintillion Utils to Pascal, but not the power to just take the 10lb from his wallet.

                                I think the whole paradox can still stand, given that the Mugger can then just offer an amount of Utils that compensates for the much smaller conditional probability of the Mugger being only sorta omnipotent.

                                On the other hand, I think we can easily resolve the paradox by inserting the Crowbar of Cynical Jadedness: if it sounds too good to be true, then the probability of it being both good and true is zero (it can be one or the other with non-zero probability, but not both). 10 quintillion Utils (or however many) sounds too good to be true, so it can't be true. A used car salesman will never offer you a good deal. The Mugger is only lying to get Pascal's money.

                              • YeGoblynQueenne 5 years ago
                                Thanks, that's an interesting read. But I don't think it addresses my question: why shouldn't Pascal place the probability that the Mugger is an Operator from the Seventh Dimension at _zero_ (rather than at an infinitesimally small number)?

                                The point is that, at the time the Mugger declares himself to be an Operator from the Seventh Dimension who can offer large rewards etc., there is no evidence to suggest he's telling the truth. No evidence at all. Accordingly, the probability that he's telling the truth must be zero. Where does a non-zero probability value come from?

                                Are you then saying that the probability of any reward should never be placed to zero because that would not maximise rewards?

                                • Dylan16807 5 years ago
                                  Probability zero is the same as saying that it would take infinite evidence to convince you. Even if someone provides amazingly convincing evidence, better than you've ever seen, a flat 0 or 1 eats it.

                                  > there is no evidence to suggest he's saying the truth. No evidence at all. Accordingly, the probability that he's telling the truth must be zero.

                                  I don't think that logic works. What if the claim was "I have a five dollar bill in my pocket"?
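                                    A sketch of why, in Bayesian terms, a prior of exactly zero "eats" any evidence (the likelihood ratios here are made-up numbers):

```python
# Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio.
def posterior(prior, likelihood_ratio):
    odds = prior / (1 - prior)          # prior of 0 gives odds of 0
    odds *= likelihood_ratio            # evidence multiplies the odds
    return odds / (1 + odds)            # 0 times anything finite is still 0

# A merely tiny prior can be rescued by strong enough evidence...
p = 1e-15
for _ in range(3):
    p = posterior(p, 1e6)               # each observation: a million-to-one update
print(p)                                # close to 1 after three strong updates

# ...but a prior of exactly zero never moves, however strong the evidence.
print(posterior(0.0, 1e100))            # 0.0
```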

                                  • TeMPOraL 5 years ago
                                    As the saying goes, "zero and one are not probabilities". Like 'Dylan16807 says, they eat evidence. When doing maths, when transforming to log probabilities, 0 becomes -Infinity; when transforming to odds ratios, 1 goes to infinity.

                                    A longer explanation: https://www.lesswrong.com/posts/QGkYCwyC7wTDyt3yT/0-and-1-ar....

                                    See also https://en.wikipedia.org/wiki/Cromwell%27s_rule, mentioned by 'edflsafoiewq.

                                  • joefourier 5 years ago
                                    I would agree: that is not enough evidence. Some sort of advanced display technology causing the apparition would explain it just as well, and would require no changes to our understanding of the universe and the laws of physics.
                                  • joefourier 5 years ago
                                    Why, instead of the probability being 0 or 1 in a quadrillion, is the probability not simply undefined?

                                    Maybe I am not well-versed in Bayesian thinking, but I am unable to understand assigning probabilities to events that have never occurred before and for which there is no related numerical data.

                                    Making the probability in Pascal's problem undefined renders any calculation of the risk null and solves the problem, while leaving it possible for future evidence to establish a defined probability (say you were previously approached by 10 Pascal's muggers and 2 turned out to be telling the truth).
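                                    One standard way to get a defined probability before any data arrives is Laplace's rule of succession, sketched here with the commenter's hypothetical 2-out-of-10 track record (the uniform prior is an assumption):

```python
# With a uniform Beta(1,1) prior over the muggers' honesty rate, observing
# s truthful muggers out of n gives a posterior mean of (s+1)/(n+2) --
# defined even with no observations at all.
def estimate(successes, trials):
    return (successes + 1) / (trials + 2)

print(estimate(0, 0))    # 0.5 -- no data yet: the prior alone, not "undefined"
print(estimate(2, 10))   # 0.25 -- after the hypothetical 2-of-10 track record
```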

                                    • edflsafoiewq 5 years ago
                                      > I beseech you, in the bowels of Christ, think it possible that you may be mistaken.

                                      https://en.wikipedia.org/wiki/Cromwell%27s_rule

                                      • YeGoblynQueenne 5 years ago
                                        I think this is the same idea in another comment, above, about Pascal's Muggle. I think there's a bit of a confusion here though. I can adjust the probabilities of any event given new evidence.

                                        For example, at this point in time I believe that the probability that I can fly if I flap my arms up and down is zero. I have no evidence that this is possible and I understand enough of the relevant physics to know that this is not just improbable, it is impossible.

                                        However, if tomorrow I flapped my arms and found that I could fly, there would be nothing stopping me from re-evaluating my belief and assigning a higher probability to the chance that I can fly if I flap my arms.

                                        But I think the problems begin with the misguided ambition to be able to predict the future even when there is no evidence to support any prediction. You can't know what you can't know. You can assign any probability you like to what you can't know, but even if you end up assigning the right probability that will be the result of blind chance, not the result of correct reasoning.

                                        Anyway this is why I prefer logical inference to probabilistic inference. I understand that I'm in a minority on this, but for me it makes a lot more sense to maintain a state of provisional belief with an absolute value (in {0,1}), provisional in the sense that new evidence can always change your belief, than to live in a perpetual state of uncertainty which never resolves itself no matter how much evidence you see, because there is always a chance that you're wrong. There always _is_ a chance that you're wrong but it just seems cumbersome to have to maintain a ledger of competing probabilities for everything that has happened, and everything that hasn't yet happened, just on the off chance that anything can happen, including mutually exclusive events.

                                        In principle, anything might happen. In practice, not everything will. There must be a sensible way to figure out what we need to prepare for and what we can safely ignore. And the whole Pascal's Mugger paradox, while it's meant to attack Pascal's Wager's logic, ends up for me as an illustration of why probabilistic inference is deeply borked.

                                    • skrebbel 5 years ago
                                      Wow, I always thought of Pascal's Mugging as a satirical illustration of how stupid it is to take enormous (or extremely small) numbers seriously. Turns out people like John Carmack (see other comments in this thread) aren't picking up on the satire. Or am I reading him wrong? I think he's smarter than me, so what am I missing?

                                      Given Bostrom's general love for mixing tiny probabilities with enormous outcomes, what's the point of this article? It seems delightfully self-critical. How can the conclusion be anything other than that we should not take the AI/singularity crowd too seriously, as doing so would be akin to voluntarily handing over a wallet to a mugger?

                                      • Erlich_Bachman 5 years ago
                                        Sounds more like you are taking the argument too seriously and are trying to read more into it than what it is.

                                        It is just a philosophical story created to provide a certain line of reasoning, a certain possible structure of an argument. People are free to apply this argument however they want; it doesn't prove anything by itself, it doesn't say anything about the world, it's up to the user. It does not draw any conclusions; it's just a story. Carmack used it to illustrate his own beliefs (which are, therefore: AI is possible and the payout for AI is extremely high, even if the probability of it during the next couple of years is low).

                                        Carmack did not mean that you should believe or not believe in AI or anything else based on this argument. He just used it to illustrate what he himself chose to do. He did not base it on this argument alone, he did not just hear the argument and suddenly decide "now because of that I have to work on AI", he based it on his experience and knowledge of the actual field. The mugging argument is just a cute way of quickly explaining it.

                                        > I think he's smarter than me, so what am I missing?

                                        Wisdom and context.

                                      • praptak 5 years ago
                                        Paradoxes such as this one point at some rarely discussed assumptions about the utility function. The major one is that it exists. Or exists as something more than a model which is only good for a range of payoffs.
                                        • adamchalmers 5 years ago
                                          The point of paradoxes like this is to demonstrate that even in our incredibly simple toy models of agents, we still run into issues. These paradoxes help point out weaknesses of the models and act as new desiderata for better models.
                                          • EvanWard97 5 years ago
                                            Expected value != expected utility.

                                            Just taking this seriously pretty much resolves all these problems.

                                            It makes sense to take high cost/reward, low probability events seriously if the expected utility works out. Examples include reducing existential risk substantially, even at cost to short-term utility (say, in the form of well-being) to increase the probability that we can eventually figure out how to optimally arrange matter and energy and trigger a utilitronium shockwave.
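                                             A toy illustration of the distinction (all figures hypothetical; log utility of wealth is one conventional choice):

```python
import math

# A hypothetical mugger's offer: pay $5 now for a 1-in-10^15 chance of $10^20.
wealth = 10_000                      # current wealth, an illustrative figure
cost, p, payout = 5, 1e-15, 1e20

expected_value = p * payout - cost   # about +$99,995 in expected *dollars*
print(expected_value)

# Expected *utility* with log utility of wealth tells the opposite story:
# the certain $5 loss outweighs the vanishingly unlikely windfall.
u = math.log
eu_decline = u(wealth)
eu_accept = (1 - p) * u(wealth - cost) + p * u(wealth - cost + payout)
print(eu_accept < eu_decline)        # True: declining maximises expected utility
```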

                                            • ivan_ah 5 years ago
                                              In case someone is not familiar with Pascal's wager, here is a table that shows the expected values: https://en.wikipedia.org/wiki/Pascal%27s_wager#Analysis_with... (the appearance of infinity breaks the decision theory...)
                                              • MaxBarraclough 5 years ago
                                                Pascal's Wager fails for the simpler reason that it ignores the possibility that believing in god could send you to hell.

                                                I see a similar problem here. Pascal should consider the possibility that the mugger will use his (Pascal's) silly action as the basis for punishment, in place of the promised reward. Slim probability, potentially very high cost. The fact that this risk has gone unstated doesn't mean it isn't there.

                                                • adamchalmers 5 years ago
                                                  I agree. This problem is still useful though, because to analyse a lot of counterfactual muggers, you must be able to analyse one mugger!
                                                  • MaxBarraclough 5 years ago
                                                    I agree that both problems are worth analysing, even if they're both absurd. Same goes for the Hangman's Paradox, a personal favourite. [0]

                                                    I gather that Pascal never intended Pascal's Wager to be a watertight argument; it was intended more as a plaything, showing how unlikely it was that anyone could make a watertight case for the existence of God.

                                                    [0] https://en.wikipedia.org/wiki/Hangman%27s_Paradox

                                              • dang 5 years ago