AI existential risk probabilities are too unreliable to inform policy
42 points by latentnumber 11 months ago | 24 comments

- DennisP 11 months ago
This article says that we can't trust the estimates of p(doom), therefore we should take no action. But it assumes that "no action" means spending many billions of dollars to develop advanced AI.
But why is that our default? I could just as well say we can't trust the estimates of p(survival), therefore we should fall back on a default action of not developing advanced AI.
- DisgracePlacard 11 months ago
I don't think there's a real need to justify technological progress as a default. That has been the default for at least a century or so, and I think it's done us quite well. The unorthodox thing is the idea that we should avoid technological progress, and if there isn't good evidence for that, then we should ignore it.
- vannevar 11 months ago
> [Technological progress] has been the default for at least a century or so, and I think it's done us quite well.
Such progress has presented an existential threat only once previously in the past century, and that was the development of nuclear weapons. The jury is still out on whether that was the right decision. I don't think the argument that, "Well, nothing bad has happened yet!" is very persuasive in the face of possible extermination of the human race.
- krageon 11 months ago
It's made humanity's inability to look toward the future, or to do long-term planning in any reliable way, very problematic. It's also made the human tendency to go to war significantly more problematic for everyone who isn't already a murdering loon.
While there are plenty of positive effects of the march of technological progress, let's not pretend it's done us an unequivocally good turn. The state of the environment is evidence enough of that.
- DennisP 11 months ago
It's our default because for most technology, we have high confidence in a high p(survival).
- tim333 11 months ago
> should fall back on a default action of not developing advanced AI
Not going to happen. It's so potentially valuable economically and militarily that if one country doesn't develop it, their rivals will.
- Buttons840 11 months ago
And p(doom) is non-zero with or without developing AI. It may be that p(doom) is less with AI than without.
- trott 11 months ago
A lot of people are worried about aligning superintelligent, self-improving AI. But I think it will be easier than aligning current AI, for the same reason that it's easier to explain what you want to a human than it is to train a dog.
I posted my specific proposal here: https://olegtrott.substack.com/p/recursion-in-ai-is-scary-bu...
Unlike previous ideas, it's implementable (once we have AGI-level language models) and works around the fact that much data on the Internet is false. I should probably call it Volition Extrapolated by Truthful Language Models.
- greenthrow 11 months ago
Just because you explain what you want to a human doesn't mean they will agree or comply.
That said, I see no reason to believe we are even on a path that can create AGI. LLMs don't actually understand or reason about anything.
- trott 11 months ago
> Just because you explain what you want to a human doesn't mean they will agree or comply.
Humans have innate desires that may conflict with the desires of other humans. A language model just looks for ways to continue texts. In doing so, it attempts to extrapolate what the humans who authored the training data thought (if we ignore fabrications, which my approach also proposes to address).
> LLMs don't actually understand or reason about anything.
Current Transformer-based, SGD-trained language models fall short of AGI. But better algorithms can change that. And there is no reason to think that you'll see any warning signs in advance. No one said "I'll invent a faster Fourier transform algorithm in 5 years. Prepare yourselves."
- TideAd 11 months ago
The authors are basically asking for the alignment problem to be well-defined and easy to model. I sympathize. Unfortunately, the alignment problem is famously difficult to conceptualize in its entirety. It's something like 20 different, difficult, counterintuitive subproblems, and it's the combined weight of all the subproblems that makes up the risk. Of course the probabilities are all over the place. It'll remain tricky to model right up until we make a superintelligence, and if we don't get that right, then it'll be way too late for government policy to help.
- greenthrow 11 months ago
The purpose of hyping up the existential risks of purely theoretical AGI is to distract us from the actual problems involved with LLMs today and the much more likely problems they will cause soon.
- cubefox 11 months ago
> the actual problems involved with LLMs today
What are these "actual problems today"? More spam in Google results? Students cheating on assignments?
It seems clear that today's problems with LLMs are incredibly minor and unimportant compared to the potential extinction of humanity by future superintelligence.
- HeatrayEnjoyer 11 months ago
Source?
- greenthrow 11 months ago
Common sense. It's what scumbag companies always do. If their business is creating actual problems, they try to distract the public with some other shiny object.
- tim333 11 months ago
A lot of the p(doom) worrying predates the scumbag companies.
- tim333 11 months ago
At this point, while we can still pull the plug, I think governments should just keep an eye on what's going on. And maybe avoid connecting unstable AI networks to the nuclear missile systems, to reduce the risk of Terminator 2.
I'm not sure about when the robots can build better robots and their own power sources - maybe worry about that when the time comes.
- jahewson 11 months ago
It’s really nice to see data being used to back up an argument in this space. There’s too much sci-fi out there.
- aoeusnth1 11 months ago
We can’t rely on any estimates for p(the plane crashes), therefore everyone should get in a plane.
- amelius 11 months ago
A nonzero probability should be enough because the consequences are infinitely bad.
- Buttons840 11 months ago
Every path has non-zero risks. Choose your risk.
If we outlaw AI because of "risk", then people might experiment with it in secret, which has its own unique risks. If we study AI in the open, that has unique risks. If we descend further into authoritarianism to punish AI, that has risks. And no matter what we do about AI, the risks of nuclear war or life-ending asteroid impacts remain.
- DisgracePlacard 11 months ago
https://en.wikipedia.org/wiki/Pascal%27s_mugging
> In philosophy, Pascal's mugging is a thought experiment demonstrating a problem in expected utility maximization. A rational agent should choose actions whose outcomes, when weighted by their probability, have higher utility. But some very unlikely outcomes may have very great utilities, and these utilities can grow faster than the probability diminishes. Hence the agent should focus more on vastly improbable cases with implausibly high rewards; this leads first to counter-intuitive choices, and then to incoherence as the utility of every choice becomes unbounded.
Curiously enough, this idea can be traced to one of the most prominent AI Safety advocates.
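To make the unbounded-utility failure mode concrete, here is a toy illustration (the numbers are purely illustrative, not from the article or the Wikipedia entry): suppose the agent assigns outcome n the probability p_n = 2^(-n), while the mugger promises a utility of u_n = 3^n. These probabilities sum to 1, yet the expected utility

  E[U] = Σ_{n≥1} p_n · u_n = Σ_{n≥1} (3/2)^n = ∞

diverges, so a naive expected-utility maximizer hands over its wallet no matter how implausible the promise. That is the same structure as the "nonzero probability times infinitely bad consequences" argument above.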
- amelius 11 months ago
Yes, probabilities are not the right tool for the job.
- jncfhnb 11 months ago
Give me your life savings or I will destroy the universe.