Show HN: Doom Train

2 points by digbybk 4 months ago | 2 comments
The goal of Doom Train (metaphor stolen from Liron Shapira) is to make debates about AI safety more productive. People who are concerned about existential risk will ride the doom train all the way to the end. People who think these concerns are unwarranted will get off before the end. The question is: where do you get off the doom train? Once that's been established, there might be a more productive conversation. Let me know if I missed any stations.
  • anonzzzies 4 months ago
    You missed the station where humans are such animals that we cannot even conceive of an intelligence that doesn't share our petty, warmongering, ego-driven manic ways. Maybe an intelligence with a 1000+ IQ is not interested in whatever we think will doom us; how would we know, when we have barely surpassed being apes. Well, some of us.
    • digbybk 4 months ago
      So I think you would get off at the station asking the question "Will superintelligent AI systems develop sub-goals that include self-preservation, resource acquisition, or power-seeking?"

      I personally feel very uncertain about AI risk, but if I were to get off the doom train it wouldn't be at that station. There's no stop on the doom train that assumes an AI will be petty, warmongering, or ego-driven. The only assumptions are that it will be superintelligent and have goals. Anything with a goal that is intelligent enough will realize that self-preservation, resources, and power will make it easier to achieve that goal.

      Again, I don't know if it's realistic to think humanity will ride the doom train to the end; I'm just trying to get a sense of which stop people get off at (or whether I missed any stops, in which case I'll add them).