Sequence to sequence learning with neural networks: what a decade

89 points by dspoka 6 months ago | 31 comments
  • nopinsight 6 months ago
    An important point from this talk:

    The better the models are at reasoning, the less predictable they become.

    This is analogous to top chess engines which make surprising moves that even human super grandmasters can't always understand.

    Thus, the future will be unpredictable (if superintelligence takes control).

    Link to the full talk & the time he talks about this: https://youtu.be/1yvBqasHLZs?si=3M6eZCQtXnW2tSUd&t=866

    • rented_mule 6 months ago
      I think there is an analogous challenge with the gradual but large-scale adoption of self-driving cars. Even in the absence of reasoning, they have different constraints than human drivers. This makes them react to situations in ways that surprise the human drivers around them. It's not hard to imagine that being a new source of accidents.

      It won't just be a matter of learning how they react differently, because the reactions will differ from one self-driving platform to another, sometimes even from one version of a platform to another. And is self-driving engaged, or is the human in control at the moment? Or is self-driving in the process of handing control back to the human, making its behavior different from what it was a moment ago?

      • AceJohnny2 6 months ago
        > Thus, the future will be unpredictable (if we let superintelligence take control)

        As opposed to the predictable future we've had for the past few decades?

        • nopinsight 6 months ago
          I think it's a matter of magnitude and semantics. It was quite predictable that the Internet would take off, become ubiquitous, and highly influential.

          If superintelligence emerges soon, we may not even know which technologies will emerge and how many will be unleashed in the next 2 decades.

          ADDED: Examples of some concrete predictions:

          (1980) https://en.wikipedia.org/wiki/The_Third_Wave_(Toffler_book)

          (1995) https://en.wikipedia.org/wiki/The_Road_Ahead_(Gates_book)

          Obviously, the specifics differed from the predictions, and plenty of people got it wrong. Many good forecasters got the broad strokes right, though.

          Which forecasters can even predict most technologies that would be invented after ASI emerges?

          • someothherguyy 6 months ago
            Wouldn't basic economics dictate that, not ASI? A state can have knowledge of how to do things, how things work, hypothetical implementations, etc. But if a state lacks the resources, skill, or desire to actually confirm and implement those hypothetical technologies, wouldn't they just stagnate? There might be a bottleneck there.
            • adrianN 6 months ago
              Plenty of people predicted that the Internet was just a fad.
              • unit149 6 months ago
                "What is behind a dynamic super-intelligent tiling manager implemented in auto-regressive LSTM pre-training models?"

                Suggestive pathways, he replied.

                • jb1991 6 months ago
                  Hindsight.
                • riwsky 6 months ago
                  To be fair, Ilya seems to have called the previous one pretty well!
                • d0mine 6 months ago
                  Is it just that smarter people look less predictable from the point of view of dumber people? (i.e., there may be a perfectly good reason for a smarter person to behave a certain way; it is just not immediately apparent to the dumber one)
                  • nwienert 6 months ago
                    I don’t think that’s broadly true with chess. Did you mean go?
                    • mike_hearn 6 months ago
                      Ilya says in the talk that chess AI is unpredictable even to grandmasters.
                      • codeflo 6 months ago
                        That's kind of a trivial observation -- if chess AI were predictable to grand masters, then those GMs could play like the AI and thus wouldn't lose to it.
                        • nwienert 6 months ago
                          It's not predictable, of course, but the parent comment said:

                          > surprising, even human super grandmasters can't always understand

                          Of course they don't expect the move, but they always understand it eventually. There's no move an engine makes where grandmasters go "I can't understand this"; it may take them a bit, but they will ultimately have a good idea of why it's the best move.

                      • foogazi 6 months ago
                        4-D chess ?
                      • Caitlynmeeks 6 months ago
                        if you don't want to dip your toe in the festering pile of crap that is X:

                        https://www.youtube.com/watch?v=1yvBqasHLZs

                        • TheSisb2 6 months ago
                          I appreciate the link, but the tone of this comment is very un-HN. I don’t even see people talk that way about 4chan, which one can argue would deserve it more.
                          • pas 6 months ago
                            these are very un-HN times
                            • Gigachad 6 months ago
                              The content on 4chan is almost indistinguishable from X.
                              • TMWNN 6 months ago
                                Username [does|does not] check out
                          • defenestrated 6 months ago
                            • skissane 6 months ago
                              Already being discussed here: https://news.ycombinator.com/item?id=42413677
                              • tablatom 6 months ago
                                Any recommendations for thinkers writing good analysis on the implications of superintelligence for society? Especially interested in positive takes that are well thought through. Are there any?

                                Ideally voices that don’t have a vested interest.

                                For example, give a superintelligence some money, tell it to start a company. Surely it’s going to quickly understand it needs to manipulate people to get them to do the things it wants, in the same way a kindergarten teacher has to “manipulate” the kids sometimes. Personally I can’t see how we’re not going to find ourselves in a power struggle with these things.

                                Does that make me an AI doomer party pooper? So far I haven’t found a coherent optimistic analysis. Just lots of very superficial “it will solve hard problems for us! Cure disease!”

                                It certainly could be that I haven’t looked hard enough. That’s why I’m asking.

                                • melvinmelih 6 months ago
                                  Interesting that he left out the concept of safety (he being the founder of a company called Safe Superintelligence). Would have been curious to hear his thoughts on that.
                                  • thih9 6 months ago
                                    Is there a transcript? The slides are very clear and useful already but I guess there is more.