Analyzing a Critique of the AI 2027 Timeline Forecasts

38 points by jsnider3 2 weeks ago | 61 comments
  • shitloadofbooks 2 weeks ago
    AI proponents keep drawing perfectly straight lines from "no AI --> LLMs exist --> LLMs write some adequate code sometimes" up into the horizon of the Y axis where AIs run all governments, write all code, paint all paintings and so on.

    There's a large overlap with the crypto true-believers who were convinced after seeing "no blockchain --> blockchain exists" that all laws would be enshrined in the blockchain, all business would be done with blockchains, etc.

    We've had automation in the past; it didn't decimate the labour-force; it just changed how people work.

    And we didn't go from handwashing clothes --> washing machines --> all flat surfaces are cleaned daily by washing robots...

    • refulgentis 2 weeks ago
      Would advise, generally, that AI isn't crypto.

      It's easy to lapse into personifying it and caricaturing the-thing-in-toto, but then we end up at obvious absurdities - to wit:

      - we're on HN; it'd be news to most readers that there's a "large overlap" of "true-believers". AI was a regular discussion topic here a loooong time before ChatGPT, even before OpenAI. (been here since 2009)

      - Similarly "AI proponents keep drawing perfectly straight lines...AIs run all governments, write all code, paint all paintings and so on."

      The technical term would be "strawmen", I believe.

      Or maybe begging the question (who are these true-believers who overlap? who are these AI proponents?)

      Either way, you're not likely to find these easy-to-knock-down caricatures on HN. Maybe some college hypebeast on Twitter. But not here.

      • mystified5016 2 weeks ago
        I have personally seen all of these people on HN.
        • refulgentis 2 weeks ago
          Right - more directly, asserting they're overlapping, and then asserting that all members of both sets back the same obviously-wrong argument(s), is a recipe for dull responses from ilk like me :)

          I am certain you have observed N members of each set. It's the rest that doesn't follow.

    • gmuslera 2 weeks ago
      My main objection to this kind of prediction is that predictions (at least the widely known ones) become part of the past that shapes the future. Even with a good extrapolation of current trends, the prediction itself can make things diverge, converge, or do something totally different, because the main decision makers will take it into account, and that is not part of the trend. Especially with sufficiently disruptive predictions that paint an undesirable future for all or most decision makers.

      Unless it hits hard in some of the areas where we have cognitive biases and are not fully rational about the consequences.

      • old_man_cato 2 weeks ago
        Sometimes I feel like I'm losing my mind with this shit.

        Am I to understand that a bunch of "experts" created a model, surrounded its findings with a fancy website replete with charts and diagrams, had that website suggest the possibility of some doomsday scenario under the headline "We predict that the impact of superhuman AI over the next decade will be enormous, exceeding that of the Industrial Revolution." (WILL be enormous, not MIGHT be), went on some of the biggest podcasts in the world talking about it, and then, when a physicist comes along and says yeah, this is shoddy work, the clapback is "Well yeah, it's an informed guess, not physics or anything"?

        What was the point of the website if this is just some guess? What was the point of the press tour? I mean are these people literally fucking insane?

        • shaldengeki 2 weeks ago
          No, you're wrong. They wrote the story before coming up with the model!

          In fact the model and technical work have basically nothing to do with the short story, aka the part that everyone read. This is pointed out in the critique, where titotal notes that a graph widely disseminated by the authors appears to have been generated by a completely different and unpublished model.

          • old_man_cato 2 weeks ago
            https://ai-2027.com/research says that:

            AI 2027 relies on several key forecasts that couldn't be fully justified in the main text. Below we present the detailed research supporting these predictions.

            You're saying the story was written, then the models were created and the two have nothing to do with one another? Then why does the research section say "Below we present the detailed research supporting these predictions"?

            • shaldengeki 2 weeks ago
              Yes, that's correct. The authors themselves are being extremely careful (and, I'd argue, misleading) in their wording. The right way to interpret those words is "this is literally a model that supports our predictions".

              Here is the primary author of the timelines forecast:

              > In our website frontpage, I think we were pretty careful not to overclaim. We say that the forecast is our "best guess", "informed by trend extrapolations, wargames, ..." Then in the "How did we write it?" box we basically just say it was written iteratively and informed by wargames and feedback. [...] I don't think we said anywhere that it was backed up by straightforward, strongly empirically validated extrapolations.

              > In our initial tweet, Daniel said it was a "deeply researched" scenario forecast. This still seems accurate to me, we spent quite a lot of time on it (both the scenario and supplements) and I still think our supplementary research is mostly state of the art, though I can see how people could take it too strongly.

              https://www.lesswrong.com/posts/PAYfmG2aRbdb74mEp/a-deep-cri...

              Here is one staff member at Lightcone, the folks credited with the design work on the website:

              > I think the actual epistemic process that happened here is something like:

              > * The AI 2027 authors had some high-level arguments that AI might be a very big deal soon

              > * They wrote down a bunch of concrete scenarios that seemed like they would follow from those arguments and checked if they sounded coherent and plausible and consistent with lots of other things they thought about the world

              > * As part of that checking, one thing they checked was whether these scenarios would be some kind of huge break from existing trends, which I do think is a hard thing to do, but is an important thing to pay attention to

              > The right way to interpret the "timeline forecast" sections is not as "here is a simple extrapolation methodology that generated our whole worldview" but instead as a "here is some methodology that sanity-checked that our worldview is not in obvious contradiction to reasonable assumptions about economic growth"

              https://www.lesswrong.com/posts/PAYfmG2aRbdb74mEp/a-deep-cri...

          • refulgentis 2 weeks ago
            Correct. Entirely.

            And I'm yuge on LLMs.

            It is very much one of those things that makes me feel old and/or scared, because I don't believe this would have been swallowed as easily, say, 10 years ago.

            As neutrally as possible, I think everyone can agree:

            - There was a good but very long overview of LLMs from an ex-OpenAI employee. Good stuff, really well-written,

            - It concludes hastily: draw a graph of "relative education level of AI" versus "year", then run a line from high school 2023 => college grad 2024 => PhD 2025 => post-PhD 2026 => AGI 2027.

            - Later, this gets published by the same OpenAI guy, then the SlateStarCodex guy, and some other guy.

            - You could describe it as taking the original, cutting out all the boring leadup, jumping right to "AGI 2027", then writing out a too-cute-by-half, way-too-long geopolitics ramble about China vs. the US.

            It's mildly funny to me, in that yesteryear's contrarians are today's MSM, and yet, they face ~0 concerted criticism.

            In the last comment thread on this article, someone jumped in to discuss the importance of more "experts in the field" contributing, meaning psychiatrist Scott Siskind. The idea is that writing about something makes you an expert, which leads us to tedious self-fellating like Scott's recent article letting us know LLMs don't have to have an assistant character, and how he predicted this years ago.

            It's not so funny, in that the next time a science research article is posted here, as is tradition, 30% will be claiming science writers never understand anything and can't write etc. etc.

            • radioactivist 1 week ago
              Thank you for this comment, it is exactly my impression of all of this as well.
              • heavyset_go 2 weeks ago
                The point? MIRI and friends want more donations.
            • stego-tech 2 weeks ago
              Reading through the comments, I am so glad I’m not the only one beyond done with these stupid clapbacks between boosters and doomers over a work of fiction that conveniently ignores present harms and tangible reality in knowledge domains outside of AI - like physics, biology, economics, etc.

              If I didn’t know better, it’s almost like there’s a vested interest in propping these things up rather than letting them stand freely and letting the “invisible hand of the free market” decide if they’re of value.

              • jvalencia 2 weeks ago
                It's like the invention of the washing machine. People didn't stop doing chores; they just do them more efficiently.

                Coders won't stop existing; they'll just do more and compete at higher levels. The losers are the ones who won't/can't adapt.

                • bgwalter 2 weeks ago
                  No, all washing machines were centralized in the OpenWash company. In order to do your laundry, you needed a subscription and had to send your clothes to San Francisco and back.
                  • vntok 2 weeks ago
                    Exactly, it wasn't the case then with washing machines and it's not the case now with AI. Your example is pretty relevant!

                    Today, anyone can run SOTA open-weights models in the comfort of their home for much less than the price of a ~1929 electric washing machine ($150 then or $2,800 today).
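
                    (Sanity-checking that conversion, under the assumption of roughly 18-19x cumulative US CPI inflation between 1929 and today: $150 × ~18.7 ≈ $2,800, so the two figures are consistent.)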

                    • er4hn 2 weeks ago
                      That was something I struggled to understand about AI 2027. They have China nationalize DeepCent so there's only one Chinese lab. I don't understand why OpenBrain doesn't spawn multiple competing labs, since that seems to be what happened IRL before this was written.
                    • jgalt212 2 weeks ago
                      Excellent analogy
                    • falcor84 2 weeks ago
                      I suppose that those who stayed in the washing business and competed at a higher level are the ones running their own laundromats; are they the big winners of this technological shift?
                        • alganet 2 weeks ago
                          What are you even talking about?

                          The article is not about AI replacing jobs. It doesn't even touch this subject.

                          • fasthands9 2 weeks ago
                            Yeah. For understandable reasons the jobs angle gets covered a lot too, but AI 2027 is really about the risk of self-replicating AI. Is an AI virus possible, and could it be easily stopped by humans and our military?
                            • alganet 2 weeks ago
                              Actually, the subject has shifted from discussing any specific forecast to "really, how reliable are these forecasts?"
                        • KaiserPro 2 weeks ago
                          *bangs head against the table*

                          Look, fitting a single metric to a curve and projecting from that only gets you a "model" that conforms to your curve fitting.
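
                          To make that concrete, here's a minimal sketch in Python with made-up numbers (nothing from AI 2027's actual model): fit the same five observed "capability" scores with two different curve families. Both match the data points about equally well, yet ten years out their projections differ by roughly 5x.

                              import numpy as np

                              # Five hypothetical yearly "capability" scores -- illustrative only.
                              years = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
                              scores = np.array([1.0, 1.8, 3.1, 5.2, 8.4])

                              # Fit 1: exponential growth via log-linear least squares.
                              b, log_a = np.polyfit(years, np.log(scores), 1)
                              exp_fit = lambda t: np.exp(log_a + b * t)

                              # Fit 2: a quadratic polynomial on the very same points.
                              quad = np.poly1d(np.polyfit(years, scores, 2))

                              # In-sample, the two "models" are nearly indistinguishable...
                              for t, s in zip(years, scores):
                                  print(f"t={t:.0f}: data={s:.1f} exp={exp_fit(t):.1f} quad={quad(t):.1f}")

                              # ...but extrapolated to t=10 they disagree by ~5x (about 211 vs 43).
                              print(f"t=10: exp={exp_fit(10):.1f} quad={quad(10):.1f}")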

                          "proper" AI, where it starts to remove 10-15% of jobs will cause an economic blood bath.

                          The current rate of AI expansion requires almost exponential amounts of cash injection. That cash comes from petro-dollars and advertising sales (and the ability of investment banks to print money based on those investments). Those sources of cash require a functioning world economy.

                          Given that the US economy is three Fox News headlines away from collapse[1], an exponential money supply looks a bit dicey.

                          If you remove 10-15% of all jobs in the space of 2 years, you will spark revolutions. This will cause loans to be called in, banks to fail, and the dollar, presently run by obvious dipshits, to evaporate.

                          This will stop investment in AI, which means no exponential growth.

                          Sure you can talk about universal credit, but unless something radical changes, the people who run our economies will not consent to giving away cash to the plebs.

                          AI 2027 is unmitigated bullshit, but with graphs, so people think there is a science to it.

                          [1] Trump needs a "good" economy. If the Fed, which is currently mostly independent, needs to raise interest rates, and Fox News doesn't like it, then Trump will remove its independence. This will really raise the chance of the dollar being dumped for something else (and it's either the euro or the renminbi, but more likely the latter).

                          That'll also kill the UK because for some reason we hold ~1.2 times our GDP in US short term bonds.

                          TLDR: you need an exponential supply of cash for AI 2027 to even be close to working.
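
                          To put rough numbers on that TLDR (purely illustrative, assuming frontier training costs keep growing around 3x per year, in the ballpark of recent estimates): a $1B training run becomes $1B × 3^4 ≈ $81B after four years, and $1B × 3^10 ≈ $59T after ten, which is more than half of current world GDP for a single run. Any sub-exponential cash supply gets overrun within a few years.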

                          • OgsyedIE 2 weeks ago
                            I disagree with the forecast too, but your critique is off-base. The claim that exponential cash is required assumes that subexponential capex can't chug along gradually without the industry collapsing into mass bankruptcy. Additionally, the investment cash that the likes of Softbank are throwing away comes from private holdings like pensions and has little to nothing to do with the sovereign holdings of OPEC+ nations. The real reason it doesn't hold water is the bottleneck on compute production: TSMC is still the only supplier of anything useful for foundation model training, and their expansions only appear big and/or fast if you read the likes of Forbes.
                            • KaiserPro 1 week ago
                              > cash that the likes of Softbank are throwing away comes from private holding

                              Softbank uses debt to leverage various things. They require a functioning monetary system to work; they cannot operate as they do now in a financial recession. Sure, some of it might be pensions, but again, that dries up when the financial system freezes.

                              Money doesn't exist in a vacuum. It's not some fixed supply of things; it's a dynamic system that grows and contracts based largely on vibes.

                            • goatlover 2 weeks ago
                              It's certainly hard to imagine the political situation in the US resulting in UBI anytime soon, while at the same time the party in control wants unregulated AI development for the next decade.
                              • bcrosby95 2 weeks ago
                                It's the '30s with no FDR in sight. It won't end well for anyone.
                              • gensym 2 weeks ago
                                > AI 2027 is unmitigated bullshit, but with graphs, so people think there is a science to it.

                                AI 2027 is classic Rationalist/LessWrong/AI Doomer Motte-Bailey - it's a science fiction story that pretends to be rigorous and predictive but in such a way that when you point out it's neither, the authors can fall back to "it's just a story".

                              At first I was surprised at how much traction this thing got, but this is the type of argument that community has been refining for decades at this point, and it's pretty effective on people who lack the antibodies for it.

                                • mitthrowaway2 2 weeks ago
                                I'm very much an AI doomer myself, and even I don't think AI 2027 holds water. I find myself quite confused about what its proponents (including Scott Alexander) are even expecting to get from the project, because it seems to me like the median result will be a big loss of AI-doomer credibility in 2028, when the talking point shifts to "but it's a long-tailed prediction!"
                                  • hollerith 2 weeks ago
                                    Same here. I ask the reader not to react to AI 2027 by dismissing the possibility that it is quite dangerous to let the AI labs continue with their labbing.
                                    • 098799 2 weeks ago
                                      Because if we're unlucky, Scott will think in the final seconds of his life as he watches the world burn "I could have tried harder and worried less about my reputation".
                                      • heavyset_go 2 weeks ago
                                        Scott will just post a ten thousand word article to deflect and his audience will reorient themselves like they always do.
                                      • stego-tech 2 weeks ago
                                        It got traction because it supported everyone’s position in some way:

                                        * Pro-safety folks could point at it and say this is why AI development should slow down or stop

                                        * LLM-doomer folks (disclaimer: it me) can point at it and mock its pie-in-the-sky charts and milestones, as well as its handwashing of any actual issues LLMs have at present, or even just mock the persistent BS nonsense of “AI will eliminate jobs but the economy [built atop consumer spending] will grow exponentially forever so it’ll be fine” that’s so often spewed like sewage

                                        * AI boosters and accelerationists can point to it as why we should speed ahead even faster, because you see, everyone will likely be fine in the end and you can totes trust us to slow down and behave safely at the right moment, swearsies

                                      Good fiction always tickles the brain across multiple positions and knowledge domains, and AI 2027 was no different. It’s a parable warning about the extreme dangers of AI, but it fails to mention how immediate those dangers are (such as AI already being deployed in kamikaze drones) and ultimately wraps it all up as akin to a coin toss between an American and a Chinese empire. It makes a lot of assumptions to sell its particular narrative, to serve its own agenda.

                                        • heavyset_go 2 weeks ago
                                          It got traction because it hyped AI companies' products to a comical level. It's simply great marketing.
                                        • tux3 2 weeks ago
                                        It's the other way around entirely: the story is the unrigorous bailey; when confronted, they fall back to the actual research behind it.

                                        And you can certainly criticize the research, but you've got the motte and the bailey backwards.

                                          • shehzade 1 week ago
                                            Hey tux3, sorry to post off topic, but is there any chance I could DM you on Discord/Signal/Reddit/etc…?

                                            You had a super interesting comment a while back about CrowdStrike RE that I was really hoping I could ask you about further!

                                          Discord: shehzade1618 | Reddit: u/shehzade16180 | Signal: shehzade.1618

                                        • JimDabell 2 weeks ago
                                        It’s not just changing economics that will derail the projections. The story gives them enough compute and intelligence to massively sway public opinion and elections, but then seems to assume the world will just keep working the same way on those fronts. They think ASI will be invented, but 60% of the public will disapprove; I guess a successful PR campaign is too difficult for the “country of geniuses in a datacenter”?
                                          • pier25 2 weeks ago
                                            > AI 2027 is unmitigated bullshit, but with graphs, so people think there is a science to it.

                                            One of the best things I've read all day.

                                          • f38zf5vdt 2 weeks ago
                                            I think the author is right about AI only accelerating to the next frontier when AI takes over AI research. If the timelines are correct and that happens in the next few years, the widely desired job of AI researcher may not even exist by then -- it'll all be a machine-based research feedback loop where humans only hinder the process.

                                            Every other intellectual job will presumably be gone by then too. Maybe AI will be the second great equalizer, after death.

                                            • goatlover 2 weeks ago
                                              Except we have no evidence of AI being able to take over AI research, any more than we have evidence so far that automation this time will significantly reduce human labor. It's all speculation based on extrapolating what some researchers think will happen as models scale up, or what funders hope will happen as they pour more billions into the hype machine.
                                              • dinfinity 2 weeks ago
                                                It's also extrapolating from what already exists. We are way beyond 'just some academic theories'.

                                                One can argue all day about timelines, but AI has progressed from nonexistent to a level rivaling and surpassing quite some humans in quite some things, in less than 100 years. Arguably, all the evidence we have points to AI being able to take over AI research at some point in the near future.

                                                • pier25 2 weeks ago
                                                  > all the evidence we have points to AI being able to take over AI research at some point in the near future.

                                                  Does it?

                                                  That's like looking at a bicycle or a car and saying "all the evidence points to us being able to do interstellar travel in the future".

                                                  • suddenlybananas 2 weeks ago
                                                    >surpassing quite some humans

                                                    I don't really think this is true, unless you'd be willing to say calculators are smarter than humans (or else you're a misanthrope who would do well to actually talk to other people).