AMD's Freshly-Baked MI350: An Interview with the Chief Architect

128 points by pella 2 weeks ago | 75 comments
  • pella 2 weeks ago
    FP6:

      "Alan: Sure, yep, so one of the things that we felt like on MI350 in this  timeframe, that it's going into the market and the current state of AI... we felt like that FP6 is a format that has potential to not only be used for inferencing, but potentially for training. And so we wanted to make sure that the capabilities for FP6 were class-leading relative to... what others maybe would have been implementing, or have implemented. And so, as you know, it's a long lead time to design hardware, so we were thinking about this years ago and wanted to make sure that MI350 had leadership in FP6 performance. So we made a decision to implement the FP6 data path at the same throughput as the FP4 data path. Of course, we had to take on a little bit more hardware in order to do that. FP6 has a few more bits, obviously, that's why it's called FP6. But we were able to do that within the area of constraints that we had in the matrix engine, and do that in a very power- and area-efficient way.
    • treesciencebot 2 weeks ago
      the main question is going to be the software stack. NVIDIA is already shipping NVFP4 kernels and perf is looking good. It took a really long time after the MI300X launch for the FP8 kernels to become merely OK (not even good, compared to the almost-perfect FP8 support on the NVIDIA side of things).

      I doubt that they will be able to reach 60-70% of the peak FLOPs in the majority of workloads (unless they hand-craft and tune a specific GEMM kernel for their benchmark shape). But I would be happy to be proven wrong, and go buy a bunch of them.

      • pella 2 weeks ago
        (related)

        Tinygrad:

          "We've been negotiating a $2M contract to get AMD on MLPerf, but one of the sticking points has been confidentiality. Perhaps posting the deliverables on X will help legal to get in the spirit of open source!"
        
           "Contract is signed! No confidentiality, AMD has leadership that's capable of acting. Let's make this training run happen, we work in public on our Discord.
        " https://x.com/__tinygrad__/status/1935364905949110532
        • LeonM 2 weeks ago
          It still amazes me that George/Tinycorp somehow seem to get AMD on board every time, while apparently being blissfully unaware that they are a very small player. See for example the top comment here [0].

          Don't get me wrong, I think it's impressive what he achieved so far, and I hope tiny can stay competitive in this market.

          [0] https://news.ycombinator.com/item?id=36193625

        • lhl 2 weeks ago
          For anyone interested in tracking max achievable matmul FLOPS for hardware and unaware, I highly recommend tracking Stas Bekman's mamf-finder results: https://github.com/stas00/ml-engineering/tree/master/compute...
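
          In the spirit of such tools (this is not mamf-finder's actual code), a rough sketch of measuring achieved matmul TFLOPS against a datasheet peak; the shape, dtype, and the 989 TFLOPS reference (H100 BF16 dense) are illustrative assumptions:

            import time
            import torch

            M = N = K = 8192
            a = torch.randn(M, K, dtype=torch.bfloat16, device="cuda")
            b = torch.randn(K, N, dtype=torch.bfloat16, device="cuda")

            for _ in range(3):  # warmup so we don't time startup overhead
                a @ b
            torch.cuda.synchronize()

            iters = 20
            t0 = time.perf_counter()
            for _ in range(iters):
                a @ b
            torch.cuda.synchronize()
            dt = (time.perf_counter() - t0) / iters

            achieved = 2 * M * N * K / dt / 1e12  # each GEMM does 2*M*N*K FLOPs
            print(f"{achieved:.1f} TFLOPS, {achieved / 989:.0%} of assumed peak")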
        • kristianp 2 weeks ago
          Will 1.58-bit (ternary) weights be supported in the MI400? Or is it not yet established as a widely useful technology?

          See https://arxiv.org/abs/2402.17764
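
          For context, the paper's "absmean" scheme rounds every weight to {-1, 0, +1} with a single per-tensor scale; log2(3) ≈ 1.58 is where the name comes from. A minimal NumPy sketch of that scheme (toy shapes, my own naming):

            import numpy as np

            def quantize_ternary(w: np.ndarray, eps: float = 1e-8):
                gamma = np.abs(w).mean()  # per-tensor "absmean" scale
                w_q = np.clip(np.round(w / (gamma + eps)), -1, 1)
                return w_q.astype(np.int8), gamma  # dequantize as w_q * gamma

            w = np.random.randn(4, 4).astype(np.float32)
            w_q, gamma = quantize_ternary(w)
            print(w_q)         # entries are only -1, 0, or 1
            print(np.log2(3))  # ~1.585 bits of information per ternary weight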

        • behnamoh 2 weeks ago
          Does this also ship only in x8 batches? I really liked the MI300 and could afford one for my research, but they only come in batches of x8 in a server rack, so I decided to buy an RTX Pro 6000 instead.
          • jiggawatts 2 weeks ago
            Of course not.

            AMD stubbornly refuses to recognise the huge numbers of low- or medium-budget researchers, hobbyists, and open-source developers.

            This ignorance of how software development is done has resulted in them losing out on a multi-trillion-dollar market.

            It's incredible to me how obstinate certain segments of the industry (such as hardware design) can be.

            • rfv6723 2 weeks ago
              These people are very loud online, but they don't make decisions for the hyperscalers, which are the biggest spenders on AI chips.

              AMD is doing just fine: Oracle just announced an AI cluster with up to 131,072 of AMD's new MI355X GPUs.

              AMD needs to focus on bringing the rack-scale MI400 to market as quickly as possible, rather than on hobbyists, who always find something to complain about instead of spending money.

              • behnamoh 2 weeks ago
                > these people

                We're talking about the majority of open-source developers (I'm one of them). If researchers don't get access to hardware X, they write their paper using hardware Y (Nvidia). AMD isn't doing fine, because most low-level AI research is done purely on CUDA.

                • uniclaude 2 weeks ago
                  Neither their revenue nor their market share in the space looks "just fine". What exactly about trailing the market for years is "just fine"?

                  AMD is very far behind, and their earnings are so low that even with a nonsensical P/E ratio they're still valued at less than a tenth of Nvidia. No, they are not doing anywhere near fine.

                  Are hobbyists the reason for this? I’m not sure. However, what AMD is doing is clearly failing.

                  • creato 2 weeks ago
                    When you design software for N customers, where N is very small, and you expect to hold each customer's hand individually, the software is basically guaranteed to be hot garbage that doesn't generalize or actually work except in exactly the use cases you supported. (There are exceptions to this, but it requires exceptional software engineers and leaders who care about doing things correctly and not just closing the next ticket, and in my experience they are extremely rare.)

                    If you design software for N00000 customers, it can't be shit, because you can't hold the hands of that many people; it's just not possible. Designing for a wide variety of users forces you to make your software not suck, or you'll drown in support requests that you cannot possibly handle.

                    • almostgotcaught 2 weeks ago
                      > These ppl are very loud online, but they don't make decisions for hyperscalers which are biggest spenders on AI chips.

                      this guy gets it - absolutely no one cares about the hobby market because it's absolutely not how software development is done (nor is it how software is paid for).

                    • gdiamos 2 weeks ago
                      startups and researchers are broke, just like Geoff Hinton in 2006 - https://blog.waqasrana.me/assets/papers/hinton2006.pdf
                      • behnamoh 2 weeks ago
                        No, we're not broke! We constantly write grants and receive funding from various sources. Guess what hardware we recommend the university purchase? It's 99.9% Nvidia, and sometimes a Mac Studio just to play with MLX.
                      • naveen99 2 weeks ago
                        There is no mass middle market… it’s volume or luxury… middle management is for taxes.
                      • latchkey 2 weeks ago
                        It isn't 8x batches. It is 8 OAMs (OCP Accelerator Modules) on a UBB (Universal Base Board). The UBB is what enables the 8 GPUs to communicate with each other over Infinity Fabric.

                        If you don't need 8, that's exactly why we offer 1x MI300X VMs.

                        • ryao 2 weeks ago
                          A number of people want to purchase their own hardware, not rent cloud hardware. I recently purchased an RTX PRO 6000 for the same reason, despite having the option of renting a B200 VM for $1.49 an hour from DeepInfra until the end of June.
                          • latchkey 2 weeks ago
                            True, but as time goes on, the gap between what is deployed in DCs and what you can run at home will get wider and wider.

                            We see it now with the 8x UBB, and it will get worse with direct liquid cooling and larger power requirements. The MI300X is 700 W, the MI355 is 1,200 W, and the MI450 will be even more.

                            Certainly AMD should make some consumer-grade stuff, but they won't stop on the enterprise side either. Your only option to get supercomputer-level compute will be to rent it.

                      • teleforce 2 weeks ago
                        This 8-GPU MI350 combo is a beauty, with 2,304 GB of HBM3E memory (8 x 288 GB per OAM) on each UBB [1].

                        [1] This is the AMD Instinct MI350:

                        https://www.servethehome.com/this-is-the-amd-instinct-mi350/

                        • latchkey 2 weeks ago
                          I've got the MI300X and I can't wait to deploy a bunch of MI355s.
                          • jonfromsf 2 weeks ago
                            Nvidia's advantage is software, not just hardware. It would be amazing to have a competitive market, but better hardware alone won't be enough to make it happen.
                            • tedunangst 2 weeks ago
                              A solid 40% of George's questions were deemed great. (Not counting some fluff like what's your job.)
                              • AbuAssar 2 weeks ago
                                If MI350 employs CDNA, which is based on the Vega (GCN) architecture, does that imply that MI400, when introduced next year, will move off the 2020-era GCN lineage and transition directly to an RDNA 5 equivalent?
                                • adrian_b 2 weeks ago
                                  There will be no RDNA 5, but a unified UDNA, replacing both CDNA and RDNA.

                                  AMD has not disclosed how they will achieve the unification, but it is far more likely that the unified architecture will be an evolution of CDNA 4, i.e. an evolution of the old GCN, than an evolution of RDNA, because basing the unified architecture on CDNA/GCN will create fewer problems in software porting than basing it on RDNA 3 or 4. The unified architecture will probably take features from RDNA only when they are hard to emulate on CDNA.

                                  While the first generation of RDNA was acclaimed for a good performance increase in games over the previous GCN-based Vega, it is not clear how much of that increase was due to RDNA being better for games and how much to the fact that the first RDNA GPUs happened to have double-width vector pipelines compared with the previous GCN GPUs, and thus double the throughput per clock cycle and per CU (32 FP32 operations/cycle vs. 16).

                                  It is possible that RDNA was not really a better architecture, but that omitting some GCN hardware rarely used in games made room for the wider pipelines that were more useful in games. So RDNA was a better compromise for the technology available at the time, not necessarily better in other circumstances.
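
                                  To make the throughput arithmetic concrete, a back-of-the-envelope sketch; the CU count and clock are made-up illustrative numbers, and only the 16 vs. 32 FP32 operations/cycle figure comes from the comparison above:

                                    def peak_tflops(cus, fp32_ops_per_cu_per_cycle, clock_ghz):
                                        # x2 because one fused multiply-add counts as two FLOPs
                                        return cus * fp32_ops_per_cu_per_cycle * 2 * clock_ghz / 1e3

                                    print(peak_tflops(60, 16, 1.5))  # GCN-style CU:  2.88 TFLOPS
                                    print(peak_tflops(60, 32, 1.5))  # RDNA-style CU: 5.76 TFLOPS, 2x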

                                  • trynumber9 2 weeks ago
                                    I heard the opposite: that the next architecture is gfx13, and that it is more like RDNA with more bolted on. That makes sense given the version numbers: MI350 is still gfx943 or gfx950, while the RX 9070 XT is gfx1201.
                                  • pella 2 weeks ago
                                    2026 - MI400X - CDNA 5 - UALink/IF - Helios - HBM Bandwidth: 1,400 TB/s

                                    https://www.tomshardware.com/pc-components/gpus/amd-says-ins...

                                    • rfv6723 2 weeks ago
                                      RDNA is a dead-end.

                                      AMD went down the wrong path by focusing on traditional rendering instead of machine learning.

                                      I think future AMD consumer GPUs will go back to GCN.

                                      • almostgotcaught 2 weeks ago
                                        • adrian_b 2 weeks ago
                                          The identification of the AMD GPU architectures has always been extremely confusing, with tons of different names meaning the same thing and with some names, like GCN, used for several very different things.

                                          The table you linked is good for revealing the meaning of some of the many AMD code names.

                                      • deadbabe 2 weeks ago
                                        Will AMD catch up to Nvidia?
                                        • mobilio 2 weeks ago
                                          If they improve software quality and provide some low-budget versions, then yes.
                                          • AzzyHN 2 weeks ago
                                            On the consumer side, almost certainly not. Nvidia is a HUGE brand name; it doesn't matter how good and cheap AMD makes their consumer GPUs, people will buy Nvidia GPUs for the brand, and prebuilts will stick with Nvidia for the name.

                                            For AI chips... also probably not, unless AMD can compete with CUDA (or CUDA becomes irrelevant)

                                            • jillesvangurp 2 weeks ago
                                              Actually, both Xbox and PlayStation use AMD GPUs, and so does the Steam Deck. So there's that. For the narrow niche of gaming PCs, I think there are a lot of kids buying what they can afford and getting creative about what works. AMD isn't doing horribly in that market either.

                                              And for AI, CUDA is already becoming less relevant. Most of the big players use chips of their own design: Google has its TPUs, Amazon has some in-house designs, Apple has its own CPU/GPU line and doesn't even support anything Nvidia at this point, MS do their own thing for Azure, etc.

                                              You are basically making the "Intel will stay big because Intel is big" argument for Nvidia. Except, of course, that stopped being true for Intel. They are still largish, but a lot of data centers are transitioning to ARM CPUs, they lost Apple as a customer, and there are now some decent Windows laptops using ARM CPUs as well.

                                              • pjmlp 2 weeks ago
                                                People learning on their laptops, on their way to becoming future researchers, care about what software they can get, regardless of closed proprietary game consoles and hyperscaler server farms.
                                              • frje1400 2 weeks ago
                                                > On the consumer side, almost certainly not. Nvidia is a HUGE brand name; it doesn't matter how good and cheap AMD makes their consumer GPUs, people will buy Nvidia GPUs for the brand, and prebuilts will stick with Nvidia for the name.

                                                I think that AMD could do it, but they choose not to. If you look at their most recent lineup of cards (various SKUs of the 9070 and 9060), they are not so much better than Nvidia at each price point that they are a must-buy. They even released an outright bad card a few weeks ago (the 9060 8 GB). I assume the rationale is that even if they could somehow dominate the gamer market, that is peanuts compared to the potential in AI.

                                              • pjmlp 2 weeks ago
                                                Not for me. I was burned twice buying laptops with AMD, only to battle with their software; even the FOSS drivers on GNU/Linux weren't that great versus the Windows experience.

                                                On Windows, meanwhile, it has been hit and miss with their SDKs and shader tooling. Does anyone remember RenderMonkey?

                                                So NVidia it is.

                                                • gavinray 2 weeks ago
                                                  Yeah, sorry, I'm in the same boat.

                                                  I'm team AMD for CPU (currently waiting for consumer X3D laptops to become reasonably priced).

                                                  But for GPU, if only for the "It Just Works" factor, I'm wedded to NVIDIA for the foreseeable future.

                                                • z3ratul163071 2 weeks ago
                                                  And as of late, I really think AMD exists only so that NVidia doesn't get slapped with antitrust lawsuits.

                                                  They played that part beautifully for Intel in past decades.

                                                  • naveen99 2 weeks ago
                                                    Beating Intel is not just existing as CYA against antitrust lawsuits.
                                                    • happycube 2 weeks ago
                                                      Ten years ago nobody would have believed that AMD would have over double Intel's market cap in 2025. And they would be at least somewhat surprised that Nvidia would be about 10x that.
                                                      • z3ratul163071 2 weeks ago
                                                        [flagged]
                                                    • wmf 2 weeks ago
                                                      Yes, if they can ship on time.
                                                      • z3ratul163071 2 weeks ago
                                                        The only way for them to have any chance at catching up is to fire all the software VPs and all SW middle management, plus 90% of the engineers, and rebuild the software team from the ground up.

                                                        Because the team they have had for the last decade is clearly incompetent.

                                                        • zombiwoof 2 weeks ago
                                                          They don’t care to catch up.