Lion Cove: Intel's P-Core Roars
126 points by luyu_wu 9 months ago | 76 comments
- kristianp 9 months agoAbout 94.9 GB/s DRAM bandwidth for the Core Ultra 7 258V they measured. Aren't Intel going to respond to the 200GB/s bandwidth of the M1 Pro introduced 3 years ago? Not to mention 400GB/s of the Max and 800GB/s of the Ultra?
Most of the bandwidth comes from cache hits, but for those rare workloads larger than the caches, Apple's products may be 2-8x faster?
- adrian_b 9 months agoAMD Strix Halo, to be launched in early 2025, will have a 256-bit memory interface for LPDDR5x of 8 or 8.5 GHz, so it will match M1 Pro.
However, Strix Halo, which has a much bigger GPU, is designed for a maximum power consumption for CPU+GPU of 55 W or more (up to 120 W), while Lunar Lake is designed for 17 W, which explains the choices for the memory interfaces.
- Dylan16807 9 months agoThat's good. And better than match, that's 30% faster, at least until the M4 Pro launches with a RAM frequency upgrade.
On the other hand, I do think it's fair to compare to the Max too, and it loses by a lot to that 512 bit bus.
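(A rough back-of-the-envelope check of that comparison, assuming the full 256-bit Strix Halo bus at the quoted transfer rates:)

    256-bit bus = 32 bytes per transfer
    8000 MT/s x 32 B ≈ 256 GB/s;  8533 MT/s x 32 B ≈ 273 GB/s
    vs. M1 Pro's ~200 GB/s → roughly 28-37% more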
- kvemkon 9 months ago> LPDDR5x of 8 or 8.5 GHz
Not 8000 or 8500 MT/s and thus the frequency is halved?
- adrian_b 9 months agoSorry, I meant the frequency of the transfers, not that of some synchronization signal, so 8000 or 8500 MT/s, as you say.
However that should have been obvious, because there will be no LPDDR5x of 16000 MT/s. That throughput might be reached in the future, but a different memory standard will be needed, perhaps a derivative of the MRDIMMs that are beginning to be used in servers (with multiplexed ranks).
MHz and MT/s are really the same unit. What differs is the quantity being measured, e.g. the frequency of oscillations versus the frequency of transfers. I do not agree with the practice of giving multiple names to a unit of measurement in order to suggest what kind of quantity has been measured. The number of different quantities that can be measured with the same unit is very large, so if that practice were applied consistently there would be a huge number of unit names. I believe the right way is to use a single unit name but always specify separately what quantity has been measured, because a numeric value and a unit alone are never sufficient information without also knowing which quantity was measured.
- wtallis 9 months agoLunar Lake is very clearly a response to the M1, not its larger siblings: the core counts, packaging, and power delivery changes all line up with the M1 and successors. Lunar Lake isn't intended to scale up to the power (or price) ranges of Apple's Pro/Max chips. So this is definitely not the product where you could expect Intel to start using a wider memory bus.
And there's very little benefit to widening the memory bus past 128-bit unless you have a powerful GPU to make good use of that bandwidth. There are comparatively few consumer workloads for CPUs that are sufficiently bandwidth-hungry.
- formerly_proven 9 months agoIs the full memory bandwidth actually available to the CPU on M-series CPUs? Because that would seem like a waste of silicon to me, to have 200+ GB/s of past-LLC bandwidth for eight cores or so.
- inkyoto 9 months ago100 GB/s per CPU core.
200+ GB/s (IIRC Anandtech measured 240 GB/s) per cluster.
Pro is one cluster, Max is two clusters and Ultra is four clusters, so the cumulative bandwidth is 200, 400 and 800 GB/s respectively[*].
The bandwidth is also shared with GPU and NPU cores within the same cluster, so on combined loads it is plausible that the memory bus may become fully saturated.
[*] Starting with M3, Apple has pivoted Pro models into more Pro and less Pro versions that have differing memory bus widths.
- adrian_b 9 months agoI do not know about the latest models, but at least in the M1 the CPU was limited to a fraction of the total memory throughput, IIRC to about 100 GB/s.
- nox101 9 months agowith all of the local ML being introduced by Apple and Google and Microsoft this thinking seems close to "640k is all you need"
I suspect bandwidth demands from consumer workloads will rise
- throwuxiytayq 9 months agoI think the number of people interested in running ML models locally might be greatly overestimated [here]. There is no killer app in sight that needs to run locally. People work and store their stuff in the cloud. Most people just want a lightweight laptop, and AI workloads would drain the battery and cook your eggs in a matter of minutes, assuming you can run them. Production quality models are pretty much cloud only, and I don’t think open source models, especially ones viable for local inference will close the gap anytime soon. I’d like all of those things to be different, but I think that’s just the way things are.
Of course there are enthusiasts, but I suspect that they prefer and will continue to prefer dedicated inference hardware.
- wtallis 9 months agoLocal ML isn't a CPU workload. The NPUs in mobile processors (both laptop and smartphone) are optimized for low power and low precision, which limit how much memory bandwidth they can demand. So as I said, demand for more memory bandwidth depends mainly on how powerful the GPU is.
- epolanski 9 months agoThe few reviews we have seen now show that lunar lake is competitive with m3s too depending on the application.
- phonon 9 months agoM3 Pro is 150 GB/s (and that should be compared to Lunar Lake's nominal memory bandwidth of 128 GB/s) and the cheapest model with it starts at $2000 ($2400 if you want 36 GB of RAM).
At those price levels, PC laptops have discrete GPUs with their own RAM with 256 GB/s and up just for the GPU.
- wmf 9 months agoThe "response" to those is discrete GPUs that have been available all along.
- Aaargh20318 9 months agoDiscrete GPUs are a dead end street. They are fine for gaming, but for GPGPU tasks unified memory is a game changer.
- kristianp 9 months agoTrue, but I thought Intel might start using more channels to make that metric look less unbalanced in Apple's favour. Especially now that they are putting RAM on package.
- tjoff 9 months agoWhy the obsession with this particular metric? And how can one claim something is unbalanced while focusing on one metric?
- sudosysgen 9 months agoNot really, the killer is latency, not throughput. It's very rare that a CPU actually runs out of memory bandwidth. It's much more useful for the GPU.
95 GB/s is ~24 GB/s per core; at 4.8 GHz that's 40 bits per core per cycle. You would have to be doing basically nothing useful with the data to be able to get through that much bandwidth.
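(Spelling out that arithmetic, assuming the ~95 GB/s is split evenly across the four P-cores:)

    95 GB/s ÷ 4 cores ≈ 24 GB/s per core
    24 GB/s ÷ 4.8 GHz ≈ 5 bytes per core per cycle = 40 bits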
- adrian_b 9 months agoFor scientific/technical computing, which uses a lot of floating-point operations and a lot of array operations, when memory limits performance the limit is almost always the memory throughput and almost never the memory latency (in correctly written programs, which allow the hardware prefetchers to do their job of hiding the latency).
The resemblance to the behavior of GPUs is not a coincidence: GPUs are also mostly doing array operations.
So the general rule is that the programs dominated by array operations are sensitive mostly to the memory throughput.
This can be seen in the different effect of the memory bandwidth on the SPECint and SPECfp benchmark results, where the SPECfp results are usually greatly improved when memory with a higher throughput is used, unlike the SPECint results.
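(A minimal, illustrative C sketch of the distinction, assuming buffers much larger than the caches: the first loop is throughput-bound because its independent, sequential loads let the hardware prefetchers run ahead; the second is latency-bound because every load depends on the previous one.)

    #include <stddef.h>

    /* Throughput-bound: independent sequential loads, prefetchers hide latency. */
    double stream_sum(const double *a, size_t n) {
        double s = 0.0;
        for (size_t i = 0; i < n; i++)
            s += a[i];
        return s;
    }

    /* Latency-bound: each load depends on the previous one (pointer chase). */
    size_t chase(const size_t *next, size_t start, size_t steps) {
        size_t idx = start;
        for (size_t i = 0; i < steps; i++)
            idx = next[idx];
        return idx;
    }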
- sudosysgen 9 months agoYou are right that it's a limiting factor in general for that use case, just not in the case of this specific chip - this chip has far fewer cores per memory channel, so latency will be the limiting factor. Even then, I assure you that no scientific workload is going to be consuming 40 bits/clock/core. It's just a staggering amount of memory traffic; no correctly written program would hit this, you'd need abysmal cache hit ratios.
This processor has a 128-bit memory interface for 4 P-cores. Something like an EPYC 9754 has twelve 64-bit DDR5 channels for 128 cores.
- fulafel 9 months ago40 bits per clock in an 8-wide core gets you 5 bits per instruction, and we have AVX512 instructions to feed, with operand sizes 100x that (and there are multiple operands).
Modern chips do face the memory wall. See eg here (though about Zen 5) where they in the same vein conclude "A loop that streams data from memory must do at least 340 AVX512 instructions for every 512-bit load from memory to not bottleneck on memory bandwidth."
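(The per-instruction arithmetic there:)

    40 bits/cycle ÷ 8 instructions/cycle = 5 bits per instruction
    one 512-bit AVX-512 operand ≈ 100x that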
- adrian_b 9 months agoThe throughput of the AVX-512 computation instructions is matched to the throughput of loads from the L1 cache memory, on all CPUs.
Therefore to reach the maximum throughput, you must have the data in the L1 cache memory. Because L1 is not shared, the throughput of the transfers from L1 scales proportionally with the number of cores, so it can never become a bottleneck.
So the most important optimization target for the programs that use AVX-512 is to ensure that the data is already located in L1 whenever it is needed. To achieve this, one of the most important things is to use memory access patterns that will trigger the hardware prefetchers, so that they will fill the L1 cache ahead of time.
The main memory throughput is not much lower than that of the L1 cache, but the main memory is shared by all cores, so if all cores want data from the main memory at the same time, the performance can drop dramatically.
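(For a sense of scale, assuming for illustration a core that can load two 64-byte vectors per cycle from L1 at about 5 GHz:)

    per-core L1 load bandwidth ≈ 2 x 64 B x 5 GHz ≈ 640 GB/s (and this scales with core count)
    shared DRAM bandwidth ≈ 95 GB/s for the whole chip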
- sudosysgen 9 months agoThe processors that hit this wall have many, many cores per memory channel. It's just not realistic for this to be a problem with a 128-bit LPDDR5X interface feeding 4 P-cores.
These cores cannot process 8 AVX512 instructions at once; in fact they can't execute AVX512 at all, as it's disabled on consumer Intel chips.
Also, AVX instructions operate on registers, not on memory, so you cannot have more than one register being loaded at once.
If you are running at ~4 instructions per clock, then to actually saturate 40 bits per clock with 64-bit loads, you'd need 1/6 of instructions to hit main memory (not cache)!
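(Checking that fraction:)

    40 bits/cycle ÷ 64 bits per load ≈ 0.63 loads per cycle
    0.63 ÷ ~4 instructions/cycle ≈ 0.16 ≈ 1 in 6 instructions missing all caches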
- unsigner 9 months agoThere might be a chicken-and-egg situation here - one often hears that there’s no point having wider SIMD vectors or more ALU units, as they would spend all their time waiting for the memory anyway.
- adrian_b 9 months agoThe width and count of the SIMD execution units are matched to the load throughput from the L1 cache memory, which is not shared between cores.
Any number of cores with any count and any width of SIMD functional units can reach the maximum throughput, as long as it can be ensured that the data can be found in the L1 cache memories at the right time.
So the limitations on the number of cores and/or SIMD width and count are completely determined by whether in the applications of interest it is possible to bring the data from the main memory to the L1 cache memories at the right times, or not.
This is what must be analyzed in discussions about such limits.
- immibis 9 months agoCPUs generally achieve around 4-8 FLOPs per cycle. That means 256-512 bits per cycle. We're all doing AI, which means matrix multiplications, which means frequently rereading the same data (bigger than the cache) and doing one MAC with each piece of data read.
- jart 9 months agoThe most important algorithm in the world, matrix multiplication, just does a fused multiply add on the data. Memory bandwidth is a real bottleneck.
- adrian_b 9 months agoThe importance of the matrix multiplication algorithm is precisely due to the fact that it is the main algorithm where the ratio between computational operations and memory transfers can be made very large, so the memory bandwidth does not have to be a bottleneck for it.
The right way to express a matrix multiplication is not the one wrongly taught in schools, as scalar products of row and column vectors, but as a sum of tensor (outer) products of each column vector of the first matrix with the corresponding row vector of the second matrix (column k of the first with row k of the second).
Computing a tensor product of two vectors, with the result accumulated in registers, requires a number of memory loads equal to the sum of the lengths of the vectors, but a number of FMA operations equal to the product of the lengths (i.e. for square matrices of size NxN, there are 2N loads and N^2 FMA for one tensor product, which multiplied with N tensor products give 2N^2 loads and N^3 FMA operations for the matrix multiplication).
Whenever the lengths of both vectors are no less than 2 and at least one length is no less than 3, the product is greater than the sum. With greater vector lengths, the ratio between product and sum grows very quickly, so when the CPU has enough registers to hold the partial sums, the ratio between the counts of FMA operations and memory loads can be very large.
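(A minimal, illustrative C sketch of that formulation - not an optimized kernel, which would hold a tile of C and the current row of B in registers and use FMA intrinsics, but it shows the operation counting: update k touches only ~2N distinct values, column k of A and row k of B, yet performs N^2 multiply-adds.)

    #include <stddef.h>

    /* C += A*B for NxN row-major matrices, expressed as a sum of N rank-1
       (outer-product) updates instead of N^2 dot products. */
    void matmul_rank1(size_t n, const float *A, const float *B, float *C) {
        for (size_t k = 0; k < n; k++)        /* one outer product per k */
            for (size_t i = 0; i < n; i++) {
                float a_ik = A[i*n + k];      /* column k of A */
                for (size_t j = 0; j < n; j++)
                    C[i*n + j] += a_ik * B[k*n + j];  /* row k of B, reused across i */
            }
    }

For N = 1024 that is about 10^9 FMAs against about 2 million distinct matrix elements loaded, which is why a well-blocked matmul can stay compute-bound.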
- svantana 9 months agoIs it though? The matmul of two NxN matrices takes N^3 MACs and 2*N^2 memory accesses. So the larger the matrices, the more the arithmetic dominates (with some practical caveats, obviously).
- perryh2 9 months agoIt looks awesome. I am definitely going to purchase a 14" Lunar Lake laptop from either Asus (Zenbook S14) or Lenovo (Yoga Slim). I really like my 14" MBP form factor and these look like they would be great for running Linux.
- jjmarr 9 months agoI constantly get graphical glitches on my Zenbook Duo 2024. Would recommend against going Intel if you want to use Linux.
- skavi 9 months agoIntel has historically been pretty great at Linux support. Especially for peripherals like WiFi cards and GPUs.
- jauntywundrkind 9 months agoTheir "PCIe" wifi cards "mysteriously" not working in anything but Intel systems is enraging.
I bought a wifi7 card & tried it in a bunch of non-Intel systems, and it straight up didn't work. Bought a wifi6 card and it sort of works, ish, but I have to reload the wifi module and sometimes it just dies. (And no, these are not CNVio parts.)
I think Intel has a great amazing legacy & does super things. Usually their driver support is amazing. But these wifi cards have been utterly enraging & far below what's acceptable in the PC world; they are not fit to be called PCIe devices.
Something about wifi really brings out the worst in companies. :/
- silisili 9 months agoI get them also in my Lunar Lake NUC. Usually in the browser, and presents as missing/choppy text oddly enough. Annoying but not really a deal breaker. Hoping it sorts out in the next couple kernel updates.
- jjmarr 9 months agoDo you get weird checkerboard patterns as well?
- gigatexal 9 months agoGive it some time. It probably needs updated drivers; Intel and Linux have been rock solid for me too. If your hardware is really new it's likely a kernel-and-time issue. 6.12 or 6.13 should have everything sorted.
- rafaelmn 9 months agoGiven the layoffs and the trajectory spiral I wouldn't be holding my breath for this.
- amanzi 9 months agoI'm really curious about how well they run Linux. E.g. will the NPU work under Linux in the same way it does on Windows? Or does it require specific drivers? Same with the battery life - is there a Windows-specific driver that helps with this, or can we expect the same under Linux?
- ac29 9 months agoYou can look at the NPU software stack here:
https://github.com/intel/linux-npu-driver/blob/main/docs/ove...
The Linux driver is specific to Linux, but the software on top of that like oneAPI and OpenVINO are cross platform I think.
- wmf 9 months agoAll NPUs require drivers. https://www.phoronix.com/news/Intel-Linux-NPU-Driver-1.5
- formerly_proven 9 months agoI’d wait for LG honestly
- RicoElectrico 9 months ago> A plain memory latency test sees about 131.4 ns of DRAM latency. Creating some artificial bandwidth load drops latency to 112.4 ns.
Can someone put this in context? The values seem order of magnitude higher than here: https://www.anandtech.com/show/16143/insights-into-ddr5-subt...
- toast0 9 months agoThe chips and cheese number feels like an all-in number; get a timestamp, do a memory read (that you know will not be served from cache), get another timestamp.
The anandtech article is latencies for parts of a memory operation, between the memory controller and the ram. End to end latency is going to be a lot more than just CAS latency, because CAS latency only applies once you've got the proper row open, etc.
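(A rough, illustrative sketch of that kind of end-to-end measurement, assuming a buffer well past the last-level cache and a randomized dependent chain so neither caches nor prefetchers can help; a real benchmark would use a better RNG and repeat the run.)

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    int main(void) {
        size_t n = (size_t)1 << 24;            /* 16M pointers * 8 B = 128 MB, well past LLC */
        size_t *chain = malloc(n * sizeof *chain);
        if (!chain) return 1;

        /* Sattolo's algorithm: build a single random cycle so the chase
           visits every slot and the prefetchers can't predict the next address. */
        for (size_t i = 0; i < n; i++) chain[i] = i;
        for (size_t i = n - 1; i > 0; i--) {
            size_t j = (size_t)rand() % i;     /* j < i guarantees one big cycle */
            size_t t = chain[i]; chain[i] = chain[j]; chain[j] = t;
        }

        size_t idx = 0, iters = 20 * 1000 * 1000;
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (size_t i = 0; i < iters; i++)
            idx = chain[idx];                  /* dependent load: full round trip each time */
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (double)(t1.tv_nsec - t0.tv_nsec);
        printf("%.1f ns per load (end of chain: %zu)\n", ns / (double)iters, idx);
        free(chain);
        return 0;
    }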
- wtallis 9 months agoGetting requests up through the cache hierarchy to the DRAM controller, and data back down to the requesting core's load/store units is also a non-trivial part of this total latency.
- foota 9 months agoI think the numbers in that article (the CAS latency) are the latency numbers "within" the DRAM module itself, not the end to end latency between the processor and the RAM.
You could read the article on the latest AMD top of the line desktop chip to compare: https://chipsandcheese.com/2024/08/14/amds-ryzen-9950x-zen-5... (although that's a desktop chip, the original article compares the Intel performance to 128 ns of DRAM latency for AMD's mobile platform Strix Point)
- Tuna-Fish 9 months agoCAS latency is only the latency of doing an access from an open row. This is in no way representative of a normal random access latency. (Because caches are so large that if you were frequently hitting open rows, you'd just load from cache instead.)
The way CAS has been widely understood as "memory latency" is just wrong.
- Sakos 9 months agoThat article is about RAM latency in isolation. See this Anandtech article that shows similar numbers to chips and cheese when evaluating a CPU's DRAM latency (further down on the page): https://www.anandtech.com/show/16214/amd-zen-3-ryzen-deep-di...
- jart 9 months agoUse Intel's mlc (memory latency checker) tool to measure your system. On a GCE instance I see about 97ns for RAM access. On a highly overclocked gaming computer with a small amount of RAM I see 60ns. Under load, latency usually drops to about 200ns. On workstation with a lot of RAM and cores I see it drop to a microsecond.
- adrian_b 9 months agoI completely agree with the author that renaming the L1 cache as L0 and introducing a new L1 cache, as Intel has done, is completely misleading terminology.
The correct solution is the one from the parent article: continue to call the L1 cache the L1 cache, because there is no important difference between it and the L1 caches of previous CPUs, and call the new cache memory that has been inserted between the L1 and L2 the L1.5 cache.
Perhaps Intel did this to give the (wrong) impression that the new CPUs have a bigger L1 cache than the old CPUs. That would be incorrect, because the so-called new L1 cache has much lower throughput and worse latency than a true L1 cache in any other CPU.
The new L1.5 is not a replacement for an L1 cache; it functions as part of the L2 cache, with the same throughput as the L2 but lower latency. As explained in the article, this has been necessary to allow Intel to expand the L2 cache to 2.5 MB in Lunar Lake and to 3 MB in Arrow Lake S (the desktop CPU), in comparison with AMD, which has only a 1 MB L2 cache (but a bigger L3 cache).
According to rumors, while the top AMD desktop CPUs without stacked cache memory have an 80 MB L2+L3 cache (16 MB L2 + 64 MB L3), the top Intel model 285K might have 78 MB of cache, i.e. about the same amount, but with a different distribution across levels: 2 MB L1.5 + 40 MB L2 + 36 MB L3. Nevertheless, so far there is no official information from Intel about Arrow Lake S, whose launch is expected about a month from now, so the amount of L3 cache is not certain; only the amounts of L2 and L1.5 are known from earlier Intel presentations.
Lunar Lake is an excellent design for all applications where adequate cooling is impossible, i.e. thin and light notebooks and tablets or fanless small computers.
Nevertheless, Intel could not abstain from using unfair marketing tactics. Almost all the benchmarks presented by Intel at the launch of Lunar Lake have been based on the top model 288V. Both top models, 288V and 268V, are likely to be unobtainium in most computer models, and at the few manufacturers that do offer the option they will be extremely overpriced.
Most available and affordable computers with Lunar Lake will not offer any better CPU than 258V, which is the one tested in the parent article. 258V has only 4.8 GHz/2.2 GHz turbo/base clock frequencies, vs. 5.1 GHz/3.3 GHz of the 288V used in the Intel benchmarks and in many other online benchmarks. So the actual experience of most Lunar Lake users will not match most published benchmarks, even if it will be good enough in comparison with any competitors in the same low-power market segment.
- AzzyHN 9 months agoWe'll have to see how this compares to Zen 5 once 24H2 drops.
And once more than like three Zen 5 laptops come out.
- deaddodo 9 months agoThe last couple of generations have had plenty of AMD options. Razer 14, Zephyrus G14, TUFbook, etc. If you get out of the performance/enthusiast segment, they're even more plentiful (Inspirons, Lenovos, Zenbooks, etc).
- nahnahno 9 months agoThe review guide had everyone on 24H2; there were some issues with one of the updates that messed up performance for Lunar Lake pre-release, but that appears to have been fixed in time for release.
I’d expect Lunar Lake’s position to improve a bit in coming months as they tweak scheduling, but AMD should be good at this point.
Edit: around the 16 minute mark, https://youtu.be/5OGogMfH5pU?si=ILhVwWFEJlcA3HLO. The laptops came with 24H2.
- AStonesThrow 9 months agoI apologize in advance for my possibly off-topic linguistics-nerd pun:
Q: What do you call Windows with its UI translated to Hebrew? A: The L10N of Judah