CPU ST Performance: Not Much Change from M1

Apple didn’t talk much about the core performance of the new M1 Pro and Max, and this is likely because it hasn’t really changed all that much compared to the M1. We’re still seeing the same Firestorm performance cores, and they’re still clocked at 3.23GHz. The new chips have more cache and more DRAM bandwidth, but in ST scenarios we’re not expecting large differences.

When we first tested the M1 last year, we had compiled SPEC under Apple’s Xcode compiler, and we lacked a Fortran compiler. For the numbers published here we’ve moved to a vanilla LLVM 11 toolchain and make use of GFortran (GCC 11), allowing for more apples-to-apples comparisons. The figures don’t change much for the C/C++ workloads, but we get a more complete set of figures for the suite thanks to the Fortran workloads. We keep flags very simple at just “-Ofast” and nothing else.

SPECint2017 Rate-1 Estimated Scores

In SPECint2017, the differences to the M1 are small. 523.xalancbmk showcases a large performance improvement, however I don’t think this is due to changes in the chip, but rather a change in Apple’s memory allocator in macOS 12. Unfortunately, we no longer have an M1 device available to us, so these are still older figures from earlier in the year on macOS 11.

Against the competition, the M1 Max either has a significant performance lead, or is at least able to reach parity with the best AMD and Intel have to offer. The chip doesn’t change the landscape all that much, however.

SPECfp2017 Rate-1 Estimated Scores

SPECfp2017 also doesn’t change dramatically. 549.fotonik3d does score quite a bit better than on the M1, which could be tied to the increased DRAM bandwidth, as this workload puts extreme stress on the memory subsystem. Otherwise the scores change very little compared to the M1, which on average remains well ahead of the laptop competition.

SPEC2017 Rate-1 Estimated Total

The M1 Max lands as the top-performing laptop chip in SPECint2017, just shy of being the best CPU overall, a title which still goes to the 5950X, but it is able to take and maintain the crown from the M1 in the FP suite.

Overall, the new M1 Max doesn’t deliver any large surprises in single-threaded performance metrics, nor did we expect it to.

Comments

  • Ppietra - Tuesday, October 26, 2021 - link

    anyone can compile SPEC and see the source code
  • Ppietra - Monday, October 25, 2021 - link

    There aren’t that many games that are actually optimized for Apple’s hardware, so you cannot actually extrapolate to other scenarios, though we shouldn’t expect it to be the best anyway. We should look at other kinds of workloads to see how it behaves.
    SPEC uses a lot of different real-world tasks.
  • FurryFireball - Wednesday, October 27, 2021 - link

    World of Warcraft is optimized for the M1
  • Ppietra - Wednesday, October 27, 2021 - link

    true, but it isn’t one of the games that were tested.
    What I meant is that people seem to be drawing conclusions about hardware based on games that have almost no optimisation.
  • The Hardcard - Monday, October 25, 2021 - link

    Please provide 1 example of the M1 falling behind on native code. As far as games, we’ll see if maybe one developer will dip a toe in with a native game. I wouldn’t buy one of these now if gaming was a priority.

    But note, these SPEC scores are unoptimized and independently compiled, so there are no benchmark tricks here. Imagine what the scores would be if time was taken to optimize to the architecture’s strengths.
  • name99 - Monday, October 25, 2021 - link

    Oh the internet...
    - Idiot fringe A complaining that "SPEC results don't count because Apple didn't submit properly tuned and optimized results".
    - Meanwhile, simultaneously, Idiot fringe B complaining that "Apple cheats on benchmarks because they once, 20 years ago, in fact tried to create tuned and optimized SPEC results".
  • sean8102 - Tuesday, October 26, 2021 - link

    From what I can find Baldur's Gate 3 and WoW are the only 2 demanding games that are ARM native on macOS.
    https://www.applegamingwiki.com/wiki/M1_native_com...
  • michael2k - Monday, October 25, 2021 - link

    From the article, yes, the benchmark does show the M1M beating the 3080 and Intel/AMD:
    On the GPU side, the GE76 Raider comes with an RTX 3080 mobile. On Aztec High, this uses a total of 200W power for 266fps, while the M1 Max beats it at 307fps with just 70W wall active power. The package powers for the MSI system are reported at 35+144W.

    In the SPECfp suite, the M1 Max is in its own category of silicon with no comparison in the market. It completely demolishes any laptop contender, showcasing 2.2x the performance of the second-best laptop chip. The M1 Max even manages to outperform the 16-core 5950X – a chip whose package power is at 142W, with the rest of the system even quite above that. It’s an absolutely absurd comparison and a situation we haven’t seen the likes of.

    However, your assertion regarding applications seems completely opposite what the review found:
    With that said, the GPU performance of the new chips relative to the best in the world of Windows is all over the place. GFXBench looks really good, as do the MacBooks’ performance productivity workloads. For the true professionals out there – the people using cameras that cost as much as a MacBook Pro and software packages that are only slightly cheaper – the M1 Pro and M1 Max should prove very welcome. There is a massive amount of pixel pushing power available in these SoCs, so long as you have the workload required to put it to good use.
  • taligentia - Monday, October 25, 2021 - link

    Did you even read the article ?

    The "real world" 3080 scenarios were done using Rosetta emulated apps.

    When you look at GPU intensive apps e.g. Davinci Resolve it is seeing staggering performance.
  • vlad42 - Monday, October 25, 2021 - link

    Did you read the article? Andrei made sure the UHD benchmarks were GPU bound, not CPU bound (which would be the case if it were a Rosetta issue).
