Intel’s Xeon Platinum 8284 CPU: When 300 MHz Cost $5,500
by Anton Shilov on July 18, 2019 12:00 PM EST
Besides the Xeon processors officially listed on its website and price list, Intel has tens of ‘off-roadmap’ server CPUs available only to select customers with special requirements. Journalists at ComputerBase recently discovered the Xeon Platinum 8284, the company’s fastest 28-core chip for multi-socket servers. The CPU runs 300 MHz faster than the ‘official’ Xeon Platinum 8280, but costs considerably more.
Intel’s Xeon Platinum 8284 packs 28 cores with Hyper-Threading running at 3.0 - 4.0 GHz, features 38.5 MB of cache, a six-channel memory controller supporting up to 1 TB of DDR4-2933 with ECC, 48 PCIe 3.0 lanes, and the other capabilities found in CPUs codenamed Cascade Lake. Since the chip runs at a 300 MHz higher base frequency than the Xeon Platinum 8280, it has a 240 W TDP, up from 205 W. Meanwhile, the CPU's Tcase (the maximum allowed temperature on the processor's IHS) was reduced to 65°C (down from 84°C), so the chip requires a very sophisticated cooling system that can remove 240 W at that temperature.
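To give a sense of how demanding that thermal spec is, here is a rough back-of-the-envelope estimate of the thermal resistance a cooler would need. The 35°C inlet air temperature is an assumption for illustration, not a figure from Intel's spec:

```python
# Rough estimate of the case-to-ambient thermal resistance a cooler must
# achieve to keep each CPU within its Tcase spec at full TDP.
# NOTE: the 35 C ambient temperature is an assumed value for illustration.
def required_thermal_resistance(tcase_max_c, ambient_c, tdp_w):
    """Maximum allowable case-to-ambient thermal resistance in C/W."""
    return (tcase_max_c - ambient_c) / tdp_w

# Xeon Platinum 8284: 65 C Tcase, 240 W TDP (figures from the article)
r_8284 = required_thermal_resistance(65, 35, 240)
# Xeon Platinum 8280: 84 C Tcase, 205 W TDP (figures from the article)
r_8280 = required_thermal_resistance(84, 35, 205)

print(f"8284 cooler must achieve <= {r_8284:.3f} C/W")  # 0.125 C/W
print(f"8280 cooler must achieve <= {r_8280:.3f} C/W")  # 0.239 C/W
```

Under this assumption the 8284 needs a cooler roughly twice as effective as the 8280's, which is why the article calls the required cooling "very sophisticated".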
As Intel’s fastest 28-core CPU for multi-socket servers, the Xeon Platinum 8284 costs $15,460 (recommended customer price for a 1,000-unit order, RCP), whereas the Xeon Platinum 8280, which runs at a 300 MHz lower frequency, costs $10,009 per 1k units.
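The headline figure follows from simple arithmetic on the two list prices:

```python
# Price premium of the off-roadmap 8284 over the 8280
# (1,000-unit recommended customer prices from the article).
price_8284 = 15_460
price_8280 = 10_009

premium = price_8284 - price_8280  # dollars paid for the extra 300 MHz base clock
per_mhz = premium / 300            # dollars per additional MHz

print(f"Premium: ${premium}")          # $5451, the ~$5,500 in the headline
print(f"Per MHz: ${per_mhz:.2f}")      # $18.17 per extra MHz
```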
[Table: Intel Second Generation Xeon Scalable Family - Xeon Platinum 8200 series]
The Xeon Platinum 8284 is not mentioned in Intel’s price list, nor listed under Cascade Lake in Intel's ARK database, though it is searchable if you know the exact model number. This typically means the CPU is available only to select customers, or even a single customer. That said, it is possible that, beyond higher clocks, this 'semi-custom' off-roadmap processor carries additional features, which might explain the huge price difference compared to the model 8280.
- The Intel Second Generation Xeon Scalable: Cascade Lake, Now with Up To 56-Cores and Optane!
- Intel’s Enterprise Extravaganza 2019: Launching Cascade Lake, Optane DCPMM, Agilex FPGAs, 100G Ethernet, and Xeon D-1600
- Intel Architecture Manual Updates: bfloat16 for Cooper Lake Xeon Scalable Only?
- Intel Xeon Update: Ice Lake and Cooper Lake Sampling, Faster Future Updates
Source: Intel’s ARK (via ComputerBase)
Comments
twotwotwo - Friday, July 19, 2019
Curious about the 24C range, since 6C=>8C chiplets is like a >50% price jump (b/c yields I guess). Also, of course, about AMD's pricing and Intel's (pricing/product) response.
chada - Monday, July 22, 2019
cores, cost, threads. Pick 1 or 2 depending on your needs.
mmrezaie - Thursday, July 18, 2019
I always wonder why Intel has so many SKUs. I don't think anyone wants this many choices since they are hardly different. I like to see choices, but in such small increments? Is this marketing? Or does production force them into it, e.g. silicon variation?
TheWereCat - Thursday, July 18, 2019
It's just a ton of different bins, mostly.
edzieba - Thursday, July 18, 2019
These are Intel's equivalent to AMD's 'semi-custom' service, where Intel will produce an SKU at the request of a specific vendor for a specific product. It's why the 'list price' is a bit of a misnomer: they're not listed, and that price doesn't really reflect what the companies buying these variants are paying.
HStewart - Thursday, July 18, 2019
I am curious what L, M, and Y mean with the same specs.
SarahKerrigan - Thursday, July 18, 2019
M: Supports 2TB RAM
L: Supports 4.5TB RAM
S: Speed Select
HStewart - Thursday, July 18, 2019
Thanks, so this shows one of the reasons why there are so many different products.
GreenReaper - Thursday, July 18, 2019
Yes, except that at least two of those are likely to be artificial limitations for product segmentation (read: extracting the most profit from those who will pay for it).
Is this a bad thing? Not necessarily. But it's definitely a thing, and annoying if you want those features.
edzieba - Friday, July 19, 2019
Not segmentation, die harvesting. When you have a very large die, you have lots of individual component parts you can fuse off if they have a defect: cores, PCIe interfaces, memory interfaces, etc. If you did not harvest these dies and only sold 'perfect' dies as a single SKU line, you would have a very small volume of parts and high prices for those parts. By finely binning dies into a large number of SKUs based on yield, you have many more sellable dies, and customers can buy ones that lack features they do not use to reduce outlay.
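The die-harvesting argument in the comment above can be illustrated with a toy simulation. The 2% per-core defect rate is an arbitrary assumption for illustration; real defect rates and binning rules are not public:

```python
import random

# Toy simulation of die harvesting: each die has 28 cores, and each core
# independently has a small chance of carrying a defect. Dies are binned by
# how many good cores remain, showing why fine binning yields many more
# sellable parts than selling only 'perfect' dies.
random.seed(0)

CORES_PER_DIE = 28
DEFECT_RATE = 0.02     # assumed per-core defect probability (illustrative only)
N_DIES = 10_000

bins = {}
for _ in range(N_DIES):
    good = sum(random.random() > DEFECT_RATE for _ in range(CORES_PER_DIE))
    bins[good] = bins.get(good, 0) + 1

print(f"Perfect 28-core dies: {bins.get(28, 0)} / {N_DIES}")
for cores in sorted(bins, reverse=True)[:4]:
    print(f"{cores} good cores: {bins[cores]} dies")
```

With these assumptions only a bit over half the dies come out perfect; selling the rest as lower-core-count SKUs turns would-be scrap into revenue, which is the commenter's point.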