I'd imagine the 16C is being held in reserve, so if Intel try to respond they'll just drop everything by one price level and insert the 16C at $499 again.
Don't see how this would be "screwing over their current customers". It's PC hardware; whenever you buy something, you know chances are you could get something with a better price-to-performance ratio within the next 6 months.
Considering the fact that we do not have a 12-core part on the consumer side yet (the 3900X will be the first), I fail to see how not releasing a 16-core part at launch is screwing over anyone. AMD raised the bar here; it's a game changer. They also announced a new 64-core part on the professional side of things..
Sure, as soon as they introduce a processor at some price point, they must never again introduce any better processor at that price point. That's definitely how the computer industry was built.
Technically the 3700X is the 2700X replacement, running at a higher frequency with 15% improvements clock for clock and a 65W TDP. The 3900X would be hmm.. a refresh of the first-gen 1800X I guess.. since it's priced in and around there.
They wouldn't screw anyone. If anything, 16 cores should be the absolute maximum number of cores you should feed with merely 2 memory channels. I would personally prefer to max out at 12 cores. I believe one of the reasons the Zen 2 based Threadripper was either canned or postponed was because it would make little to no sense to go above 32 cores with merely 4 memory channels, and they probably have not yet decided if they want to max out at 32 cores (which is what they should do).
The same applies to the 64-core Epyc and its 8 memory channels. Beyond 64 cores you need to either add more channels or, more sanely, move to DDR5. Both require a new platform.
Intel paired their new cream-of-the-crop AP platform with *12* memory channels, and I believe they intend to max out at 56 cores. That's considerably more reasonable.
That is barely a real product. I don’t know who will buy it. It is two standard 28 core cpus (at over 600 square mm each; really expensive) placed on one package. It doesn’t have any of the power optimizations that are possible with chiplets designed to be placed in an MCM. At 400 Watts TDP, it isn’t really a competitor to Epyc 2. The 12 channel memory is more incidental since this product would never have existed if it wasn’t needed as a marketing response to Epyc 2.
Lisa Su confirmed there would be more threadripper in an interview right after the keynote.
“You know, it’s very interesting, some of the things that circulate on the Internet—I don’t think we ever said that Threadripper was not going to continue—it somehow took on a life of its own on the Internet,”
"If mainstream is moving up, then Threadripper will have to move up, up—and that's what we're working on,"
I know exactly where the idea of "Threadripper has been cancelled" came from. There was a "leaked" roadmap without Threadripper on it, and some people assumed that was real. The lack of a discussion about Threadripper so far this year has served to make people believe it.
Our Geek Squad is expert in providing the best of solutions to our customers to help them with tech repairs.Geek Squad has made a huge area in providing the best services for office and home gadgets repair. The professional and enthusiastic experts of our team are capable of handling all sorts of contraptions, gadgets, hardware, electrical, control issues, and other adaptable issues that you would not find anywhere else. https://supportcustomers.us/geek-squad-tech-suppor... https://geektechsupport.me
what is this ?? free advertising for a knock off of the best buy tech support of the same name ? i wonder how long before Best Buy sues for copyright infringement
You get what you pay for at the time you buy it. The reward is that you get to enjoy it while everyone else who "waits" does not get a priceless experience. Too cheap for a 2080Ti? Well, great, but you won't get to enjoy a 2080Ti in its heyday.
That's how everything works. If you bought a Model T in 1915, you can't be mad that a Toyota Camry is better in 2019. I'd rather have bought the Model T when it came out and enjoyed it, then kept upgrading. Because I have a job.
i have a job too.. but i cant justify paying 1500 to 2k for a video card... if you want to call someone cheap for that.. then.. must be nice to have more money than brains
The whole NVIDIA RTX issue doesn't apply here, because very little supports ray tracing right now, so 2080Ti buyers haven't really gotten a much better performing product for all that extra money.
Spending $500 for a Ryzen 9 3900X, on the other hand, is something that will provide benefits, and even in another six years, you probably won't see 12-core chips as entry level (though 8-core may be at that point).
If you truly need 16 cores, you have Threadripper, which goes up to 32 real cores. True, you won't get PCIe 4, but you do get 60 PCIe 3 lanes, so if for example you need a PCIe 4 M.2 SSD, you can instead use two PCIe 3 M.2 SSDs in a RAID or something to get the same speed (rough numbers below).
You have to figure that if Intel were to release a 12-core consumer chip, it wouldn't be sold for less than $700, so AMD could leave prices where they are, but sell the 16-core chip for the same price as the Intel 12-core chip. The key comes down to competition, but also to not really increasing prices. When the Ryzen 7 1800X was released, it was sold for $500 and sold out very quickly. The 3900X for $500 is 50% more cores, plus boost was 4.0GHz for the 1800X, while the 3900X will have a 4.6GHz boost. That's a lot more CPU for the same price point, even two years later.
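On the earlier point about striping two Gen3 NVMe drives to roughly match one Gen4 drive, the arithmetic works out for sequential transfers (a rough sketch; ~0.985 GB/s usable per PCIe 3.0 lane and ~1.97 GB/s per 4.0 lane are assumed figures, and RAID overhead is ignored):

pcie3_lane = 0.985   # GB/s usable per PCIe 3.0 lane (approx.)
pcie4_lane = 1.969   # GB/s usable per PCIe 4.0 lane (approx.)
gen4_x4_ssd   = 4 * pcie4_lane        # ~7.9 GB/s ceiling for one Gen4 x4 SSD
gen3_x4_raid0 = 2 * 4 * pcie3_lane    # ~7.9 GB/s ceiling for two Gen3 x4 SSDs striped
print(gen4_x4_ssd, gen3_x4_raid0)

Latency and random I/O won't double the same way, of course; striping only buys you sequential throughput.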
yeah, they figured out how to combine two 6-core chips, but can't imagine how to combine 8c ones
isn't it obvious that 12 cores are enough to beat the 9900K, and that a 16-core will appear AFTER Intel rolls out its 10-core beast? just like AMD pushed Intel to double the number of cores in 2 years, Intel pushed AMD to roll out a 12-core CPU, and it hasn't yet pushed AMD enough to get 16 cores on the market
I work for a relatively small ISP (~50 racks), and all of OUR new server purchases are exclusively being quoted out as Epyc (customers are loading whatever they want).
At least by us, and I expect others, QuickAssist is largely seen as an unnecessary backdoor of likely poor security, and with dubious value. As well, Intel's Meltdown/Spectre/Fallout/etc vulnerabilities that have greatly reduced the processing power that we purchased are deplorable, and further erode our faith in proprietary Intel implementations. On top of that, we also see QuickAssist as an unnecessary "lock-in" to a CPU architecture, which is very bad business for people like us.
Having said that, as far as I know, AMD does not have an equivalent, but I doubt it matters to most people, as cost/performance, density (very important for us), security, and availability are the main decision points, and the new Epyc chips win on all fronts.
Too bad your words are not supported by hard facts or numbers, and AMD's market share remains really low. Be careful: there is likely a security hole in your new SKU that you don't know about because nobody has reported it yet. Marketing post.
This is what I heard about the Rome CPU according to articles. It's the first to support PCIE 4.0, is a drop in replacement for existing boards, has 4x the floating point power of the previous gen, and 2x the speed per socket over naples. They showed a demo of 1 Rome cpu beating 2 flagship intels in a rendering test..
I don't know a whole helluva lot about cpu's like that but it sounds impressive..
AMD's numbers have grown substantially in the enterprise space in the last two years. Intel still has the majority, but these things don't happen overnight. Somebody is buying a lot of Epyc based systems though, and the extra attention given by cloud providers like Amazon means you should take it seriously.
I don't admin nearly as many systems as OP, but I am also looking at Epyc for our next VM cluster later this year. And it really does come down to price/performance. They have a much more sensible upgrade path and roadmap too.
Price/Performance has always been a selling point for AMD.. but from speculation by others it appears AMD will be competing with Intel across just about every sector and not just at certain price points.. either matching or beating them outright. Something that we haven't really seen before. It's gotten a lot of people really excited and should certainly bode well for all of us as Intel will be forced to compete on pricing as well as innovation.
Going to be hard for AMD to compete against the very datacenter focused Intel Xeon products, with optimizations for video transcoding, as well as FPGA accelerators.
Depends on what you are doing. The vast majority of businesses do not use their servers for video transcoding. Most of the work done is pretty mundane. AD domain controllers, SMB file servers, SQL servers, VDI and application hosting.
Users with more specific needs will have to be more discerning, but for everyone else it's simple price/performance arithmetic.
Could just be holding it back? The 12-core doesn't need to use top-tier chiplets to be made; only 6 of the 8 cores per chiplet need to work and reach the desired clock speeds. And either way, even without the 16-core chip, their processors are faster than anything Intel has to show, so why not. It is unfortunate, but AMD is now in the position where they can hold back products because Intel can't produce.
The nature of the 'battlefield' doesn't really matter, the same basic strategies still apply. If Intel strikes back now, AMD can immediately respond with whatever they're holding back.
I also don't think that Lisa Su could have gotten AMD where it is today by thinking of their products as 'computer toys'.
Clocks are but one spec of a multitude of things a CPU can use to be a top performer.
The most valuable is definitely IPC. If you have a low IPC, you need more clocks to get more performance. If you have a high IPC, you need fewer clocks. The fact that this chip goes up to 4.6 GHz on a 12-core part is quite substantial. Intel's 5 GHz i9 is on a single core, and yes, they announced a 5 GHz all-core on the 9900KS, but that's probably gonna require amazing cooling.
clock speeds are like horsepower in a car. It's a spec, but it's not the vehicle's total performance. Torque and efficiency are two very important things to consider, and just because the Ford GT's 650HP engine is greater than the F-150's 400ish, doesn't mean that the GT can tow better. It's quicker, yes, but it's not a workhorse.
We don't know this for sure until someone tries to overclock these chips. Until then, all talk about clock speeds is nothing more than hearsay. Remember, the 9900K is not a real "95W" chip, because when it is set to run as a real 95W chip it loses a lot of its performance. So it is possible that AMD's R7 3800X could reach 5GHz (on air/water) when overclocked, but we won't know for sure until closer to or after July 7th.
" Too bad AMD can not clock....this is the real problem " maybe they dont need to... if Zen 2 is able to be on par or 1-5% faster then intel while being 300-400 mhz slower.. then why clock them that high ?? it could just make intel look even worse :-)
Clock speed means nothing if it doesn't have IPC to go with it. The Pentium 4 was able to clock very high, but it wasn't faster than the Athlon 64 clocked 1GHz lower. The AMD FX9590 was able to hit 5GHz but was slower than the Intel parts clocked 1GHz lower.
"The 12 core doesn't need to use top tier chips to be made only 6 of the chips need to work and reach the desired clock speeds."
It also fits into a 105W TDP with 105W (or a little more) motherboards and the 105W standard cooler. A hypothetical 16C/32T 3990X needs something like 125W to make sense. And of course it would be memory-limited in a whole lot of workloads; just 128 bits of DDR4 serving 32 threads is insane.
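To put rough numbers on that (a back-of-envelope sketch assuming dual-channel DDR4-3200; sustained bandwidth and workload sensitivity will vary):

# Peak bandwidth of one 64-bit DDR4-3200 channel: 3200 MT/s * 8 bytes ~= 25.6 GB/s
per_channel_gb_s = 3200e6 * 8 / 1e9
total_gb_s = 2 * per_channel_gb_s          # dual channel, ~51.2 GB/s
for cores in (8, 12, 16):
    print(f"{cores} cores -> ~{total_gb_s / cores:.1f} GB/s per core")
# 8 -> ~6.4, 12 -> ~4.3, 16 -> ~3.2 GB/s per core

Whether ~3.2 GB/s per core is enough depends entirely on the workload, which is why the channel count keeps coming up in this thread.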
The server chips may go through a lot more validation, but they don’t need to bin that high in clock. The server parts will mostly be in the 2.5 GHz range or lower. They will bin them heavily for power consumption more than anything else.
Server chips don't launch till Q3; Ryzen launches a minimum of 2 months and up to 5 months before the Rome launch. The only binning they'll be doing in the meantime is strictly for Ryzen.
As they did before, it's a fair bet that all the very best clocking chiplets are being stockpiled for the next Threadripper. Which Lisa Su confirmed (after the keynote) that they are still working on.
Obviously the faster 8-core chiplets have to be binned to introduce the Ryzen 9 3950X, or the Ryzen 10 perhaps. It will take time for good 8-core chiplets to get set aside for new SKUs in the near future, and this was most expected. No mention of Ryzen 3 either. Zen 2 is obviously a staggered release over the second half of 2019. So I expect a 12-core Ryzen 7 3850X at or near 5GHz, a 16-core Ryzen 9 3950X, and then a Ryzen 3 6-core with SMT and a 6-core without SMT. I'm sure we will see 6, 8, 12 and 16 core Ryzens for the Zen 2 family stack.
There is simply no freggin way AMD would just toss all those extra fast or partially defective chiplets in the trash when they could be used instead for Ryzen 9 or Ryzen 3. I'm betting the 5GHz parts come later this year too, once enough passing chiplets have been binned, just to answer Intel's upcoming 10-core desktop parts. Think about it: a Zen 2 16-core at 5GHz boost would easily make Intel's 10-core chips look silly as well. They can't let all their eggs out of the basket just yet. Besides, it's clear that many of the X570 boards have VRMs designed for 16-core CPUs.
5GHz this year... not likely. In 2 to 3 years... maybe. This is a good improvement! We can expect some golden samples to reach 4.7GHz maybe. The power usage seems to ramp up very quickly from 4.4 to 4.5GHz! So getting 4.6GHz seems to be hard at this moment. Requires a good sample.
We don't know how high these new processors will actually clock. Intel was hitting 5.0GHz for years before 5.0GHz became the official boost/turbo speed. Of course, Intel doesn't talk about the REAL TDP of chips, just the TDP at base speeds. So if AMD is hitting 4.6GHz on 12 cores official boost, it is possible these chips CAN run faster with better cooling and a motherboard that can handle the increased power demand. Asus ROG boards tend to be overkill for the official clock speeds, and can handle a LOT more power demand.
"The power usage seems to ramp up very quicly from 4.4 to 4.5GHz!" TDPs describe the thermal emission of (all-core) base clocks, not (either all-core or single-core) boost clocks. Boost clocks have separate, generally unreported and *much* higher "TDPs" (plural because they depend on the number of active cores at boost clock), which the cooler needs to be able to handle so that boost clocks can be sustained for long periods of time. What's highly unusual about the 3800X (assuming it was not misreported) is that it has a +45W TDP for a mere +300 MHz base clock.
Intel and AMD measure TDP differently. What you described is how Intel rates their TDP. If memory serves, AMD measures TDP when all cores are boosting as fast as they can. I don't know how AMD measures it exactly, but Intel measures at all core on base frequency.
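For what it's worth, third-party coverage around the Ryzen 2000 launch described AMD's TDP as a purely thermal formula rather than a power measurement; the numbers below are figures commonly cited for the 2700X and should be treated as assumptions here, not official spec:

# TDP = (max heatspreader temp - assumed intake temp) / assumed cooler thermal resistance
t_case_max = 61.8    # deg C
t_ambient  = 42.0    # deg C
theta_ca   = 0.189   # deg C per watt
print(round((t_case_max - t_ambient) / theta_ca))   # ~105 (W)

Under that definition the rating says more about the cooler AMD expects you to bolt on than about actual package power, which is why measured draw at all-core boost can land well above the number on the box.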
I suspect Zen 3 on TSMC's 7nm+ will certainly be able to hit 5GHz with EUV, and that just went into mass production. By the time Zen 3 comes out next year, TSMC will have had 6+ months with 7nm+, so the process should be more mature.
While that doesn't appear to be the case, it would be so nice and simple if it were:
Athlon should be 2C/2T and 2C/4T if they even bother with dual-core chiplets. Ryzen 3 should be 4C/4T and 4C/8T. Ryzen 5 should be 6C/6T and 6C/12T. Ryzen 7 should be 8C/8T and 8C/16T. Ryzen 9 should be 12C/12T and 12C/24T.
Then, later on, they could release a fully-enabled dual-chiplet CPU in 16C/16T and 16C/32T variants, just to really mess with Intel. Call it Ryzen X(treme). :D
With Ryzen going to 16 cores, and Epyc going down to 8 cores, it doesn't really leave room for Threadripper. Wouldn't be too surprised if TR is retired.
The only place AMD turned off SMT was on the Ryzen 3 models with 4 core/4 thread, otherwise, I don't expect AMD to ever leave SMT off on a Ryzen 5. AMD isn't Intel.
Ryzen 3 will probably be limited to 12 nm IGP parts which are a single chip. You probably need to get at least Ryzen 5 to get the 7 nm chiplet version with multiple chiplets.
5GHz isn't happening on TSMC 7nm. Not from the factory, anyway. Unless we're taking super-uber-elite cherry picked dies wrapped in the finest (ESD-safe, of course) cashmere, that you've got to sell your firstborn into slavery to acquire.
It's not a great node for high performance. It's a big step up from where AMD was at before (they're jumping ~2 nodes), but you'll have to wait a bit longer for those speeds. TSMC has some great stuff in the pipeline, but N7 is a bit of a dud.
Four years ago, you could get an Intel chip and overclock it to 5.0GHz, well before Intel made it the official "turbo" speed. Intel actually had the potential to just clock chips much higher than it did, giving a lot of room to just sit around and bump clock speeds over the past four years without any significant changes.
We don't know how high these new AMD chips can clock on all cores at this point, and it may require better motherboards than many people have. An Asus ROG Crosshair VI Hero from the first generation may be able to do better than many second generation boards, just because of the VRMs.
7nm with EUV might help (esp. for defect rates), but generally, these ultra-dense nodes haven't been kind to clock speeds or power delivery systems (Intel is still struggling with 10nm that is equivalent to TSMC 7nm). Copper resistance increases as it gets smaller, so putting high current through wires with higher resistance will generate a lot of localized heat energy. Without more exotic materials for transistors and wire connects (and studying different types of transistors like GAA FETs for 5nm and below), I think clock speeds will stagnate and energy usage for higher clocks will ramp significantly as we move to smaller lithographies. I expect cache sizes will be the next battle to retain as much data on-chip to both reduce power and increase performance per clock by not having to make as many trips out to slow RAM for instructions and/or data.
My guess: standing by for a last 2019 or 2020 refresh. AMD simply doesn't have to play their full hand right now to be competitive. Going to 16 core on AM4 looks to be trivial on paper since they're already doing 12 core: just a matter of clock speeds and core voltage to make it happen inside of AM4 parameters.
My guess is the 16-core is the base level Threadripper. No way for dual channel memory to provide the bandwidth 16 cores need. Even the 12-core will have memory limitations in some applications.
You can probably use higher speed memory with Zen 2 and it also looks like it may get better memory bandwidth utilization. Throw in some prefetch improvements and AVX 256 and it may outperform 16-core ThreadRipper for most applications. ThreadRipper may still start at 16-core though. I don’t know if they would sell a 64-core ThreadRipper. To run 64 cores at high clock would take ridiculous power. Epyc 2 with 64-cores will probably be under 2.5 GHz. Perhaps 48 core ThreadRipper.
A 16-core Threadripper will still make sense even if they release Ryzen with 16 cores - for more memory and IO intensive applications. Technically it makes more sense than a 16-core Ryzen.
No need to rip Intel's balls off violently. Obviously there is room to maneuver. Space for more cores, higher MHz. Save some for later, when Intel desperately tries to respond and stumbles, like they usually do.
AMD may have held back on the 16 core, just to see what the Intel response will be here. If a 4.6GHz boost from AMD is able to beat the 5.0GHz boost from Intel, if Intel does release a 12 core, no matter the speed, AMD is in the position to easily release 16 core chips. Power IS a concern, since many motherboards without good VRMs just won't be able to handle that rumored 135W TDP for those processors. AMD COULD do it at launch, but if only five existing motherboards could handle it, that hurts the perception that first generation motherboards can handle the third generation processors.
As far as the memory channels go, increased cache helps at least. Next year there should be a new socket, and AMD could potentially make it compatible with first through third generation processors.
AMD needs to keep something back for when Intel inevitably leapfrogs them back.
Once Intel takes back the mainstream performance crown (if they end up losing it this time around), a week later AMD can announce a pre-prepared 16 core, stealing at least some of Intel's thunder.
Also, why only sell a 16-core CPU to an enthusiast when you can first sell them a 12-core, and 3-6 months later an additional 16-core?
Intel doesn’t seem to have anything other than pushing up max clock a little and they probably will not have anything for a while. The 10 nm parts they seem to have coming out in a reasonable time will probably be mobile parts. While I wanted a 16 core part, it may be the case that it would be significantly above the TDP designed into existing AM4 boards. If they want to sell it they may need to sell it at lower clock speeds, but people with boards that could support the power draw could then overclock it.
ok, that does NOT make sense..... for cooling and power spec, ok, it might make sense, but not BECAUSE they use a mix of 7nm and 14nm (especially when the 7nm chiplets are the actual cores, no longer tied down to running more or less 1:1 in sync, so no crashes etc.)
If anything, by splitting up the 7nm and 14nm from one another it actually allowed them to get clocks etc as high as they ARE....
You might also mention that the 3900X's 32% better single-thread performance vs. the 1800X includes a 15% boost clock increase. Therefore, the IPC plus memory bandwidth increase must be 17% (69% total over the previous arch). That's really good for an arch that was already boasting 52% performance over its predecessor!
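Treating those gains multiplicatively rather than additively gives a slightly smaller split (rough arithmetic using the same figures):

total_gain = 1.32        # claimed single-thread gain, 3900X vs 1800X
clock_gain = 4.6 / 4.0   # boost clock, 4.0 GHz -> 4.6 GHz (= 1.15)
print(f"~{(total_gain / clock_gain - 1) * 100:.0f}% from IPC + memory")   # ~15%

Either way it lands close to the ~15% IPC uplift AMD itself is quoting.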
We do not yet know the all-core frequency of the 3800X. However it looks like TSMC's 7nm high performance node is oriented more toward power efficiency than performance. Perhaps its optimal voltage-frequency curve leans more toward mobile processors than desktop processors? That extra +45W TDP for a mere +300 MHz clock of the 3800X was very surprising, though it might hint at the above. Is the optimal base clock per TDP of the 3700X/3800X at 3.3 to 3.4 GHz?
Well, that doesn't mean it's using 105W, just more than 65W. In addition, we are not seeing all the boost frequencies or durations for various combinations of active cores. It may be that the 3800X can boost more cores higher, for longer.
16 cores with 2 memory channels is equivalent to 32 cores with 4 memory channels and 64 cores with 8 memory channels. All three have 1 memory channel per 8 cores. AMD has already done 2 out of 3, so why not do all 3?
My guess would be that yields are still not that great and having the fastest part use only 6 cores gets rid of a lot more die with bad cores. No point in putting out the 16 core until Intel makes the next move.
not released yet because amd has no reason at all to do so.
intel can't even compete with the 12core model at the moment so amd will keep the 16core back leaving room to upgrade when they need to, along with a broader base of useable chips since there will probably be a lot more chiplets with 6 of the 8 cores good enough to clock to 4.6ghz for the 3900x than there are full 8core chiplets that can do the same, especially this early in the 7nm process.
they even can use faulty chips with 6 great cores and issues on the remaining two in their most expensive (still cheap) consumer chip this way.
plus this way they do not completely cannibalize their threadripper lineup immediately.
they win on all fronts keeping the 16core away from the market.
Takes guts to stick with that core count, and at least you get to enjoy the full 70MB of cache. Good thing Blender and Cinebench both fit inside that; not sure you can ever say the same for productivity workloads.
I guess AM4 also means no real improvements on the PCIe lane count: would love to see real PCIe and IF switches to give a bit of flexibility, and to see what they plan for a new Threadripper.
Was busy watching live cast so I only caught up on the X570 bit afterwards :-)
Certainly a big improvement in bandwidth, but without switches the best you can hope for is bifurcation. That means a PCIe 3 card will lose you half of the potential bandwidth with no way to recover it.
PCIe switch chips used to be quite common on higher-end motherboards, but these days Avago/Broadcom seems to have pushed prices beyond reasonable, while I'm not sure they even have a PCIe 4 product.
Of course the Southbridge by itself is actually more of a switch but overall AM4 is just becoming very I/O starved with Ryzen 3.
Why bother? Wait for the next gen GPUs and use 8 PCIe lanes for each. Much easier and cheaper. Heck, using TR would be cheaper. Even ignoring cost, you have to find someone wanting to use 2x GPU + wants to upgrade to the new CPU and mobo, keeping old GPUs.
But a 16x gen3 GPU will still only get 8x gen3 bandwidth in an 8x gen4 slot, even if the theoretical maximum bandwidth is the same: unless you have a switch instead of plain wires, the translation is missing.
Raises hand. I certainly would run a flagship graphics card with PCIe 4.0 x8. Why wouldn't you? A Radeon VII or 2080TI barely even uses PCIe 3.0 x8 bandwidth, so PCIe 4.0 x8 is overkill in the extreme already for even the highest end graphics cards.
You should be well aware that a PCIe 3.0 16x card will still use 16 PCIe lanes in Gen3 mode (3.0), wasting, for all practical purposes, the theoretical bandwidth. Ryzen 3000, as far as current Gen3 devices go, is still the same old regarding PCIe bandwidth. We won't be seeing Gen3 devices run any faster or taking any fewer PCIe lanes. Practically there is no change for this hardware.
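Putting rough numbers on the slot-width question (per-lane figures are approximate usable bandwidth):

pcie3, pcie4 = 0.985, 1.969   # GB/s per lane, approx.
print(16 * pcie3)   # ~15.8 GB/s: Gen3 card in a full x16 slot
print(8 * pcie4)    # ~15.8 GB/s: same ceiling, but only for a Gen4 card running x8
print(8 * pcie3)    # ~7.9 GB/s: what a Gen3 card actually negotiates on 8 physical lanes

Without a switch to fan 8 Gen4 lanes out into 16 Gen3 lanes, the Gen3 device just links at Gen3 speed on however many lanes it is physically given.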
2700 - 3.2GHz Base & 4.1GHz Boost
2700X - 3.7GHz Base & 4.3GHz Boost
3700X - 3.6GHz Base & 4.4GHz Boost
You are able to get almost the same base clock and a higher boost clock than the 2700X, which is a 105W TDP part, all in a 65W TDP package. That means that the 3700X will outperform the 2700X by at least 10%, all while having a 40W lower TDP.
I'd guess die quality and binning. A failed 8 core part may not do as well as a fully working 8 core part. 95W TDP gives AMD a lot more room to bin their chips.
It may also allow AMD to put their better 6 core chiplets into the 12C SKU, leaving the "worse" ones for the 6 core 95W unit. Not that they would be objectively bad, just relatively worse in some (or all) measures.
They're clearly hitting a wall. If you've ever overclocked, you'll know that at some point, voltage requirements increase DRAMATICALLY, and therefore power spikes.
piroroadkill or they are limiting the clocks to where they are....cause that's all they need to compete with intel... having a 4.6 ghz keeping up to a 5 ghz cpu, doesn't look good for intel, does it ? :-) zen 2 may go higher.. but we will need to wait till release to know for sure, unless AMD releases something...
I'd guess the 95W and 105W chips have more overclocking potential than the lower power versions. In the same way that we make fun of Intel for rating their chips based on the base speed, AMD bases power on all-core boost speeds. If boosting power allows for higher speeds beyond the official boost speeds, AMD may still be able to hit 5GHz with this generation(we can't know until testing is done).
They will need it, as AMD just attacked them on practically every front and apparently has the support from board partners, with tech features being ahead of Intel boards for the first time in over 15 years. They're also releasing a ton of different motherboards, which is a good indicator that companies like Asus/MSI are expecting these to sell extremely well.
2006 isn't the early 2000s. Before Core 2 the Pentium 4 and P3s couldn't beat an Athlon/64 at the same clock speed. The P4 had to be almost 1GHz faster to beat an Athlon 64. It wasn't until 1st Gen Core i that they were faster in Enterprise. While Core 2 was fast in a single socket, the shared FSB made the Opteron faster in 2+ sockets.
Adding to your point SC. I built a fair number of i7 920s and PhenomII systems. Yeah you took a hit but it wasn't really noticeable for most. They were behind in launch times but it wasn't really until the FX line came out that they really took a massive hit as their single threaded mediocrity was quite noticeable by then as with their A8s/10s etc. Great graphics sure but you could notice the tradeoff.
I would expect a 5 year old process like Intel’s 14 nm to be tweaked to get very high clock speed at this point. Getting higher clock is going to be difficult on smaller process sizes so much of the improvements will be from IPC style improvements and system architecture improvements.
What intel is missing is the density and the chiplet architecture. The 12-core Zen 2 part has 64 MB L3. Intel 14 nm parts top out at 38.5 MB L3 and most of those are Xeons that cost thousands of dollars. AMD will have 64 MB L3 on a $500 part and 32 MB L3 on a $200 part. Cache density scales very well with smaller process.
Intel did make a 56 core Xeon, but I don’t know who would ever want to buy it. It doesn’t have any of the power optimizations allowed by designing the chiplets to be in an MCM from the start and it is 14 nm still. The TDP is 400 Watts. It is just two standard 28 core parts placed on a single package. They are close to 700 square mm each, so they are not cheap. The yields wouldn’t be great even at 14 nm++. The yield of processes under 14 nm isn’t going to be too good so using tiny cpu chiplets is a big win for yields. The 64-core Epyc 2 might be around 1000 square mm of silicon total. The 56 core Xeon (2 die) would be close to 1400 square mm with much less cache; 77 MB L3 total. The 64 core Epyc 2 will have 256 MB L3. That is a massive difference in cache density.
Only with a PCIe switch chip. Those have been too expensive for consumer electronics since the PCIe 2 -> 3 transition. They're not getting cheaper with PCIe 4.0. Gen4 switches are also not widely available yet; X570 is probably going to be the first widely-available ASIC with Gen4 switching capability, but if you need to fan out more than four gen4 lanes, you're out of luck at the moment.
First thing that came to my mind was thinking that now I could run an RTX 2080 Ti on 8 4.0 lanes and have 4x 2x 4.0 lanes for NVMe on U.2 or TB3/USB 4.
But even if all that works in terms of bandwidth, unless somebody sells 4.0 switches at the price of glue logic, it's not going to happen: Avago/Broadcom still wants their money back from the M&A spree that made them pretty much a monopoly in the PCIe switch space.
And that's also where I wonder if perhaps AMD would do well kicking another monopolist's shins by offering a slightly broader range of "SouthSwitches" instead of fixed-allocation Southbridges.
They could take 4, 8 or even 12 lanes, include 4.0 8x -> 3.0 16x/8-8/4-4-4-4 capabilities, slot-heavy or USB/TB heavy and you could have more than just one, different ones, too, on a motherboard.
Of course, that would cut into ThreadRipper territory and they are simply not big enough to diversify that much, but the whole motherboard area is in big need of an overhaul to match growing compute power and I/O demands coming out of a desktop CPU socket.
Intel now actually has all the motivation in the world to stay with PCIe 3 on the desktop for a long time, because they still define what mainstream is and would like to keep AMD away from feeding on PCIe 4.0 pastures as long as possible.
I could even imagine that Nvidia might not enable PCIe 4.0 8x for political reasons, even if they boast PCIe 4.0 for IBM Power.
Actually with Xe-Winter coming and AMD winning HPC contracts, Nvidia is in a very interesting position anyway.
I somehow hate it when politics gain over engineering, but it's fascinating nonetheless.
I have already heard about pci-e 4.0 SSDs. If nvidia supports pci-e 4.0 then they would probably support it at x8. They are made to auto negotiate the number of active lanes. I am not sure it would be spec compliant if they didn’t support falling back to x8 from x16. Intel probably doesn’t support pci-e 4.0 because they designed it into their 10 nm designs that have been delayed.
Yeeaah, now that AMD is competitive, it’s just mimicking Intel pricing.
I wonder whether the price for the flagship would settle at $500 or whether it would keep growing. If Intel decides to price its eventual 12/24 i9 at $600, AMD would surely price its 16/32 Ryzen 9 at $600 and so on.
I mean AMD has to make money sometimes. They still have over $1B of debt to pay down. Besides the prices are still quite good compared to Intel. Only 2 years ago we were paying $350 for a quad core.
I agree. There is progress from both AMD and Intel CPUs, so no hate there. GPUs on the other hand have gone backwards. I'm ready for that half-price 2070 competitor.
the 8C/8T core i7-9700k goes for $409 without an HSF. So including an HSF, it sits in between the 12C/24T 3900x and the 8C/16T 3800x
The 6C/6T core i5-9600k goes for $264 (currently on sale), without HSF, so price-wise it goes up against the 8C/16T 3700x (if the latter includes an HSF).
They should. However, AMD users generally base their loyalty on who sells the most product for the best price. Pricing things at a premium compared to their recent past will piss people off. Not a bad thing for users to be willing to switch whenever they want, but for AMD, it's not a good position to be in when you're easy to switch away from.
This of course, neglects the fact that AMD is doing really well right now, and there is no reason for their users to leave them, but it's worth pointing out. Intel's a household name and a strong brand, and it'll take AMD a lot of effort to equal Intel in that regard.
It's quite likely that AMD will provide good quality coolers on their 3700X and certainly their 3900X, as they've stated they will include a cooler. That adds value over and above Intel's offerings in the same space.
Intel have been trying to milk us for 7 generations now. 10% improvements from one generation to another. AMD and its 3700X is making me think it is finally time to upgrade from my Intel i5 2500K @4.8GHz! :)
I believe this is literally exactly the same pricing structure that the Ryzen 1xxx series launched at. You have a very skewed perspective if you think this is anywhere close to Intel prices. AMD compared their 12-core to an intel 12-core that cost $1200 during the keynote. Only $500 for a 12-core cpu with 64 MB of L3 seems like quite a bargain to me. AMD has a 32 MB L3 part for $200 dollars. You probably would have to pay thousands of dollars for an Intel Xeon with more than 32 MB L3.
I guess it’s the branding that bothers me. It seems like AMD feels a need to offer an edge over Intel in its flagship offering, and since it can’t offer more perf-per-core, in a classic move, it offers more cores; but then again, as pointed out above, it can’t offer 12c/24t for $300-400, so it jacks up the price to match Intel’s i9, which itself is priced higher than usual because Intel apparently feels it’s justified to ask an extra premium for an 8c/16t CPU because of the little IPC gain they can offer; and the whole thing just starts to shift the established perception of how much a flagship CPU costs. I don’t like that because we all saw where it led in the GPU space; I don’t want to have Nvidia’s Titan equivalent in the consumer CPU space, ever.
Besides, was it all necessary? AMD’s new 8c/16t CPU is cheaper than Intel’s. And if we assume that a general consumer wouldn’t need more than 8 cores, why offer 12 in a consumer product at all? I guess they wanted some of that i9 pie; I just wish they offered the $500 CPU as a pro-sumer product, Threadripper, whatever, and kept the consumer offerings below $400.
if it was up to intel.. we would still be stuck at quad core.. cause of amd . we have 8 and 12... cant offer more performance per core ? you sure ? seems zen 2 is offering just that.. and at 300 to 400 mhz less than intel to boot
I remember when AMD announced the Excavator platform and their benchmarks. I wrote on their Facebook page "What have you done?!?!". They more than made up for it today. What an announcement. Can't wait for the reviews. Please, please, do compare it against previous generations as well (core 7*** and 8***, and ryzen 1*** and 2***).
He's probably too stunned to comment right now.. I don't know about you, but this is the first time in years I've been actually excited about cpu/mb news. AMD just spanked Intel hardcore in the announcement arena. Plus.. considering the major participation by partners, it's very likely the rumors are true on performance.
Very excited as well. For the past 6 months or so I've been holding on to the thought of building a new rig based around the Ryzen 3000 series to replace my ageing Xeon E3-1245v2. Seems I can make that happen soonish. :-)
azrael he's probably trying to figure out how he can spin, bend and twist this to make intel smell like roses, when, at the moment.. they smell like fertilizer :-)
Weird pricing yet again; reminds me of 1st gen. The 3600 and 3700X are both better than the 3800X and 3600X in terms of price/performance, plus they have better efficiency.
The 3700X seems to be the chip to buy this generation. Enough everything and quite cheap for what it offers.
Holding out on the 16-core to offload the 12-core CPUs? That is a bunch of happy horsesh*t. I told myself the second they released these new Ryzens I'd grab the top-tiered 16-core I'd been hearing about to use as a workstation build. Now by the time it comes out, my hype over this will be dead and they'll have lost a sale. I wonder how many others will wait or just forego this lineup.
So I wonder if 7nm has any more OC room than 14nm did ? Because if the 3600X can be OC'd from 4.4 to say 4.6, it's going to be a bit of a bargain at $249. Hell, with 15% better IPC, even if it only makes 4.5 it will be -much- faster than a 2600X.
No. It will literally be as fast. The 3600 and 3600X are identical; the only difference is the boost profile and the included cooler. Just like the 2600 and 2600X.
The AMD X series is generally for people who just want the best the silicon is able to give without having to manually tweak settings.
Yeah.. they're nice coolers. I wish they included them on more processors in the 2000 line. A lot of people use aftermarket coolers for their 2700X though, and you can normally sell it for 30-50 bucks fairly quickly if you don't want it. They can be a little loud if you haven't tuned them properly, but they're good coolers.
Hah! Sorry, that's WAY too high on the price. I'm not getting my hopes up for those pre-release corporate benchmarks, either.. Performance better be AT LEAST as good as what they're saying.
We had the Intel monopoly exactly because of your attitude - AMD gives you 7nm 12 cores on AM4 for $499, which is cheaper than Intel's 14nm 8-core 9900K, and people still moan that it is too expensive. What do you want, 16 cores for $50?
On top of all that, you get 7nm 8 core 16 threads that is basically 1:1 with Intel (according to AMD data) for half the price. And all AMD CPUs come with a box cooler that does not suck. What do you people want, seriously, if this is not enough and if you don't consider those CPUs for your next PC, be ready to be permanently bent over while Intel continues to violate your wallet and your rational thought down the future lane.
FP, you just have to look at board partners and announcements by others releasing hardware that will take advantage of what AMD is about to bring to the table. As pleased as I was about the 1st/2nd gen Ryzen (they're great products), Asus/MSI/Gigabyte and the like didn't actually release a lot of products for them... A very different scenario this time with the upcoming launch. They're not just doubling down, they're tripling and quadrupling down, releasing some pretty high end stuff. That's a good indicator right there.. add to that AMD strutting about like a proud lil rooster and it all indicates they're not bsing us.
REQUEST Maya and V-Ray benchmarks when you test the R9. Even though this isn't HEDT, a lot of us would be considering this for content creation / animation work.
Anyone expecting to see cooling problems on the parts with no filler for the missing chiplet die? I imagine heat spreaders are designed to make contact with the chip before the edges make contact with the package, so on the corner with no chiplet the heat spreader will get pushed down further than the other corners, either in manufacturing or when tightening down the cooler. The result is either the heat spreader warps, or the opposite corner is raised off the chip. Most will work, but may have a higher defect rate. Reminds me of the problems with the Alienware laptop tripod heatsink, which made flat pressure contact with the die difficult due to warping.
So exciting to see the CPU market competitive again. I'm hoping AMD can bring the heat to NVIDIA in the GPU market also (though the latest Navi details indicate they're not seriously pushing the price downward).
"The Ryzen 7 3700X is an eight core, sixteen thread CPU with a 3.6 GHz base frequency and a 4.4 GHz turbo frequency. It has 4 MB of L2 and 36 MB of L3..." 32MB not 36MB (What's with all the extra spaces?): "The Ryzen 7 3700X is an eight core, sixteen thread CPU with a 3.6GHz base frequency and a 4.4GHz turbo frequency. It has 4MB of L2 and 32MB of L3..."
Don't forget they did release the Special Edition 9900K with a 5GHz all core speed. You will just need to have a huge AIO to cool it and have a 1000W PSU minimum. Probably all for the low cost of $1000.
Different design that just doesn't crack that barrier. My 2700X runs at 4.2 on air. I'm ok with that. I haven't paid too much attention to GHz.. I also have the 8600K and it's hitting 4.9, but it's neither better nor worse than the 2700X from an everyday use perspective. I've always liked them both.
Who says AMD needs 5GHz on all cores? GHz aren't everything.
AMD could just raise their all-core clocks to 4.5GHz and you get an equivalent of the 9900KS running all cores at 5GHz in regards to actual performance. That's how good AMD's IPC actually is.
Plus, with AMD, you will end up with much lower power consumption to boot.
AMD also doesn't suffer from security issues like Intel does (and with mitigations, Intel's performance suffers enough that even a 9900KS overclocked to 5GHz on all cores would basically be on par with AMD's stock Zen 2 8c/16t).
But one of the main reasons for the lack of 5GHz on Zen 2 is probably the node. TSMC 7nm had some issues which basically downgraded initial performance expectations from 40% to 25%.
Also, Intel has been using 14nm for a long time now, to the point where the node has been heavily refined.
You want higher clocks out of the factory? Wait for Zen 3 (on 7nm+). The node will likely be improved by then to resolve some clock issues, so it's not exactly a big problem.
TY. I don't need/want 5GHz. Just asking. In the end, after all the tests, my bet is that clock for clock, Intel will still be IPC king or equal (mitigations OFF) and it will still have a 10% clock speed advantage (8 cores, best part). That is nothing to sneeze at.
Clock speed and IPC are different things that work together to get your ST/MT performance. You can test IPC by having 2 different chips at the same clock speed and seeing which completes a set of benchmarks faster. AMD is claiming that the 3800X will be faster than the 9900K, so Zen 2 should have about a 10% higher IPC than Coffee Lake R, since the max ST boost of the 3800X is 4.5GHz and the 9900K is 5GHz. Intel has about a 10% clock speed advantage and needs that added clock speed to make up for the lower IPC, at least since AMD says the 3800X is a little faster.
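A back-of-envelope version of that comparison, with the IPC ratio inferred from AMD's claim rather than measured:

f_3800x, f_9900k = 4.5, 5.0   # GHz, max single-core boost
# perf ~= IPC * clock; if the two chips roughly tie in single-thread,
# the implied IPC advantage is just the inverse of the clock ratio.
print(f"~{(f_9900k / f_3800x - 1) * 100:.0f}% implied Zen 2 IPC advantage")   # ~11%

Measuring it properly means fixing both chips at the same frequency and timing the same benchmarks, as described above; this is only the implied figure if AMD's claim holds.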
what about his claim that sunny cove is a new architecture ?? when all intel did was add/update parts of it ? wouldn't it be better to call it sandy bridge 7 ??
These (single core?) boost clocks are kind of disappointing. I expected at least 4.8 GHz single core turbo clocks, though the new XFR will surely allow that provided the cooling is adequate. I also wonder why there is such a huge TDP difference between the 3700X and 3800X. The difference in clock speeds is not high enough to justify a +45W TDP for the 3800X. Does that mean the 3800X can sustain the turbo frequency longer?
Turbo frequency has little to no correlation with TDP; max turbo is for one core, and the TDP is going to be reached running all cores at the base clock. And the base is 300MHz higher, which I absolutely believe would increase power draw by 50%; it is simply going way past the efficiency sweet spot of the chip. Very much in line with my experience overclocking the current Ryzen chips.
Santoval. considering zen 2 looks to be on par, or slightly faster than intel at the clocks they are at... why would they need to be higher ? do you want amd to make intel look worse ? :-)
notashill " And the base is 300Mhz higher which I absolutely believe would increase power draw by 50%, it is simply going way past the efficiency sweet spot of the chip. Very much in line with my experience overclocking the current Ryzen chips. " oh?? 300 mhz = 50% more power ??? would you have a source for this ? also.. Zen 2, is not current ryzen chips.. untill Zen 2 is released, and are reviewed, and in the hands of the public, how Zen 2 overclocks.. is still a mystery...
Just from my own testing on a 1700, 3.7GHz all-core is about 100W and 4.0 is about 150, and that's pushing the safe voltage limits. IIRC Anandtech's various Ryzen reviews had some good power measurements with overclocking data.
Of course 7nm may result in wildly different voltage/power/freq scaling so who knows until the new chips are in the wild to test.
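Those 1700 numbers are roughly consistent with the usual dynamic-power model, P ~ C*V^2*f; the voltages below are illustrative assumptions, not measured values:

f1, v1, p1 = 3.7, 1.20, 100.0   # GHz, volts (assumed), watts (reported above)
f2, v2     = 4.0, 1.38          # GHz, volts (assumed for the 4.0 GHz overclock)
p2 = p1 * (f2 / f1) * (v2 / v1) ** 2
print(round(p2))   # ~143 W, in the same ballpark as the ~150 W observed

The frequency bump alone is only ~8%; it's the quadratic voltage term that does the damage, which is exactly the "wall" people describe above.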
Linus Tech Tips has a pretty interesting take on all this via their YouTube video that's trending up, with nearly a million views and 6000 comments in the last 12 hours. I don't doubt for a moment that Ian here has a lot to say about all this stuff as well. Look forward to hearing his thoughts. You guys must be just swamped at Anand over all this. Hopefully pumped too. I have not seen this sort of buzz in quite some time.
Not sure what else they could drop that wouldn't be underwhelming in comparison to what was released over the last 24hrs. Biggest news day for AMD in over a decade.
It isn’t NUMA. The original Ryzen processors were not NUMA architectures either since it was a single die. ThreadRipper and Epyc 1 would have been considered NUMA architectures though. NUMA only covers access to main memory. Zen 2 will not be a NUMA architecture since the IO die handles all memory access.
They still have variable access to L3 caches though. There is still some penalty for sharing data across CCXs. Intel uses a mesh network between cores and cache slices to allow mostly uniform access to cache. This burns a lot of power to do this at core clock and it is actually higher latency than what you would see within an AMD CCX. This only comes up when you share data across a CCX boundary, like if you have two threads with shared memory running on different CCXs. You have 4 cores / 8 threads within a CCX, so you have plenty of resources for most things. If you do need to share data across CCXs, then it can still be done efficiently by doing it in a more coarse grained manner. This requires some software optimization in some cases.
With Zen 1, CCX to CCX traffic had to go through an infinity fabric switch at memory clock. This wasn’t really that much of an issue in the first place, but it should be less of an issue with Zen 2. There is no memory clock on the cpu chiplet, so it wouldn’t make any sense for it to operate at memory clock. It probably operates at core clock, so CCX to CCX communication on the same die will probably be much lower latency and higher bandwidth compared to Zen 1. The chip to chip latency should also be quite low due to the high clock speeds of infinity fabric. The bandwidth is more than double what it was in Zen 1.
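One simple, coarse-grained form of that software optimization on Linux is just pinning the threads that share data onto one CCX so they stay within the same L3; the core numbering below is an assumption and should be checked against the actual topology (lscpu/hwloc) first:

import os

# Restrict the current process (and the threads it spawns) to cores 0-3,
# which on many Zen layouts map to a single CCX -- verify per system.
os.sched_setaffinity(0, {0, 1, 2, 3})
print(os.sched_getaffinity(0))

taskset or numactl can do the same thing from the shell without touching the application.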
The IF clock will probably still be locked to memory frequency, because the communication between the I/O die and the chiplets will be IF. By synchronizing the IF clock to the memory clock they can significantly reduce latency to send data from the cores to the memory controller.
The CCX to CCX communication could run with a different IF clock, but I'm not sure if that would make sense, because they would need multiple IF endpoints on every CCX then (one for CCX to CCX, one for CCX to I/O)
or maybe it's just marketing; this way they can sell the 3900X as the most premium SKU, then when the prices fall down from attrition, introduce a new most premium SKU at full price.
It's 24 from the CPU, 16 for the PCIe x16 Slot, 4 for M.2 NVME and 4 to the chipset. The chipset then provides another 16 PCIe lanes for a total of 40 lanes. But the lanes from the chipset can only get a total bandwidth of 4 PCIe 4.0 lanes.
An exciting announcement. Two questions come to my mind:
Will the Ryzen 3000 CPUs have ECC? I guess that yes, because AFAIK all AMD CPUs since at least the Athlon 64 had it; I expect that we will hardly see Ryzen Pro CPUs with official support in the retail market for CPUs, though.
Does Zen2 have proper fixes for Spectre? With "proper fix" I mean something that does not cost performance, and that you do not disable if performance is more of an issue than security.
I guess the CPU I'd be most interested in isn't on the list. That would be a Ryzen 7 3700 (without the X). For those willing to overclock, IMO the CPUs to have in the first gen were the 1700 (what I have) and in Ryzen+ the 2700. A 1700 will usually overclock to the same speed as a 1800X, while the 2700 will usually hit the same speeds as a 2700X. So I was hoping the replacement for my 1700 would be a 3700. Perhaps if I wait a bit longer? Actually, waiting a bit longer is almost always the best option, as prices are certain to drop at least a bit after a while.
I would _love_ to see an GHz-normalized IPC comparison between 3xxx Ryzen and 2xxx, plus the last couple generations of Intel kit. (7700k/8700k/9700k). Just saying.
Why do you assume that the *only* thing consumers are interested in are games ? I use audio mixing software that includes stuff like convolution reverb (which is a bit like ray tracing for audio), and trust me, a few instances of that can use up as many cores as you want to throw at it.
FWIW, the difference between AMD and Intel now can be traced back to hiring of Dr Su. I knew of her at MIT when I was an undergrad. Went straight through to get her PhD in EE at MIT which is only reserved for the best of the best (most mortals at MIT are encouraged to go elsewhere for grad school). She was a superstar at IBM. It would have served IBM well to have made her CEO. IBM’s loss was AMD’s windfall. Intel prior to the recent hire was run by marketing people, not engineers.
That's the vibe I get as well. People Like Su, Jensen, Gordon Moore etc. They aren't like most executives who are usually sales or marketing types. They are actual engineers and have a deep understanding of what it is their company does. Which is a very important skill in tech. When Dr. Su gets on stage and talks about Ryzen, she isn't regurgitating lines marketing gave her, she is talking about a product that she helped create.
Most people don't understand that the position of CEO and President really are two different roles. The president of the company is the person who runs the company, while the CEO is responsible for getting investors and trying to hype the company and drive up stock prices. People like Steve Jobs were good at both positions, but the majority of CEOs out there are idiots with a MBA from some well known business school who have zero understanding of technology, so should NOT be running things at a technology company.
Get these CEOs out of the running of the companies they are supposed to hype, and those companies would do much better.
Has anyone asked how much DRAM is in the I/O die? The slides don't even mention it as a possible L4 cache. A gigabyte in there could serve an IGP really well.
Since the only Ryzen 3rd generation processors announced so far do not have a GPU, then expect that there isn't any memory on the I/O die. APUs will show up later, probably after the Ryzen 3 and 5 products get their specs released.
I'd say that with Zen 2, AMD is copying the pricing on the original Zen. They priced the original Zen flagship (the 1800X) at $499. They've also priced the Zen 2 flagship (the 3900X) at $499. The 1700X one step down from the 1800X, was priced at $399. The 3800X, one step down from the 3900X, is also priced at $399. Another step down gets us to the 1700 and the 3700X, both priced at $329. Below that are the 1600X and the 3600X, both priced at $249.
The only new price AMD has come up with is for the 3600, which is priced at $199, twenty dollars less than the $219 that the 1600 was listed at when it was introduced.
The 3900X is 5% more powerful than the 9920X, and the 3950X has 32% more performance than the 9960X, if we compare processors with the same number of cores and threads. That is quite outstanding, as the new AMD chips are practically 3 times less expensive.
abufrejoval - Sunday, May 26, 2019
So where is the 16 core? And I guess the answer is memory bandwidth and power.
The AM4 backward compatibility means dual channel memory, and it just gets absurd at that point, even if they push the clocks.
And then one disadvantage of the 7nm chiplet design is that the surface area is so limited that 16 cores would mean you'd have to lower the clocks.
FMinus - Sunday, May 26, 2019
A better guess would be: not ready yet.
flyingpants265 - Monday, May 27, 2019
Yeah, and screw over their current customers. That's awesome.
zodiacsoulmate - Monday, May 27, 2019
It's not screwing over anyone if you keep offering the best deal in the market.
Gesty - Monday, May 27, 2019 - link
Don't see how this would be "screwing over their current customers". It's PC hardware; Whenever you buy something, you know chances are you could get something with a better price to perfromance ratio within the next 6 months.just4U - Monday, May 27, 2019 - link
Considering the fact that we do not have a 12core part on the consumer side yet (the 3900x will be the first..) I fail to see how not releasing a 16core part at launch is screwing over anyone. Amd raised the bar here, it's a game changer. They also announced a new 64core part on the professional side of things..svan1971 - Saturday, June 1, 2019 - link
Its almost like intel sends brain dead fanboys to sites to to ask ignorant questions or make retarded statements.philehidiot - Sunday, June 2, 2019 - link
They sometimes have small roles in "The Walking Dead".Valis - Wednesday, July 10, 2019 - link
LMAO! LOL. xD Don't forget FOTWD and the new Black Summer.surt - Monday, May 27, 2019 - link
Sure, as soon as they introduce a processor at some price point, they must never again introduce any better processor at that price point. That's definitely how the computer industry was built.just4U - Monday, May 27, 2019 - link
Technically the 3700x is the 2700x replacement, running at a higher frequency /w 15% improvements clock for clock and a 65W TDP. The 3900X would be hmm.. a refresh of the first gen 1800x I guess.. since it's priced in and around there.Santoval - Monday, May 27, 2019 - link
They wouldn't screw anyone. If anything, 16 cores should be the super-maximum number of cores you should feed with merely 2 memory channels. I would personally prefer to max out at 12 cores. I believe one of the reasons the Zen 2 based Threadripper was either canned or postponed was because it would make little to no sense to go above 32 cores with merely 4 memory channels, and they probably have not yet decided if they want to max out at 32 cores (which is what they should do).The same applies to the 64-core Epyc and its 8 memory channels. Beyond 64 cores you need to either add more channels or, more sanely, move to DDR5. Both require a new platform.
Intel paired their new cream of the crop AP platform with *12* memory channels, and I believe they intend to max out at 56 cores. That's quite more reasonable.
jamescox - Monday, May 27, 2019 - link
That is barely a real product. I don’t know who will buy it. It is two standard 28 core cpus (at over 600 square mm each; really expensive) placed on one package. It doesn’t have any of the power optimizations that are possible with chiplets designed to be placed in an MCM. At 400 Watts TDP, it isn’t really a competitor to Epyc 2. The 12 channel memory is more incidental since this product would never have existed if it wasn’t needed as a marketing response to Epyc 2.RSAUser - Tuesday, May 28, 2019 - link
+1
12345 - Friday, May 31, 2019 - link
Lisa Su confirmed there would be more threadripper in an interview right after the keynote.“You know. it’s very interesting, some of the things that circulate on the Internet—I don’t think we ever said that Threadripper was not going to continue—it somehow took on a life of its own on the Internet,”
"If mainstream is moving up, then Threadripper will have to move up, up—and that's what we're working on,"
Targon - Tuesday, June 4, 2019 - link
I know exactly where the idea of "Threadripper has been cancelled" came from. There was a "leaked" roadmap without Threadripper on it, and some people assumed that was real. The lack of a discussion about Threadripper so far this year has served to make people believe it.johnmartin7042 - Saturday, June 1, 2019 - link
Our Geek Squad is expert in providing the best of solutions to our customers to help them with tech repairs.Geek Squad has made a huge area in providing the best services for office and home gadgets repair. The professional and enthusiastic experts of our team are capable of handling all sorts of contraptions, gadgets, hardware, electrical, control issues, and other adaptable issues that you would not find anywhere else.
https://supportcustomers.us/geek-squad-tech-suppor...
https://geektechsupport.me
Korguz - Saturday, June 1, 2019 - link
what is this ?? free advertising for a knock off of the best buy tech support of the same name ? i wonder how long before Best Buy sues for copyright infringement
Flying Aardvark - Tuesday, May 28, 2019 - link
You get what you pay for at the time you buy it. The reward is that you get to enjoy it while everyone else who "waits" does not get a priceless experience. Too cheap for a 2080Ti? Well, great, but you won't get to enjoy a 2080Ti in its heyday.
That's how everything works. If you bought a Model T in 1915, you can't be mad that a Toyota Camry is better in 2019. I'd rather have bought the Model T when it came out and enjoyed it, then keep upgrading. Because I have a job.
Korguz - Tuesday, May 28, 2019 - link
i have a job too.. but i cant justify paying 1500 to 2k for a video card... if you want to call someone cheap for that.. then.. must be nice to have more money than brains
Gastec - Sunday, June 2, 2019 - link
What is this, the day of the teen trolls with jobs?
Targon - Tuesday, June 4, 2019 - link
The whole NVIDIA RTX issue doesn't apply here, because very little supports ray tracing right now, so there isn't a lot of getting a much better performing product that 2080Ti buyers have gotten for all that extra money.Spending $500 for a Ryzen 9 3900X on the other hand, is something that will provide benefits, and even in another six years, you probably won't see 12 core chips as entry level(though 8 core may be at that point).
mariush - Friday, May 31, 2019 - link
If you truly need 16 cores, you have Threadripper, which goes up to 32 real cores.
True, you won't get pci-e 4, but you do get 60 pci-e 3 lanes, so if for example you need a pci-e 4 m.2 ssd, you can instead use two pci-e 3 m.2 ssds in a raid or something to get the same speed.
svan1971 - Saturday, June 1, 2019 - link
how are they "screwing" current customers exactly? It's not like they require a new chipset for the same socket like their competitor.
Targon - Tuesday, June 4, 2019 - link
You have to figure that if Intel were to release a 12 core consumer chip, it won't be sold for less than $700, so AMD could leave prices where they are, but sell the 16 core chip for the same price as the Intel 12 core chip. The key comes down to competition, but also, not really increasing prices. When the Ryzen 7 1800X was released, it was sold for $500 and sold out very quickly. The 3900X for $500 is 50% more cores, plus boost was 4.0GHz for the 1800X, while the 3900X will have a 4.6GHz boost. That's a lot more CPU for the same price point, even two years later.Bulat Ziganshin - Wednesday, May 29, 2019 - link
Yeah, they figured out how to combine two 6-core chiplets, but can't imagine how to combine 8-core ones? Isn't it obvious that 12 cores is enough to beat the 9900K, and that a 16-core will appear AFTER Intel rolls out its 10-core beast? Just like AMD pushed Intel to double its core counts in 2 years, Intel pushed AMD to roll out a 12-core CPU, and it hasn't yet pushed AMD enough to get 16 cores on the market.
msroadkill612 - Sunday, May 26, 2019 - link
They are also swamped by demand for top binned CCX chiplets for Epyc.
These models soak up the rest of the Bell curve.
vFunct - Monday, May 27, 2019 - link
How is server/datacenter demand going for AMD? Do they have something that can compete with Intel's QuickAssist for TLS management and compression?
ChubChub - Monday, May 27, 2019 - link
I work for a relatively small ISP (~50 racks), and all of OUR new server purchases are exclusively being quoted out as Epyc (customers are loading whatever they want).At least by us, and I expect others, QuickAssist is largely seen as an unnecessary backdoor of likely poor security, and with dubious value. As well, Intel's Meltdown/Spectre/Fallout/etc vulnerabilities that have greatly reduced the processing power that we purchased is deplorable, and further erodes our faith in proprietary Intel implementations. On top of that, we also see QuickAssist as an unnneccessary "lock-in" to a CPU architecture, which is very bad business for people like us.
Having said that, as far as I know, AMD does not have an equivalent, but I doubt it matters to most people, as cost/performance, density (very important for us), security, and availability are the main decision points, and the new Epyc chips win on all fronts.
Gondalf - Monday, May 27, 2019 - link
Too bad your words are not assisted by hard facts or numbers, and AMD's market share remains really low. Be careful, likely there is a security hole in your new SKU that you don't know about because nobody has given notice of it.
Marketing post.
just4U - Monday, May 27, 2019 - link
Their 64 core 7nm Rome CPUs might boost their market share.. no?
just4U - Monday, May 27, 2019 - link
This is what I heard about the Rome CPU according to articles. It's the first to support PCIE 4.0, is a drop in replacement for existing boards, has 4x the floating point power of the previous gen, and 2x the speed per socket over naples. They showed a demo of 1 Rome cpu beating 2 flagship intels in a rendering test..
I don't know a whole helluva lot about cpu's like that but it sounds impressive..
zmatt - Monday, May 27, 2019 - link
AMD's numbers have grown substantially in the enterprise space in the last two years. Intel still has the majority but these things don't happen overnight. Somebody is buying a lot of Epyc based systems though and the extra attention given by cloud providers like Amazon means you should take it seriously.
I don't admin nearly as many systems as OP but I am also looking at Epyc for our next VM cluster later this year. And it really does come down to price/performance. They have a much more sensible upgrade path and road map too.
just4U - Monday, May 27, 2019 - link
Price/Performance has always been a selling point for AMD.. but from speculation by others it appears AMD will be competing with Intel across just about every sector and not just at certain price points.. either matching or beating them outright. Something that we haven't really seen before. It's gotten a lot of people really excited and should certainly bode well for all of us as Intel will be forced to compete on pricing as well as innovation.vFunct - Monday, May 27, 2019 - link
Going to be hard for AMD to compete against the very datacenter focused Intel Xeon products, with optimizations for video transcoding, as well as FPGA accelerators.
zmatt - Monday, May 27, 2019 - link
Depends on what you are doing. The vast majority of businesses do not use their servers for video transcoding. Most of the work done is pretty mundane: AD domain controllers, SMB file servers, SQL servers, VDI and application hosting.
Users with more specific needs will have to be more discerning but for everyone else it's a simple price/performance arithmetic.
Santoval - Monday, May 27, 2019 - link
"Marketing post".Do you mean your own post?
Bulat Ziganshin - Wednesday, May 29, 2019 - link
It may seem sensible, except they have an 8c for $300, so a 16c at $600 should be even easier to produce.
caqde - Monday, May 27, 2019 - link
Could they just be holding it back? The 12 core doesn't need to use top tier chiplets to be made; only 6 of the cores per chiplet need to work and reach the desired clock speeds. And either way, even without the 16 core chip, their processors are faster than anything Intel has to show, so why not. It is unfortunate, but AMD is now in the position where they can hold back products because Intel can't produce.
msroadkill612 - Monday, May 27, 2019 - link
The military call it "the strategic initiative" & it is of paramount importance - it means victory is ~inevitable.
All the enemy can do is respond to your moves. They cannot decide the battlegrounds by initiating offence.
Cygni - Monday, May 27, 2019 - link
Calm down Sun Tzu, we are talking about computer toys.
nandnandnand - Monday, May 27, 2019 - link
We're talking about cutting-edge nanotechnology.
Irata - Monday, May 27, 2019 - link
The post was talking about business, so msroadkill does have a point.
III-V - Monday, May 27, 2019 - link
Point or not, Cygni's response was hilarious.
Targon - Monday, May 27, 2019 - link
It's a war between AMD and Intel, and at the moment, AMD has the advantage.
Irata - Monday, May 27, 2019 - link
It was
Bonez0r - Monday, May 27, 2019 - link
The nature of the 'battlefield' doesn't really matter; the same basic strategies still apply. If Intel strikes back now, AMD can immediately respond with whatever they're holding back.
I also don't think that Lisa Su could have gotten AMD where it is today by thinking of their products as 'computer toys'.
UltraTech79 - Monday, May 27, 2019 - link
Computer toys? The world runs on this stuff, you neanderthal.
Arbie - Wednesday, May 29, 2019 - link
+1 to Cygni
Gondalf - Monday, May 27, 2019 - link
Too bad AMD cannot clock... this is the real problem. 7nm looks crap.
Xyler94 - Monday, May 27, 2019 - link
Clocks are but one spec of a multitude of things a CPU can use to be a top performer. The most valuable is definitely IPC. If you have a low IPC, you need more clocks to get more performance. If you have a high IPC, you need fewer clocks. The fact this chip goes up to 4.6GHz on a 12 core part is quite substantial. Intel's 5GHz i9 is on a single core, and yes they announced a 5GHz all core on the 9900KS, but that's probably gonna require amazing cooling.
Clock speeds are like horsepower in a car. It's a spec, but it's not the vehicle's total performance. Torque and efficiency are two very important things to consider, and just because the Ford GT's 650HP engine is greater than the F-150's 400ish doesn't mean that the GT can tow better. It's quicker, yes, but it's not a workhorse.
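As a minimal illustration of the IPC-times-clock point above, here is a toy comparison; both IPC figures are invented purely for illustration:

```python
# Single-thread performance is roughly IPC * clock, so a modest IPC advantage
# can offset a clock deficit. The IPC numbers below are made up.
def perf(ipc, clock_ghz):
    return ipc * clock_ghz

higher_ipc = perf(ipc=1.10, clock_ghz=4.6)    # ~10% more IPC, 4.6 GHz boost
higher_clock = perf(ipc=1.00, clock_ghz=5.0)  # baseline IPC, 5.0 GHz boost

print(higher_ipc, higher_clock)               # ~5.06 vs 5.0
print(f"{(higher_ipc / higher_clock - 1) * 100:.1f}% faster despite 400 MHz less")
```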
Cullinaire - Monday, May 27, 2019 - link
Thanks for all the drivel, but that doesn't change Gondalf's point regarding the issues with the silicon process...
Gunbuster - Friday, May 31, 2019 - link
You might be lost. The article on intel 10nm struggling to do 4 core at 3.9 is ----> that way.
Ratman6161 - Tuesday, May 28, 2019 - link
Yes. And if clock speed was really king, then the Pentium 4 would have been a great CPU rather than a slow, hot, power hog.
caqde - Monday, May 27, 2019 - link
We don't know this for sure until someone tries to overclock these chips. Until then all talk about clockspeeds is nothing more than hearsay. Remember, the 9900K is not a real "95W" chip, because when it is set to run as a real 95W chip it loses a lot of its performance. So it is possible that AMD's R7 3800X could reach 5GHz (on air/water) when overclocked, but we won't know for sure until closer to or after July 7th.
PixyMisa - Monday, May 27, 2019 - link
Oh no, only 4.6GHz!
Korguz - Monday, May 27, 2019 - link
" Too bad AMD can not clock....this is the real problem " maybe they dont need to... if Zen 2 is able to be on par or 1-5% faster then intel while being 300-400 mhz slower.. then why clock them that high ?? it could just make intel look even worse :-)schujj07 - Tuesday, May 28, 2019 - link
Clock speed means nothing if it doesn't have IPC to go with it. The Pentium 4 was able to clock very high, but it wasn't faster than the Athlon 64 clocked 1GHz slower. The AMD FX9590 was able to hit 5GHz but was slower than the Intel parts clocked 1GHz slower.
Arbie - Wednesday, May 29, 2019 - link
You're just a troll, Gondalf. Please stay on troll forums.
peevee - Tuesday, May 28, 2019 - link
"The 12 core doesn't need to use top tier chips to be made only 6 of the chips need to work and reach the desired clock speeds."It also fits into 105W TDP with 105W (or a little more) motherboards and 105W standard cooler.
Hypothetical 16C/32T 3990X needs something like 125W to make sense. And of course it would be memory-limited in a whole lot of workloads, just 128bits of DDR4 serving 32 threads is insane.
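For reference, the peak-bandwidth arithmetic behind that concern, assuming dual-channel DDR4-3200 (sustained bandwidth is lower in practice):

```python
# Peak theoretical bandwidth of a 128-bit (dual-channel) DDR4-3200 setup,
# then divided per core and per SMT thread.
channels = 2
bytes_per_transfer = 64 // 8       # each 64-bit channel moves 8 bytes per transfer
transfers_per_sec = 3200e6         # DDR4-3200

peak_gbs = channels * bytes_per_transfer * transfers_per_sec / 1e9   # 51.2 GB/s
for cores in (8, 12, 16):
    print(f"{cores} cores: ~{peak_gbs / cores:.1f} GB/s per core, "
          f"~{peak_gbs / (2 * cores):.1f} GB/s per SMT thread")
```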
Chaitanya - Monday, May 27, 2019 - link
Server chips getting 1st priority on bins.
jamescox - Monday, May 27, 2019 - link
The server chips may go through a lot more validation, but they don't need to bin that high in clock. The server parts will mostly be in the 2.5 GHz range or lower. They will bin them heavily for power consumption more than anything else.
rahvin - Tuesday, May 28, 2019 - link
Server chips don't launch till Q3; Ryzen launches a minimum of 2 months and up to 5 months before the Rome launch. The only binning they'll be doing in the meantime is strictly for Ryzen.
Haawser - Thursday, May 30, 2019 - link
As they did before, it's a fair bet that all the very best clocking chiplets are being stockpiled for the next Threadripper, which Lisa Su confirmed (after the keynote) they are still working on.
SkOrPn - Monday, May 27, 2019 - link
Obviously the faster 8 core chiplets have to be binned to introduce the Ryzen 9 3950X, or the Ryzen 10 perhaps. It will take time for good 8 core chiplets to get set aside for new SKU's in the near future, and this was most expected. No mention of Ryzen 3 either. Zen 2 is obviously a staggered release over the second half of 2019. So I expect a 12-core Ryzen 7 3850X at or near 5Ghz, and a 16 core Ryzen 9 3850X and then a Ryzen 3 6-core with SMT and a 6-core without SMT. I'm sure we will see 6, 8, 12 and 16 core Ryzens for the Zen 2 family stack. There is simply no freggin way AMD would just toss all those extra fast or partially defective chiplets in the trash which could be used instead for Ryzen 9 or Ryzen 3. I'm betting the 5Ghz parts come later this year too once enough passing chiplets have been binned just to answer Intel's upcoming 10-core desktop parts. Think about it, the Zen 2 16 core at 5Ghz boost would easily make Intel's 10-core chips look silly as well. They can't let all their eggs out of the basket just yet. Besides its clear that many of the X570 boards have VRM's designed for 16 core CPU's.haukionkannel - Monday, May 27, 2019 - link
5GHz this year... not likely. In 2 to 3 years... maybe. This is a good improvement! We can expect some golden samples to reach 4.7GHz maybe. The power usage seems to ramp up very quickly from 4.4 to 4.5GHz! So getting 4.6GHz seems to be hard at this moment. Requires a good sample.
Opencg - Monday, May 27, 2019 - link
The TDPs you are comparing against boost clocks have a lot more to do with the base clocks. Education, son.
Targon - Monday, May 27, 2019 - link
We don't know how high these new processors will actually clock. Intel was hitting 5.0GHz for years before 5.0GHz became the official boost/turbo speed. Of course, Intel doesn't talk about the REAL TDP of chips, just the TDP at base speeds. So if AMD is hitting 4.6GHz on 12 cores official boost, it is possible these chips CAN run faster with better cooling and a motherboard that can handle the increased power demand. Asus ROG boards tend to be overkill for the official clock speeds, and can handle a LOT more power demand.Santoval - Monday, May 27, 2019 - link
"The power usage seems to ramp up very quicly from 4.4 to 4.5GHz!"TDPs describe the thermal emission of (all-core) base clocks, not (either all-core or single-core) boost clocks. Boost clocks have separate, generally unreported and *much* higher "TDPs" (plural because they depend on the number of active cores at boost clock), which the cooler needs to be able to handle so that boost clocks can be sustained for long periods of time. What's highly unusual about the 3800X (assuming it was not misreported) is that it has a +45W TDP for a mere +300 MHz base clock.
Xyler94 - Tuesday, May 28, 2019 - link
Intel and AMD measure TDP differently. What you described is how Intel rates their TDP. If memory serves, AMD measures TDP when all cores are boosting as fast as they can. I don't know how AMD measures it exactly, but Intel measures at all-core base frequency.
ishould - Wednesday, May 29, 2019 - link
I suspect Zen 3 on TSMC's 7nm+ will certainly be able to hit 5GHz with EUV, and that just went into mass production. By the time Zen 3 comes out next year, TSMC will have had 6+ months with 7nm+ so the process should be more mature.
phoenix_rizzen - Monday, May 27, 2019 - link
While that doesn't appear to be the case, it would be so nice and simple if it was:
Athlon should be 2C/2T and 2C/4T if they even bother with dual-core chiplets.
Ryzen 3 should be 4C/4T and 4C/8T.
Ryzen 5 should be 6C/6T and 6C/12T.
Ryzen 7 should be 8C/8T and 8C/16T.
Ryzen 9 should be 12C/12T and 12C/24T.
Then, later on, they could release a fully-enabled dual-chiplet CPU in 16C/16 and 16/16T variants, just to really mess with Intel. Call it Ryzen X(treme). :D
With Ryzen going to 16 cores, and Epyc going down to 8 cores, it doesn't really leave room for Threadripper. Wouldn't be too surprised if TR is retired.
phoenix_rizzen - Monday, May 27, 2019 - link
Bah, 16C/32T obviously for a fully enabled CPU. Stupid phone keyboard and lack of edit!
Lolimaster - Monday, May 27, 2019 - link
Why should AMD gimp your SMT? Accepting intel's lube too much?
Targon - Monday, May 27, 2019 - link
The only place AMD turned off SMT was on the Ryzen 3 models with 4 core/4 thread, otherwise, I don't expect AMD to ever leave SMT off on a Ryzen 5. AMD isn't Intel.
just4U - Monday, May 27, 2019 - link
I see no reason to purchase a 12C/12T cpu. It's pointless in that price range.
jamescox - Tuesday, May 28, 2019 - link
Ryzen 3 will probably be limited to 12 nm IGP parts which are a single chip. You probably need to get at least Ryzen 5 to get the 7 nm chiplet version with multiple chiplets.
III-V - Monday, May 27, 2019 - link
5GHz isn't happening on TSMC 7nm. Not from the factory, anyway. Unless we're talking super-uber-elite cherry picked dies wrapped in the finest (ESD-safe, of course) cashmere, that you've got to sell your firstborn into slavery to acquire.
It's not a great node for high performance. It's a big step up from where AMD was at before (they're jumping ~2 nodes), but you'll have to wait a bit longer for those speeds. TSMC has some great stuff in the pipeline, but N7 is a bit of a dud.
Targon - Monday, May 27, 2019 - link
Four years ago, you could get an Intel chip and overclock it to 5.0GHz, well before Intel made it the official "turbo" speed. Intel actually had the potential to just clock chips much higher than it did, giving a lot of room to just sit around and bump clock speeds over the past four years without any significant changes.We don't know how high these new AMD chips can clock on all cores at this point, and it may require better motherboards than many people have. An Asus ROG Crosshair VI Hero from the first generation may be able to do better than many second generation boards, just because of the VRMs.
flofixer - Monday, May 27, 2019 - link
because of the lack of cobalt?
JasonMZW20 - Tuesday, May 28, 2019 - link
7nm with EUV might help (esp. for defect rates), but generally, these ultra-dense nodes haven't been kind to clock speeds or power delivery systems (Intel is still struggling with 10nm that is equivalent to TSMC 7nm). Copper resistance increases as it gets smaller, so putting high current through wires with higher resistance will generate a lot of localized heat energy. Without more exotic materials for transistors and wire connects (and studying different types of transistors like GAA FETs for 5nm and below), I think clock speeds will stagnate and energy usage for higher clocks will ramp significantly as we move to smaller lithographies. I expect cache sizes will be the next battle to retain as much data on-chip to both reduce power and increase performance per clock by not having to make as many trips out to slow RAM for instructions and/or data.Kevin G - Monday, May 27, 2019 - link
My guess: standing by for a late 2019 or 2020 refresh. AMD simply doesn't have to play their full hand right now to be competitive. Going to 16 cores on AM4 looks to be trivial on paper since they're already doing 12 cores: just a matter of clock speeds and core voltage to make it happen inside of AM4 parameters.
12345 - Monday, May 27, 2019 - link
Sounds more like sandbagging to me.
brunis.dk - Monday, May 27, 2019 - link
If you can win by only giving 80% .. :)
sorten - Monday, May 27, 2019 - link
Exactly. This is like Usain Bolt skipping the last 10m of a 100m dash because he's so far ahead.
Outlander_04 - Monday, May 27, 2019 - link
My guess is the 16 core is the base level Threadripper. No way for dual channel memory to provide the bandwidth 16 cores need.
Even the 12 core will have memory limitations in some applications
shing3232 - Monday, May 27, 2019 - link
No, the 2990WX uses quad channel with 32 cores, and it works out okish. Zen 2 has a larger L3, so it would work even better. Maybe there will be a 16C WX.
jamescox - Tuesday, May 28, 2019 - link
You can probably use higher speed memory with Zen 2 and it also looks like it may get better memory bandwidth utilization. Throw in some prefetch improvements and AVX 256 and it may outperform 16-core ThreadRipper for most applications. ThreadRipper may still start at 16-core though. I don’t know if they would sell a 64-core ThreadRipper. To run 64 cores at high clock would take ridiculous power. Epyc 2 with 64-cores will probably be under 2.5 GHz. Perhaps 48 core ThreadRipper.ishould - Wednesday, May 29, 2019 - link
I'm guessing 24, 36, and 48 core Threadrippers. Not sure if we're going to see 64 core Threadrippers as they may cannibalize Epyc sales.
ishould - Wednesday, May 29, 2019 - link
32 cores*
peevee - Friday, May 31, 2019 - link
16-core Threadripper will still make sense even if they release Ryzen with 16 cores - for more memory and IO intensive applications. Technically it makes more sense than a 16-core Ryzen.
deil - Monday, May 27, 2019 - link
I think it's in the making as a last resort, appearing as a 3900X around January or later if Intel does not play any amazing new things.
brunis.dk - Monday, May 27, 2019 - link
No need to rip Intel's balls off violently. Obviously there is room to maneuver. Space for more cores, higher MHz. Save some for later, when Intel desperately tries to respond and stumbles, like they usually do.
gojapa - Monday, May 27, 2019 - link
What a flip from a few years ago.
Targon - Monday, May 27, 2019 - link
AMD may have held back on the 16 core, just to see what the Intel response will be here. If a 4.6GHz boost from AMD is able to beat the 5.0GHz boost from Intel, if Intel does release a 12 core, no matter the speed, AMD is in the position to easily release 16 core chips. Power IS a concern, since many motherboards without good VRMs just won't be able to handle that rumored 135W TDP for those processors. AMD COULD do it at launch, but if only five existing motherboards could handle it, that hurts the perception that first generation motherboards can handle the third generation processors.As far as the memory channels go, increased cache helps at least. Next year there should be a new socket, and AMD could potentially make it compatible with first through third generation processors.
Hul8 - Monday, May 27, 2019 - link
AMD needs to keep something back for when Intel inevitably leapfrogs them back.Once Intel takes back the mainstream performance crown (if they end up losing it this time around), a week later AMD can announce a pre-prepared 16 core, stealing at least some of Intel's thunder.
Also, why only sell a 16-core CPU to an enthusiast when you can first sell them 12-core, and 3 - 6 months later an additional 16-core?
jamescox - Tuesday, May 28, 2019 - link
Intel doesn’t seem to have anything other than pushing up max clock a little and they probably will not have anything for a while. The 10 nm parts they seem to have coming out in a reasonable time will probably be mobile parts. While I wanted a 16 core part, it may be the case that it would be significantly above the TDP designed into existing AM4 boards. If they want to sell it they may need to sell it at lower clock speeds, but people with boards that could support the power draw could then overclock it.Dragonstongue - Monday, May 27, 2019 - link
OK, that does NOT make sense..... for cooling and power spec, OK, it might make sense, but BECAUSE they use a mix of 7nm and 14nm (especially when the 7nm chiplets are the actual cores, no longer tied down to being more or less 1:1 in sync, so no crashes etc.)...
If anything, by splitting up the 7nm and 14nm from one another, it actually allowed them to get clocks etc as high as they ARE....
ballsystemlord - Monday, May 27, 2019 - link
You might also mention that the 3900X's 32% better single thread performance vs. the 1800X includes a 15% boost clock increase. Therefore, the IPC plus memory bandwidth increase must be around 17% (69% total over the previous arch).
That's really good for an arch that was already boasting 52% performance over its predecessor!
looncraz - Monday, May 27, 2019 - link
The answer is NOT memory bandwidth. It's simply frequency.
The 9900K maintains a high relative all core boost of 4.7GHz. The 3800X very likely drops closer to 4.2GHz or so... maybe even lower.
Zen has much better multi-threaded scaling at the same clocks than Intel.
Santoval - Monday, May 27, 2019 - link
We do not yet know the all-core frequency of the 3800X. However it looks like TSMC's 7nm high performance node is oriented more toward power efficiency than performance. Perhaps its optimal voltage-frequency curve is leaning more toward mobile processors than desktop processors? That extra +45W TDP for a mere +300 MHz clock of the 3800X was very surprising, though it might hint at the above. Is the optimal base clock per TDP of the 3700X/3800X at 3.3 to 3.4 GHz?
GreenReaper - Thursday, May 30, 2019 - link
Well, that doesn't mean it's using 105W, just more than 65W. In addition, we are not seeing all the boost frequencies or durations for various combinations of active cores. It may be that the 3800X can boost more cores higher, for longer.
Santoval - Monday, May 27, 2019 - link
16 cores with 2 memory channels is equivalent to 32 cores with 4 memory channels and 64 cores with 8 memory channels. All three have 1 memory channel per 8 cores. AMD has already done 2 out of 3, so why not do all 3?
peevee - Friday, May 31, 2019 - link
2 and 3 run at much lower frequencies (and thus throughput requirements). 1 has to maintain high frequency to stay above the 12C Ryzen 9.
Dragonrider - Monday, May 27, 2019 - link
My guess would be that yields are still not that great and having the fastest part use only 6 cores per chiplet gets rid of a lot more dies with bad cores. No point in putting out the 16 core until Intel makes the next move.
dustwalker13 - Tuesday, May 28, 2019 - link
Where is the 16 core? Not released yet, because AMD has no reason at all to do so.
Intel can't even compete with the 12 core model at the moment, so AMD will keep the 16 core back, leaving room to upgrade when they need to, along with a broader base of usable chips, since there will probably be a lot more chiplets with 6 of the 8 cores good enough to clock to 4.6GHz for the 3900X than there are full 8 core chiplets that can do the same, especially this early in the 7nm process.
They can even use faulty chiplets with 6 great cores and issues on the remaining two in their most expensive (still cheap) consumer chip this way.
Plus this way they do not completely cannibalize their Threadripper lineup immediately.
They win on all fronts by keeping the 16 core away from the market.
Smart move.
peevee - Tuesday, May 28, 2019 - link
There is definitely a space left for 3990X, as well as a few Ryzen 3s and/or 3 Gs.
svan1971 - Saturday, June 1, 2019 - link
You seem upset, Intel might be your best bet.
platinumjsi - Saturday, June 1, 2019 - link
16 core = end of summer
KOneJ - Sunday, May 26, 2019 - link
https://www.globenewswire.com/news-release/2019/05...
GeoffreyA - Sunday, May 26, 2019 - link
These are wonderful times. Thanks, Ian and Gavin, for the live blog as well.
hacksquad - Sunday, May 26, 2019 - link
Finally it's here!
guachi - Sunday, May 26, 2019 - link
These AMD processors look almost too good to be true.
That's what we said in 2017.
Also way back in 2011, only that time we were right.
abufrejoval - Sunday, May 26, 2019 - link
Takes guts to stick with that core count and at least you get to enjoy the full 70MB of cache. Good thing Blender and Cinebench all fit inside that; not sure you can ever say the same for productivity workloads.
I guess AM4 also means no real improvements on the PCIe lane count: would love to see real and IF switches to give a bit of flexibility, and what they plan for a new Threadripper.
msroadkill612 - Sunday, May 26, 2019 - link
The x570 chipset IO is effectively a huge improvement in usable lanes.
abufrejoval - Monday, May 27, 2019 - link
Was busy watching the live cast so I only caught up on the X570 bit afterwards :-)
Certainly a big improvement in bandwidth, but without switches the best you can hope for is bifurcation. That means a PCIe 3 card will lose you half of the potential bandwidth with no way to recover it.
PCIe switch chips used to be quite common on higher-end motherboards, but these days Avago/Broadcom seems to have pushed prices beyond reasonable, while I'm not sure they even have a PCIe 4 product.
Of course the Southbridge by itself is actually more of a switch but overall AM4 is just becoming very I/O starved with Ryzen 3.
Billy Tallis - Monday, May 27, 2019 - link
IO bandwidth doubled throughout the platform and you're saying it's becoming *more* IO starved?!
Church256 - Monday, May 27, 2019 - link
I assume they mean an x16 GPU with 3.0 lanes would use up all 16 CPU lanes and provide no benefit.
Take 16 PCI-E 4.0 lanes into a switch and have 32 PCI-E 3.0 lanes come out. Dual 3.0 GPUs without any bandwidth drop.
Or what will actually happen is 2 cards get 8 lanes each limited by the 3.0 on the GPU side and they run in x8 3.0 mode.
This is only a problem until motherboard lanes match card lanes but it's still an issue.
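The raw link-rate numbers behind this discussion, assuming 128b/130b encoding and ignoring protocol overhead:

```python
# Per-lane and per-slot bandwidth for PCIe 3.0 (8 GT/s) and 4.0 (16 GT/s).
def lane_gbs(gigatransfers):
    return gigatransfers * (128 / 130) / 8   # GB/s per lane after encoding

gen3, gen4 = lane_gbs(8.0), lane_gbs(16.0)
print(f"PCIe 3.0 x16: {gen3 * 16:.1f} GB/s")   # ~15.8 GB/s
print(f"PCIe 4.0 x8:  {gen4 * 8:.1f} GB/s")    # ~15.8 GB/s -- the same pipe,
# but only if the card itself links at Gen4; a Gen3 card in a x8 slot still
# runs at Gen3 x8 (~7.9 GB/s), which is the limitation described above.
```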
Zizy - Monday, May 27, 2019 - link
Why bother? Wait for the next gen GPUs and use 8 PCIe lanes for each. Much easier and cheaper. Heck, using TR would be cheaper. Even ignoring cost, you have to find someone wanting to use 2x GPUs who also wants to upgrade to the new CPU and mobo while keeping the old GPUs.
schujj07 - Monday, May 27, 2019 - link
With PCIe Gen 4 they have effectively doubled the lanes from X470 since it will have double the bandwidth.
sor - Monday, May 27, 2019 - link
Well, they haven't doubled the lanes, unless flagship GPUs are going to ship with x8 PCIe 4.0. The math always comes down to the device count.
levizx - Monday, May 27, 2019 - link
You do realize x16 GPUs will work on x8/x4 links as long as physical slots fit, right? Yes, I said x4 because even x4 links are usable now.
abufrejoval - Monday, May 27, 2019 - link
But a 16x gen 3 GPU will still only get 8x gen3 bandwidth on an 8x gen4 slot, even if the theoretical maximum bandwidth is the same: unless you have a switch instead of wires, the translation is missing.
sor - Monday, May 27, 2019 - link
Sure, but who wants to run a x16 PCIe 4.0 flagship card in a x8 or x4 slot? How many people do that today with their PCIe 3.0 x16 cards?
SaturnusDK - Monday, May 27, 2019 - link
Raises hand. I certainly would run a flagship graphics card with PCIe 4.0 x8. Why wouldn't you? A Radeon VII or 2080TI barely even uses PCIe 3.0 x8 bandwidth, so PCIe 4.0 x8 is overkill in the extreme already for even the highest end graphics cards.
naxeem - Monday, May 27, 2019 - link
You should be well aware that a PCIe 3.0 x16 card will still use 16 PCIe lanes in Gen3 mode (3.0), wasting, for all practical purposes, the theoretical bandwidth.
The Ryzen 3000, as far as current Gen3 devices go, is still the same old regarding PCIe bandwidth.
We won't be seeing Gen3 devices run any faster or take up any fewer PCIe lanes. Practically, there is no change for this hardware.
PixyMisa - Monday, May 27, 2019 - link
Navi is PCIe 4.0.
Koenig168 - Sunday, May 26, 2019 - link
We have a winner in the 3700X if the TDP number is accurate.
Irata - Monday, May 27, 2019 - link
Agree - was hoping for the 12 core 5 Ghz turbo part, but the 3700x sounds increasingly appealing at the stated wattage.
just4U - Monday, May 27, 2019 - link
I don't quite get the hype on the 3700X. The vanilla 2700 was already 65W TDP.
schujj07 - Tuesday, May 28, 2019 - link
2700 - 3.2GHz Base & 4.1GHz Boost
2700X - 3.7GHz Base & 4.3GHz Boost
3700X - 3.6GHz Base & 4.4GHz Boost
You are able to get almost the same base clock and a higher boost clock than the 2700X, which is a 105W TDP part, all in a 65W TDP package. That means that the 3700X will outperform the 2700X by at least 10%, all while having a 40W lower TDP.
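A hedged estimate of that uplift, assuming AMD's claimed ~15% IPC gain and the listed boost clocks; sustained clocks and workload mix will move the real number around:

```python
# Naive single-thread estimate: IPC gain times boost-clock ratio.
ipc_gain = 1.15                    # AMD's claimed Zen 2 IPC improvement (assumption)
boost_3700x, boost_2700x = 4.4, 4.3

uplift = ipc_gain * boost_3700x / boost_2700x
print(f"~{(uplift - 1) * 100:.0f}% estimated single-thread uplift")   # ~18%
```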
akrobet - Monday, May 27, 2019 - link
How come 3600X has 95W TDP when 3700X has only 65W?
jeremyshaw - Monday, May 27, 2019 - link
I'd guess die quality and binning. A failed 8 core part may not do as well as a fully working 8 core part. 95W TDP gives AMD a lot more room to bin their chips.
jeremyshaw - Monday, May 27, 2019 - link
It may also allow AMD to put their better 6 core chiplets into the 12C SKU, leaving the "worse" ones for the 6 core 95W unit. Not that they would be objectively bad, just relatively worse in some (or all) measures.
piroroadkill - Monday, May 27, 2019 - link
They're clearly hitting a wall. If you've ever overclocked, you'll know that at some point, voltage requirements increase DRAMATICALLY, and therefore power spikes.
Korguz - Tuesday, May 28, 2019 - link
piroroadkill, or they are limiting the clocks to where they are.... because that's all they need to compete with intel... having a 4.6 ghz keeping up to a 5 ghz cpu doesn't look good for intel, does it ? :-) zen 2 may go higher.. but we will need to wait till release to know for sure, unless AMD releases something...
Targon - Monday, May 27, 2019 - link
I'd guess the 95W and 105W chips have more overclocking potential than the lower power versions. In the same way that we make fun of Intel for rating their chips based on the base speed, AMD bases power on all-core boost speeds. If boosting power allows for higher speeds beyond the official boost speeds, AMD may still be able to hit 5GHz with this generation(we can't know until testing is done).Sychonut - Monday, May 27, 2019 - link
Looking forward to Intel's 14++++.
III-V - Monday, May 27, 2019 - link
They don't need it. They have the best performing process in the industry, and have held that crown since the early 2000s at the latest.
What they do need, desperately, is an updated architecture. They have security holes in need of patching, and their IPC is lacking.
just4U - Monday, May 27, 2019 - link
They will need it as AMD just attacked them on practically every front and apparently has the support from board partners, with tech features being ahead of Intel boards for the first time in over 15 years.. They're also releasing a ton of different motherboards which is a good indicator that companies like Asus/MSI are expecting these to sell extremely well.
schujj07 - Monday, May 27, 2019 - link
2006 isn't the early 2000s. Before Core 2 the Pentium 4 and P3s couldn't beat an Athlon/64 at the same clock speed. The P4 had to be almost 1GHz faster to beat an Athlon 64. It wasn't until 1st Gen Core i that they were faster in Enterprise. While Core 2 was fast in a single socket, the shared FSB made the Opteron faster in 2+ sockets.just4U - Monday, May 27, 2019 - link
Adding to your point SC. I built a fair number of i7 920s and PhenomII systems. Yeah you took a hit but it wasn't really noticeable for most. They were behind in launch times but it wasn't really until the FX line came out that they really took a massive hit as their single threaded mediocrity was quite noticeable by then as with their A8s/10s etc. Great graphics sure but you could notice the tradeoff.jamescox - Tuesday, May 28, 2019 - link
I would expect a 5 year old process like Intel’s 14 nm to be tweaked to get very high clock speed at this point. Getting higher clock is going to be difficult on smaller process sizes so much of the improvements will be from IPC style improvements and system architecture improvements.What intel is missing is the density and the chiplet architecture. The 12-core Zen 2 part has 64 MB L3. Intel 14 nm parts top out at 38.5 MB L3 and most of those are Xeons that cost thousands of dollars. AMD will have 64 MB L3 on a $500 part and 32 MB L3 on a $200 part. Cache density scales very well with smaller process.
Intel did make a 56 core Xeon, but I don’t know who would ever want to buy it. It doesn’t have any of the power optimizations allowed by designing the chiplets to be in an MCM from the start and it is 14 nm still. The TDP is 400 Watts. It is just two standard 28 core parts placed on a single package. They are close to 700 square mm each, so they are not cheap. The yields wouldn’t be great even at 14 nm++. The yield of processes under 14 nm isn’t going to be too good so using tiny cpu chiplets is a big win for yields. The 64-core Epyc 2 might be around 1000 square mm of silicon total. The 56 core Xeon (2 die) would be close to 1400 square mm with much less cache; 77 MB L3 total. The 64 core Epyc 2 will have 256 MB L3. That is a massive difference in cache density.
webdoctors - Monday, May 27, 2019 - link
Hopefully good sales for the 3600X, maybe some Microcenter cpu/mobo combos.
Machinus - Monday, May 27, 2019 - link
So...does Intel even sell CPUs anymore???
sorten - Monday, May 27, 2019 - link
Significantly more than AMD, but that's likely to change.
jjj - Monday, May 27, 2019 - link
So they got the perf now and are pushing prices up, which kills all the excitement about this product.
Unclear why you are excited about 8 cores at 65W; they've been offering that for 2 years now, so why expect anything else this time around?
just4U - Monday, May 27, 2019 - link
How so? A 12 core 3900x is reasonably priced. The 3700x will come in at the same price as a 2700x or damn close..
SquarePeg - Tuesday, May 28, 2019 - link
akyp - Monday, May 27, 2019 - link
No word on B550 motherboards?
I don't want a fan on the motherboard nor do I want to spend well over $120 on one.
SaturnusDK - Monday, May 27, 2019 - link
Lower end stuff is usually announced later.
nils_ - Monday, May 27, 2019 - link
So is it possible to split one PCIe 4.0 lane into two PCIe 3.0?
Billy Tallis - Monday, May 27, 2019 - link
Only with a PCIe switch chip. Those have been too expensive for consumer electronics since the PCIe 2 -> 3 transition. They're not getting cheaper with PCIe 4.0. Gen4 switches are also not widely available yet; X570 is probably going to be the first widely-available ASIC with Gen4 switching capability, but if you need to fan out more than four gen4 lanes, you're out of luck at the moment.abufrejoval - Monday, May 27, 2019 - link
That's what I meant with "I/O starved".
First thing that came to my mind was thinking that now I could run a GTX 2080ti on 8 4.0 lanes and have 4x 2x 4.0 lanes for NVMe on U.2 or TB3/USB 4.
But even if all that works in terms of bandwidth, unless somebody sells 4.0 switches at the price of glue logic, it's not going to happen: Avago/Broadcom still wants their money back from the M&A spree that made them pretty much a monopoly in the PCIe switch space.
And that's also where I wonder if perhaps AMD would do well kicking another monopolist's shins by offering a slightly broader range of "SouthSwitches" instead of fixed-allocation Southbridges.
They could take 4, 8 or even 12 lanes, include 4.0 8x -> 3.0 16x/8-8/4-4-4-4 capabilities, slot-heavy or USB/TB heavy and you could have more than just one, different ones, too, on a motherboard.
Of course, that would cut into ThreadRipper territory and they are simply not big enough to diversify that much, but the whole motherboard area is in big need of an overhaul to match growing compute power and I/O demands coming out of a desktop CPU socket.
Intel now actually has all the motivation in the world to stay with PCIe 3 on the desktop for a long time, because they still define what mainstream is and would like to keep AMD away from feeding on PCIe 4.0 pastures as long as possible.
I could even imagine that Nvidia might not enable PCIe 4.0 8x for political reasons, even if they boast PCIe 4.0 for IBM Power.
Actually with Xe-Winter coming and AMD winning HPC contracts, Nvidia is in a very interesting position anyway.
I somehow hate it when politics gain over engineering, but it's fascinating nonetheless.
jamescox - Tuesday, May 28, 2019 - link
I have already heard about pci-e 4.0 SSDs. If nvidia supports pci-e 4.0 then they would probably support it at x8. They are made to auto negotiate the number of active lanes. I am not sure it would be spec compliant if they didn’t support falling back to x8 from x16. Intel probably doesn’t support pci-e 4.0 because they designed it into their 10 nm designs that have been delayed.yhselp - Monday, May 27, 2019 - link
Yeeaah, now that AMD is competitive, it's just mimicking Intel pricing.
I wonder whether the price for the flagship would settle at $500 or whether it would keep growing. If Intel decides to price its eventual 12/24 i9 at $600, AMD would surely price its 16/32 Ryzen 9 at $600 and so on.
sirmo - Monday, May 27, 2019 - link
I mean AMD has to make money sometimes. They still have over $1B of debt to pay down. Besides the prices are still quite good compared to Intel. Only 2 years ago we were paying $350 for a quad core.
Opencg - Monday, May 27, 2019 - link
I agree. There is progress from both AMD and Intel cpus so no hate there. GPUs on the other hand have gone backwards. I'm ready for that half price 2070 competitor.
Irata - Monday, May 27, 2019 - link
Is it? Checking Newegg:
the 8C/8T Core i7-9700K goes for $409 without an HSF. So including an HSF, it sits in between the 12C/24T 3900x and the 8C/16T 3800x.
The 6C/6T Core i5-9600K goes for $264 (currently on sale), without HSF, so price-wise it goes up against the 8C/16T 3700x (if the latter includes an HSF).
piroroadkill - Monday, May 27, 2019 - link
Why shouldn't they?
III-V - Monday, May 27, 2019 - link
They should. However, AMD users generally base their loyalty on who sells the most product for the best price. Pricing things at a premium compared to their recent past will piss people off. Not a bad thing for users to be willing to switch whenever they want, but for AMD, it's not a good position to be in when you're easy to switch away from.This of course, neglects the fact that AMD is doing really well right now, and there is no reason for their users to leave them, but it's worth pointing out. Intel's a household name and a strong brand, and it'll take AMD a lot of effort to equal Intel in that regard.
just4U - Monday, May 27, 2019 - link
It's quite likely that AMD will provide good quality coolers on their 3700x and certainly their 3900x, as they've stated they will include a cooler. That adds value over and above Intel's offerings in the same space.
wilsonkf - Monday, May 27, 2019 - link
Selling 12C/24T CPUs at $300 will piss the whole industry off, and also the shareholders (of both AMD and Intel).
just4U - Monday, May 27, 2019 - link
It might tick off Intel customers and shareholders but... AMD also announced its 64 core professional solution.. so the bar has been raised here..
PixyMisa - Monday, May 27, 2019 - link
The 3900X is the same price as the 1800X and twice as fast.
100% more performance per dollar in two years.
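A naive decomposition of that claim, treating core count, boost clock and IPC as purely multiplicative, which real workloads rarely are; the 15% IPC figure is AMD's claim, used here as an assumption:

```python
# Rough multithreaded estimate: cores x boost clock x claimed IPC gain.
cores = 12 / 8        # 1800X -> 3900X
clock = 4.6 / 4.0     # boost clocks
ipc   = 1.15          # AMD's claimed IPC improvement (assumption)

print(f"~{cores * clock * ipc:.2f}x at the same $499 price")   # ~1.98x
```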
Badelhas - Monday, May 27, 2019 - link
Intel has been trying to milk us for 7 generations now. 10% improvements from one generation to another.
AMD and its 3700x is making me think it is finally time to upgrade from my Intel i5 2500k @4.8GHz! :)
Allan_Hundeboll - Thursday, May 30, 2019 - link
I'm going to upgrade my i5 2500k with Ryzen 3000 - just need to decide what model to go for. It will probably be one of the 65W versions.
maroon1 - Monday, May 27, 2019 - link
It is not twice as fast on average. Just almost 2x in Cinebench.
Not only does Cinebench scale with cores, it also benefits greatly from the upgraded FP on 7nm Ryzen. Most other benchmarks won't see the same boost.
jamescox - Tuesday, May 28, 2019 - link
I believe this is literally exactly the same pricing structure that the Ryzen 1xxx series launched at. You have a very skewed perspective if you think this is anywhere close to Intel prices. AMD compared their 12-core to an intel 12-core that cost $1200 during the keynote. Only $500 for a 12-core cpu with 64 MB of L3 seems like quite a bargain to me. AMD has a 32 MB L3 part for $200 dollars. You probably would have to pay thousands of dollars for an Intel Xeon with more than 32 MB L3.yhselp - Wednesday, May 29, 2019 - link
I guess it’s the branding that bothers me. It seems like AMD feels a need to offer an edge over Intel in its flagship offering, and since it can’t offer more perf-per-core, in a classic move, it offers more cores, but then again, as pointed out above, it can’t offer 12c/24t for $300-400, so it jacks up the price to match Intel’s i9, which itself is priced higher than usual because Intel apparently feels it’s justified to ask an extra premium for an 8c/16t CPU because of the little IPC gain they can offer; and the whole thing just starts to shift the established perception of how much a flagship CPU costs. I don’t like that because we all saw where it led in the GPU space; I don’t want to have Nvidia’s Titan equivalent in the consumer CPU space, ever. Besides, was it all necessary? AMD’s new 8c/16t CPU is cheaper than Intel’s. And if we assume that a general consumer wouldn’t need more than 8 cores, why offer 12 in a consumer product at all? I guess they wanted some of that i9 pie, I just wish they offered the $500 CPU as a pro-sumer product, Threadripper, whatever, and kept thr consumer offerings below $400.mikato - Wednesday, May 29, 2019 - link
Yeah I'm not buying a GPU until there is a major change there.
Korguz - Wednesday, May 29, 2019 - link
if it was up to intel.. we would still be stuck at quad core.. cause of amd, we have 8 and 12... can't offer more performance per core? you sure? seems zen 2 is offering just that.. and at 300 to 400 MHz less than intel to boot
Brodz - Monday, May 27, 2019 - link
Suggestion: A page for the core to core IPC comparison for 2nd gen to 3rd gen please.
schujj07 - Monday, May 27, 2019 - link
Zen+ was 3% better IPC than Zen, so Zen 2 should be 12% higher IPC than Zen+.
plonk420 - Monday, May 27, 2019 - link
Seconded. And comparisons for Haswell, Sandy, Nehalem, and at least one of the chips from 6000 to 9000.
yankeeDDL - Monday, May 27, 2019 - link
I remember when AMD announced the Excavator platform and their benchmarks. I wrote on their Facebook page "What have you done?!?!".
They more than made up for it today. What an announcement. Can't wait for the reviews.
Please, please, do compare it against previous generations as well (core 7*** and 8***, and ryzen 1*** and 2***).
just4U - Monday, May 27, 2019 - link
Ooh.. MSI finally releasing godlike series for AM4. Yep I'll be scooping up one of those! Likely have to stick with my 2700x for now but that's fine..
brakdoo - Monday, May 27, 2019 - link
https://www.amd.com/en/products/specifications/pro...
Memory is 3200 MHz, you can add that to those tables.
azrael- - Monday, May 27, 2019 - link
Is it just me or shouldn't there be a derogatory remark/post by HStewart by now? I am disappoint!
just4U - Monday, May 27, 2019 - link
He's probably too stunned to comment right now.. I don't know about you but this is the first time in years I've been actually excited about cpu/mb news. Amd just spanked Intel hardcore in the announcement arena. Plus.. considering the major participation by partners, it's very likely the rumors are true on performance.
azrael- - Monday, May 27, 2019 - link
Very excited as well. For the past 6 months or so I've been holding on to the thought of building a new rig based around the Ryzen 3000 series to replace my ageing Xeon E3-1245v2. Seems I can make that happen soonish. :-)
Manch - Monday, May 27, 2019 - link
I was just thinking that lol
Korguz - Monday, May 27, 2019 - link
azrael he's probably trying to figure out how he can spin, bend and twist this to make intel smell like roses, when, at the moment.. they smell like fertilizer :-)
Zizy - Monday, May 27, 2019 - link
Weird pricing yet again, reminds me of 1st gen. The 3600 and 3700X are both better than the 3800X and 3600X in terms of price/performance, plus they have better efficiency.
The 3700X seems to be the chip to buy this generation. Enough of everything and quite cheap for what it offers.
Badelhas - Monday, May 27, 2019 - link
Exactly what I think. I think I am finally upgrading my good old 2500k at 4.8GHz...
Ham_the_Terrible - Monday, May 27, 2019 - link
Holding out on the 16 core to offload the 12 core CPUs that is a bunch of happy horsesh*t. I told myself the second they released these new Ryzens I'd grab the top tiered 16 core I'd been hearing about to use as a workstation build. Now by the time it comes out, my hype over this will be dead and they'd have lost a sale. I wonder how many others will wait or just forego this lineup.Haawser - Monday, May 27, 2019 - link
So I wonder if 7nm has any more OC room than 14nm did? Because if the 3600X can be OC'd from 4.4 to say 4.6, it's going to be a bit of a bargain at $249. Hell, with 15% better IPC, even if it only makes 4.5 it will be -much- faster than a 2600X.
SaturnusDK - Monday, May 27, 2019 - link
No. It will literally be as fast. The 3600 and 2600X are identical. The only difference is the boost profile and the included cooler. Just like the 2600 and 2600X are.
The AMD X series is generally for people that just want the best the silicon is able to give without having to manually tweak settings.
SaturnusDK - Monday, May 27, 2019 - link
Obviously, meant 3600x in the first line.
Haawser - Monday, May 27, 2019 - link
Obviously didn't read; I was talking about the 3600X vs the *2600X*, not the 3600.
just4U - Monday, May 27, 2019 - link
I'd heard AMD is touting 15-25% gains clock for clock..
BushLin - Tuesday, May 28, 2019 - link
If you truly want a workstation then Threadripper says hi.
John_M - Monday, May 27, 2019 - link
I like the fact that the 65 watt 3700X will come with a Wraith Prism cooler. https://www.amd.com/en/products/specifications/pro...
just4U - Monday, May 27, 2019 - link
Yeah.. they're nice coolers. I wish they included them on more processors in the 2000 line. A lot of people use aftermarket coolers for their 2700x though, and you can normally sell it for 30-50 bucks fairly quickly if you don't want it. They can be a little loud if you haven't tuned them properly, but they're good coolers.
flyingpants265 - Monday, May 27, 2019 - link
Hah! Sorry, that's WAY too high on the price. I'm not getting my hopes up for those pre-release corporate benchmarks, either.. Performance better be AT LEAST as good as what they're saying.
FMinus - Monday, May 27, 2019 - link
We had the Intel monopoly exactly because of your attitude - AMD gives you 7nm 12 cores on AM4 for $499, which is cheaper than Intel's 14nm 8 core 9900K, and people still moan that it is too expensive. What do you want, 16 cores for $50?
On top of all that, you get 7nm 8 cores 16 threads that is basically 1:1 with Intel (according to AMD data) for half the price. And all AMD CPUs come with a box cooler that does not suck. What do you people want, seriously? If this is not enough and you don't consider those CPUs for your next PC, be ready to be permanently bent over while Intel continues to violate your wallet and your rational thought down the future lane.
just4U - Monday, May 27, 2019 - link
FP, you just have to look at board partners and announcements by others releasing hardware that will take advantage of what AMD is about to bring to the table. As pleased as I was about the 1st/2nd gen Ryzen (they're great products..) Asus/MSI/Gigabyte and the like didn't actually release a lot of products for them... A very different scenario this time with the upcoming launch. They're not just doubling down, they're tripling and quadrupling down, releasing some pretty high end stuff. That's a good indicator right there.. add to that AMD strutting about like a proud lil rooster, and it all indicates they're not BSing us.
Canam Aldrin - Monday, May 27, 2019 - link
REQUEST Maya and V-Ray benchmarks when you test the R9. Even though this isn't HEDT, a lot of us would be considering this for content creation / animation work.
Gc - Monday, May 27, 2019 - link
Anyone expecting to see cooling problems with on the parts with no filler for the missing chiplet die? I imagine heat spreaders are designed to make contact with the chip before the edges make contact with the package, so on the corner with no chiplet the heat spreader will get pushed down further than the other corners, either in manufacturing or when tightening down the cooler. The result is either the heat spreader warps, or the opposite corner is raised off the chip. Most will work, but may have higher defect rate. Reminds me of the problems with Alienware laptop tripod heatsink, which made flat pressure contact with the die difficult due to warping.maroon1 - Monday, May 27, 2019 - link
Cinebench only? What about other benchmarks? What about gaming?
TheWereCat - Tuesday, May 28, 2019 - link
There was a Blender benchmark as well.
Hixbot - Monday, May 27, 2019 - link
So exciting to see the CPU market competitive again. I'm hoping AMD can bring the heat to NVIDIA in the GPU market also (though the latest Navi details indicate they're not seriously pushing the price downward).ballsystemlord - Monday, May 27, 2019 - link
Only one mistake in your article Ian, good work!
"The Ryzen 7 3700X is an eight core, sixteen thread CPU with a 3.6 GHz base frequency and a 4.4 GHz turbo frequency. It has 4 MB of L2 and 36 MB of L3..."
32MB not 36MB (What's with all the extra spaces?):
"The Ryzen 7 3700X is an eight core, sixteen thread CPU with a 3.6GHz base frequency and a 4.4GHz turbo frequency. It has 4MB of L2 and 32MB of L3..."
phoenix_rizzen - Tuesday, May 28, 2019 - link
You always put spaces between values and units.
ballsystemlord - Tuesday, May 28, 2019 - link
I must be too used to the more terse writing of the linux command line. :)
halcyon - Monday, May 27, 2019 - link
So, Intel can do all 8-cores at 5Ghz (sans AVX), but AMD can't do more than ... how many cores exactly at 4.5Ghz (3800X)?GTan - Monday, May 27, 2019 - link
If you are talking about the i9-9900K, the 5Ghz turbo is only for two-cores. It does not run all 8 cores at 5Ghz.schujj07 - Monday, May 27, 2019 - link
Don't forget they did release the Special Edition 9900K with a 5GHz all core speed. You will just need to have a huge AIO to cool it and have a 1000W PSU minimum. Probably all for the low cost of $1000.halcyon - Monday, May 27, 2019 - link
9900KS does all-core turbo at 5Ghz. Yes it is expensive. Yes, it is probably binned parts.But it exists.
Why can't AMD do all-core 8-core 5Ghz?
just4U - Monday, May 27, 2019 - link
Different design that just doesn't crack that barrier. My 2700X runs at 4.2 on air. I'm OK with that. I haven't paid too much attention to GHz. I also have the 8600K and it's hitting 4.9, but it's neither better nor worse than the 2700X from an everyday use perspective. I've always liked them both.
Qasar - Monday, May 27, 2019 - link
halcyon, maybe because they don't need to?
deksman2 - Monday, May 27, 2019 - link
Who says AMD needs 5 GHz on all cores? GHz aren't everything.
AMD could just raise their all-core clocks to 4.5 GHz and you'd get the equivalent of a 9900KS running all cores at 5 GHz in terms of actual performance.
That's how good AMD's IPC actually is.
Plus, with AMD, you will end up with much lower power consumption to boot.
AMD also doesn't suffer from security issues the way Intel does (with mitigations applied, Intel's performance drops enough that even a 9900KS overclocked to 5 GHz on all cores would basically be on par with AMD's Zen 2 8c/16t at stock).
But one of the main reasons for the lack of 5 GHz on Zen 2 is probably the node.
7nm TSMC had some issues which basically downgraded initial performance expectations from 40% to 25%.
Also, Intel has been using 14nm for a long time now, to the point where the node has been heavily refined.
You want higher clocks out of the factory? Wait for Zen 3 (on 7nm+).
The node will likely be improved by then to resolve some clock issues, so it's not exactly a big problem.
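To put rough numbers on the "4.5 GHz Zen 2 roughly equals 5 GHz 9900KS" idea, here is a back-of-the-envelope sketch; the 1.11 IPC ratio below is an assumption inferred from AMD's own claims, not a measured figure:

```python
# Performance is roughly IPC x frequency. The IPC advantage used here is an
# assumption based on AMD's comparison claims, not benchmark data.
intel_clock_ghz, intel_ipc = 5.0, 1.00   # 9900KS all-core turbo, baseline IPC
amd_clock_ghz, amd_ipc = 4.5, 1.11       # hypothetical Zen 2 all-core clock and IPC

relative_perf = (amd_clock_ghz * amd_ipc) / (intel_clock_ghz * intel_ipc)
print(f"Zen 2 @ 4.5 GHz vs 9900KS @ 5.0 GHz: {relative_perf:.2f}x")  # ~1.00
```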
halcyon - Tuesday, May 28, 2019 - link
TY. I don't need/want 5 GHz, just asking. In the end, after all the tests, my bet is that clock for clock Intel will still be IPC king or equal (mitigations OFF), and it will still have a ~10% clock speed advantage (8 cores, best part). That is nothing to sneeze at.
Qasar - Tuesday, May 28, 2019 - link
maybe.. but if zen 2 is as fast as intel while being 400 MHz slower... that doesn't look good for intel... does it??
schujj07 - Tuesday, May 28, 2019 - link
Clock speed and IPC are different things that work together to give you your ST/MT performance. You can test IPC by running two different chips at the same clock speed and seeing which completes a set of benchmarks faster. AMD is claiming that the 3800X will be a little faster than the 9900K, so Zen 2 should have about a 10% higher IPC than Coffee Lake R, since the max ST boost of the 3800X is 4.5 GHz versus 5 GHz on the 9900K. Intel has about a 10% clock speed advantage and needs that added clock speed to make up for the lower IPC.
azrael- - Monday, May 27, 2019 - link
HStewart, is that you?
halcyon - Monday, May 27, 2019 - link
Who?
Manch - Tuesday, May 28, 2019 - link
^^LOL^^ That dude is still quiet.
Qasar - Tuesday, May 28, 2019 - link
yep.. been looking forward to seeing how he would try to make intel look good vs zen 2...
Xyler94 - Tuesday, May 28, 2019 - link
considering his claims of 1 Sunny Cove core = 2 Zen 2 cores...
Korguz - Tuesday, May 28, 2019 - link
what about his claim that sunny cove is a new architecture?? when all intel did was add/update parts of it? wouldn't it be better to call it sandy bridge 7??
cyberguyz - Monday, May 27, 2019 - link
With a naming of 3900X I am guessing they are dropping the whole Threadripper line, which to date has had the X9XX numbering scheme.
just4U - Monday, May 27, 2019 - link
I doubt they'll be dropping it... They're popular even if in limited supply. The 9 is just a way of showing what it's competing against: Intel's i9.
PixyMisa - Monday, May 27, 2019 - link
Lisa Su said specifically that Threadripper will continue. They might need to go to hexadecimal though.
Santoval - Monday, May 27, 2019 - link
These (single core?) boost clocks are kind of disappointing. I expected at least 4.8 GHz single core turbo clocks, though the new XFR will surely allow that provided the cooling is adequate. I also wonder why there is such a huge TDP difference between the 3700X and 3800X. The difference in clock speeds is not high enough to justify a +45W TDP for the 3800X. Does that mean the 3800X can sustain the turbo frequency longer?
notashill - Monday, May 27, 2019 - link
Turbo frequency has little to no correlation with TDP; max turbo is for one core, and the TDP is going to be reached running all cores at the base clock. And the base is 300 MHz higher, which I absolutely believe would increase power draw by 50%; it is simply going way past the efficiency sweet spot of the chip. Very much in line with my experience overclocking the current Ryzen chips.
Korguz - Monday, May 27, 2019 - link
Santoval, considering zen 2 looks to be on par, or slightly faster than intel at the clocks they are at... why would they need to be higher? do you want amd to make intel look worse? :-)
notashill: "And the base is 300 MHz higher, which I absolutely believe would increase power draw by 50%; it is simply going way past the efficiency sweet spot of the chip. Very much in line with my experience overclocking the current Ryzen chips." oh?? 300 MHz = 50% more power??? would you have a source for this? also.. Zen 2 is not current ryzen chips.. until Zen 2 is released, reviewed, and in the hands of the public, how Zen 2 overclocks.. is still a mystery...
notashill - Monday, May 27, 2019 - link
Just from my own testing on a 1700: 3.7 GHz all-core is about 100 W, and 4.0 GHz is about 150 W and pushing the safe voltage limits. IIRC Anandtech's various Ryzen reviews had some good power measurements with overclocking data.
Of course 7nm may result in wildly different voltage/power/freq scaling, so who knows until the new chips are in the wild to test.
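For what it's worth, plugging those figures into the usual dynamic-power approximation (power roughly proportional to V^2 x f, leakage ignored) suggests most of the extra draw comes from the voltage needed for those last few hundred MHz; a rough sketch using the numbers above:

```python
# Back-of-the-envelope check of the 1700 overclocking figures quoted above,
# using the dynamic power approximation P ~ C * V^2 * f (static power ignored).
f1_ghz, p1_w = 3.7, 100.0   # all-core clock and package power at the lower point
f2_ghz, p2_w = 4.0, 150.0   # and at the higher point

freq_ratio = f2_ghz / f1_ghz                              # ~1.08
implied_voltage_ratio = (p2_w / p1_w / freq_ratio) ** 0.5
print(f"frequency ratio: {freq_ratio:.2f}")
print(f"implied voltage increase: {implied_voltage_ratio:.2f}x")  # ~1.18x, i.e. ~18% more volts
```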
just4U - Monday, May 27, 2019 - link
Linus Tech Tips has a pretty interesting take on all this via their YouTube video that's trending up with nearly a million views and 6000 comments in the last 12 hours. I don't doubt for a moment that Ian here has a lot to say about all this stuff as well. Look forward to hearing his thoughts. You guys must be just swamped at Anand over all this. Hopefully pumped too. I have not seen this sort of buzz in quite some time.
WaltC - Monday, May 27, 2019 - link
Oh, AMD is saving several eye-openers to be announced in the coming weeks--or my name isn't WaltC...;)
just4U - Monday, May 27, 2019 - link
Not sure what else they could drop that wouldn't be underwhelming in comparison to what was released over the last 24hrs. Biggest news day for AMD in over a decade.
audi100quattro - Monday, May 27, 2019 - link
Is it real AVX256 now? Also, what is the max DDR4 speed for two (16GB) modules per channel?
audi100quattro - Monday, May 27, 2019 - link
Also, is the new Ryzen 9 technically NUMA? What are the NUMA trade-offs AMD has made here?
jamescox - Tuesday, May 28, 2019 - link
It isn't NUMA. The original Ryzen processors were not NUMA architectures either, since they were a single die. Threadripper and Epyc 1 would have been considered NUMA architectures though. NUMA only covers access to main memory, and Zen 2 will not be a NUMA architecture since the IO die handles all memory access.
They still have variable access to L3 caches though. There is still some penalty for sharing data across CCXs. Intel uses a mesh network between cores and cache slices to allow mostly uniform access to cache. This burns a lot of power to do at core clock, and it is actually higher latency than what you would see within an AMD CCX. This only comes up when you share data across a CCX boundary, like if you have two threads with shared memory running on different CCXs. You have 4 cores / 8 threads within a CCX, so you have plenty of resources for most things. If you do need to share data across CCXs, then it can still be done efficiently by doing it in a more coarse-grained manner. This requires some software optimization in some cases.
With Zen 1, CCX to CCX traffic had to go through an infinity fabric switch at memory clock. This wasn’t really that much of an issue in the first place, but it should be less of an issue with Zen 2. There is no memory clock on the cpu chiplet, so it wouldn’t make any sense for it to operate at memory clock. It probably operates at core clock, so CCX to CCX communication on the same die will probably be much lower latency and higher bandwidth compared to Zen 1. The chip to chip latency should also be quite low due to the high clock speeds of infinity fabric. The bandwidth is more than double what it was in Zen 1.
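If you want to experiment with keeping communicating threads inside one CCX, Linux's affinity API is enough for a quick test. A minimal sketch; the CPU-ID-to-CCX mapping used here is an assumption, so check lscpu or lstopo on your own machine first:

```python
# Pin the current process to a set of logical CPUs that (we assume) belong to
# the same CCX, so threads sharing data avoid cross-CCX traffic. Linux only.
import os

ccx0_cpus = {0, 1, 2, 3}              # hypothetical IDs of one CCX's cores
os.sched_setaffinity(0, ccx0_cpus)    # 0 means "the current process"
print("now restricted to CPUs:", sorted(os.sched_getaffinity(0)))
```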
AlexDaum - Saturday, June 1, 2019 - link
The IF clock will probably still be locked to memory frequency, because the communication between the I/O die and the chiplets will be IF. By synchronizing the IF clock to the memory clock they can significantly reduce the latency of sending data from the cores to the memory controller.
The CCX to CCX communication could run with a different IF clock, but I'm not sure if that would make sense, because they would need multiple IF endpoints on every CCX then (one for CCX to CCX, one for CCX to I/O).
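Assuming the 1:1 fabric-to-memory clock coupling described above actually holds (an assumption here, not something taken from a spec sheet), the numbers work out like this:

```python
# Simple arithmetic for DDR4-3200 on a dual-channel platform with a 1:1
# fabric-to-memory clock ratio (the 1:1 ratio is an assumption).
ddr_rate_mts = 3200                 # DDR4-3200: mega-transfers per second
memclk_mhz = ddr_rate_mts / 2       # double data rate -> 1600 MHz real clock
fclk_mhz = memclk_mhz               # 1:1 coupling -> IF also at 1600 MHz

channels, bytes_per_transfer = 2, 8
mem_bw_gb_s = ddr_rate_mts * 1e6 * bytes_per_transfer * channels / 1e9
print(f"MEMCLK/FCLK: {fclk_mhz:.0f} MHz, dual-channel bandwidth: {mem_bw_gb_s:.1f} GB/s")
```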
formulaLS - Tuesday, May 28, 2019 - link
Yes, it now has full-speed AVX 256, and the supported 3200 MHz memory speed is 1 DIMM per channel.
mdriftmeyer - Wednesday, May 29, 2019 - link
Yes, it's real 256. The max DDR4 depends on the motherboard manufacturers' designs.
https://community.amd.com/community/gaming/blog/20...
NixZero - Tuesday, May 28, 2019 - link
Or maybe it's just marketing: this way they can sell the 3900X as the most premium SKU, then when prices fall from attrition, introduce a new most-premium SKU at full price.
BushLin - Tuesday, May 28, 2019 - link
What's the deal with PCIe 4.0 lanes? Is it 40 as per the slide / Tech Report, or 24 as per this article?
AlexDaum - Saturday, June 1, 2019 - link
It's 24 from the CPU: 16 for the PCIe x16 slot, 4 for M.2 NVMe, and 4 to the chipset. The chipset then provides another 16 PCIe lanes for a total of 40 lanes. But the lanes from the chipset can only get the total bandwidth of 4 PCIe 4.0 lanes.
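A quick back-of-the-envelope for why that x4 uplink matters: everything hanging off the chipset shares roughly this much bandwidth per direction (simple arithmetic from the PCIe 4.0 signalling rate, nothing platform-specific):

```python
# PCIe 4.0 runs at 16 GT/s per lane with 128b/130b encoding, so a x4 uplink
# gives all chipset-attached devices roughly this much to share, per direction.
lanes = 4
gt_per_s = 16                      # PCIe 4.0 per-lane signalling rate
encoding_efficiency = 128 / 130    # 128b/130b line code
gb_per_s = lanes * gt_per_s * encoding_efficiency / 8
print(f"PCIe 4.0 x{lanes} uplink: ~{gb_per_s:.1f} GB/s each direction")   # ~7.9 GB/s
```
AntonErtl - Tuesday, May 28, 2019 - link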
An exciting announcement. Two questions come to my mind:
Will the Ryzen 3000 CPUs have ECC? I guess yes, because AFAIK all AMD CPUs since at least the Athlon 64 have had it; I expect we will hardly see Ryzen Pro CPUs with official support in the retail market, though.
Does Zen2 have proper fixes for Spectre? With "proper fix" I mean something that does not cost performance, and that you do not disable if performance is more of an issue than security.
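On Linux you can at least see which of these issues a given CPU and kernel combination reports, and how each is mitigated, which should make the security-vs-performance trade-off visible once Zen 2 is out. A small sketch (needs a kernel new enough to expose these sysfs files, roughly 4.15+):

```python
# Print the kernel's view of speculative-execution vulnerabilities and the
# mitigations in effect for the CPU it is running on (Linux only).
from pathlib import Path

vuln_dir = Path("/sys/devices/system/cpu/vulnerabilities")
for entry in sorted(vuln_dir.iterdir()):
    print(f"{entry.name:20s} {entry.read_text().strip()}")
```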
azrael- - Wednesday, May 29, 2019 - link
Most X570 board specs on manufacturers' sites state ECC support, so I'd wager the Ryzen 3000 is capable of that.
mdriftmeyer - Wednesday, May 29, 2019 - link
Yes it has ECC. Go check out the X570 motherboards already being released:
https://community.amd.com/community/gaming/blog/20...
From there go to the vendor sites and read their specs.
Ratman6161 - Tuesday, May 28, 2019 - link
I guess the CPU I'd be most interested in isn't on the list. That would be a Ryzen 7 3700 (without the X). For those willing to overclock, IMO the CPUs to have in the first gen were the 1700 (what I have) and, in Ryzen+, the 2700. A 1700 will usually overclock to the same speed as an 1800X, while the 2700 will usually hit the same speeds as a 2700X. So I was hoping the replacement for my 1700 would be a 3700. Perhaps if I wait a bit longer? Actually, waiting a bit longer is almost always the best option, as prices are certain to drop at least a bit after a while.
HardwareDufus - Tuesday, May 28, 2019 - link
So, 16-core/32-thread Ryzen 9 3950X @ 5 GHz after Christmas? ;p
flashbacck - Tuesday, May 28, 2019 - link
hmm. Time to upgrade my Sandy Bridge?
Gmn17 - Wednesday, May 29, 2019 - link
Going to wait for the 3950X or the 64-core TR.
zodiacsoulmate - Wednesday, May 29, 2019 - link
Is there no info on die-to-die latency, and memory-to-die latency?
AlexDaum - Saturday, June 1, 2019 - link
Not yet, you'll have to wait for 3rd party benchmarks for that.
CyrIng - Wednesday, May 29, 2019 - link
Looks appealing. Just wish to process buildroot in seconds...
Open Source OS ready ?
BKDG specs available ?
corinthos - Wednesday, May 29, 2019 - link
RIP Intel.
dave_the_nerd - Thursday, May 30, 2019 - link
I would _love_ to see a GHz-normalized IPC comparison between 3xxx Ryzen and 2xxx, plus the last couple generations of Intel kit (7700K/8700K/9700K). Just saying.
nzweers - Friday, May 31, 2019 - link
Here are some IPC comparisons of last gen, bottom of the page: https://www.guru3d.com/articles_pages/intel_core_i...
Plus over 15 years: https://www.reddit.com/r/Amd/comments/5v11tm/ipc_p...
sonicmerlin - Saturday, June 1, 2019 - link
Why does a consumer need 16 cores? Or 12? 8 cores even seems like overkill, but I guess maybe some games will eventually take advantage of it.
Gastec - Sunday, June 2, 2019 - link
Consumer of games?
Haawser - Monday, June 3, 2019 - link
Why do you assume that the *only* thing consumers are interested in is games? I use audio mixing software that includes stuff like convolution reverb (which is a bit like ray tracing for audio), and trust me, a few instances of that can use up as many cores as you want to throw at it.
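For anyone wondering why that scales so well: each track / impulse-response pair is an independent convolution, so the work spreads across cores almost perfectly. A toy sketch, with numpy/scipy and made-up track counts as stand-ins rather than any particular DAW's engine:

```python
# Convolve several independent audio tracks with a room impulse response in
# parallel, one worker process per core. Purely illustrative data.
import numpy as np
from scipy.signal import fftconvolve
from multiprocessing import Pool

SAMPLE_RATE = 48_000
impulse = np.random.randn(2 * SAMPLE_RATE)        # hypothetical 2-second room response

def wet_mix(track: np.ndarray) -> np.ndarray:
    return fftconvolve(track, impulse)            # one reverb instance, one core

if __name__ == "__main__":
    tracks = [np.random.randn(10 * SAMPLE_RATE) for _ in range(16)]   # 16 stems of 10 s
    with Pool() as pool:                          # defaults to one worker per CPU
        wet = pool.map(wet_mix, tracks)
    print(len(wet), "tracks convolved")
```
zmatt - Tuesday, June 4, 2019 - link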
I use FL Studio a lot for music production as a hobby. FL loves threads and will happily gobble as many as you can give it. This is true of most DAWs.
Gibz - Sunday, June 2, 2019 - link
hmm.. Very informative. https://procrackerz.org
Qasar - Sunday, June 2, 2019 - link
now there are links to cracked, pirated software??? what's next???
Gastec - Sunday, June 2, 2019 - link
Next are links to trojans.Chp5592 - Sunday, June 2, 2019 - link
FWIW, the difference between AMD and Intel now can be traced back to the hiring of Dr. Su. I knew of her at MIT when I was an undergrad. She went straight through to get her PhD in EE at MIT, which is reserved for the best of the best (most mortals at MIT are encouraged to go elsewhere for grad school). She was a superstar at IBM. It would have served IBM well to have made her CEO. IBM's loss was AMD's windfall. Intel, prior to the recent hire, was run by marketing people, not engineers.
zmatt - Tuesday, June 4, 2019 - link
That's the vibe I get as well. People like Su, Jensen, Gordon Moore etc. aren't like most executives, who are usually sales or marketing types. They are actual engineers and have a deep understanding of what it is their company does, which is a very important skill in tech. When Dr. Su gets on stage and talks about Ryzen, she isn't regurgitating lines marketing gave her; she is talking about a product that she helped create.
Targon - Thursday, June 6, 2019 - link
Most people don't understand that the positions of CEO and president really are two different roles. The president of the company is the person who runs the company, while the CEO is responsible for getting investors and trying to hype the company and drive up stock prices. People like Steve Jobs were good at both positions, but the majority of CEOs out there are idiots with an MBA from some well-known business school who have zero understanding of technology, so they should NOT be running things at a technology company.
Get these CEOs out of the running of the companies they are supposed to hype, and those companies would do much better.
evanh - Wednesday, June 5, 2019 - link
Has anyone asked how much DRAM is in the I/O die? The slides don't even mention it as a possible L4 cache. A gigabyte in there could serve an IGP really well.
Targon - Thursday, June 6, 2019 - link
Since the only Ryzen 3rd generation processors announced so far do not have a GPU, expect that there isn't any memory on the I/O die. APUs will show up later, probably after the Ryzen 3 and 5 products get their specs released.
evanh - Thursday, June 6, 2019 - link
The IGP idea was just an example alternative use of such a large DRAM buffer. Using it as L4 cache would be the general use for max speed of a CPU.
The question is, how much DRAM is in that I/O die?
BriComp - Wednesday, June 5, 2019 - link
Where is the Q&A?
nn68 - Thursday, June 6, 2019 - link
Anyone know if Ryzen 3000 will have something similar to Ice Lake's Galois Field instructions, which supposedly boost performance in AI applications?
https://en.wikichip.org/wiki/intel/microarchitectu...
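Nothing has been confirmed for Zen 2 as far as I know, but once the chips are out it's easy to check for yourself; a quick sketch (gfni is the flag name the Linux kernel uses for these instructions):

```python
# Check /proc/cpuinfo for the Galois Field New Instructions flag (Linux only).
with open("/proc/cpuinfo") as f:
    flags = set()
    for line in f:
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())

print("gfni supported:", "gfni" in flags)
```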
KAlmquist - Friday, June 7, 2019 - link
I'd say that with Zen 2, AMD is copying the pricing of the original Zen. They priced the original Zen flagship (the 1800X) at $499. They've also priced the Zen 2 flagship (the 3900X) at $499. The 1700X, one step down from the 1800X, was priced at $399. The 3800X, one step down from the 3900X, is also priced at $399. Another step down gets us to the 1700 and the 3700X, both priced at $329. Below that are the 1600X and the 3600X, both priced at $249.
The only new price AMD has come up with is for the 3600, which is priced at $199, twenty dollars less than the $219 that the 1600 was listed at when it was introduced.
gronetwork - Friday, June 28, 2019 - link
The 3900X is 5% more powerful than the 9920X, and the 3950X has 32% more performance than the 9960X, if we compare processors with the same number of cores and threads. That is quite outstanding, as the new AMD chips are practically a third of the price.
https://gadgetversus.com/processor/amd-ryzen-9-390...
Intel will have to seriously lower its prices if it wants to stay in the PC race.