Comments Locked

98 Comments

  • DigitalFreak - Tuesday, March 22, 2016 - link

    I'm surprised they stuck with "tick tock" this long.
  • TEAMSWITCHER - Wednesday, March 23, 2016 - link

    They haven't. Ivy Bridge, Haswell, and Haswell Refresh were a PROCESS-ARCHITECTURE-OPTIMIZATION cycle.
  • jasonelmore - Wednesday, March 23, 2016 - link

    Great for consumers: we can use the same motherboard for 2-3 different CPU upgrades. That's what I did with Haswell... still rocking a Z87 with a 4790K.
  • marc1000 - Thursday, March 24, 2016 - link

    I am still on a 2500K @ 4GHz, and I see no need for more computing power for at least a couple more years.

    I have a P67 board with a couple of USB 3 ports, below-average power delivery (hence the 4GHz instead of 4.3 or 4.4, but that's OK), 16GB RAM and one SSD + HDD. And I run some virtualization/development, video conversion and games, so that's a bit above the average user's needs. The price tag is just too high to justify 15% to 30% more performance. Of course, for some people the price is not that high, and the gap bigger. But not for everyone.
  • Samus - Monday, March 28, 2016 - link

    Hah, I have an old Intel P67 in my media center. It's the recalled stepping, too. Unfortunately, because it's the early silicon it doesn't support any Ivy Bridge CPUs, but I did finally upgrade from a Pentium G630 to an i5-2500K bought used off eBay to play with overclocking, because the G630 was clunky with transcoding when I'd VPN in to stream remotely.

    I have it at 4GHz on a mild closed loop cooler (Corsair H60) and did some benchmarks, and it's almost identical in performance to my Xeon E3-1230v3 Haswell workstation, and actually kills it in single threaded performance.

    Another thing people don't know about the P67 is you can run ECC memory in it. I stumbled on this by surprise. No boot errors or anything. I don't know if the ECC is working (lol) but it's been stable that way for almost 5 years, (4) 1GB Kingston DIMMs I pulled from an HP server for an upgrade in 2011.

    Intel actively killed ECC support on everything but the C2xx chipsets around the time the 60-series was introduced, so you would need a compatible CPU and chipset. Surprisingly, many older Core-based non-Xeon CPUs supported ECC, from Nehalem through Sandy Bridge.
  • marc1000 - Wednesday, March 30, 2016 - link

    Nice to know I'm not alone. That ECC part is fun, but I would not change my 16GB of normal RAM for some ECC RAM - I don't even have access to server-grade parts lol.

    Put me in line for an i5-8600K, when it drops to palatable prices of course :)
  • extide - Thursday, March 24, 2016 - link

    No, the Haswell Refresh was not a new series. They do a refresh like it almost every generation, although the Haswell refresh was much more widely talked about because of the Devil's Canyon chips. The architecture did not change at all. Kaby Lake will be the first true Optimization release.
  • III-V - Tuesday, March 29, 2016 - link

    Haswell Refresh was not an optimization, by any means. It was just a gap filler, and used the same silicon. Kaby Lake on the other hand will have some actual improvements baked in.
  • rangerdavid - Tuesday, March 22, 2016 - link

    Tick, Tock, Toe?
  • solipsism - Tuesday, March 22, 2016 - link

    Or Rikki-Tikki-Tavi!
  • Hulk - Wednesday, March 23, 2016 - link

    We have a bingo!
  • iamkyle - Tuesday, March 22, 2016 - link

    More like tick-tock-(buffering....)-tock
  • Solandri - Wednesday, March 23, 2016 - link

    Snap, Crackle, Pop
    Dewey, Cheatem, and Howe
    Klaatu, Barada, Nikto
    Kal, Ort, Por
  • BioHazardous - Wednesday, March 23, 2016 - link

    What shard did you play on?
  • redfirebird15 - Wednesday, March 23, 2016 - link

    Lol @ bart the fink
  • icrf - Tuesday, March 22, 2016 - link

    Is PAO pronounced? pow? pay-oh?
  • iamkyle - Tuesday, March 22, 2016 - link

    Pow, like Ellen Pao
  • nandnandnand - Tuesday, March 22, 2016 - link

    Pay-us
  • MrPoletski - Wednesday, March 23, 2016 - link

    It's pronounced:
    Pay-oh... paaaayay-oh... I want an Intel pay-oh and I want one now.
  • Murloc - Wednesday, March 23, 2016 - link

    pah-oh sounds better
  • bernstein - Tuesday, March 22, 2016 - link

    well this basically stretches product cycles even longer... sandy bridge is still going strong... desktop as well as notebook.
  • nandnandnand - Tuesday, March 22, 2016 - link

    These process shrinks are more important for notebooks than desktops, because power consumption is a much bigger factor.

    This is more good news for AMD. More time to show the world what Zen can do and possibly get Zen+ or a future chip near enough to Intel to beat it in more areas.
  • DigitalFreak - Tuesday, March 22, 2016 - link

    AMD would need a miracle for that to happen. Even if they got one, Intel has more than enough resources to quickly turn the tide back in their favor. AMD only exists at this point because Intel would have antitrust issues to deal with if they didn't.
  • Frenetic Pony - Tuesday, March 22, 2016 - link

    Huh? Resources have nothing to do with it. Intel's been shovelling money at Moore's Law and still isn't ahead, and current CPU architecture for silicon and modern apps is unifying towards a common optimization. To all intents and appearances, Zen looks like Core looks like Apple's architecture at low levels, more and more.

    We're getting into diminishing returns on all fronts as far as CPU architecture goes, and same goes with the advancement of Moore's law, which has benefited wide architecture like GPUs far more than CPUs as time has gone on. Until there's a major change, most likely from silicon to some other material, and both the process and architecture need to change dramatically, Intel's vast resources aren't actually going to do much for it except soak up the damage as profit margins drop.
  • Michael Bay - Wednesday, March 23, 2016 - link

    If Zen looks like Core down low, you can kiss that modern level of IPC goodbye. ^_^

    Seriously though, of course designs will converge on something that works best, and here's where Intel's Big Money is a great help, since they can pursue multiple research avenues simultaneously in hopes of at least one winning the day. AMD doesn't have such luxury.
  • ImSpartacus - Tuesday, March 22, 2016 - link

    Yeah, it's frustrating that AMD is so pitiful, but I don't see them seriously improving their situation any time soon. Still, I look forward to seeing Zen in the next year or two, if only because it'll be something new and potentially interesting.
  • BurntMyBacon - Wednesday, March 23, 2016 - link

    @DigitalFreak: "AMD would need a miracle for that to happen."

    It is looking very much like AMD will have access to a 14nm node before Intel leaves it, so ...

    @DigitalFreak: "Even if they got one, Intel has more than enough resources to quickly turn the tide back in their favor."

    I don't think many people are suggesting that Intel won't reach the next node well before AMD has access to it. That said, wouldn't it be nice to see every other or every third product release at node parity? Intel would still be a step or two ahead, given time to optimize for the new process, but AMD has historically introduced new architectures and new process nodes concurrently, so they could close the gap a bit during these overlap periods. In any case, they would be in a much better position competitively than they are currently.

    Consider the 65nm Phenom (Agena) architecture. The initial release was underwhelming and didn't really compete with its 45nm Core2 (Penryn) competition. It was widely believed that Phenom (Agena) was hindered by its small L3 cache. When the 45nm Phenom II was released, one of the major upgrades was the L3 cache. Interestingly, Phenom II (Deneb) was a worthy competitor to Core2 (Penryn), with the Phenom II 955 and the Core2 Quad Q9550 posting results as close as any AMD/Intel comparison since the pre-Pentium days. The problem is, it was now competing against Core ixxx (Nehalem/Westmere).

    If AMD had had access to 45nm in time to compete with Core2 (Penryn), Intel would likely have still been ahead, given the lack of time to optimize other parts of the architecture. However, just getting an appropriate cache in would have allowed Phenom to compete much better. AMD would have been able to put more money into developing the existing architecture, and might not even have felt the need to make the highly risky and radical alteration of the architecture into the Bulldozer (and derivatives) line. We may then have avoided the whole Sandy Bridge vs Bulldozer comparison that resulted in so little competition from the AMD side, forcing no more than incremental upgrades from Sandy Bridge - Ivy Bridge - Haswell - Broadwell - Skylake. Yes, there have been notable improvements, but the fact that so many people with an i5-2500K still don't feel the need to upgrade should tell you something. Meanwhile, AMD's "high-end" Piledriver is still on the same process node as Westmere (32nm). Their more mobile-oriented Excavator is only a half-node ahead of that (28nm) and not even close to node-competitive with Intel's 22nm tri-gate process, much less their 14nm tri-gate process.
  • webdoctors - Tuesday, March 22, 2016 - link

    Ya, I'm still rocking my Sandy Bridge desktop at home and it still hasn't shown any signs of being out of date. DDR4, USB 3.1, and probably some other minor things I'm missing out on, but I still have all the main things like PCIe 3.0 and virtualization for my VMs. It's really hard to convince anyone with a 5-year-old desktop to upgrade right now...
  • DigitalFreak - Tuesday, March 22, 2016 - link

    I'm in the same boat with my Ivy Bridge i7-3770K. It's more than enough for what I need, but the lack of newer features on the motherboard is an issue. Only 2 SATA 3 ports, etc. If I could run the 3770K in a Z170 motherboard, I'd do it in a heartbeat.
  • StevoLincolnite - Tuesday, March 22, 2016 - link

    I have Sandy Bridge-E. 3930K... Can still out-bench the 5930K once I throw overclocking into the mix. (The 3930K will overclock higher: 5GHz under water.)
    And will push past the 5960X at stock...
    Just don't see the point in upgrading this 5-year-old machine other than to save electricity; Intel has given me no reason to do so.
    Ironically... prices have skyrocketed since I built my 3930K. It was about $400 AUD; today its replacement, the 5930K, is $870 for the CPU alone. Ouch.
  • Murloc - Wednesday, March 23, 2016 - link

    saving electricity is often not a good reason considering how much you can spend on new parts.
  • ImSpartacus - Tuesday, March 22, 2016 - link

    Yeah, I have a Haswell machine and I have no use for USB3 (let alone 3.1). Stuff like CPU perf, RAM speed and PCIe bandwidth only matter when you have more GPU perf than any reasonable gamer would pursue.
  • maximumGPU - Wednesday, March 23, 2016 - link

    isn't SB PCIE 2.0?
  • extide - Thursday, March 24, 2016 - link

    Yeah, but SB-E supports 3.0
  • MrPoletski - Wednesday, March 23, 2016 - link

    My i7-920 is still going strong.
  • bigboxes - Wednesday, March 23, 2016 - link

    In my HTPC
  • Samus - Monday, March 28, 2016 - link

    Sandy Bridge was the last generation where Intel really nailed it. It capitalized on and simplified Nehalem's complex QPI platform, introduced competitive on-die graphics, and still remains competitive if not downright current in performance. It's been bland ever since.

    Ivy Bridge did almost nothing over Sandy Bridge but shrink the node, which came with its own compromises, and Haswell is the most controversial launch in Intel's recent history: from the FIVR, to Broadwell going mostly MIA, to botched chipset launches such as the early stepping of the 80-series that couldn't run the refresh CPUs, to the 90-series existing only for compatibility with a Broadwell that never materialized, then broken promises of Skylake compatibility. A lot of people who upgraded from even the X58 first-gen Core to Haswell have heavy regret to this day. All they really got was a slightly lower power bill.
  • melgross - Tuesday, March 22, 2016 - link

    I've been saying this for years. Intel, and others have seen slipping introductions for new process technology for some time.

    32nm delayed for 3 months.
    22nm delayed for 6 months.
    14nm delayed for 12 months.

    And with 14nm, we've seen Intel come out with simpler chips first, which is something they've never done before.

    Now, 10nm was supposed to come out in late 2016. Then it was delayed to early 2017, then late 2017. Anyone want to bet against the idea that as we move to late 2016, and early 2017, it will slip to early 2018?

    When we look at the cumulative delays so far, 10nm is already a year late.

    After that, we have the serious problem of 7nm. It's already been stated in various chip publications that FinFET doesn't work at 7nm, because the spacing will be too close, among other problems. But the three other candidate solutions haven't been working either. No one knows if they will work, and there's no other possible solution known at this time. While few doubt that 7nm will eventually be made to work, the time scale is in real question.

    As for 5nm, many chip experts still think it may not be possible.
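The slip pattern listed earlier in this comment (3, 6, then 12 months) doubles with each node. A tiny sketch of that extrapolation, purely curve-fitting on the numbers quoted above, not actual roadmap data:

```python
# Observed node delays quoted in the comment above, in months.
observed_delays = {"32nm": 3, "22nm": 6, "14nm": 12}

# If each node keeps slipping twice as long as the last (an assumption,
# not a roadmap), project the next two nodes:
delay = observed_delays["14nm"]
projections = {}
for node in ("10nm", "7nm"):
    delay *= 2
    projections[node] = delay

print(projections)  # {'10nm': 24, '7nm': 48}
```

On that (admittedly crude) trend, 10nm would run roughly two years late, which matches the comment's guess of a slip into early 2018.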
  • extide - Tuesday, March 22, 2016 - link

    What parts are 400mm^2? Until Broadwell-EP starts shipping, or Knights Landing, I think the biggest thing on 14nm is Xeon D. Or are some of those shipping already?
  • wicketr - Tuesday, March 22, 2016 - link

    If AMD were a serious competitor (along with a couple of other vendors), and Intel only had a 30-50% market share of desktop/laptop CPUs, I wonder if they'd be "optimizing" (aka slowing down research) as opposed to continuing at the relentless pace they once set.

    It seems that since they basically have a monopoly now, they are not nearly as interested in making their CPUs generationally faster. Moore's Law has basically ended, and CPUs are becoming mere optimizations of their former selves.
  • michael2k - Wednesday, March 23, 2016 - link

    I don't think they can make them generationally faster; the majority of their product is thermally limited to, like, 35W in a laptop. 135W parts are shipping as 12-core parts, maybe 16, because they can't physically hit 6GHz.
  • bug77 - Wednesday, March 23, 2016 - link

    That's a good point. With no real competition, the delays may well be an attempt to milk customers (though it's probably a combination of that and the added technical difficulties).
    However, next time the pressure on Intel won't come from AMD, but from Samsung.
  • Murloc - Wednesday, March 23, 2016 - link

    well there is the tablet and phone market though.
  • Meteor2 - Wednesday, March 23, 2016 - link

    The June Top500 is going to be interesting reading ;-)
  • stephenbrooks - Tuesday, March 22, 2016 - link

    "Intel would upgrade their fabrication plants to be able to produce processors with a smaller feature set,"

    Smaller feature SIZE, right? (Smaller feature set is amusing though - are they going RISC?)
  • MrSpadge - Wednesday, March 23, 2016 - link

    Definitely looking forward to fewer features in my new CPU!
  • Cygni - Tuesday, March 22, 2016 - link

    RIP Moore's Law
  • bug77 - Wednesday, March 23, 2016 - link

    I think Intel should fire Moore now :P
  • Spectrophobic - Tuesday, March 22, 2016 - link

    I have a feeling Intel (and/or other fabs) will get stuck on the same process for more than 3 generations in the near future.
  • tygrus - Tuesday, March 22, 2016 - link

    At some point they need to use new materials and processes to create faster transistors at a larger feature size and lower power than current ones. Carbon nanotubes, germanium, something new? Qubits are smaller, but I don't think they're economical to mass produce.
  • gurok - Tuesday, March 22, 2016 - link

    I have been calling it, "tick, tack, tock", after tic-tac-toe.
  • Gunbuster - Tuesday, March 22, 2016 - link

    Seeing as Intel has stuck to 4 cores for mainstream while they have 18-core Xeons for servers, I would not exactly say they are hard pressed on the competition or research front...
  • RamarC - Tuesday, March 22, 2016 - link

    Andy Grove passed away today, so perhaps this is another sign that the MBAs have control of my favorite US technology company...
  • testbug00 - Tuesday, March 22, 2016 - link

    And about 2.5 years after the fact, Intel finally outs it to the public....
    http://semiaccurate.com/2013/01/29/why-intels-tick...
  • Michael Bay - Wednesday, March 23, 2016 - link

    >citing demerjian
    >even reading demerjian
  • Pneumothorax - Tuesday, March 22, 2016 - link

    So I guess CPUs are going to stagnate like GPUs now?
  • shabby - Tuesday, March 22, 2016 - link

    Blame smartphones for the stagnation of GPUs; their SoCs are taking priority over GPUs at TSMC.
  • Notmyusualid - Wednesday, March 23, 2016 - link

    +1
  • Arnulf - Wednesday, March 23, 2016 - link

    Mobile SoCs (along with embedded GPUs!) are smaller than PC GPU chips and thus easier to manufacture on an immature process (better yields).

    TSMC is running a business, not playing favorites; whoever can shell out the money per wafer given current yields gets into the queue. Mobile companies (well, Apple) can afford 16nm production sooner, as there is less risk involved compared to production of large GPU dies, so we see mobile SoCs coming out first.
  • bcronce - Wednesday, March 23, 2016 - link

    Like GPUs? Is that sarcasm? GPUs have been surpassing Moore's law for the past decade. Over 100% increases in throughput every 18 months and more than 50% power reductions every 18 months.
  • JoeyJoJo123 - Wednesday, March 23, 2016 - link

    Increases in what? Performance or transistor count?

    Because Moore's law does not say anything about the performance doubling every 18 months, only that the amount of transistors that can be placed on the same size chip can be doubled every 18 months and cost about the same. Doubling the amount of transistors does not mean doubling the performance. It just means that it doubles the complexity of manufacturing that chip.

    If you're one of the sorry few that misunderstands Moore's law to be about performance gains, then you truly have my condolences.
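The distinction made above, that Moore's Law is about transistor count (not performance) doubling on a fixed cadence, can be sketched numerically. The 1-billion starting count and 24-month cadence below are illustrative assumptions, not real die data:

```python
# Sketch of Moore's-law scaling as described above: transistor count
# (not performance) doubling roughly every 18-24 months.
def transistors(start_count, years, months_per_doubling=24):
    """Projected transistor count after `years`, assuming a fixed
    doubling cadence. Purely illustrative numbers."""
    doublings = (years * 12) / months_per_doubling
    return start_count * 2 ** doublings

# A hypothetical 1-billion-transistor chip, ten years out:
print(round(transistors(1e9, 10) / 1e9, 1), "billion transistors")  # 32.0
```

The same ten years at an 18-month cadence would give roughly 100x instead of 32x, which is why the exact cadence people quote matters so much in these arguments.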
  • extide - Thursday, March 24, 2016 - link

    Moore's law is doubling the number of transistors every 18-24 months. Not throughput.
  • cobrax5 - Wednesday, March 23, 2016 - link

    You can't possibly think GPUs have stagnated, can you? Forget that they've been on 28nm for freaking ever; GPUs are highly parallel, and therefore can have more transistors thrown at increasing performance. The most recent gen GPUs are often >50% faster than the previous gen; that's a whole lot better than the 5-15% you get with CPUs, especially since CPUs are still highly dependent on single-threaded performance.
  • zodiacfml - Wednesday, March 23, 2016 - link

    This is simply Intel telling everyone that there's not much competition
  • Meteor2 - Wednesday, March 23, 2016 - link

    Intel compete with themselves. They want people to buy new chips. They don't want people to buy one chip then use it forever.

    Looks like 10 nm in 2017, 7 nm in 2020-21, and 5 nm in mid-2020s.
  • name99 - Wednesday, March 23, 2016 - link

    Perhaps Intel should have thought of that before randomly spraying new features across each processor generation in such an incoherent fashion that no one actually bothers to support them, because chances are their customers won't have the right chip to use the feature.
    *cough* TSX *cough* various flavors of AVX *cough* SHA *cough* ...
  • trane - Wednesday, March 23, 2016 - link

    The tick-tock was never a yearly cycle. It was always more like 1 year and 2-3 months per tick or tock. The first tick was in January. By Broadwell, that had slipped to December, one full year behind the tick-tock schedule. Now with three generations per process, they might actually accelerate generations from 1 year 2/3 months to 1 year.
  • naveenarur - Wednesday, March 23, 2016 - link

    Hickory-Dickory-Dock
  • Pork@III - Wednesday, March 23, 2016 - link

    Tick Tock was a lie, then came another lie. Capitalism.
  • cobrax5 - Wednesday, March 23, 2016 - link

    Yeah, like all those awesome CPUs produced by communist economies... hah...
  • Pork@III - Thursday, March 24, 2016 - link

    I lived better in times when processors were not yet available for individuals.
  • blzd - Friday, March 25, 2016 - link

    Oh no, not a "good ol' days" comment.
  • FunBunny2 - Wednesday, March 23, 2016 - link

    there was a time, aka WinTel, when the symbiosis betwixt M$ and Intel was sufficient to keep both obscenely rich. Windoze/Office would barely work on current cpu, but OK on the one about to be released. a cycle source and a cycle sink. since the great demand for Office came from, well offices, both companies had known demand, and growing. if M$ held up its end by bloating the OS and Office on schedule. 99.9995% of what gets done in Word/Excel can be handled by a low-end Pentium, and has, obviously, for years. these days the symbiosis has shifted to gamers, and we know how deep pocketed and rampant they are. well, not so much. thus the SoC-ing of the desktop cpu. and such. we've reached the asymptote of computing. find someplace else to look for rapid growth.
  • willis936 - Wednesday, March 23, 2016 - link

    You can spot a crazy ramble when you see "M$" at least twice in four sentences.
  • BrokenCrayons - Wednesday, March 23, 2016 - link

    I've just gotten a call from the 1990's. They say they'd like their M$ back.
  • redfirebird15 - Wednesday, March 23, 2016 - link

    This was bound to happen eventually, and even if Intel could mass-produce 10nm chips right now, the only real benefit would be the supposed power savings. Intel has put so much focus on low-power use cases that the power users who want/need absolute performance are still content with Sandy Bridge, except those who require the most cores available, in which case Haswell-E is available.

    I get it. We all want to lower power consumption and heat output, especially businesses with hundreds or thousands of laptops. But we power users are desperate to see real performance gains from architecture and process improvements. Sadly, Sandy Bridge was such a radical jump in performance that Intel had set the bar too high for the next generations.

    I jumped from a Socket 939 Opteron dual-core to the i7-920. Massive improvement. I bought an i7 2500 Sandy Bridge. Best proc ever. I have an Ivy Bridge i7 laptop and now a Skylake i3-6100. Day to day, with an SSD in each, they all perform the same.

    The power users who need absolute IPC performance are the ones getting screwed each generation. But that is such a small subset of Intel's sales, I assume they just don't care.
  • Murloc - Wednesday, March 23, 2016 - link

    who are these power users?
    I mean, people I know who use lots of CPU computing resources just make the simulation run on university shared computing resources and stuff.
  • redfirebird15 - Wednesday, March 23, 2016 - link

    I'm definitely not one, but I'm sure some folks need the absolute best IPC available for their specific applications. Media professionals, I suppose, would want the best IPC so they can finish their current project and move on.

    The way I see it, there are pretty much 4 types of consumers: those who want the absolute lowest-cost CPUs and don't care about performance, those with a limited budget who want the best perf/$, those who want the best perf/watt due to power and heat concerns, and those who need the best CPU to augment a specialized application, i.e. highest single-thread perf or most cores/CPU.

    I suppose the power users I'm referring to have a job/hobby where time is valuable, so they must find the best compromise of the above.

    Brandon
  • BrokenCrayons - Wednesday, March 23, 2016 - link

    For those citing Moore's Law: Moore's observation wasn't entirely tied to physics and engineering. There are other drivers to consider that are much more closely related to industry economics, driven by customer demand and intertwined with software. While many of us get a gleeful little sparkle in our eyes when looking at new hardware, we often forget that the hardware is absolutely not the alpha and omega of computing. In fact, the reason the hardware exists is to push the software, and it's those programs that satisfy a variety of human wants and needs. Software hasn't been a major driver of new hardware adoption for quite some time now, and only demands the purchase of new equipment at a relatively relaxed pace compared to earlier periods in computing industry history (say, the Win9x era, for example).

    Intel's lengthening of time on each manufacturing process is as much tied to economic factors as it is to engineering challenges. Credible competitive threats simply don't exist at present in Intel's primary processor markets. New system purchases aren't putting much pull pressure on their supply chain. Software that requires vast improvements in CPU compute power is slow to emerge. Certainly, new manufacturing processes are becoming more difficult to develop, but we would be remiss if we didn't consider other factors besides physics.

    Then again, I still have a Q6600 at stock clocks in my last remaining desktop computer so what the crap do I know about any of this?
  • cobrax5 - Wednesday, March 23, 2016 - link

    I mean, that's true in a general sense, but not absolutely true. Think about how much processing power, bandwidth, storage, etc. it takes to run 4K video. You couldn't do that 10 years ago. Mobile SoCs can do it now. My TV can stream 4K video with its (probably mobile-derived) SoC inside. There have been huge strides in the specialized blocks of silicon that are in every CPU/SoC sold now, for things like encryption, video encode/decode, virtualization, etc.
  • orangefr2 - Wednesday, March 23, 2016 - link

    For intel: "Once you stop innovating you lose!"
    ― Me

    For AMD: “Vulnerability is the birthplace of innovation, creativity and change.”
    ― Brené Brown
  • Murloc - Wednesday, March 23, 2016 - link

    1. Intel is not stopping innovation; at worst it's slowing down/not increasing investment.
    2. That quote refers to people. Vulnerable companies most often just die.
  • Spartus - Wednesday, March 23, 2016 - link

    While the new PAO or ‘Performance-Architecture-Optimization’

    Needs Fixing (process not performance)
  • Murloc - Wednesday, March 23, 2016 - link

    I don't think so.
    1. Process shrink
    2. New architecture
    3. Optimization of said architecture.
    Rinse and repeat.

    How does performance make sense?
  • Murloc - Wednesday, March 23, 2016 - link

    derp I'm a retard
  • Dribble - Wednesday, March 23, 2016 - link

    So this actually means:
    process->architecture->0.1Ghz clock speed bump
    or tick->tock-not_a_lot
  • Pork@III - Wednesday, March 23, 2016 - link

    Streaming SIMD Extensions (the first generation), introduced in 1999 and still present in all processors today, may never come to an end. 17 years of one and the same. Ancient technology wasting space in processors' instruction caches.
  • extide - Thursday, March 24, 2016 - link

    Uhhh, SSE is used quite a bit these days. It is one of those advanced instruction sets you can pretty much count on being supported, so it actually gets used. Very helpful with stuff like compression or video encoding.
  • jsntech - Wednesday, March 23, 2016 - link

    This isn't so much news as it is stating that which we all knew would eventually happen. As theoretical limits of process technology push back harder and harder, it will be very interesting to see how parallelization plays into 'solving' some of those limits more and more. Of course we've seen lots of it already, but I think it's just the tip of the iceberg.
  • cobrax5 - Wednesday, March 23, 2016 - link

    One way to do it is they'll move from planar to 3D/TSV/stacking to pack more into less space. This is the only way I see chip makers integrating more pieces into one die/package.
  • damianrobertjones - Wednesday, March 23, 2016 - link

    In other words: There is no high end competition so we're going to work against the low end.
  • Shadowmaster625 - Wednesday, March 23, 2016 - link

    Its more like Tick+Tock+NoCompetitionSoWhyBotherAMIRITE?
  • Senti - Wednesday, March 23, 2016 - link

    Typical Ian article: can't even properly copy what PAO is from the slide above. Performance? You wish.
  • jasonelmore - Wednesday, March 23, 2016 - link

    They don't want to increase the price of the chips, so they are stretching each lithography node out to 3 years to pay for the R&D and the equipment. Meanwhile, all of the competitors are gonna catch up, with GlobalFoundries, partnered with IBM, already working on scaling up 7nm.

    Intel better get their ducks in a row or they will lose their ace of spades.
  • extide - Thursday, March 24, 2016 - link

    Intel is still quite a bit ahead in terms of actual transistor density. Remember, TSMC's 16nm is still based on the 20nm BEOL and is not really a shrink from 20nm. TSMC claims that 20nm is 1.9x as dense as 28nm, and 16nm is 2.0x as dense as 28nm, so pretty much the same as 20nm. Intel's 14nm was a true shrink from their 22nm process. TSMC's 10nm process will finally again be a real shrink, and THAT process will be comparable to or maybe a bit better than Intel's current 14nm.
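The density arithmetic in the comment above can be checked directly from the multipliers it quotes (TSMC's own published figures relative to 28nm; illustrative only):

```python
# Relative densities vs 28nm, as quoted in the comment above.
density_vs_28nm = {"28nm": 1.0, "20nm": 1.9, "16nm": 2.0}

# 16nm over 20nm: barely a shrink at all, per the comment.
ratio_16_over_20 = density_vs_28nm["16nm"] / density_vs_28nm["20nm"]
print(f"16nm is {ratio_16_over_20:.2f}x the density of 20nm")  # ~1.05x
```

A ~5% density gain is consistent with the claim that 16nm reuses the 20nm back end of line and is not a true shrink.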
  • lord_anselhelm - Thursday, March 24, 2016 - link

    PAO or ‘Process-Architecture-Optimization'?

    NO! They've completely missed the obvious successor here: 'Tic-Tac-Toe'!

    Wasted opportunity.
  • elabdump - Thursday, March 24, 2016 - link

    > Media professionals i suppose would want the best IPC so they can finish their
    > current project and move on.

    They should blame the software vendors for not going multi- or many-core.
  • Wolfpup - Tuesday, April 5, 2016 - link

    This actually died almost immediately after it started. There was no real tick (or tock, whichever) for Nehalem. There was another CPU, but it was dual-core only, and moved the memory controller off-die again (I'm not even sure if it launched on desktops, and not sure why you'd want it on desktops).

    Then of course the whole thing has been slowing down for years.

    Sandy Bridge should have launched in 2010, and last year we should have gotten the successor to Skylake.

    AND of course the amount of performance difference between these architectures is slowing down too...although Intel probably could make bigger chips if they needed to.

    But really this whole thing has been dead or dying since 2 years after it began.
