In a short tweet posted to their Twitter feed yesterday, Intel revealed – or rather confirmed – the launch date for their first discrete GPU developed under the company’s new dGPU initiative. The otherwise unnamed high-end GPU will be launching in 2020, a short two to two-and-a-half years from now.

The tweet was posted amidst reports that Intel had given the same details to a small group of analysts last week, and serves to confirm those reports. The nature of the meeting itself hasn’t been disclosed, but Intel regularly gives analysts broad timelines for new technologies as part of outlining their plans to stay on top of the market.

This new part would be the first GPU to come out of Intel’s revitalized GPU efforts, which kicked into high gear at the end of 2017 with the hiring of former AMD and Apple GPU boss Raja Koduri. Intel, of course, is in the midst of watching sometimes-ally and sometimes-rival NVIDIA grow at a nearly absurd pace thanks to the machine learning boom. So Intel’s third shot at dGPUs is ultimately an effort to establish themselves in an accelerator market that is no longer niche, and that is increasingly pulling away customers who previously would have relied entirely on Intel CPUs.

Interestingly, a 2020 launch date for the new discrete GPU falls within the window we had been expecting for the project. But the long development cycle for a high-end GPU means that this project was undoubtedly started before Raja Koduri joined Intel in late 2017 – most likely it would have needed to kick off at the start of that year, if not in 2016 – which implies that Koduri has inherited an existing Intel project rather than starting from scratch. Whether this is an evolution of Intel’s existing Gen graphics architecture or an entirely new design remains to be seen, as there are good arguments for both possibilities.

Intel isn’t saying anything else about the GPU at this time, though we do know from Intel’s statements when they hired Koduri that they’re starting with high-end GPUs – a fitting choice given the accelerator market Intel is going after. This GPU is almost certainly aimed at compute users first and foremost, especially if Intel adopts the kind of bleeding-edge strategy that AMD and NVIDIA have started to favor. But Intel’s dGPU efforts are not entirely focused on professionals: the company has also confirmed that they want to go after the gaming market as well, though what that would entail – and when – is another question entirely.

Source: Intel

Comments

  • peevee - Wednesday, June 13, 2018 - link

    "looking at Intel's efforts outside their core and foundry business, well it sucks"

    Its core business hasn't seen significant architectural progress since Sandy Bridge.
    Its foundry botched the "10nm" node in a spectacular fail.

    That is what you get with an accountant as your Chairman.
  • Diji1 - Thursday, June 14, 2018 - link

    Sucks how? The main reason it exists is to avoid adding the cost of a GPU to non-gaming systems. It's done pretty well at that.
  • boeush - Wednesday, June 13, 2018 - link

    Larrabee, the next generation?
  • idealego - Wednesday, June 13, 2018 - link

    Intel released a discrete GPU called the i740 back in 1998, which is why the article title has "(Modern)" in it:
    https://en.wikipedia.org/wiki/Intel740
  • peevee - Wednesday, June 13, 2018 - link

    2.5 years? How hard would it be to scale up the iGPU from Ice Lake and add an HBM2 controller for a first version?

    Looks like Intel's engineering organization sucks. And that is not the first indication, obviously.
  • sheh - Wednesday, June 13, 2018 - link

    You already have some FreeSync TVs:
    https://www.pcworld.com/article/3278593/gaming/amd...

    And Intel spoke of supporting adaptive sync 2 years ago, but I don't know what happened in the end.
  • eastcoast_pete - Wednesday, June 13, 2018 - link

    While I love the idea of NVIDIA and AMD finally getting competition in dedicated graphics, I share the "believe it when I see it" view. Also, I can't shake the impression that Intel's dedicated graphics, once/if they materialize, will be the leftover table scraps from their AI/machine learning efforts.
  • Machinus - Thursday, June 14, 2018 - link

    Will these be built out of x86 cores? Or does Intel have a different GPU unit that it's going to build the cards out of?
  • peevee - Thursday, June 14, 2018 - link

    Of course, their iGPU has nothing to do with x86.
    Modern "all-purpose" architectures, be it x80, x68, ARMv8 etc are really NO-PURPOSE. They are useless for anything but high-level control of many many other processors on other architectures which actually do all the work. Not just GPU, but photo encoding/decoding/processing, video encoding/decoding/processing, sound, wireless and wired connections, display control (not the same as GPU BTW), AI etc etc are handled by specialized processors because the old ARM/Intel etc are totally inadequate despite the cost of them. They are 100 times worse in performance per W and often in performance per area compared to sane designs.
  • MJDouma - Thursday, June 14, 2018 - link

    Did they say if it would be manufactured on 14nm or 10nm? XD
