Intel has published a whitepaper on its new Gen11 processor graphics, providing details about the underlying architecture. The upcoming Sunny Cove processors and Gen11 graphics were unveiled back in December at Intel's Architecture Day, where Intel stated that Gen11 was "expected to double the computing performance-per-clock compared to Intel Gen9 graphics" – obviously a massive improvement over its current offerings. Intel promises up to 1 TFLOP of performance from Gen11, with its 64 EUs (execution units) and other improvements providing up to a 2.67x increase over Gen9 – though Intel does clarify that "there may be different configurations," so we will very likely see the usual segmentation.
"The architecture implements multiple unique clock domains, which have been partitioned as a per-CPU core clock domain, a processor graphics clock domain, and a ring interconnect clock domain. The SoC architecture is designed to be extensible for a range of products and enable efficient wire routing between components within the SoC."
Gen11 graphics will be based on Intel’s 10nm process, with architectural refinements that promise significant performance-per-watt improvements, according to Intel. Intel also states that memory bandwidth has been addressed to meet the demands of the increased potency of the GPU, with improvements to compression, larger L3 cache size, and increased peak memory bandwidth. All major graphics APIs are supported including DirectX, OpenGL, Vulkan, OpenCL, and Metal – the last of which makes sense as these will very likely be powering the next generation of Apple's MacBook line.
Intel states that beyond the increases in compute and memory bandwidth, Gen11 will introduce "key new features that enable higher performance by reducing the amount of redundant work", and list Coarse pixel shading (CPS) and Position Only Shading Tile Based Rendering (PTBR) among them. Many more details are provided in the document, available at the source link (warning, PDF).
“Intel promises up to 1 TFLOP of performance from Gen11, with its 64 EUs (execution units) and other improvements providing up to a 2.67x increase over Gen9”
I can’t help but wonder if they’re talking about the current 24-EU graphics.
Because Skylake had Iris Pro: the i7-6770HQ was a 72-EU GPU, and while it (coupled with its 64MB of on-chip eDRAM) was definitely better than the HD 530 in the regular chips, I wouldn’t favorably compare it to pretty much any discrete GPU except the very bottom of the barrel (or ones much older.)
So I sure hope that Gen 11 on a per-EU basis is significantly better than Gen 9 (and what happened to Gen 10? Did they just skip it, like Windows 9?)
So below is TPU’s GPU database entry for Vega 11 on the 2400G, and the 3000-series APU parts at 12nm/Zen+ are on the way. So what of Intel’s Gen 11 shader core counts, as well as the TMU and ROP counts and their respective texture and pixel fill rates? It’s hard to judge gaming performance without texture and pixel fill rates, as total FP/Int FLOPS rates alone do not provide the full picture. Shader core counts are another bit of info that I would like to see in any whitepaper, along with TMU and ROP counts and the texture and pixel fill rate metrics.
If Intel can produce higher pixel fill rates along with higher texture rates, then that’s more of a plus than what the FP/Int FLOPS metrics alone can show.
These rates, among others, are also important:
•Bilinear filtering.
•Trilinear filtering.
•Anisotropic filtering (most expensive, best visual quality)
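For anyone wondering why those filters differ in cost: bilinear blends the 4 nearest texels, trilinear does that twice across two mip levels (8 texels), and anisotropic takes many such taps along the axis of distortion. A minimal bilinear sketch in Python (illustrative only, not any driver’s actual implementation):

```python
# Bilinear filtering: sample the 4 nearest texels and blend them by the
# fractional texel coordinates. Trilinear repeats this on a second mip
# level, which is roughly why it doubles the sampling cost.
def bilinear(texture, u, v):
    """Sample a 2D list of floats at fractional texel coords (u, v)."""
    x0, y0 = int(u), int(v)            # top-left texel
    fx, fy = u - x0, v - y0            # fractional blend weights
    x1 = min(x0 + 1, len(texture[0]) - 1)
    y1 = min(y0 + 1, len(texture) - 1)
    top = texture[y0][x0] * (1 - fx) + texture[y0][x1] * fx
    bot = texture[y1][x0] * (1 - fx) + texture[y1][x1] * fx
    return top * (1 - fy) + bot * fy

tex = [[0.0, 1.0],
       [1.0, 2.0]]
print(bilinear(tex, 0.5, 0.5))  # 1.0, the average of the 4 texels
```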
Vega 11 Graphics on the 2400G:
Shading Units: 704
TMUs: 44 (Texture Rate: 54.56 GTexel/s)
ROPs: 8 (Pixel Rate: 9.920 GPixel/s)
Compute Units: 11
FP16 (half) performance: 3.492 TFLOPS (2:1)
FP32 (float) performance: 1.746 TFLOPS
FP64 (double) performance: 109.1 GFLOPS (1:16)
GPU clock: 300 MHz, which can be boosted up to 1240 MHz.
CPU: Base clock: 3.6 GHz, Max boost clock: 3.9 GHz (from AMD’s entry)
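As a sanity check, those TPU numbers are internally consistent — the fill rates and FLOPS are just unit counts multiplied by the boost clock (a quick sketch; the 2 FP32 ops per shader per clock and 2:1 FP16 ratio are taken from the figures above):

```python
# Verify the Vega 11 figures at the 1240 MHz boost clock.
boost_ghz = 1.24
shaders, tmus, rops = 704, 44, 8

fp32_tflops = shaders * 2 * boost_ghz / 1000   # 2 FP32 ops/clk (FMA)
fp16_tflops = fp32_tflops * 2                  # quoted 2:1 rate
texel_rate  = tmus * boost_ghz                 # GTexel/s
pixel_rate  = rops * boost_ghz                 # GPixel/s

print(round(fp32_tflops, 3))  # 1.746 TFLOPS
print(round(fp16_tflops, 3))  # 3.492 TFLOPS
print(round(texel_rate, 2))   # 54.56 GTexel/s
print(round(pixel_rate, 2))   # 9.92 GPixel/s
```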
Strangely enough, TechPowerUp has no entry in its CPU database for the 2400G, but they do have a GPU database entry for the “Vega 11” graphics. The 3000-series APUs are starting to be announced in products that will begin shipping over the next few months.
Also, I’d expect that some Zen 2 based APUs (Navi graphics) on TSMC’s 7nm process node will be known about towards the end of 2019. So Intel will be forced to go beyond the top-end Gen 11 EU counts and/or get Gen 12 ready ASAP. The question that always must be asked about Intel’s graphics is how many SKUs will get the best graphics, and whether the majority of Intel’s offerings will only come with some mid-range variant of Gen 11 graphics.
Here is a nice lecture on textures from UC Berkeley:
“Lecture 6:
Texture Mapping”
https://cs184.eecs.berkeley.edu/uploads/lectures/06_texture/06_texture_slides.pdf
Rick C: The 64-EU part with 1 TFLOP of shading power will now be the GT2 baseline. So it will indeed jump from 24 EUs to 64 EUs for most people. It should result in a 1.5-2.5x increase in performance, for an average of about 2x (2x according to Intel).
GT4 with its 72 EUs really underperformed for all the power it used, the eDRAM, and the compute power it had. It was also very expensive. This will likely beat it and be available in nearly every Intel CPU. That’s the really significant thing.
HereAndNowAndLaters:
Gen 11 has 8 samplers, meaning 8 ROPs and 32 TMUs. Its FP32 performance is what will be 1 TFLOPS, assuming it’s at 1 GHz. FP16 will be double that.
Of course, those are higher-level architectural features. Lower-level details are what determine the final performance.
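For reference, the ~1 TFLOP figure falls straight out of the EU arithmetic, assuming Gen11 keeps Gen9’s two SIMD-4 FP32 pipes per EU with FMA (an assumption on my part) and the 1 GHz clock mentioned above:

```python
# Back-of-envelope Gen11 FP32 throughput at an assumed 1 GHz.
eus = 64
flops_per_eu_per_clk = 2 * 4 * 2   # 2 pipes * SIMD-4 * (mul+add), as on Gen9
clock_ghz = 1.0                    # assumed clock, not confirmed by Intel

fp32_gflops = eus * flops_per_eu_per_clk * clock_ghz
print(fp32_gflops)       # 1024.0 GFLOPS, i.e. right at ~1 TFLOP
print(fp32_gflops * 2)   # FP16 at the 2:1 rate would be double
```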
BTW PCPER:
I really hate your CAPTCHAS. It’s insulting to your readers, and the image change is super slow.
What’s the fill rate on those ROPs in GPixels per second, and ditto for the 32 TMUs in GTexels per second? Intel needs to show the shader, ROP, and TMU counts and the relevant giga-whatevers per second.
Tom’s hardware is listing some rumored SKUs:
Model Number, Codename, Tier, Execution Units, Shading Units
Intel Iris Plus Graphics 950 iICL11LPGT2U6425W GT2 64 512
Intel Iris Plus Graphics 940 iICL11LPGT2U64 GT2 64 512
Intel Iris Plus Graphics 940 iICL11LPGT2U48 GT2 48 384
Intel Iris Plus Graphics 930 iICL11LPGT2Y64 GT2 64 512
Intel Iris Plus Graphics 930 iICL11LPGT2Y32 GT2 32 256
Intel UHD Graphics 920 iICL11LPGT2U32LM GT2 32 256
Intel UHD Graphics 910 iICL11LPGT2Y32LM GT2 32 256
Intel UHD Graphics, Gen11 LP iICL11LPGT2Y48 GT2 48 384
Intel UHD Graphics, Gen11 LP iICL11LPGT2Y48LM GT2 48 384
Intel UHD Graphics, Gen11 LP iICL11LPGT2U48LM GT2 48 384
Intel UHD Graphics, Gen11 LP iICL11LPGT2U32 GT2 32 256
Intel UHD Graphics, Gen11 LP iICL11LPGT0 GT0 N/A N/A
Intel UHD Graphics, Gen11 LP iICL11LPGT0P5 GT0 N/A N/A
And only 2 SKUs with 64 EUs!
P.S. WCCF has a table with more information, but they do not list a source, so I’m not really trusting the figures. And if any website gets the correct pixels-per-clock metrics, then the website should do the math and give that in GPixels/sec, GTexels/sec, etc. So whatever metrics are confirmed by the clock rate need to be computed in giga-whatevers/second. And that includes any:
“•Bilinear filtering.
•Trilinear filtering.
•Anisotropic filtering (most expensive, best visual quality)” rates per second also.
The autoplaying video content, more than the CAPTCHAS, is what should earn an ad block in Chrome/any browser. I even hate videos that load and eat into data caps even if the video does not autoplay!
Those are not SKUs. SKUs are the ones that are sold to consumers. These are merely some of the names they could use.
Do you not know how to calculate fill rate? It’s the number of relevant units * frequency.
From the Vega example above:
ROPs: 8 (Pixel Rate: 9.920 GPixel/s)
9.92/8 = 1.24GHz
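That back-of-envelope math as a quick sketch — given any two of fill rate, unit count, and clock, the third follows:

```python
# Fill rate is unit count * clock; clock is recoverable by dividing back.
def fill_rate(units, clock_ghz):
    return units * clock_ghz       # Giga-<pixels or texels>/s

def implied_clock(rate, units):
    return rate / units            # GHz

print(fill_rate(8, 1.24))          # 9.92 GPixel/s for Vega 11's 8 ROPs
print(implied_clock(9.92, 8))      # 1.24 GHz boost clock, as above
```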
You are also vastly simplifying things. Low-level detail is what will determine the final performance.
You obviously do not know what you are talking about, as I’m not asking for any clock speeds on Vega; I’m wanting the pixel fill rates on the Intel Gen 11 64-EU part. What are the pixel fill rates on all the Intel parts?
The pixel fill rate on the Radeon Vega integrated graphics is known, but the Intel parts’ are not. You do the math on the Intel parts: the parts listed may not have SKU numbers, but they have numeric IDs.
Do the math for the INTEL PARTS’ texture rates also, including all the various levels of filtering: bilinear filtering, trilinear filtering, and anisotropic filtering, etc. Shader counts, ROP counts, TMU counts, cache sizes, etc.!
I’m tired of specifications missing from the GPU/graphics hardware feature sets on Intel’s CPUs with integrated graphics. What is Intel trying to hide!