One of the demos that AMD had at CES was their new Vega architecture running DOOM with Vulkan on Ultra settings at 4K resolution. With this configuration, the pre-release card was coasting along in the high 60s / low 70s frames per second. Compared to PC Gamer’s benchmarks of the Vulkan patch (ours were focused on 1080p), this puts Vega somewhat ahead of the GTX 1080, which averages in the low 60s.
Some of the comments note that, during one of the melee kills, the frame rate stutters a bit, dropping down to about 37 FPS. That’s true, and I included a screenshot of it below, but momentary dips sometimes just happen. It could even be a bug in pre-release drivers for a brand new GPU architecture, after all.
Yes, the frame rate dipped in the video, but stutters happen. No big deal.
As always, this is a single, vendor-controlled data point. There will be other benchmarks, and NVIDIA has both GP102 and Volta to consider. The GTX 1080 is only ~314 mm², so there’s a lot more room for enthusiast GPUs to expand on 14nm, but this test suggests Vega will at least surpass it. (When a process node is fully mature, you will typically see low-yield chips up to around 600 mm².)
PC Perspective's CES 2017 coverage is sponsored by NVIDIA.
Follow all of our coverage of the show at https://pcper.com/ces!
When people are pointing to a single drop to 37 FPS, that’s a good sign. Much better than pointing at an average of 50 FPS, for example.
As for Volta, with Nvidia not showing a GTX 1080 Ti, either they are not going to show a GTX 1080 Ti at all (making their partners even angrier), or the Volta GeForce cards have been pushed back to an autumn launch.
Yeah, exactly… especially for a pre-release (or even a relatively new) product with early drivers, during a triggered, graphically intensive event. It could have just been fetching a texture or something.
Rumors have it that the 1080 Ti will be shown or released around PAX East in March.
With AMD still a long way from releasing Vega, there was little point in revealing the 1080 Ti officially at CES. Better to do it at a more gaming-centric venue.
Rumors also have it that Nvidia is doing some driver rework, the results of which we’re supposed to see in a few months’ time.
PAX East would be a suitable time to introduce that driver rework together with the 1080 Ti.
I haven’t seen much credible information around consumer-oriented Volta, so I won’t comment on that part.
Awesome, with no competition we can expect the 1080 Ti to cost around $900. The GTX 570, the cut-down big-Fermi chip, launched at $349; cut-down big Pascal will possibly be $900 to $1,000. As consumers, we should all be pulling for some competition.
You do realize that Nvidia cannot come out with a GTX 1080 Ti and then follow shortly after with Volta cards. It would not be good for the consumer to throw $700-$1,000 at a card only to see it become old in less than 3-6 months, and it’s not good for Nvidia’s partners, who would love to have a GTX 1080 Ti in the market if they can’t have the Titan.
On the other hand, this company has no respect for its consumers and its partners, so who knows.
This demo was far from impressive. I am in the mid 50s in Doom with a 390X at 1150/1600 with the same settings. Vega looks to be on par with the 1080 with a much bigger die, which is not good. The 1080 is an x80 chip in name only; the Fermi x60 chips were around the same die size as the 1080 and cost a whole lot less. AMD cannot bring out a GPU that only provides 1080 performance a year late, with Volta waiting in the wings with a possible 2 GHz base clock. They needed to release Vega this month for around $499 if they wanted to stay relevant.
AMD is finally bringing out a video card that is going to be competitive with the GTX 1080 around a year after the GTX 1080 launched, give or take a few months, whilst hyping it up to the max as some sort of huge achievement? lol
First off, they have to figure out just how much extra cache (L1, L2, L3, I$, D$) is taking up space on Vega’s die, and how much extra circuitry is needed for the programmable/primitive shaders and the other compute resources AMD dedicates more die area to. Is this test GPU even the final stepping that makes it to market, or will there be more tweaks to firmware, drivers, hardware, and microcode? Those frame rate issues sound like they could be a driver issue. So until the final SKUs are released to market and independently tested, it’s best to assume that all demonstrated GPUs are engineering samples and that there will be more tweaking done up until final release, and very likely some afterwards, as is usual for any new GPU product.
Doom itself needs more time to be tweaked for Vega, as will all of the games/gaming engines and the DX12/Vulkan APIs. Vega is very different from the GCN versions that came before, and more information is still needed to answer other questions. The NCU is new, as are the primitive shaders, the Draw Stream Binning Rasterizer, and the HBM cache controller with its large virtual address space.
Looking at the HBM cache controller on Vega makes me think there may be options for HBM2 and GDDR5X to be used together, with the high-bandwidth cache controller using predictive, AI-style caching logic to let the Vega GPU work mostly from HBM2 while still drawing on a larger pool of GDDR5X for total texture/data store. This feature could make for some lower-cost consumer SKUs while still allowing some of the bandwidth benefits of HBM2. It will probably be utilized on any Zen/Vega APUs as well, with maybe one or two HBM2 stacks acting as cache for a larger pool of regular DDR4 DIMM-based DRAM on laptop SKUs, so the integrated APU graphics work mostly from the HBM2 stack(s) and are not starved for bandwidth if the laptop OEM decides to give the APU only a single channel to DDR4. That HBM cache controller will probably even allow textures to be staged in regular DDR4 memory and brought into the HBM2 cache in the background, letting laptop APU SKUs use regular system memory for extra texture store, swapped into the HBM cache for the GPU to work on with no loss of effective bandwidth for the integrated graphics.
That HBM cache controller could also manage textures paged to an on-card PCIe NVM SSD for discrete gaming GPU SKUs, providing a large amount of directly addressable, pageable virtual texture/data memory. This pageable memory would be managed by the HBM cache controller rather than by the game: the game maker would only have to ask for as much memory as the game needs, and the controller would handle the page swaps in the background without any direct management, just as CPUs manage virtual memory paged to paging files.
Using HBM2 as a cache is a great way to hide the latency and bandwidth deficiencies of slower RAM/storage (I/O) from the GPU, especially for any laptop-based Zen/Vega APU SKUs.
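The layered-memory idea being described can be sketched as a toy model. To be clear, this is an illustrative guess at the general behavior (a small fast tier standing in for HBM2, backed by a larger slow pool standing in for GDDR5X/DDR4/SSD, with least-recently-used eviction), not AMD’s actual controller logic, which has not been disclosed:

```python
from collections import OrderedDict

class TwoTierCache:
    """Toy model of a small fast tier (think HBM2) backed by a large slow pool.

    Pages are brought into the fast tier on demand and evicted
    least-recently-used; the client just asks for pages by id and
    never manages placement itself, like CPU virtual memory paging.
    """

    def __init__(self, fast_capacity):
        self.fast_capacity = fast_capacity
        self.fast = OrderedDict()   # page_id -> data, kept in LRU order
        self.slow = {}              # backing pool (GDDR5X / DDR4 / NVM SSD)
        self.hits = 0
        self.misses = 0

    def write(self, page_id, data):
        self.slow[page_id] = data   # backing store always holds the page

    def read(self, page_id):
        if page_id in self.fast:            # fast-tier hit
            self.fast.move_to_end(page_id)  # refresh LRU position
            self.hits += 1
            return self.fast[page_id]
        self.misses += 1                    # miss: page in from the slow tier
        data = self.slow[page_id]
        self.fast[page_id] = data
        if len(self.fast) > self.fast_capacity:
            self.fast.popitem(last=False)   # evict the least recently used page
        return data

cache = TwoTierCache(fast_capacity=2)
for i in range(4):
    cache.write(i, f"texture-{i}")
cache.read(0); cache.read(1)   # both miss; fast tier now holds pages 0 and 1
cache.read(0)                  # hit
cache.read(2)                  # miss; evicts page 1
print(cache.hits, cache.misses)  # -> 1 3
```

The point of the sketch is the division of labor: the game (the `write`/`read` caller) sees one flat address space, while the controller decides in the background which pages live in the fast tier.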
Good, I guess, although I’m starting to feel that when it comes to Intel / Nvidia / AMD, I am always reading actual reviews of shipping product for the first two and only teasers/demos of the latter (at the high end, which is what I care about).
My fear is that by doing these demos, what AMD is really doing is signaling to Nvidia “all clear to phone it in and/or overcharge on the GTX 1080 Ti” as usual. I’d rather they left them guessing so they feel more pressure to come out with the best they can. Or maybe AMD actually has much better cards and is trying to fake them out 😉
AMD loves to demo their cards with either a trashy game or a non-graphics-intensive game… why don’t they demo BF1/Titanfall/Tomb Raider?
ID10T error