Vega meets Radeon Pro
The Radeon Pro lineup expands to feature Vega
Professional graphics cards are a segment of the industry that can look strange to gamers and PC enthusiasts. From the outside, it appears that businesses are simply paying more for hardware that is almost identical to the gaming counterparts from both NVIDIA and AMD.
However, a lot goes into a professional-level graphics card that makes all the difference to the consumers they are targeting. From the addition of ECC memory to protect against data corruption, all the way to a completely different driver stack with specific optimizations for professional applications, there's a lot of work put into these particular products.
The professional graphics market has gotten particularly interesting in the last few years with the rise of NVIDIA's TITAN-level GPUs and AMD's "Frontier Edition" graphics cards. While lacking ECC memory, these new GPUs have brought over some of the application-level optimizations while providing a lower price for more hobbyist-level consumers.
However, if you're a professional that depends on a graphics card for mission-critical work, these options are no replacement for the real thing.
Today we're looking at one of AMD's latest Pro graphics offerings, the AMD Radeon Pro WX 8200.
| | Radeon Pro WX 8200 | Quadro P5000 | Titan Xp | RTX 2080 |
|---|---|---|---|---|
| Process | 14nm | 16nm | 16nm | 12nm |
| Code Name | Vega 56 | GP104 | GP102 | TU104 |
| Shaders | 3584 | 2560 | 3840 | 2944 |
| Rated Clock Speed | 1500 MHz (Boost) | 1730 MHz (Boost) | 1582 MHz (Boost) | 1800 MHz (Boost) |
| Memory Interface | 2048-bit HBM2 (ECC) | 256-bit G5X (ECC) | 384-bit G5X | 256-bit G6 |
| Compute Perf (FP32) | 10.75 TFLOPS | 8.9 TFLOPS | 10.8 TFLOPS | 10.6 TFLOPS |
| Compute Perf (FP64) | 0.672 TFLOPS | 0.343 TFLOPS | 0.380 TFLOPS | 0.314 TFLOPS |
| Frame Buffer | 8GB | 16GB | 12GB | 8GB |
| TDP | 230W | 180W | 250W | 225W |
| Street Price | $999 | $1799 | $2000 | $800 |
For anyone who has followed graphics cards for the past few years, the specifications of the Radeon Pro WX 8200 will be quite familiar. Based on the same 3584-shader Vega GPU configuration found in the RX Vega 56, the only real standout feature offered by the WX 8200 is its support for ECC memory.
Still, looking at the raw numbers, you can see the Radeon Pro card has quite a lead over all of the NVIDIA options in rated double-precision (FP64) compute capability.
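As a sanity check on those numbers: peak FP32 throughput is just shader count × 2 FLOPs per clock (one fused multiply-add) × clock speed, and Vega executes FP64 at 1/16 the FP32 rate. A minimal sketch in Python (the 1/16 ratio applies to Vega; the NVIDIA figures in the table appear to mix base and boost clocks, so they won't all reproduce this cleanly):

```python
# Peak theoretical throughput = shaders * 2 FLOPs/clock (one FMA) * clock speed.
def peak_tflops(shaders: int, clock_mhz: float) -> float:
    return shaders * 2 * clock_mhz * 1e6 / 1e12

fp32 = peak_tflops(3584, 1500)  # Radeon Pro WX 8200: 3584 shaders at 1500 MHz boost
fp64 = fp32 / 16                # Vega runs FP64 at 1/16 of its FP32 rate

print(f"WX 8200 FP32: {fp32:.2f} TFLOPS")   # ~10.75 TFLOPS, matching the table
print(f"WX 8200 FP64: {fp64:.3f} TFLOPS")   # ~0.672 TFLOPS
```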
While the outward appearance of a professional graphics card isn't something that is generally discussed, I do want to say that I'm quite impressed with the Radeon Pro WX 8200 in this regard.
The shroud is made from cast metal and features a very nice automotive-grade paint job. It's refreshing to see a graphics card that isn't an RGB monstrosity that would stick out like a sore thumb in a rackmount chassis; the WX 8200 conveys the branding well while still feeling like a premium product.
The choice of display outputs is a bit disappointing, with only four connections, all of which are Mini DisplayPort. However, the use of these smaller connectors allows for additional exhaust venting through the rear of the card.
Power-wise, the Radeon Pro WX 8200 uses one 8-pin and one 6-pin connector, down from the dual 8-pin requirement we see on the RX Vega 56 card.
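That combination still leaves comfortable headroom: the 8-pin connector is specified for 150W, the 6-pin for 75W, and the PCIe slot itself for 75W, putting 300W of available power against the card's 230W TDP.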
Test Setup
| PC Perspective GPU Testbed | |
|---|---|
| Processor | Intel Core i9-7960X |
| Motherboard | ASUS Prime X299 Deluxe |
| Memory | 32GB (4x8GB) Corsair LPX DDR4-3200 (running at DDR4-2667) |
| Storage | Intel Optane SSD 900P 480GB |
| Power Supply | Corsair RM1000X |
| OS | Windows 10 x64 RS4 |
As for the GPUs in our test setup, we wanted to provide a wider view of the marketplace. While the Radeon Pro WX 8200 and the Quadro P5000 are the only true "professional GPUs" here, with both ECC and professional driver support, the Titan Xp has some of the same driver optimizations. We included the RTX 2080 to show how a more powerful, but completely gaming-focused, GPU compares in these tasks.
Applications in use:
- SPECviewperf 13
- Luxmark 3.1
- Revit 2019
- Blender 2.79b
- Radeon ProRender
- Cycles Render
SPECviewperf 13
SPECviewperf 13 is a benchmarking application centered around workstation graphics performance. Both OpenGL and DirectX performance are measured by the various workloads, or "viewsets."
While it may only be represented here in one chart, SPECviewperf provides an immense amount of data and a look at many of the top professional applications, including 3ds Max, CATIA, Creo, Maya, Siemens NX, Solidworks, and Autodesk Showcase.
As always, SPECviewperf reveals some interesting data points about the professional software market.
For example, the most powerful GPU of the bunch, the RTX 2080, fails to measure up to the other GPUs in several applications like Solidworks, Siemens NX, and CATIA. On the other hand, in applications that use DirectX, like Maya, Showcase, and 3ds Max, the RTX 2080 excels. This shows the difference that professional drivers can make to application performance.
As far as the WX 8200 is concerned, it tends to hold its own against the Quadro and Titan Xp in applications that see massive performance gains from professional drivers, but generally loses in applications which are not optimized.
Luxmark 3.1 – Hotel
In the purely OpenCL-focused Luxmark, the Radeon Pro WX 8200 manages to outpace the NVIDIA Quadro P5000, while falling behind the Titan Xp and RTX 2080.
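Luxmark renders through whatever OpenCL devices the installed runtimes expose, so it's worth verifying what the benchmark actually sees before comparing scores. A minimal sketch using the pyopencl bindings (assuming the vendor's OpenCL runtime and pyopencl are installed):

```python
# List the OpenCL platforms and devices visible to benchmarks like Luxmark.
import pyopencl as cl

for platform in cl.get_platforms():
    for device in platform.get_devices():
        print(f"{platform.name}: {device.name}")
        print(f"  compute units: {device.max_compute_units}")
        print(f"  global memory: {device.global_mem_size / 2**30:.1f} GiB")
```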
Autodesk Revit 2019 – RFOBenchmark
RFO Benchmark is a community-developed series of scripts that can be used to evaluate system-level performance in Revit, the popular building information modeling software from Autodesk used primarily in the architecture field.
In general, the graphics comparison test in RFO Benchmark shows little in the way of difference between graphics cards. While the Radeon Pro WX 8200 comes in last in most tests, it manages to slightly best the Quadro P5000 in the Standard View subtest.
It seems GPU selection simply doesn't matter much for Revit users.
Blender 2.79b – BMW Workload
With the BMW workload, we once again see the WX 8200 coming out on top of the Quadro P5000, but losing to the Titan Xp and RTX 2080.
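For anyone who wants to run the same comparison at home, Blender can render the scene headless with its background (`-b`) and single-frame (`-f`) flags, which makes timing straightforward. A hedged sketch: the bmw27.blend filename is an assumption for wherever you saved the benchmark scene, and the Cycles compute device must already be selected in Blender's user preferences:

```python
# Time a single-frame Cycles render of the BMW benchmark scene in background mode.
# Assumes blender is on PATH and the GPU device is set in Blender's preferences.
import subprocess
import time

start = time.perf_counter()
subprocess.run(["blender", "-b", "bmw27.blend", "-f", "1"], check=True)
print(f"Render time: {time.perf_counter() - start:.1f} s")
```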
Radeon ProRender (Blender 2.79b) – BMW
For simplicity's sake, we used a version of the same BMW workload tested above, which has been ported by the community to support Radeon ProRender.
Despite leading the Quadro P5000 in the previous Blender test with the native Cycles renderer, the WX 8200 ends up losing once we switch to Radeon ProRender.
At an MSRP of $1000, the Radeon Pro WX 8200 provides an impressive amount of value. The next closest price competitors in the professional market are the GTX 1070-based Quadro P4000 at around $750 and the GTX 1080-based Quadro P5000 at around $1800.
For $250 more than the P4000, buyers get a faster GPU with access to much faster memory, which could be ideal in certain workloads, in particular rendering very complex scenes.
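To put the value argument in rough numbers, dividing the rated FP32 throughput from the spec table by the street prices gives a crude compute-per-dollar figure. A quick sketch (street prices as of this writing, and raw FP32 deliberately ignores the driver effects the SPECviewperf results made clear):

```python
# Rated FP32 compute per dollar, from the spec-table figures above.
cards = {
    "Radeon Pro WX 8200": (10.75, 999),
    "Quadro P5000":       (8.9,  1799),
    "Titan Xp":           (10.8, 2000),
    "RTX 2080":           (10.6,  800),
}

for name, (tflops, price) in cards.items():
    print(f"{name}: {tflops / price * 1000:.1f} GFLOPS per dollar")
```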
If the price is no object, the new Turing-based Quadro RTX cards from NVIDIA will provide the fastest possible experience on professional applications.
However, if you are just starting to get into work that requires a professional GPU, the Radeon Pro WX 8200 provides a great balance of performance and an attainable price.
| Review Terms and Disclosure (All Information as of the Date of Publication) | |
|---|---|
| How product was obtained: | The product is on loan from AMD for the purpose of this review. |
| What happens to the product after review: | The product remains the property of AMD but is on extended loan for future testing and product comparisons. |
| Company involvement: | AMD had no control over the content of the review and was not consulted prior to publication. |
| PC Perspective Compensation: | Neither PC Perspective nor any of its staff were paid or compensated in any way by AMD for this review. |
| Advertising Disclosure: | AMD has purchased advertising at PC Perspective during the past twelve months. |
| Affiliate links: | This article contains affiliate links to online retailers. PC Perspective may receive compensation for purchases through those links. |
The spec table, row: Compute Perf (FP64)
I'd probably recommend putting a zero in front of the decimal point. I was confused about how the parts had such crazy fast FP64 performance till I realized there was a little decimal point.
Are there any good applications that showcase the fp64 performance?
Good point; I added leading zeros to those entries in the table.
The scientific/math workloads that really take advantage of double-precision performance are way over my head, but I know that some distributed computing projects (MilkyWay@Home, etc) are significantly boosted by good FP64 performance.
“The choice of display outputs is a bit disappointing, with only four connections all of which are Mini DisplayPort. ”
With the greatest respect: F£&@ you.
Mini DisplayPort can output:
– DisplayPort (obviously)
– HDMI (native, using passive adapter)
– Single-link DVI (HDMI with reduced link clocks)
– VGA (admittedly somewhat nonstandard)
The sole connection you ‘lose’ with mini DP is dual-link DVI.
That is an extremely low price to pay for the gain in rear-of-card real estate. Using HDMI in place of DP is already inexcusable – you gain zero HDMI outputs, but lose a DP output – and mini DP provides all the same benefits of DP with a smaller connector.
It's disappointing purely from a physical standpoint. I had to dig through the office to find the right dongle to connect to a display. Lack of options leads to that sort of thing.
That being said, AMD has been doing this with their pro cards for a while, so users who are upgrading generation-to-generation wouldn't have that issue.
From Newegg
Included:
4 x Mini-DisplayPort to DisplayPort adapters
1 x Mini-DisplayPort to Single-Link DVI Adapter
1 x Mini-DisplayPort to HDMI 2.0 Adapter
1 x Stereo 3D Connector Bracket
1 x Board Extension Bracket for OEM Chassis (Bent)
1 x Board Extension Bracket for OEM Chassis (Flat)
Doesn’t seem like a bad deal to me.
The AMD / Nvidia war is consistently looked at the wrong way.
The same with AMD processors.
Who really cares who makes the fastest?
What really matters is Performance/Price.
Nvidia (I hope) have shot themselves in the foot with their greedy pricing.
For the masses, let's hope AMD really can deliver near-NVIDIA performance for AMD prices.
Even so, are we all being led down the garden path? Truth is, AMD could probably use Infinity Fabric or a config (they already know of) to simply utilise multiple GPU dies on single cards.
These companies probably solved this years ago but ALL monopolise and choose to drip feed us minor improvements to squeeze as many purchases out of us as possible over time.
I mean come on, you don't think they have solved this already in their labs?
Both companies are holding back.
As an example: the DGX from Nvidia. They bang on about how expensive and difficult it is to fabricate smaller, faster GPUs when really all everyone needs are faster buses. Which I guarantee they have sitting there working perfectly. What's Nvidia's motto? The more you buy, the more you save?
The real race is not about GPUs, it's about architecture. I'm praying AMD takes a chance and just sticks two of their fastest GPUs on one card at least. With the top Nvidia coming in at $2500 for a Titan, I'd rather pay all that money towards an architecture that's genuinely scalable.
Clearly these companies AND others are keeping huge secrets from us the paying public.
It took Nvidia two years to release the 2080 Ti? A card that realistically doesn't deliver anything more mind-bending than the 1080 Ti.
So any layman would rather have spent all that extra money on joining the performance of two 1080 Tis together using an architectural solution. Not some daft bridge.
Do the maths, even without changing any GPU internals we would already have double the compute power.
We’re being taken for a ride.
I do understand this was about pro products by the way.
“What really matters is Performance/Price.”
Doesn’t matter if you have a great performance/price if the performance is too low to get any sales. The GPU market is pretty saturated (GPU buyers are almost entirely people who have GPUs already), and nobody wants to sidegrade no matter how ‘good value’ a card is.
Yep. Performance year over year is not increasing faster than the rate at which the prices are climbing.
I paid $220 for my 8GB RX 480 on launch day over two years ago. If I pay $440 today I will not even get twice the performance.
That's what the price/performance metric is there for, if you cared to read carefully. So if the price of the GPU goes up, that price/performance becomes worse as the price rises.
Now for these professional cards there's AMD's non-open-source OpenGL that's geared towards quality of graphics and not any sort of FPS metric. To this day gamers are always barking up the wrong tree when they begin to complain about AMD's proprietary OpenGL implementation that's there for the professional graphics workloads where image accuracy and fidelity are the main focus on these Radeon Pro WX SKUs.
This SKU is not necessarily for double-precision number crunching as much as it is made for high-fidelity image creation with no artifacts. So no damn dirty gaming graphics drivers need apply, and the Pro drivers go through a lengthy certification process by AMD and the makers of the major graphics software packages. These GPU SKUs have error correction functionality that the consumer cards lack.
If you are looking for double precision then Vega 20 will have a 1/2 DP FP to 1 SP FP ratio and also some extended AI instruction set extensions that will be new to Vega 20 based SKUs only. I'd imagine that AMD trademarked that Vega 20 graphic design because there will be Vega 20 bins that will be made into several Radeon Pro WX variants that will be replacing these current WX branded offerings. Vega 20 will also be first used for the MI60 and other MI variants for AI workloads. But AI is also going to be made use of for professional graphics in the form of AI-based denoising and AI-based graphics filtering. And Vega 20 will at least have some new AI-oriented instruction extensions to accelerate denoising and graphics filtering tasks for the professional graphics software packages.
I'd also look for AMD to be making some dual Vega 20 SKUs for the AI and professional graphics/CAD markets, where whole computing clusters are utilized for some graphics workloads like animation and CAD/engineering/scientific visualization.
No, perf/price is not the ultimate yardstick, otherwise everyone would be gaming on the iGPU from the Ryzen3 2200G which has the best perf/price.
This is especially true for professional cards.
If you are paid $100 per hour and a $2000 card boosts your productivity by 25% over a $1000 card, the $1000 difference is paid back in 40 working hours.
It would be stupid to go for the cheaper best perf/price card.
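For what it's worth, the commenter's break-even arithmetic checks out at 40 working hours (a sketch using the comment's hypothetical rates, not real productivity data):

```python
# Break-even time for the pricier card, per the comment's hypothetical numbers.
hourly_rate = 100.0        # $100/hour billed
productivity_gain = 0.25   # the $2000 card does 25% more work per hour
price_delta = 2000 - 1000  # price difference between the two cards

extra_value_per_hour = hourly_rate * productivity_gain    # $25/hour
print(f"Break-even after {price_delta / extra_value_per_hour:.0f} working hours")
```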
I have to join the others: "price/perf is what matters" is not practical in real terms. Two examples:
1) at the design firm, “sorry boss I could not finish the render for the client, but I spent as little money as possible in our graphics card”.
2) at the gaming convention, “sorry bros, for the next competitive online match, I can only play at 30FPS so I will get killed right away, but hey, I got the best deal ever”
I guess the concept has to be refined: "best price/perf subject to a minimum performance." For gaming, you can think of the best price/perf subject to playing your desired game at 100FPS or more; for work, subject to being able to finish the client render in less than 6 hours.
Remember that one time when there were like two days when anonymous/unregistered people weren’t allowed to post here? Those were a good two days.
And it just took me like 2 minutes and ~10 captchas to be able to post. That’s ridiculous. Are there any other systems you can use? Like what’s the point of having an account if I still have to see the anonymous morons and their walls-of-text and I still have to jump through hoops to post here?
Fondly … but there were repercussions.
There will be some changes now that we have a new dictator.
Heil Walrus! Wait, Josh is the new dictator, right?
Many HPC applications scale nearly perfectly across many gpus. For one that I was dealing with, the performance was about the same with dual gpus vs. a single gpu with 2x the FLOPS and 2x the bandwidth. AMD could do multiple smaller gpus with HBM memory and then connect multiple interposers together on a single board with infinity fabric. The infinity fabric is still much slower than the HBM, but this is part of why AMD set up HBM as cache.
For the consumer market though, most multi-gpu set-ups require at least some support from game and game engine developers. This has mostly not materialized; since almost no one has multiple gpus, there is little reason to do any extra work to support them. It is unfortunate, since multiple gpus could increase performance capability significantly. The process tech will just get more expensive though, so larger-die gpus are going to be more and more expensive. AMD has similar issues with multi-gpu: since they have a small installed base, they do not get as much software optimization at game release. GPUs are still in a state where software optimization for the specific architecture can make massive differences. This often isn't the case with cpus. I tried some compiler flags to target the specific cpu architecture I was using and it made no detectable difference.
I would suspect that some of these applications only have a small amount of optimization work for AMD gpus. CUDA is a huge problem for the industry. You probably will get better performance on nvidia with cuda, but it is proprietary vendor lock-in. It helps nvidia hold onto that monopoly, which it should be obvious is bad for consumers after the resurgence of AMD cpus. The PC market was stagnant for a long time with almost no reason to upgrade. Now we have possibly another doubling of core counts with AMD using 7 nm. Intel would have continued to sell 4-core cpus for $350 to $500 without AMD.
TL;DR
I don’t know if we will see much multi-gpu support in the consumer market for games. Software developers just don’t seem to be putting the work in.
Decent review, but why was there no Vega Frontier Edition in the mix of comparators? Hint: I have one and I am generally happy with it, except when it gets stubborn and does not want to switch between Pro and Gaming drivers without a BSOD.
A Vega Frontier Edition, with additional shaders and double the memory (16GB), might be a better alternative, especially as it can now be found for under $1K.