It's time for the PCPer Mailbag, our weekly show where Ryan and the team answer your questions about the tech industry, the latest and greatest GPUs, the process of running a tech review website, and more!
On today's show, things are a bit shorter than usual because the PCPer crew is in a rush to attend a…uh…business meeting. Rrrrawwwrrrrr.
00:56 – Ryzen 7 1800X all-core overclock vs. stock boost for gaming?
03:31 – Color format and bit depth for PC output to 4K HDR TV?
06:37 – Vega 64 vs. CrossFire RX580?
09:55 – Adding SATA power connectors to PSU?
12:23 – High DRAM prices not affecting NAND flash?
14:18 – Intel and AMD processors with more than 2 threads per core?
17:26 – Wax on, wax off
Want to have your question answered on a future Mailbag? Leave a comment on this post or in the YouTube comments for the latest video. Check out new Mailbag videos each Friday!
Be sure to subscribe to our YouTube Channel to make sure you never miss our weekly reviews and podcasts, and please consider supporting PC Perspective via Patreon to help us keep videos like our weekly mailbag coming!
It is a bit unclear how chroma subsampling works with a computer connected to a TV. RGB is essentially 4:4:4 (no subsampling). You generally would never want subsampling for anything with text; it degrades text quality significantly unless the text is strictly black and white. Chroma subsampling is used by some video cameras and by video transmission and storage formats/systems. I would think anything on a PC would be converted to full RGB for display. Is it even an option to output a subsampled signal from a video card? Subsampling generally isn't noticeable in actual video, but a video game with text would probably show noticeable artifacts in the text.
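As a rough illustration of why subsampling hurts text but not video, here is a minimal 4:2:0 sketch in Python/NumPy (the 2x2 box averaging is my own simplification, not any real codec's chroma filter): luma stays at full resolution, so black-on-white text survives, but the color planes drop to quarter resolution, so one-pixel colored edges smear.

```python
import numpy as np

def subsample_420(ycbcr):
    """Toy 4:2:0 subsampling: keep luma (Y) at full resolution and
    average each 2x2 block of the chroma planes (Cb, Cr).
    ycbcr: float array of shape (H, W, 3) with H and W even.
    Storage drops from 3 samples/pixel to 1.5 samples/pixel."""
    h, w = ycbcr.shape[:2]
    y = ycbcr[:, :, 0]
    # Reshape (H, W) -> (H/2, 2, W/2, 2) and average each 2x2 block.
    cb = ycbcr[:, :, 1].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    cr = ycbcr[:, :, 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    return y, cb, cr

def upsample_420(y, cb, cr):
    """Reconstruct by repeating each chroma sample over its 2x2 block;
    this is where single-pixel colored detail (e.g. red text) blurs."""
    cb_full = cb.repeat(2, axis=0).repeat(2, axis=1)
    cr_full = cr.repeat(2, axis=0).repeat(2, axis=1)
    return np.stack([y, cb_full, cr_full], axis=-1)
```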
There's a good chance I don't know what I'm talking about, but I have been using 4K from a PC to a TV since the first 4K Seiki at 30 Hz five years ago. Since then I've upgraded and found that NVIDIA does not allow RGB 4:4:4 at 4K60 and requires the "Limited" color setting in the NVIDIA Control Panel. To get the correct TV color space I have to change the TV to YCbCr rather than RGB.
Then to run 4K30, 1080p120, or 1080p60 I have to switch both the TV options and the NVIDIA Control Panel to RGB 4:4:4. If you don't match the color spaces, the picture will be off, and it's noticeable: the screen loses black shades or is overly bright.
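A minimal sketch of why that range mismatch looks the way it does (the 16–235 scaling is the standard limited-range mapping with BT.709 luma coefficients; the tiny pipeline around it is my own simplification):

```python
# BT.709 luma coefficients
KR, KG, KB = 0.2126, 0.7152, 0.0722

def rgb_full_to_y_limited(r, g, b):
    """Full-range 8-bit RGB (0-255) -> limited-range 8-bit luma (16-235)."""
    y = (KR * r + KG * g + KB * b) / 255.0   # normalize to 0..1
    return round(16 + 219 * y)               # scale into 16..235

# If one side assumes full range while the other sends limited range,
# black arrives as code 16 and shows as dark gray (raised blacks),
# and white arrives as 235 and shows slightly dim:
print(rgb_full_to_y_limited(0, 0, 0))        # 16
print(rgb_full_to_y_limited(255, 255, 255))  # 235
```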
I understand that games are made in 8-bit rather than 10-bit color, so there is no additional data to gain by switching from 8-bit. I'm not sure whether 4K Blu-rays are produced in 10-bit.
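For reference on the bit-depth point, the arithmetic is simple (nothing vendor-specific here):

```python
# Distinct levels per color channel at each bit depth.
levels_8  = 2 ** 8    # 256
levels_10 = 2 ** 10   # 1024

# A 10-bit pipeline has 4x finer gradation steps, which mainly helps
# smooth HDR gradients; an 8-bit source gains nothing from being
# carried in a 10-bit container.
print(levels_8, levels_10, levels_10 // levels_8)  # 256 1024 4
```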
Btw, I only have one input that accepts 4K60 on my old Vizio.
In addition, I hope the original guy who asked sees this, because Ryan gave a really poor answer. I don't know if he was tired, rushed, distracted, or what, because he actually knows all of this stuff very well. Actually, I don't know why he even mentioned EDIDs; I don't see the card and the TV agreeing on the correct setting, since both will just accept whatever and not work it out automatically.
Btw, to my eyes text is fine, though not perfect, at 4K60 in non-RGB. I know for some it's a huge deal, but it's not that bad for me. 30 Hz gaming, on the other hand, kills me.
To run significantly more threads per core, you need a much bigger cache to support them. You are also prioritizing throughput over single-thread performance. With something like 8 threads you can hide some latency, but any one thread is going to execute more slowly. The first-level cache can probably be a bit slower but larger, since there are other threads to choose from while the cache is being accessed. The latency for any one thread will be higher; it is still waiting on the cache while those other threads are executing. These processors are made to run with possibly hundreds of threads active (runnable) at all times. In that case, the throughput will be better than a chip with only 2 threads per core. You generally aren't going to have hundreds of runnable threads on a consumer PC. Most applications probably don't even take full advantage of 16 threads on Ryzen. You wouldn't want a throughput-optimized architecture for playing games, unless it is emulating a GPU. A processor built like that would probably do terribly in games due to the higher latency of individual threads. Ryzen 2xxx seems to have gotten a good improvement from small cache latency tweaks.
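A toy model of that trade-off (all cycle counts are made-up round numbers, purely illustrative, and it ignores the cache effects mentioned above): with more hardware threads the core hides memory stalls and utilization rises, but once the core is oversubscribed, each individual thread's work stretches out.

```python
def smt_model(threads, compute_cycles=10, stall_cycles=90):
    """Toy in-order SMT model: each thread alternates `compute_cycles`
    of execution with `stall_cycles` of memory wait. The core runs one
    thread's compute phase at a time, overlapping the others' stalls."""
    period = compute_cycles + stall_cycles           # one thread's loop
    busy = min(threads * compute_cycles, period)     # cycles doing work
    utilization = busy / period
    # Once compute demand exceeds the stall window, threads queue up
    # and every iteration takes longer for each individual thread:
    per_thread_period = max(period, threads * compute_cycles)
    return utilization, per_thread_period

for n in (1, 2, 4, 8, 16):
    util, latency = smt_model(n)
    print(f"{n:2d} threads: {util:5.0%} utilized, "
          f"{latency} cycles per iteration per thread")
```

At 16 threads the core reaches 100% utilization (throughput wins), but each thread's iteration grows from 100 to 160 cycles (single-thread latency loses), which is the argument above in miniature.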
These systems are also significantly more expensive. The larger caches required to support that many threads take up huge amounts of die area. IBM uses external cache chips to support these processors; look up what a POWER5 MCM looks like. I don't know what the current POWER processor packages look like, but they are not cheap. They also need massive memory bandwidth to support running hundreds of threads simultaneously. I would question whether such a massively multi-threaded chip is really worth it. You can get up to 64 cores / 128 threads in a dual-socket Epyc system, and its distributed architecture allows it to supply the necessary memory bandwidth. A more monolithic architecture needs huge caches and huge amounts of bandwidth going into a single chip, which is much more expensive. AMD's distributed system is much cheaper and probably competes very well. It's a similar story with Intel at the moment, whose 28-core Xeon might be $10,000 since it is a huge single chip with a large number of signals that need to be routed to it. It will be interesting to see what happens if AMD can double that up to 128 cores / 256 threads per dual-socket system in the next generation.
Also note that SMT is not new. The Alpha EV8 that Jim Keller worked on around 1999 was supposed to have 4-way SMT, in addition to an on-die router network and other modern features. It was way ahead of its time. Unfortunately, it was canceled before tape-out in favor of Intel's Itanium. The microprocessor companies have simulators to explore such architectural trade-offs. If it had been reasonable to do, they probably would have gone to more threads. Most likely it hurt single-thread performance too much, was too expensive to implement, or just wasn't worth it given AMD's distributed architecture. It may have been a more valid direction for Intel's server processors, with the giant monolithic dies they already sell, but those are not looking that competitive compared to Epyc. They also wouldn't be able to use a throughput-optimized core for desktop parts.
It seems like GPUs would be a great case for SMT, being very parallelizable and needing a lot of bandwidth. Why don't GPUs have more than one thread per CUDA core/stream processor? Or am I completely off base with everything I just said?
How good is the Ryzen PRO APU (desktop and mobile) for light CAD work? Does the on-die GPU allow for a good experience?
Will the PRO APUs get PRO graphics drivers?