The reason you give for "no" doesn't work for me. I don't think I would switch platforms for a Pro console per se... However, if the other platform offered something else ALONGSIDE the Pro console, I would get the most powerful option when I do switch.
"Pro consoles can improve framerates for legacy as well."

I don't think the PS5 Pro will be Zen 4c; at most it might be the PS5 CPU ported to 5nm or even 4nm. Too many architecture changes between Zen 2 and Zen 4, it'd make code compatibility a nightmare. Could see a much higher boost clock though, 5GHz+.
The reason boost mode on PS4 Pro sucks is that it's hamstrung by a potato CPU that only got a boosted clock. This time it should be Zen 4c and more VRAM.
Zen 4 is just additional 256-bit SIMD blocks (compared to Zen 2's 128-bit), better cache latency from unifying the L3 into one pool instead of two, and AVX-512 support thanks to the FP scheduler. AVX-512 can be added to the compiler and taken advantage of specifically by the Pro console.
"I will be on PS5 Pro day 1."

Have you heard about the little beast that can do 1440p/60fps RT for just 299 USD?
Xbox can do whatever they want in that regard; I won't spend anything on the Cancer of Gaming.
Zen 3 and Zen 4 have a different cache architecture, different branch prediction, and different CCX construction (all 8 cores unified instead of the 2x4 layout on Zen 2), which means different inter-core communication and cache coherency, different instruction scheduling, different register sizes, and different latency and timings for various operations and instructions.
There is nothing vastly different between the two.
IPC uplift is 37%; it would be dumb not to switch to Zen 4, since that is one way to improve framerate without having to use a bigger GPU.
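The 37% figure is easy to sanity-check: if a frame is fully CPU-bound, framerate scales roughly with IPC at a fixed clock. A minimal sketch of that arithmetic (the function name and the 60 fps baseline are illustrative, not from the thread):

```python
# Illustrative only: assumes a fully CPU-bound frame whose time scales
# inversely with IPC at a fixed clock (real games are rarely this clean).
def fps_after_ipc_uplift(base_fps: float, ipc_uplift: float) -> float:
    """base_fps: CPU-bound framerate; ipc_uplift: 0.37 means +37% IPC."""
    return base_fps * (1.0 + ipc_uplift)

print(fps_after_ipc_uplift(60, 0.37))  # a 60 fps CPU-bound game -> 82.2 fps
```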
Also, Zen 4c would be more of a benefit on a console, since both the CPU and GPU on the PS5 use the same pooled cache across all 8 cores rather than relying on separate L2 caches on specific cores like a PC CPU.
GDK-wise it means literally nothing. The compiler will prioritize using the L3 cache pool, and the improved IPC means better performance.
While the L3 cache increased, L3 bandwidth per core actually went down because Zen 3/4 kept the same interface to the cache pool despite the cache pool serving double the core count. I can see that being a big deal in games where you're actively and aggressively optimizing your code to hit the caches.
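The bandwidth-per-core point can be made concrete with a toy calculation. The numbers below are hypothetical, chosen only to show the ratio, since the claim is about the interface staying the same while the core count behind it doubles:

```python
# Hypothetical numbers: a fixed L3 interface bandwidth shared by the cores
# behind one cache pool. Doubling the cores on the same interface halves
# each core's worst-case share.
def per_core_share(interface_gbps: float, cores: int) -> float:
    return interface_gbps / cores

zen2_like = per_core_share(512.0, 4)  # 4-core CCX, one L3 pool (assumed figure)
zen3_like = per_core_share(512.0, 8)  # 8-core unified L3, same interface
print(zen2_like, zen3_like)  # 128.0 64.0
```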
Oh, and not to mention Zen 4c is a chiplet architecture; we have no idea what kind of compatibility issues that will cause.
I doubt it. Chiplets are a power hog, and in console form factors you don't want that. The PS5 APU is small enough that the reason for chiplets (saving money on more expensive nodes) is moot, and you sacrifice latency and power, both of which are bad for a console form factor. There's no reason for them to do the RDNA 4 portion of the APU on chiplets; Navi 33 (RDNA 3) is monolithic. Also, I doubt they will want RDNA 4 if it keeps the dual-issue FP32, which wastes so much real estate; they can get a 60 CU GPU in the same die area as the PS5's 36 just using the node shrink.
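The 36-to-60 CU claim is checkable with rough density numbers. The 1.8x figure below is TSMC's often-quoted logic-density gain for N5 over N7 and is an assumption here; SRAM and analog shrink far less, so this only holds for the logic portion of the die:

```python
# Back-of-envelope: does a node shrink alone cover 60 CUs in the area of 36?
needed_density_gain = 60 / 36        # ~1.67x more logic per mm^2 required
assumed_n7_to_n5_gain = 1.8          # vendor logic-density figure (assumed)
print(round(needed_density_gain, 2), needed_density_gain <= assumed_n7_to_n5_gain)
# 1.67 True
```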
Also, RDNA 4 (Navi 41 and 42), which is what the PS5 Pro will be based on, will also be a chiplet design, so Zen 4c is more convenient. CCD size should be under 300 mm^2.
Also, I think they are going to pad out their wafer contract between the Pro and base consoles, rather than using 1-3 SDEs to manufacture 2 SKUs like the shitbox did. So 4-5nm monolithic for the base model, and 4nm chiplet for the Pro.
Chiplets are designed to use less power because of the way they're stacked to reduce die size, though.
Chiplets reduce power in server applications because you can stick a bunch of them together, run the whole thing at lower clocks, and get more performance. It doesn't work that way in gaming applications... which is why the 7900 XTX uses 100+ watts more than the 4090 despite having less compute and 25% less performance.
The PS4 Pro uses GCN 4.0 Polaris, launched around the time of the RX 400 series, so it's safe to assume it's RDNA 4 this time around. Also, back in the day they got stuck with Jaguar because Zen wasn't ready; this time they've got Zen 4, so I bet they go for better performance.
This time I don't think they will even try to uplift IPC; they'll just improve BVH traversal, increase base clocks, fix some other aspects that are lackluster compared to Ampere/Lovelace, shrink the node to 4nm, and call it a day, selling it at mid-range prices like the leaks suggest.
Also, I'd argue more WGPs would be a waste because of power costs and diminishing returns: going from 20 WGPs to 30 would equal a 25-30% performance uplift at the same clock speed, and the further you go, the less uplift you achieve.
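The diminishing-returns claim can be turned into a rough power-law model: fit a scaling exponent from the quoted 20-to-30 WGP uplift and extrapolate. Everything here is derived from the post's own numbers, not measured data:

```python
import math

# Fit perf = wgps ** alpha from the post's figures: 1.5x WGPs -> ~1.275x perf
# (midpoint of the quoted 25-30% range). Purely a curve fit to one data point.
def scaling_exponent(wgp_ratio: float, perf_ratio: float) -> float:
    return math.log(perf_ratio) / math.log(wgp_ratio)

alpha = scaling_exponent(30 / 20, 1.275)
doubling_uplift = 2 ** alpha  # predicted perf ratio for 2x WGPs under this model
print(round(alpha, 2), round(doubling_uplift, 2))  # alpha ~0.6, well under linear
```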
Sony would probably try to find a way to benefit from dual-issue FP32; extracting 100% of it is impossible, but even 50% is already good enough.
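For scale, a sketch of what that dual-issue arithmetic looks like. The CU count and clock below are placeholders, and the fill factor models how much of the second issue slot shaders actually use; 50% is the poster's figure, not a measurement:

```python
# RDNA 3-style peak math: CUs * 64 lanes * 2 ops/FMA * clock gives
# single-issue TFLOPS; dual issue can at best double that, and the fill
# factor models how much of the second slot real shaders extract.
def effective_tflops(cus: int, clock_ghz: float, dual_issue_fill: float) -> float:
    single_issue = cus * 64 * 2 * clock_ghz / 1000.0
    return single_issue * (1.0 + dual_issue_fill)

print(round(effective_tflops(60, 2.2, 0.5), 2))  # hypothetical 60 CU @ 2.2 GHz
```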
I don't think it's chiplets, just AMD's GCD/MCD design. In this case the 6 MCDs draw a massive amount of power to keep the GCD from being starved of cache bandwidth; it's just a poor design. I don't understand why they don't just stack cache onto the cores like their 3D V-Cache CPU lineup; the cost is somewhat higher, but power consumption is lower.
For example, Zen 4 EPYC can do 96 CPU cores with a TDP of 400 watts; that's about 4 watts per core... because they run at lower clock speeds. Meanwhile, Zen 4 desktop runs at a 5.7 GHz max boost and draws 10+ watts per core.
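The per-core numbers check out, and the gap follows from dynamic power scaling roughly with frequency times voltage squared. The clock and voltage figures below are assumptions for illustration only:

```python
# 96 cores in a 400 W TDP works out to about 4.2 W per core.
epyc_w_per_core = 400 / 96
print(round(epyc_w_per_core, 1))  # 4.2

# Dynamic power ~ f * V^2: assumed figures, only to show the trend.
def dynamic_power_ratio(f_ratio: float, v_ratio: float) -> float:
    return f_ratio * v_ratio ** 2

# e.g. 5.7 GHz @ 1.25 V vs a server part at 3.1 GHz @ 0.95 V (hypothetical)
print(round(dynamic_power_ratio(5.7 / 3.1, 1.25 / 0.95), 2))
```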
Fables, there is no such thing.