PS5 Pro updated rumors/leaks & technical/specs discussion |OT| PS5 Pro Enhanced Requirements Detailed.

anonpuffs

Veteran
Icon Extra
29 Nov 2022
8,297
9,517
On paper. But with optimization and the fact that a console doesn't have Windows and other programs weighing it down... it will of course perform far above an equivalent GPU in a PC setup... other things being equal.

I think it will be great and there really is no other option to enjoy GTA6 or the latest PS exclusives etc.
I don't think it'll perform that much better than PCs on average... most games are now 3rd party and it's clear 1st party is a bit of a mess right now. Maybe a couple games will put it to good use but regardless the DLSS competitor, if it's good, will be worth it for that alone.
 

Nhomnhom

Veteran
25 Mar 2023
7,205
9,792
Nah, it's just Sony trying to get the most out of their upgrades.

Just like when you are upgrading your PC, the vast majority of the time and for the most common use cases you'll get the best return by investing as much as possible on the GPU.

Most PS5 games are already 60fps and not CPU bound, what would be the point of a much better CPU?

You also need to consider that on consoles the CPU and GPU share the memory bandwidth; you want as much of that as possible available for the GPU.
 
Last edited:

Gediminas

Boy...
Founder
21 Jun 2022
5,720
7,306
Nah, it's just Sony trying to get the most out of their upgrades.

Just like when you are upgrading your PC, the vast majority of the time and for the most common use cases you'll get the best return by investing as much as possible on the GPU.

Most PS5 games are already 60fps and not CPU bound, what would be the point of a much better CPU?
A CPU upgrade is the most useless upgrade.
 
  • brain
Reactions: Nhomnhom

Nhomnhom

Veteran
25 Mar 2023
7,205
9,792
A CPU upgrade is the most useless upgrade.
It is, especially in the case of a midgen console.

If a game is locked at 30 and CPU bound on the base PS5 it's extremely unlikely that CPU improvements would get the game all the way to 60.

If the game is already 60fps on the base PS5, what is even the point of a much better CPU?
 

Gediminas

Boy...
Founder
21 Jun 2022
5,720
7,306
It is, especially in the case of a midgen console.

If a game is locked at 30 and CPU bound on the base PS5 it's extremely unlikely that CPU improvements would get the game all the way to 60.

If the game is already 60fps on the base PS5, what is even the point of a much better CPU?
It isn't the PS3 days anymore.
 
  • brain
Reactions: Nhomnhom

Jim Ryan

Not Lyin
VIP
22 Jun 2022
1,288
2,485


Ice T React GIF by The Tonight Show Starring Jimmy Fallon
 

ToTTenTranz

Veteran
Icon Extra
4 Aug 2023
505
562






Mystic: @11:34 - Why Is The CPU Upgrade Minimal?


A console needs to be a balanced system, and if the CPU was any faster it could become unbalanced and performance could actually suffer.


The PS5 uses a Unified Memory Architecture (UMA).

This means both the CPU and GPU use the same system memory pool, in this case GDDR6 on a 256-bit bus.
While this is very practical for sharing data between CPU and GPU, and for dynamically trading e.g. O.S. allocation for game allocation, the reality is the memory controller has 2 big clients that "compete" with each other for access to the memory.

This also means that if they have a very fast GPU, it will be more demanding on the memory controller and less bandwidth will be available for the CPU. And if the CPU is "too fast" then it'll flood the memory controller with access requests and the GPU will be halted more frequently.


In the case of the PS5 Pro, they only increased the total memory bandwidth by 28.6% (14Gbps vs 18Gbps GDDR6) while the GPU got a whopping >50% faster on rasterization alone.
Since the RAM bandwidth didn't keep up with the GPU performance, there's even more reason to hold down CPU performance. Had Sony put in a 5GHz 12-core Zen 4, that CPU would constantly eat away precious GPU bandwidth, leaving the GPU to stall while waiting between CPU requests.
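Rough numbers on that, as a quick Python sketch (my own back-of-the-envelope math, assuming the Pro keeps the base PS5's 256-bit bus, which seems to be the case since the memory setup isn't changing):

Code:
# Back-of-the-envelope bandwidth math, assuming a 256-bit GDDR6 bus on both consoles.
BUS_WIDTH_BITS = 256

def bandwidth_gb_s(data_rate_gbps: float) -> float:
    # Each pin moves data_rate_gbps gigabits per second; divide by 8 for bytes.
    return BUS_WIDTH_BITS * data_rate_gbps / 8

ps5 = bandwidth_gb_s(14)   # 14 Gbps GDDR6 -> 448 GB/s
pro = bandwidth_gb_s(18)   # 18 Gbps GDDR6 -> 576 GB/s
print(f"{ps5:.0f} GB/s -> {pro:.0f} GB/s, +{(pro / ps5 - 1) * 100:.1f}%")  # +28.6%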




There are a number of ways AMD/Sony could have prevented this memory bandwidth bottleneck, but all of them have caveats:

- Wider memory bus, e.g. with 5x or 6x 64bit channels (like the One X and Series X), but this means a larger SoC to house more PHYs, a more expensive memory setup with more memory chips that could go unused when running PS5 code, as well as a more expensive PCB.

- Significantly faster GDDR6 (Samsung has 24Gbps chips) but that could mean a revised and larger memory controller, could raise supply issues, and these "fastest" memory chips tend to get more relaxed latencies which don't go well with low-level optimizations in consoles.

- Large caches on the CPU to increase CPU cache hit rates and decrease memory access requests (like AMD already does with Zen3 and 4), but this costs a lot of die area, i.e. the SoC could get very big (and expensive) very fast.

- Large caches on the GPU to increase GPU cache hit rates and decrease memory access requests (like AMD already does with Infinity Cache), but again this costs a lot of die area.





Sony went with the cheapest path, which is to just use the cheaper-but-faster 18Gbps GDDR6 and integrate a ~50% wider GPU with new features that let it render at a lower base resolution (less bandwidth needed), plus faster raytracing.
Without significantly more memory bandwidth, there's no reason for Sony to adopt a much faster CPU.


The fact that Sony is apparently doing only a modest increase in the SoC's size and not meddling with the memory setup is what makes me believe the PS5 Pro isn't going to be a lot more expensive than the base version.
 

rinzler

Banned
12 Mar 2024
352
244

I've already addressed this, at least for my personal use case. My 4090, for example, is not just for gaming; I needed a GPU that was not only fast but also had a large amount of VRAM for Unity and Blender work.

I also wanted a GPU that could drive a 5K HMD without any hiccups. A PlayStation 5 Pro is not by any means something useless; however, I will say that it is far less of a justifiable necessity than the PlayStation 4 Pro was.

As other users have noted, there are only about five games on the PlayStation 5 which are not 60 FPS. As we all know, advanced features and rendering techniques are now being utilized to reach 4K in a way that simply was not possible with the PlayStation 4 Pro.

So we're already where things need to be; last generation that wasn't the case. The television market was transitioning from 1080p to 4K; we're not transitioning to 8K right now because there's just no content, and with the PPI as condensed as it is, at proper viewing distance it looks the same with no real benefit regardless.

That's why I'm kind of puzzled as to why they're really making it in the first place; I don't see a function where it's actually necessary or serving a new purpose.
 

panda-zebra

Active member
2 Mar 2024
151
231
NW UK
As we all know, advanced features and rendering techniques are now being utilized to reach 4K in a way that simply was not possible with the PlayStation 4 Pro.
PS4 Pro implemented a hardware solution to reach 4K. It was a commendable effort, but software CBR implementations were soon more performant or flexible, with advantages in temporal aspects, and the whole thing eventually got left behind for superior techniques. Yes, on the PS5 and all other consoles there are software solutions used to present anti-aliased 4K output from lower native resolutions, but far too often the end results are not quite there - particularly when working from too low a base resolution. The results are sometimes just messy. The more the gen goes on and we move away from effectively super-charged last-gen games, the more these kinds of problems become apparent.

So we're already where things need to be; last generation that wasn't the case.
Nah, nowhere near. You, with your 4090 and DLSS 3.x, might be where you need to be, but console gamers are certainly not. PS5 Pro addresses this issue thanks to PSSR - another bespoke hardware-assisted solution from Sony, along with some other nice upgrades.

I mean, do you use FSR2 as your default upscaling solution when playing games on your PC? Heh, like fuck you do, of course not ;-) Would you be fine if DLSS was taken away from you today and you were left with FSR2 as your best case solution? Doubt! In all honesty, your pontificating suggests you're actually thinking I'm where I need to be and console gamers need to know their place stuck with their shitty software-based solutions, lol... Kinda master race as it gets that, ngl.

That's why I'm kind of puzzled as to why they're really making it in the first place; I don't see a function where it's actually necessary or serving a new purpose.
Well duuuh, you'd have us believe you take time away from games on your 4090, working in Blender and inside that sweet, sweet 5K HMD, to share your wisdom here, so naturally you don't see why it's necessary or serves a purpose, because you have nothing upon which to base anything worthwhile or relevant. If you played current gen games on console you'd know that current software solutions for upscaling and anti-aliasing are increasingly not good enough; if you had a Series S like myself you'd wish eye-bleach was a thing, because that's just how offensive some of this shit looks. Going into the 2nd half of the gen, Sony has this console to address that kind of crap and a few other things, and is using it as a platform upon which to work towards PS6 and beyond, where AI/ML will play an increasingly important role in offering better performance.

IIRC the CPU didn't change on the PS4 Pro, right?

Is that an ease of development thing?
It got a 33% upclock. There are lots of reasons given in the DF video that make sense for why it's just 10% this time out: power, heat/cooling, die space, existing architecture, etc. Alex talks about it at the end in very PC terms but misses the point entirely - the point of a Pro console is to maximise presenting the exact same games the best way possible, rather than being an entire system upgrade; that's what generations are for, not mid-gen refreshes. It's the same reason Xbox didn't need a hugely more powerful CPU in the Series X over the Series S's CPU - just a few hundred MHz - when the GPU was 3 times as powerful... they're both running the same games, just working to present them differently. DF's whole approach to this aspect here was far less forgiving than their assessment of S & X around launch... 🤔
 
Last edited:
  • they're_right_you_know
Reactions: xollowsob

Nhomnhom

Veteran
25 Mar 2023
7,205
9,792
Or.... Hear me out... They are not bumping the CPU, for cost reasons, and because a massive bump would also compromise development on the base console. Just a thought.
That is a much more reasonable and obvious explanation than making stuff up.

Assuming Cerny is the guy behind the PS5 Pro I'll put my money on him making the right decisions since this isn't his first time doing a pro console.
 

ksdixon

Dixon Cider Ltd.
Icon Extra
22 Jun 2022
1,729
1,099
For those of us who are not technically minded, what are some layman's explanations of how the PS5 and Pro compare? E.g. "the Pro will cold load a game faster, shaving off nth seconds" or "download/install --> play time is quicker; it'll handle a game of this size going through that process in this amount of time".
 

Satoru

Limitless
Founder
20 Jun 2022
6,800
10,250
One thing I'm finding... curious about these leaks is that they keep stating that the card has 30 WGP with a total of 60 CU.

To understand why I find this curious, I want to point out something about RDNA architecture.
  • Work Group Processors (WGP) are comprised of two Compute Units (CU) each
  • You then have Shader Engines, each with a set amount of WGPs
Take the PS5, for example. It's based on the Navi 10 / 22 Graphics Compute Die (GCD), with a floorplan comprised of
  • 20 WGP / 40 CU - 2 WGP are disabled for yields
  • 2 Shader Engines, each with 10 WGP
[Image: Navi 10 block diagram]
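To make the WGP/CU arithmetic explicit, here's a tiny Python sketch (my own shorthand, not AMD terminology):

Code:
# Each WGP contains 2 Compute Units; a shader engine contains a set number of WGPs.
CU_PER_WGP = 2

def active_cus(shader_engines, wgp_per_engine, disabled_wgp=0):
    # Enabled CUs = (physical WGPs minus WGPs fused off for yields) * 2
    return (shader_engines * wgp_per_engine - disabled_wgp) * CU_PER_WGP

print(active_cus(2, 10))                  # Navi 10 full die: 40 CU
print(active_cus(2, 10, disabled_wgp=2))  # PS5: 2 WGP disabled -> 36 active CU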


Now when looking at RDNA3 floorplans, we only have ONE GCD that they could possibly be using if keeping with AMD's standards, the Navi 32.
  • 30 WGP / 60CU
  • 3 Shader Engines, each with 10 WGP
Why is this relevant? Well, because if accurate, either Sony decided not to accept any APUs with borked WGP, which would be a moronic decision since it would severely impact yields, or their GPU is custom in that it features 30 WGP with somehow more compute units than they should be allowed - which would make no sense.

That said, there are two possible combinations I can see here - it will have either 28 or 27 enabled WGP, for a total of anything between 54 and 56 CUs. Funny enough, there is a card that matches the 54 CU number perfectly, the Radeon RX 7700 XT.

Now we know the compute unit targets, what else? Well, we do not know the clockspeed, but we do know the supposed teraflop numbers (dual issue): 67 TF fp16 / 33.5 TF fp32. Like for like with the standard PS5 (which has no dual issue), that puts us at around 16.75 TF. This now allows us to calculate the actual clockspeed of the GPU:

Compute Units x Shaders per CU x Clockspeed (in THz) x 2 = Teraflops
  • 54 x 64 x ? x 2 = 16.75 | ? = 16.75 / (54 x 64 x 2) | ? ≈ 0.00242 THz, or 2.42 GHz
  • 56 x 64 x ? x 2 = 16.75 | ? = 16.75 / (56 x 64 x 2) | ? ≈ 0.00234 THz, or 2.34 GHz
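The same back-solve as a quick Python sketch (just my arithmetic on the leaked figures, nothing official):

Code:
# TFLOPs = CUs * 64 shaders per CU * 2 FLOPs per clock * clock in THz
def clock_ghz(tflops, cus):
    return tflops / (cus * 64 * 2) * 1000  # convert THz to GHz

for cus in (54, 56):
    print(cus, "CU ->", round(clock_ghz(16.75, cus), 2), "GHz")
# 54 CU -> 2.42 GHz, 56 CU -> 2.34 GHz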
My guess here could be completely wrong, but I don't see the math in this table below being correct, either TF wise or GPU speed wise:

[Image: leaked spec table]


Funny enough, when "Oberon" was announced people were talking about how the PS5 was 9.2TF because they assumed the console would be 36 CU based on the Navi 10 nomenclature. What's even funnier is that if you calculate teraflops based on the PS5 "Oberon" having 40 CU you get...

40 x 64 x 2GHz x 2 = 10.24 TF, which is incredibly close to the final figure we got from Sony.
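Same formula as a one-line sanity check (again, just my arithmetic):

Code:
print(40 * 64 * 2.0 * 2 / 1000)  # 40 CU at 2 GHz -> 10.24 TF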

My personal bet goes to this console having 54 enabled Compute Units running at 2.42 GHz.

Edit - I also find it funny that the "expert" Kepler_l2 mentioned that the card's full config has 64 CU organised differently from what we'd expect from RDNA 2 or 3. To put things in perspective, each shader engine usually has 10 WGP; this would require 2 of them to have 16 WGP each. That would be a terrible engineering decision considering how caches are laid out in RDNA cards - and unless Cerny found some gold somewhere, this would be antithetical to his efficient designs.



However, he then says that 300 TOPS suggests a clockspeed of 2.45 GHz (pure coincidence that this pretty much matches my math).



So which is it? I will keep on calling BS on the 2 shader engine configuration, since there's no RDNA 3 card with it other than the tiny Navi 33.

Edit - If anyone asks, one of the main issues with the Series X hardware architecture is that the shader arrays are simply "too long" for the caches they have - there are efficiency losses the further away a compute unit is from its L1 cache. The PS5 has 4 shader arrays with 10 CU each, while the Series X has 4 shader arrays with 12 CU each, leading to further efficiency losses. I can only begin to imagine 4 shader arrays with 16 compute units each and how dumb that would be.

I'm perfectly OK if I'm wrong, but I'm very curious about the configuration itself, much more than tittyflops.

[Image: RDNA 2 shader array layout]


Edit again - Someone shared this image, apparently from AMD, a while back, and if it's real, it gives credence to my theory of a 7700 XT-like / 54 CU machine:

[Image: purported AMD slide]
 
Last edited: