Unreal Engine 5.1 now 60fps on Next Gen Console with Lumen, Nanite, and Virtual Shadow Maps

Deleted member 13 (Guest)
But 95% of games are cross-generation, so if anything, last generation would be holding back this generation, and yet it's not. GOWR was mentioned earlier as being held back, but we'll see what happens with Spider-Man 2. I'm not expecting a revolutionary game. Will it be amazing? Absolutely, but I think people are expecting games to be revolutionary when in reality everyone should be expecting games to be an evolution. Same games as last generation, just better across the board.

We're not even close to the middle. The generation is going to run eight years minimum, so we're only 25% of the way through. We have a long way to go. I personally have no issues thus far with this generation. Excluding two games, the other 23 on my list have been or will be 60fps, I get ultra-fast loading times, and performance is just better overall compared to last generation. Two years in, this generation has already surpassed last generation for me, and the rest of the generation would have to be complete shit for that to change.
Agreed.

We need to get away from the myth that consoles can perform like high-end GPUs and the developers just haven't had time to figure it out yet. It's just not realistic. You aren't getting 3090/4090 performance with all the graphics options you want (native 4K, 60fps, DLSS, RT GI, RT shadows, RT ambient occlusion, 16x anisotropic texture filtering, Nanite geometry, full-blown hair with self-shadowing, virtual texturing of 4K texture maps, etc.) for $500. Sorry, the world doesn't work that way (high-quality things for cheap). You want a 2-carat diamond ring for $1,000 instead of $10,000. Not going to happen.
 
Deleted member 13 (Guest)
This new scalability level is detailed in the UE 5.1 docs, and the docs say exactly what I quoted.


That is why you need to read more than just the release notes… the new scalability mode is called High.



I’m really curious to see a comparison between Epic and High to understand what they mean by it not yet being of “acceptable quality”.
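If anyone wants to eyeball that comparison themselves, one way would be to flip the global illumination scalability group at runtime and capture the same view at each level. This is only a hypothetical helper I'm sketching, assuming the standard sg.* scalability console variables where 2 = High and 3 = Epic; it is not something taken from the 5.1 docs.

```cpp
// Hypothetical A/B helper for comparing Lumen at High vs. Epic scalability.
// Assumes the standard sg.GlobalIlluminationQuality cvar (2 = High, 3 = Epic).
#include "HAL/IConsoleManager.h"

static void SetGIQualityForComparison(int32 QualityLevel)
{
    if (IConsoleVariable* CVar =
            IConsoleManager::Get().FindConsoleVariable(TEXT("sg.GlobalIlluminationQuality")))
    {
        // Switch the quality level, then screenshot the same camera view at each setting.
        CVar->Set(QualityLevel, ECVF_SetByConsole);
    }
}
```

Typing sg.GlobalIlluminationQuality 2 and then 3 in the in-game console should do the same thing without any code.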

Edit - I just got a reply on Beyond3D from Alex…
I really hate it when I say these things and get super duper pushback for no apparent reason other than people not liking what I have to say.

I have continuously said that rendering is all about approximating real-life lighting computations. And to do that, you need to take samples around your environment. You can't possibly come up with an analytical solution to the rendering problem (i.e., a verified closed-form function that solves the equation exactly). You HAVE to take samples.

The big difference between film and games is that film takes far more samples. The more samples you take, the more realistic and genuine the image is going to be. People brush off the difference between 4x anisotropic filtering and 16x anisotropic filtering, for example, but the visual difference is jarring. You are essentially taking only 4 samples from a texture instead of 16 and getting a much less "accurate" result.

All Lumen is doing is taking fewer samples until the desired framerate is achieved. It WILL affect how good the image looks. There is NO arguing with that.

Back to the film vs. games comparison. Just imagine taking so many more samples than the fastest realtime graphics card can handle that you aren't even at 1 frame per second anymore. You are at several minutes or hours per frame. That's film. So when people love shouting "film is different from gaming", I laugh at that, because they are clueless about the fact that both sides are doing the same thing.
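To make the sample-count point concrete, here is a minimal Monte Carlo sketch; it has nothing to do with Lumen's actual code, and the scene and function names are invented for illustration. It estimates diffuse irradiance under a made-up analytic sky with different sample counts: few samples are cheap but noisy, many samples converge on the exact answer but cost proportionally more time, which is exactly the realtime vs. film trade-off described above.

```cpp
// Minimal illustration: the same Monte Carlo estimator run with different sample counts.
// The "scene" is a hypothetical analytic sky that gets brighter toward the zenith.
#include <cstdio>
#include <random>

constexpr double kPi = 3.14159265358979323846;

// Hypothetical incoming radiance as a function of the cosine of the zenith angle.
double SkyRadiance(double cosTheta) { return cosTheta; }

// Estimate diffuse irradiance with N uniform hemisphere samples.
// More samples -> lower variance (less noise), but proportionally more work.
double EstimateIrradiance(int numSamples, std::mt19937& rng)
{
    std::uniform_real_distribution<double> uniform(0.0, 1.0);
    double sum = 0.0;
    for (int i = 0; i < numSamples; ++i)
    {
        // Uniform hemisphere sampling: cosTheta ~ U(0,1), pdf = 1 / (2*pi).
        const double cosTheta = uniform(rng);
        sum += SkyRadiance(cosTheta) * cosTheta * (2.0 * kPi);
    }
    return sum / numSamples; // exact answer for this sky is 2*pi/3 ~= 2.0944
}

int main()
{
    std::mt19937 rng(1234);
    const int sampleCounts[] = { 4, 16, 256, 65536 };
    for (int n : sampleCounts)
        std::printf("%6d samples -> %.4f\n", n, EstimateIrradiance(n, rng));
}
```

The error of an estimator like this shrinks roughly with the square root of the sample count, which is why a film renderer can spend minutes or hours per frame driving the noise down while a realtime renderer has to make do with a handful of samples plus reuse and denoising.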
 

ethomaz
GDDR6 has double the transfer rate of GDDR5, lower power draw, better latency, faster access times...

Bandwidth isn't the end all be all of memory.
What are you talking about?

The doubled transfer rate is due to the per-pin rate increasing to 16 Gb/s, which is achievable using two pins in GDDR5.

In fact there is no difference between 224 GB/s of GDDR6 and 224 GB/s of GDDR5, except that GDDR6 is using half the pins, which translates into fewer modules (aka cheaper)… ohhh, look what I just said…

Latency is decreased thanks to the higher clocks, but that only helps if you are actually running at higher clocks… which is not exactly the case for the Series S… plus latency doesn't matter much for the kind of parallel processing a GPU does (it is more critical for out-of-order CPU processing), and the GDDR6 in the Series S is slow (aka cheaper), so it doesn't have that much lower latency (in fact it probably has higher latency than high-clocked GDDR5).

There are no faster access times if the final bandwidth is the same… the access time will be the same too.

There are two reasons to use GDDR6, and none of the things you guys are trying to make up about it being "better" is one of them.

Either you want to go cheaper, since GDDR6 allows the same speed at a lower price than GDDR5, or you want more bandwidth, since the peak speeds of GDDR6 are higher than the peak speeds of GDDR5.

So in high-end products you use high-speed, expensive modules to achieve what GDDR5 can't, and in low-end products you use fewer modules at lower speeds to achieve the same as GDDR5 at a lower price.
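To put numbers on that, the peak-bandwidth arithmetic is just bus width times per-pin data rate. Here is a tiny sketch with illustrative figures I am assuming rather than anything quoted above: 14 Gb/s GDDR6 on a 128-bit bus (roughly the Series S configuration) versus 7 Gb/s GDDR5 on a 256-bit bus.

```cpp
// Peak bandwidth in GB/s = (bus width in bits / 8) * per-pin data rate in Gb/s.
// The configurations below are illustrative, assuming 32-bit-wide memory chips.
#include <cstdio>

double PeakBandwidthGBs(int busWidthBits, double gbpsPerPin)
{
    return (busWidthBits / 8.0) * gbpsPerPin;
}

int main()
{
    // GDDR6 at 14 Gb/s per pin on a 128-bit bus (4 chips) -> 224 GB/s.
    std::printf("GDDR6 128-bit @ 14 Gb/s: %.0f GB/s\n", PeakBandwidthGBs(128, 14.0));

    // GDDR5 at 7 Gb/s per pin needs a 256-bit bus (8 chips) to hit the same 224 GB/s.
    std::printf("GDDR5 256-bit @  7 Gb/s: %.0f GB/s\n", PeakBandwidthGBs(256, 7.0));
}
```

Same 224 GB/s either way; the GDDR6 setup just gets there with half the bus width, i.e. fewer chips and traces, which is the cost argument above.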
 
peter42O (Guest)
Agreed.

We need to get away from the myth that consoles can perform like high-end GPUs and the developers just haven't had time to figure it out yet. […]

Yep. I agree with you in return. If anything, people need to keep their expectations in check.