[OT] : Next-next gen console/PC predictions

  • Thread starter Deleted member 13

Deleted member 13

Guest
I'd be disappointed if the PS6 and Series X2 couldn't beat a 3080 Ti in overall performance. AMD's been seeing really good gains with each RDNA gen, we'll have very mature 3nm and 2nm (at the least) by 2028, and AMD will probably be fully in on chiplet GPUs (same with Nvidia and Intel). The only bottleneck could be the RAM: GDDR's gotta go; HBM is the future and HBM-PIM is THE future, at least for embedded fixed-configuration systems, IMHO.
You aren't considering a timeframe to develop. AMD still hasn't released a graphics card that can compete with Nvidia on RT. Assume that their leading tech will be with a standalone GPU first before developing console hardware. Where is the RT tech that competes with Nvidia as of today? When do you think they will have a piece of hardware that can compete and when will it be released to the public? When do you think the clock starts on a PS6 development? Today? Next year? What will be out at that time?
 

Killer_Sakoman

Veteran
21 Jun 2022
2,104
2,015
You aren't considering a timeframe to develop. AMD still hasn't released a graphics card that can compete with Nvidia on RT. Assume that their leading tech will be with a standalone GPU first before developing console hardware. Where is the RT tech that competes with Nvidia as of today? When do you think they will have a piece of hardware that can compete and when will it be released to the public? When do you think the clock starts on a PS6 development? Today? Next year? What will be out at that time?
Honestly, this thread is 4 years ahead of itself. Assuming PS6 development started even before the PS5 released, we're already talking 6 to 7 years of development time. We don't know what AMD is cooking, and we don't know if Sony/Microsoft will stick with AMD.
Also, the world might be completely nuked by 2023 😂
 

Deleted member 13

Guest
Honestly, this thread is 4 years ahead of itself. Assuming PS6 development started even before the PS5 released, we're already talking 6 to 7 years of development time. We don't know what AMD is cooking, and we don't know if Sony/Microsoft will stick with AMD.
Also, the world might be completely nuked by 2023 😂
That's my point though. I believe whatever AMD makes would be released on PC first before consoles so we would have a general idea. I also agree that they are probably already constructing it now based on the tech that is available today.
 
24 Jun 2022
3,982
6,954
You aren't considering a timeframe to develop. AMD still hasn't released a graphics card that can compete with Nvidia on RT. Assume that their leading tech will be with a standalone GPU first before developing console hardware. Where is the RT tech that competes with Nvidia as of today? When do you think they will have a piece of hardware that can compete and when will it be released to the public? When do you think the clock starts on a PS6 development? Today? Next year? What will be out at that time?

Well, truth be told, dev work (at least early planning and conceptual ideas) for the next PlayStation and Xbox has already started; it probably began in early 2021, or maybe even in the back half of 2020 before the current consoles released. But with the PC-like nature of consoles today, they have a lot more time afforded to them before finalizing specs and design elements, while still having enough time to build, prototype, test, certify, and bring to market. And I'm not even of the mind that the 10th-gen consoles will come anytime soon; they're probably 2028 at the earliest.

AMD don't have Nvidia money, but they have Sony and Microsoft as technology partners, and between the three of them I'm more than sure they can have RT at Nvidia's level (or better) by the time the 10th-gen consoles are ready. Intel are giving it a shot; the Arc line seems like a wash, but Battlemage might put up a serious fight against Nvidia and AMD if the rumors and current speculation hold water. Sony & Microsoft are constantly investing in patented tech solutions, including RT, and we've come across a decent few of Sony's in that regard, as well as ML-related patents.

Right now there's lots of good word on RDNA3; the general belief is that it'll massively close the RT gap between AMD and Nvidia, on a more efficient architecture. And that's with GPUs due at the end of this year; over the next six or so years, AMD ought, at the very least, to have RT capabilities that compete with and easily surpass what the RTX 30 series provides today. The real question is whether what they have by the end of the decade can compete with what Nvidia have by the end of the decade, not just in RT but also in DLSS-like features.

That's a question way more open-ended IMO, and even if AMD have such answers in their top-end GPUs by then, the 10th-gen consoles likely won't have them at that scale due to their design and the constraints they need to consider, as you've mentioned a few times earlier. But the other question, if a PS6 & Series X2 will have hardware capabilities besting a 3080 Ti? I think that's an easy "yes".

That's my point though. I believe whatever AMD makes would be released on PC first before consoles so we would have a general idea. I also agree that they are probably already constructing it now based on the tech that is available today.

Well for sure they're designing them based on today's tech, but a lot of that will be placeholder stuff that can be switched out or upgraded as better tech comes along. IIRC early PS5 devkits were using Vega GPU cards; some time before that even earlier devkits were probably using 1080s or what have you.

Alongside that, "today's tech" could run a big gamut of stuff, depending on what areas of the tech market are being looked at. When Sony were designing the PS5's SSD sub-system and features, they were probably looking at SSD system setups in data centers just as much as they were looking at consumer SSDs on the market at the time, especially if that aspect of that system was being fleshed out years in advance.

Stuff like DDR-PIM and HBM2E-PIM memories aren't in any consumer devices today; neither is CXL 3.0 or OpenCapi OPI-XRS. But there's no reason Sony and Microsoft can't be looking at that tech right now and taking it into consideration (alongside AMD) when building 10th-gen systems.
 

Deleted member 13

Guest
That all sounds well and good, but history is telling you exactly the opposite.

Your specific predictions were way overboard this generation (along with most people's on GAF). Hardly anyone thought the PS5/XSX would have limited RT (doing BVH traversal on the shader cores rather than on fully dedicated RT cores) and no AI tensor cores (the latest tech on the block at the time). And you are doing it again for the PS6. :)

My take is that there is only so much advanced tech you can put into a $500 box. If MS/Sony can make a box that bests the 3080, it won't be 'easy'. The performance gap between the 2080 and 3080 is significant, easily one of the biggest generational jumps we've seen from Nvidia. That's why I think it'll be on par or, at most, a little better. It certainly won't come within 4x00-series territory even if the consoles are 6 years away. I also think Nvidia will slow their cadence to probably a new board every 3 years, which puts us at the 5x00 series when the new consoles come out. I believe the gap between PC and consoles will widen until consoles are totally abandoned and cloud gaming takes center stage.
 

Sircaw

Pro Flounder
Moderating
20 Jun 2022
6,952
12,206
The only thing that worries me about next-gen consoles is the power requirements. I can't remember exactly, but someone posted an Nvidia graphics card that was an absolute monster in size, like a frigging bus.
 

Killer_Sakoman

Veteran
That's my point though. I believe whatever AMD makes would be released on PC first before consoles so we would have a general idea. I also agree that they are probably already constructing it now based on the tech that is available today.
You are assuming Sony and Microsoft will stick with AMD. You are also assuming the next-gen consoles will be based on a 2020 GPU. If both consoles release in 2028 and are still with AMD, then they will be based on whatever AMD offers in 2027 or early 2028, just like this gen. I believe you are judging too early, without knowing what AMD or others are going to offer in the near future. Come late 2025 or early 2026 and I might put more trust in your speculation.
 

mansoor1980

Well-known member
4 Jul 2022
285
432
As for the Nvidia 4000 series:
https://wccftech.com/chinese-gpu-su...40-graphics-cards-in-less-than-a-month-rumor/

 

Deleted member 13

Guest
You are assuming Sony and Microsoft will stick with AMD. You are also assuming the next-gen consoles will be based on a 2020 GPU. If both consoles release in 2028 and are still with AMD, then they will be based on whatever AMD offers in 2027 or early 2028, just like this gen. I believe you are judging too early, without knowing what AMD or others are going to offer in the near future. Come late 2025 or early 2026 and I might put more trust in your speculation.
No hardware company can quickly put a GPU chip on a board within one year of its release. Where are you getting 'just like this gen' from? AMD had been working on RDNA 2 for years.
 

Killer_Sakoman

Veteran
No hardware company can quickly put a GPU chip on a board within one year of its release. Where are you getting 'just like this gen' from? AMD had been working on RDNA 2 for years.
What? AMD released RDNA2 on both consoles and PC in the same year. No one said they did the R&D and developed the chip in one year. Though, that claim is more believable than claiming AMD will sell us a 2020 GPU in 2028.
 

Deleted member 13

Guest
What? AMD released RDNA2 on both consoles and PC in the same year.
I wasn't talking about release; I'm talking about the time from development start to release.
No one said they did the R&D and developed the chip in one year. Though, that claim is more believable than claiming AMD will sell us a 2020 GPU in 2028.
RDNA 2 boards are faster than Nvidia's 2x00 series boards, and the consoles are equivalent to 2x00 series boards. But Nvidia's 3x00 series boards significantly outperform the 2x00 series; that jump is more than 2x the uplift of Nvidia's typical generational release.

The big question for you is how that can happen. You misjudged this generation by a country mile, and yet you are still predicting 4x00-series performance for $500 (while the boards themselves will cost over $1,500 on PC) from a company that's lagging behind Nvidia by a significant margin in hardware tech like RT cores and Tensor cores with DLSS.
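As a sanity check on that gap, the paper numbers can be sketched from Nvidia's published shader counts and boost clocks (theoretical peaks only, not game performance; figures assumed from the public spec sheets):

```python
# Theoretical FP32 throughput in TFLOPS: cores x 2 ops per clock (FMA) x clock.
def tflops(cores: int, boost_ghz: float) -> float:
    return cores * 2 * boost_ghz / 1000

rtx_2080 = tflops(2944, 1.71)   # Turing
rtx_3080 = tflops(8704, 1.71)   # Ampere (dual-issue FP32 counted)

print(f"2080: {rtx_2080:.1f} TF, 3080: {rtx_3080:.1f} TF, "
      f"ratio: {rtx_3080 / rtx_2080:.1f}x")
# 2080: 10.1 TF, 3080: 29.8 TF, ratio: 3.0x
```

On paper that's nearly a 3x jump in one generation, though Ampere's dual-issue FP32 inflates the TF figure relative to real-game gains.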
 
24 Jun 2022
3,982
6,954
That all sounds well and good, but history is telling you exactly the opposite.

Your specific predictions were way overboard this generation (along with most people's on GAF). Hardly anyone thought the PS5/XSX would have limited RT (doing BVH traversal on the shader cores rather than on fully dedicated RT cores) and no AI tensor cores (the latest tech on the block at the time). And you are doing it again for the PS6. :)

So for whatever reason, you think the PS6 won't have dedicated RT cores in its GPU? You think it won't have dedicated ML logic? You think it won't be able to outdo the 3080 Ti in pixel or texture fillrate? Geometry culling rate? A PS6 in 2028 won't be able to beat a near-top-end Nvidia GPU released in 2022, a full six years earlier, in these areas (and likely others)? That doesn't make any sense.

I don't even remember making any speculation on PS5 and Series X specs before they started getting leaked and revealed. The 10th-gen stuff I posted a year or so back, yeah, some of that was wild. Even some of it from half a year ago was wild. But I was learning a lot along the way.

Anyway, my speculation this time around isn't on hard-set specs (not really) but on technology features. If the consoles have PIM & PNM-based architectures, shift to HBM-based memories with CPU/GPU designs that fit those architectures, and are chiplet-based on (by that point) mature 2nm or even 3nm processes, they should smoke a 3080 Ti with relative ease in pretty much every measurable gaming-related metric. Maybe they don't 'clearly' outspec it in TF, but that would be the only miss, and I'd be really surprised if they failed to manage even that.

My take is that there is only so much advanced tech you can put into a $500 box. If MS/Sony can make a box that bests the 3080, it won't be 'easy'. The performance gap between the 2080 and 3080 is significant, easily one of the biggest generational jumps we've seen from Nvidia. That's why I think it'll be on par or, at most, a little better. It certainly won't come within 4x00-series territory even if the consoles are 6 years away. I also think Nvidia will slow their cadence to probably a new board every 3 years, which puts us at the 5x00 series when the new consoles come out. I believe the gap between PC and consoles will widen until consoles are totally abandoned and cloud gaming takes center stage.

That's... a hell of a take :S. I don't agree with it; I think it's way too pessimistic on consoles, and what you're describing won't really happen until the 10th gen is well underway. Remember, it's not just about whether the tech for cloud gaming is there: ISPs have to play ball, and who knows whether another economic recession down the line makes the cost of internet access (and therefore cloud gaming) unappealing.

Basically, I don't think 10th-gen consoles have to beat PC GPUs at raw GPU perf to close the gap with (or perform above what mere paper specs would suggest against) the top-end GPUs on the market today or over the next year or two. Remember, "arithmetic is free", and embedded systems like consoles can address data movement and locality in ways PCs can't. That's what 10th-gen consoles will hone in on, or at least should hone in on.

That and some basic level VR as standard in the box.
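As a rough illustration of the "arithmetic is free" point above, here's a sketch using the 45nm-era energy ballparks often cited from Horowitz's ISSCC 2014 keynote (order-of-magnitude figures only; absolute numbers shift with process node, but the imbalance doesn't):

```python
# Approximate energy costs in picojoules (45nm-era, order of magnitude).
FP32_MULT_PJ = 3.7        # one 32-bit floating-point multiply
DRAM_READ_32BIT_PJ = 640  # fetching one 32-bit word from off-chip DRAM

ratio = DRAM_READ_32BIT_PJ / FP32_MULT_PJ
print(f"One off-chip DRAM fetch costs roughly {ratio:.0f}x a multiply")
```

That roughly 170x imbalance is why PIM/PNM designs win by moving data less, not by adding more FLOPs.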
 

Deleted member 13

Guest
So for whatever reason, you think the PS6 won't have dedicated RT cores in its GPU? You think it won't have dedicated ML logic? You think it won't be able to outdo the 3080 Ti in pixel or texture fillrate? Geometry culling rate? A PS6 in 2028 won't be able to beat a near-top-end Nvidia GPU released in 2022, a full six years earlier, in these areas (and likely others)? That doesn't make any sense.
No, no. You've got me all wrong, my friend. I'm saying that a PS6 *will* have those things, since the 3x00 series boards have them. I'm saying that AMD is still a generation behind the 3x00 series boards with respect to hardware RT and DLSS-style tensor cores. Their top-of-the-line board doesn't even have the hardware that Nvidia's top-of-the-line boards have, and we're already 2 years into the new generation. However, I believe they will finally 'get it right' in time for their next console.

What I am ALSO saying is that it will probably be just that. The 3x00 series boards are more powerful than the 2x00 boards by a wide margin; it's not the typical generational speed-up we've seen in the past, so I count that bump as two generations instead of one. I don't believe the PS6 will have the power (or bandwidth) of the 4x00 series cards while being engineered into a small $500 unit by a company like AMD, who are known to be behind Nvidia on design.

That's pretty much my argument in a nutshell. I could totally be wrong and AMD might come up with some secret sauce that Nvidia never thought about, but if I were a betting man, I would bet that Nvidia will keep their advantage in innovation and tech.
 
24 Jun 2022
3,982
6,954
No, no. You've got me all wrong, my friend. I'm saying that a PS6 *will* have those things, since the 3x00 series boards have them. I'm saying that AMD is still a generation behind the 3x00 series boards with respect to hardware RT and DLSS-style tensor cores. Their top-of-the-line board doesn't even have the hardware that Nvidia's top-of-the-line boards have, and we're already 2 years into the new generation. However, I believe they will finally 'get it right' in time for their next console.

What I am ALSO saying is that it will probably be just that. The 3x00 series boards are more powerful than the 2x00 boards by a wide margin; it's not the typical generational speed-up we've seen in the past, so I count that bump as two generations instead of one. I don't believe the PS6 will have the power (or bandwidth) of the 4x00 series cards while being engineered into a small $500 unit by a company like AMD, who are known to be behind Nvidia on design.

That's pretty much my argument in a nutshell. I could totally be wrong and AMD might come up with some secret sauce that Nvidia never thought about, but if I were a betting man, I would bet that Nvidia will keep their advantage in innovation and tech.

I wouldn't say the RTX 30 series is significantly more powerful than the RTX 20 series across the board, because pixel fillrate and texture fillrate didn't go up that much. Really, the improvements came in Tensor cores for ML, dedicated RT, and Nvidia changing the ALUs to favor compute. That's why, in real-world gaming comparisons, a 3080 doesn't have the lead over a 2080 (in non-RT scenarios) that the TF numbers would have you believe.
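To put rough numbers on that, here's a sketch using the publicly listed ROP/TMU counts, with both cards simplified to the same 1.71 GHz boost clock:

```python
# Fillrates scale with ROP/TMU counts times clock, not with the headline TF number.
def fillrates(rops: int, tmus: int, clock_ghz: float):
    return rops * clock_ghz, tmus * clock_ghz   # (GPixel/s, GTexel/s)

px_2080, tx_2080 = fillrates(64, 184, 1.71)   # RTX 2080
px_3080, tx_3080 = fillrates(96, 272, 1.71)   # RTX 3080

print(f"pixel fill: {px_3080 / px_2080:.2f}x, texture fill: {tx_3080 / tx_2080:.2f}x")
# Roughly 1.5x in both, versus ~3x in paper TF, much closer to the real-game gap.
```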

As for AMD everything I've been hearing is that they've got something very efficient in RDNA3 and have their own Tensor core-type equivalents, are pushing bigger last-level caches, are taking a chiplet approach to the high-end cards and doing all of this plus notable TF gains and higher clocks at lower power draw than Nvidia. So if Nvidia take the perf crown again, it'll be because they're throwing away power efficiency to do so. That's at least going from what the leaks seem to be suggesting, anyway.

So from that, I get the impression AMD have more headroom to expand their GPU performance going forward, with far fewer roadblocks than Nvidia, who might have to radically change their architecture toward something chiplet-based (though they don't have much chiplet experience, unlike AMD and Intel). That might lead to Nvidia taking longer between the RTX 40x series and the 50x, which you've even suggested. So say the RTX 50x series doesn't come until 2025; by then we'll have RDNA4, and AMD will most likely be taking their A1000 enterprise-focused chiplet module design (parts of it, anyway) and using it as the basis for future RDNA GPUs regardless.

I wouldn't expect AMD to fall behind Nvidia by that point; maybe with the RTX 60x series Nvidia establishes a clear lead again, but by then would it even matter for consoles? I don't think you need that much more computing power to get photorealistic visuals, even with RT, and at some point the high-end GPUs six or seven years out will just be tailored for absolute power monsters and professionals: graphic designers, CG modelers & animators, video producers and the like. Basically, things far more focused on the creation (professional) side than the playing/watching/listening (consumer) side of entertainment.

Even if AMD's GPUs are behind whatever Nvidia has by that point, it won't matter for 10th-gen consoles. They'll have their pick of GPU tech good enough for a console, and even something lower-mid-range from AMD by then should, on paper at least, be on par with a 3080 Ti or even a 3090 Ti. And if the consoles do what I think they should, focusing on a PIM/PNM-based architecture for the memory (going with HBM), GPU, and CPU, plus a data sub-system handling data locality, movement, and decompression for all (or most) I/O devices in the system, then in practice they should deliver performance well ahead of a 3080 Ti or 3090 Ti, and ahead of at least the majority of the RTX 40x series, too.

Go look at what the top-end Nvidia GPU was when the PS4 and XBO came out: the 780 Ti. Almost 3x the PS4 in TF, nearly 2x its RAM bandwidth, way more ROPs and TMUs, etc. Yet the One X in 2017 beats it in raw TF count, is within striking distance on RAM bandwidth, and has way more RAM too. The only thing that really held it back in comparisons was the weak Jaguar CPU. Then you get to the PS5 & Series X and they just blow past the 780 Ti in most respects: at least 2x in TF, 2x-3x in pixel fillrate, over 50% more texture fillrate, well more geometry culling, etc., on a (theoretically; not all logic scales down to it in practical chips) 4x smaller node process.
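Those generational ratios check out roughly against the commonly quoted paper specs (the TF and memory-bandwidth figures below are the usual published ones; treat them as approximate):

```python
# (FP32 TF, memory bandwidth GB/s) for the hardware being compared.
specs = {
    "780 Ti":   (5.0, 336),
    "PS4":      (1.84, 176),
    "One X":    (6.0, 326),
    "PS5":      (10.28, 448),
    "Series X": (12.15, 560),
}

tf_780, bw_780 = specs["780 Ti"]
for name in ("PS4", "One X", "PS5", "Series X"):
    tf, bw = specs[name]
    print(f"{name}: {tf / tf_780:.2f}x the 780 Ti's TF, {bw / bw_780:.2f}x its bandwidth")
```

The 780 Ti starts out roughly 2.7x the PS4 in TF and 1.9x in bandwidth, and by the PS5/Series X the ratio has flipped to roughly 2x the 780 Ti in TF, matching the gains described above.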

So you take something like a 3090; account for 10th-gen systems being (or, again, absolutely should be) chiplet-based; give them PIM & PNM-based architectures with GPU & CPU designs integrated into them; switch to HBM (maybe HBM3-PIM); do something robust for data management with an active interposer; add support for CXL 3.0 devices & memories; put it on a 3nm or even 2nm process... and it's actually really easy to see the PS6 & Series X2 replicating a lot of the gains the PS5 & Series X have over the 780 Ti: at least doubling (or close to doubling) in TF, way more RAM bandwidth, better RAM (lower latency especially), much higher pixel fillrate, higher texture fillrate, much higher geometry culling, etc.

Again, I'll be shocked if this doesn't happen.
 

Deleted member 13

Guest
I wouldn't say the RTX 30 series is significantly more powerful than the RTX 20 series across the board, because pixel fillrate and texture fillrate didn't go up that much. Really, the improvements came in Tensor cores for ML, dedicated RT, and Nvidia changing the ALUs to favor compute.
Of course I'm talking about the RT and ML; that's going to take center stage. It's just that AMD wasn't ready for it. Even Nvidia's initial designs aren't powerful enough for a full RT pipeline (with low sample counts) targeting 60 FPS.

So from that, I get the impression AMD have more headroom to expand their GPU performance going forward, with far fewer roadblocks than Nvidia, who might have to radically change their architecture toward something chiplet-based (though they don't have much chiplet experience, unlike AMD and Intel). That might lead to Nvidia taking longer between the RTX 40x series and the 50x, which you've even suggested. So say the RTX 50x series doesn't come until 2025; by then we'll have RDNA4, and AMD will most likely be taking their A1000 enterprise-focused chiplet module design (parts of it, anyway) and using it as the basis for future RDNA GPUs regardless.
And that's where things get screwy with predictions. These chips take time to develop, and you are suggesting AMD will use designs from 2025 in consoles released 3 years later? I'm not buying that. Also, while all of these patents and rumors look great in articles, until something is actually released, they are vaporware. I also don't expect everything to follow a linear timeline. Life never works that way.

I wouldn't expect AMD to fall behind Nvidia by that point; maybe with the RTX 60x series Nvidia establishes a clear lead again, but by then would it even matter for consoles?
I don't see RTX 60x series boards being released before a PS6. That's way too fast a turnaround for such complicated processes. You are assuming an exact 2-year product cycle with nothing added for entropy in the world, and I just don't see that happening, even if that was the pattern in the past.

I don't think you need that much more computing power to get photorealistic visuals, even with RT, and at some point the high-end GPUs six or seven years out will just be tailored for absolute power monsters and professionals: graphic designers, CG modelers & animators, video producers and the like. Basically, things far more focused on the creation (professional) side than the playing/watching/listening (consumer) side of entertainment.
I can assure you that even an RTX 60x won't be enough to render today's CG visuals in real time. We have a LONG way to go for that. And I mean LONG.

Even if AMD's GPUs are behind whatever Nvidia has by that point, it won't matter for 10th-gen consoles. They'll have their pick of GPU tech good enough for a console, and even something lower-mid-range from AMD by then should, on paper at least, be on par with a 3080 Ti or even a 3090 Ti. And if the consoles do what I think they should, focusing on a PIM/PNM-based architecture for the memory (going with HBM), GPU, and CPU, plus a data sub-system handling data locality, movement, and decompression for all (or most) I/O devices in the system, then in practice they should deliver performance well ahead of a 3080 Ti or 3090 Ti, and ahead of at least the majority of the RTX 40x series, too.
So you guys use the 2-year GPU upgrade cycle to predict the consoles' power range. You assume that over a 6-year gap Nvidia will have put out 3 generations of boards, then take the previous-generation board and assume the consoles will use that, thereby staying only one generation behind, and you assume costs drop significantly for the previous gen whenever something new comes out. Essentially, when a 4x00 series board comes out, the 3x00 series should be discounted 50-70% simply because it's a new gen?

If that's the case, that's why most people were off on their predictions this generation. Nothing in life is that linear and steady. Every new generation is completely different in both timeline and scale from the previous one. Limits are reached with particular designs, costs can rise exponentially, and manufacturing isn't linear either.
 
24 Jun 2022
3,982
6,954
The only thing that worries me about next-gen consoles is the power requirements. I can't remember exactly, but someone posted an Nvidia graphics card that was an absolute monster in size, like a frigging bus.

Power requirements for 10th-gen systems should be in the ballpark of the current top-end consoles, maybe even lower, if some combination of chiplets, PIM memory, PNM & PIM design, and lower-power (but more numerous) CPU cores is utilized in their designs. Switching to HBM (HBM-PIM in particular) would absolutely help a ton there as well.

No hardware company can quickly put a GPU chip on a board within one year of its release. Where are you getting 'just like this gen' from? AMD had been working on RDNA 2 for years.

They've also been working on RDNA3 for years, though, as well as RDNA4.

Of course I'm talking about the RT and ML; that's going to take center stage. It's just that AMD wasn't ready for it. Even Nvidia's initial designs aren't powerful enough for a full RT pipeline (with low sample counts) targeting 60 FPS.

Not yet, but it also depends on what level of graphics fidelity we're talking about, and don't forget that game budgets also play a big role in how much of the hardware gets tapped. You could probably get 4K60 in Nvidia's Marbles demo on a 3090 if some texture and geometry settings were turned down; right now that same demo can do 4K48 on the same GPU paired with a 5950.
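For what it's worth, the 48-to-60 fps jump is smaller than it sounds once expressed in frame time; a quick sketch:

```python
# Frame budget in milliseconds for a given frame-rate target.
def frame_ms(fps: float) -> float:
    return 1000.0 / fps

current, target = frame_ms(48), frame_ms(60)
savings = 1 - target / current
print(f"{current:.1f} ms -> {target:.1f} ms per frame: shave off {savings:.0%}")
# 20.8 ms -> 16.7 ms per frame: shave off 20%
```

A roughly 20% cut in per-frame work is the kind of headroom that dialing back texture and geometry settings can plausibly buy.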


And that's where things get screwy with predictions. These chips take time to develop, and you are suggesting AMD will use designs from 2025 in consoles released 3 years later? I'm not buying that. Also, while all of these patents and rumors look great in articles, until something is actually released, they are vaporware. I also don't expect everything to follow a linear timeline. Life never works that way.

To the bolded? Yes. Because whatever GPU design AMD has for 2025 (or 2026) will have been in the works for years prior, and Sony & Microsoft would be aware of those designs on some level. Like someone said above, just look at the RDNA2 turnaround; the consoles released the same year those GPUs hit the market.

Even in a worst-case scenario, the 10th-gen consoles should be able to leverage their technological features and land somewhere in the high-mid to mid-high range, perf-wise, of whatever AMD GPUs are on the market by 2025 (or 2026, if RDNA5 is a bit later down the line).

I don't see RTX 60x series boards being released before a PS6. That's way too fast a turnaround for such complicated processes. You are assuming an exact 2-year product cycle with nothing added for entropy in the world, and I just don't see that happening, even if that was the pattern in the past.

It really comes down to timing for the RTX 40x series (we know those will start launching later this year) and the RTX 50x series (probably 2025 at the earliest). So in the worst-case scenario, the RTX 60x series would start releasing within the same year as the PS6 & Series X2 launch (assuming they're 2028).

That's even with considering entropic issues.

I can assure you that even an RTX 60x won't be enough to render today's CG visuals in real time. We have a LONG way to go for that. And I mean LONG.

Again, IMO it depends on what the graphical assets demand, plus game budget & development time. The RTX 60x series won't need to be powerful enough to do today's CGI in real time, because the top-end AAA games will have the budgets and time afforded to them for artistic liberties to be used alongside technical prowess to "fake" visual fidelity on par with today's movie CGI, in real-time visuals.

Especially from certain studios, I'm sure: Naughty Dog, Guerrilla Games, Sony Santa Monica, maybe The Coalition, Rockstar, etc. Ask someone back in 2013 if they ever thought something like TLOU2 would look as good as it does, or even run on a base PS4, and they'd probably have said no; yet it ended up happening.

So you guys use the 2-year GPU upgrade cycle to predict the consoles' power range. You assume that over a 6-year gap Nvidia will have put out 3 generations of boards, then take the previous-generation board and assume the consoles will use that, thereby staying only one generation behind, and you assume costs drop significantly for the previous gen whenever something new comes out. Essentially, when a 4x00 series board comes out, the 3x00 series should be discounted 50-70% simply because it's a new gen?

Well from what I understand, GPU manufacturers like AMD and Nvidia have obscene profit margins on their GPUs, which makes sense. We never actually learn what the actual production costs for the GPUs are, but we can take guesses.

We also know that the pricing of some components, like RAM, are easier to figure out and have a certain ebb & flow, and general curve to price reduction over time depending on the type of RAM, the capacity, etc. I'm willing to admit that the 2-year cadence could end up a 3-year cadence for certain GPU generations going forward, but yes, I generally expect the 10th-gen consoles would be no further than 1 generation behind in terms of certain specific GPU features.

They won't be able to compete with the top end or maybe even the high mid-end of whatever AMD or Nvidia GPUs are out on the market by the time they launch, but they should be able to compete with whatever's mid-low to high-low on the market from those companies by that point. Maybe low-low in the worst-case scenario.

Which, relatively speaking, should put them around the upper low-end range of the GPUs that were on the market the generation prior to their release. And that's before considering other design features of 10th-gen systems that could be capitalized on (HBM-PIM memory, PIM & PNM-based architectures, more advanced data-flow & locality management for I/O in the systems, etc.) that may not have the same presence in the PC space, helping console performance that much further.

If that's the case, that's why most people were off on their predictions this generation. Nothing in life is that linear and steady. Every new generation is completely different in both timeline and scale from the previous one. Limits are reached with particular designs, costs can rise exponentially, and manufacturing isn't linear either.

But a lot of the things I'm suggesting 10th-gen consoles should consider design-wise aren't based on following rigid, inflexible patterns. In fact, a lot of this tech specifically addresses the remaining bottlenecks in closed, embedded systems, tightening performance and offering leniency so that some of the dependencies leaned on too heavily this gen (small-node availability among them) can be reduced for the next console generation.

I genuinely think at least some of these technological features have a very strong shot at making it into at least one company's 10th-gen systems. Take HBM, for example: Microsoft are already on record stating they considered it for the Series X and S, but JEDEC took too long for approval (I'm assuming for standardization of HBM2E?), though that could've been cover for the associated costs, or maybe any HBM talk was in regard to Series X units in Azure server clusters.

But due to those complications, they eventually settled on GDDR6. I'm sure Sony also considered HBM-based memories in some form for the PS5 but, for their own reasons, also settled on GDDR6. So I wouldn't be surprised if one or both of them chose HBM as the memory standard for their 10th-gen platforms; and if they're willing to do that, they might as well consider some HBM-PIM-type memory for even better performance at only slightly higher cost, which could be balanced out by savings elsewhere in the system.
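For a rough sense of why HBM keeps coming up in this thread, peak memory bandwidth is just bus width times per-pin data rate. The GDDR6 configs below are the shipping PS5/Series X setups; the HBM2E line is a hypothetical two-stack layout for illustration:

```python
# Peak bandwidth in GB/s: (bus width in bits / 8) x per-pin rate in Gbps.
def bandwidth_gbs(bus_bits: int, gbps_per_pin: float) -> float:
    return bus_bits / 8 * gbps_per_pin

ps5      = bandwidth_gbs(256, 14.0)        # GDDR6, 256-bit @ 14 Gbps
series_x = bandwidth_gbs(320, 14.0)        # GDDR6, 320-bit @ 14 Gbps (fast pool)
hbm2e_x2 = bandwidth_gbs(2 * 1024, 3.2)    # hypothetical 2x HBM2E stacks

print(f"{ps5:.0f} {series_x:.0f} {hbm2e_x2:.1f}")   # 448 560 819.2
```

Two HBM2E stacks at a modest 3.2 Gbps pin rate would already clear both consoles' GDDR6 setups, while driving far fewer bits off-package, which is where the power savings come from.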
 

Deleted member 13

Guest
We'll see. I'm willing to put money on generations not being linear. I would be very surprised to see a PS6 outperform today's 3090, let alone rival a 4090. Like I said, the 2x00 series boards are significantly less powerful than the 3x00 boards. The only way I see a PS6 performing at a 4x00 or 5x00 level is if Nvidia/AMD stick to 'more of the same', creating a linear progression rather than an exponential curve.
 