Sony has some sort of internal GDC, where tech people from different studios give presentations about developer-focused (not for gamers) gamedev tech topics. In this case, 5 months ago they shared a video that apparently nobody spotted back then, with devs from Polyphony, Bend and Haven.
The Polyphony segment talks about rendering techniques in Gran Turismo 7 for VR, ray tracing and sky simulation. The Bend talk is about screen space shadows, a technique that adds extra fine shadow detail by ray marching through the depth buffer, simulating extra depth.
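For anyone curious how screen space shadows work in practice, here's a minimal CPU sketch of the core idea (the actual Bend implementation runs on the GPU and is certainly more sophisticated; the function name, parameters and the simple depth-buffer representation here are illustrative assumptions):

```python
def screen_space_shadow(depth, x, y, light_step, steps=8, bias=0.02):
    """Rough sketch of a screen space shadow test.

    March from pixel (x, y) toward the light in screen space,
    comparing the ray's expected depth against the depth buffer.

    depth:      2D list of view-space depths (smaller = closer to camera)
    light_step: (dx, dy, dz) per-sample step toward the light in
                screen/depth space
    Returns True if an occluder is found (pixel is in shadow).
    """
    h, w = len(depth), len(depth[0])
    ray_depth = depth[y][x]
    px, py = float(x), float(y)
    for _ in range(steps):
        px += light_step[0]
        py += light_step[1]
        ray_depth += light_step[2]
        ix, iy = int(round(px)), int(round(py))
        if not (0 <= ix < w and 0 <= iy < h):
            break  # ray left the screen without hitting anything
        # Depth buffer records geometry closer than the ray: occluded.
        # The bias avoids self-shadowing from depth precision noise.
        if depth[iy][ix] < ray_depth - bias:
            return True
    return False
```

Because it only reads the depth buffer, it picks up tiny contact details (pores, seams, small props) that regular shadow maps miss, at the cost of only "seeing" geometry that's on screen.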
The most interesting talk is from Haven, about an experiment they're doing with machine learning ("AI") to speed up 3D asset creation. Creating 3D assets is very time consuming and GaaS titles require a ton of them, so they have a very small team experimenting with a "text-to-3D" ML tool to help their artists reduce this time: the tool generates a rough 3D asset draft that the artist can then use as a base to work from. Since Fairgame$ will focus a lot on character customization and will include a lot of masks, they started by experimenting with generating 3D masks.
They're still working on it and it's in the early stages: right now each mask draft takes around 15 minutes to generate, so it's still too slow to fit into their workflow. And as of now it only generates a rough 3D mesh plus its albedo texture and normal map; it doesn't yet create metallic or roughness textures.