
DirectX 12 adds a Ray Tracing API

8 hours ago, swiftcoder said:

Keep in mind that the only GPU at this time which can run all this natively is a $3k MSRP Titan V. Might want to hold off for a bit.

What about running a compute fallback? As far as I understand it, neither AMD nor Intel has drivers for this yet (plus it requires Shader Model 6.0, which pretty much means that only AMD Vega supports it).

And I assume there is still no information about the 2080 or 2070 (which will arrive "soon")?

My current blog on programming, Linux and stuff - http://gameprogrammerdiary.blogspot.com

11 minutes ago, Vilem Otte said:

What about running a compute fallback?

Reading between the lines, hitting usable quality/performance with the compute fallback on current-gen hardware sounds pretty far-fetched. A good way to preview the SDK, but not something you'll be shipping games with.

At least on the NVidia side, RTX is pretty much guaranteed to show up in their next wave of Volta parts (presumably in the summer/fall timeframe).

On the AMD side, we're unlikely to see new GPUs much before the end of the year, and those will be Vega derivatives. If DXR requires a new architecture revision... we could be talking mid-2019 before we see real hardware support on their side.

Tristam MacDonald. Ex-BigTech Software Engineer. Future farmer. [https://trist.am]

13 hours ago, JoeJ said:

One major problem with rays is that they have no area, so you always need too many of them to approximate something. For robust object visibility you need one ray per pixel.

As well as fixed-function ray-vs-triangle intersection, they also support HLSL intersection shaders for implicit geometry, so it might be possible to implement cone tracing yourself with these.

From what I've read so far, it seems the acceleration structure will be handled entirely by the API/vendor, with strong optimization for dynamic objects. Did anyone see anything that suggests we will still have to build that structure ourselves? For reference, the build calls in the announced API look roughly like the sketch below: you describe your geometry and supply memory, and the driver builds an opaque structure from it.
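A rough C++ sketch of the app-side calls, using the interface names from the DXR headers (resource allocation, barriers, and the top-level build are omitted, and the experimental SDK may still route these through prototype interfaces, so treat the details as provisional):

```cpp
#include <d3d12.h>  // a Windows SDK with the DXR declarations is assumed

// Sketch: build a bottom-level acceleration structure over one triangle mesh.
// The app only describes the geometry and provides scratch/result memory;
// the structure's layout is opaque and the build itself is done by the driver.
void BuildBLAS(ID3D12Device5* device,
               ID3D12GraphicsCommandList4* cmdList,
               D3D12_GPU_VIRTUAL_ADDRESS vertices, UINT vertexCount, UINT stride,
               D3D12_GPU_VIRTUAL_ADDRESS scratch,   // sized from prebuild info
               D3D12_GPU_VIRTUAL_ADDRESS result)    // sized from prebuild info
{
    D3D12_RAYTRACING_GEOMETRY_DESC geom = {};
    geom.Type = D3D12_RAYTRACING_GEOMETRY_TYPE_TRIANGLES;
    geom.Triangles.VertexBuffer = { vertices, stride };
    geom.Triangles.VertexFormat = DXGI_FORMAT_R32G32B32_FLOAT;
    geom.Triangles.VertexCount  = vertexCount;

    D3D12_BUILD_RAYTRACING_ACCELERATION_STRUCTURE_INPUTS inputs = {};
    inputs.Type           = D3D12_RAYTRACING_ACCELERATION_STRUCTURE_TYPE_BOTTOM_LEVEL;
    inputs.DescsLayout    = D3D12_ELEMENTS_LAYOUT_ARRAY;
    inputs.NumDescs       = 1;
    inputs.pGeometryDescs = &geom;

    // The driver reports how much scratch/result memory its build needs;
    // in real code the caller would allocate the two buffers from these sizes.
    D3D12_RAYTRACING_ACCELERATION_STRUCTURE_PREBUILD_INFO prebuild = {};
    device->GetRaytracingAccelerationStructurePrebuildInfo(&inputs, &prebuild);

    D3D12_BUILD_RAYTRACING_ACCELERATION_STRUCTURE_DESC build = {};
    build.Inputs                           = inputs;
    build.ScratchAccelerationStructureData = scratch;
    build.DestAccelerationStructureData    = result;
    cmdList->BuildRaytracingAccelerationStructure(&build, 0, nullptr);
}
```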

On 20/03/2018 at 3:17 AM, FreneticPonE said:

So, yeah, performance is definitely not realtime today, and probably not tomorrow or next gen either. I really don't understand why DirectX needs its own raytracing API in the first place.

As a side note, AMD is also saying a lot about real-time ray tracing at the moment.

This also reminds me of what Imagination did some years ago, though that wasn't proclaimed as loudly. And I don't know what happened to it since Apple let them go...

https://www.ea.com/seed/news/seed-gdc-2018-presentation-slides-shiny-pixels

Very interesting, especially seeing screen-space reflections next to path tracing as a reference.

On 3/20/2018 at 8:12 AM, Vilem Otte said:

(plus it requires Shader Model 6.0, which pretty much means that only AMD Vega supports it).

There are pre-Vega AMD GPUs that support SM6.0/DXIL, they just don't have non-experimental driver support yet (you need to enable developer mode and specify that you want to enable the experimental shader models feature in D3D12). It certainly works on the RX 460 that I have as the secondary GPU in my home PC.
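Concretely, that opt-in amounts to one call before device creation. A minimal C++ sketch, assuming Windows developer mode is already enabled (the helper name CreateDeviceWithSM6 is mine):

```cpp
#include <d3d12.h>

// Must run before creating the D3D12 device, with Windows developer mode on.
// D3D12ExperimentalShaderModels is a GUID declared in d3d12.h.
HRESULT CreateDeviceWithSM6(ID3D12Device** outDevice)
{
    HRESULT hr = D3D12EnableExperimentalFeatures(
        1, &D3D12ExperimentalShaderModels, nullptr, nullptr);
    if (FAILED(hr))
        return hr;  // typically fails when developer mode is off

    return D3D12CreateDevice(nullptr,  // default adapter
                             D3D_FEATURE_LEVEL_11_0,
                             IID_PPV_ARGS(outDevice));
}
```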

Why is it that all of a sudden, when MS and NV talk about it, it becomes "finally", with all the hype, when this has been a subject ever since the Quake 3 ray tracing demo 12 years ago: https://www.youtube.com/watch?v=bpNZt3yDXno
And I'm not even talking about heaven seven (http://www.pouet.net/prod.php?which=5), 18 years ago.
DXR has mercilessly copied the whole OpenRL SDK, available here: https://community.imgtec.com/developers/powervr/openrl-sdk/
And it was already hyped then too: https://www.extremetech.com/extreme/161074-the-future-of-ray-tracing-reviewed-caustics-r2500-accelerator-finally-moves-us-towards-real-time-ray-tracing
See any similarity in the vocabulary at the time?

So, is this a fanboy effect or historical revisionism?
Even Embree has been doing RTRT for years on CPU alone, as long as you keep it to the first bounce. And if you check what Epic has to say about it (here: their GDC talk on YouTube), you'll see they used a cluster of 4 Tesla V100s with NVLink, and they were still not able to include global illumination. They can afford just 2 rays per effect on $80k of hardware.
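(For reference, first-bounce tracing in Embree really is just a scene commit plus one intersection query per pixel; a minimal sketch against the Embree 3 API, with made-up triangle and ray values:)

```cpp
#include <embree3/rtcore.h>
#include <cstdio>

// Minimal first-bounce (primary ray) trace on the CPU with Embree 3:
// build a BVH over one triangle, then fire a single ray at it.
int main() {
    RTCDevice device = rtcNewDevice(nullptr);
    RTCScene scene = rtcNewScene(device);

    // One triangle in the plane z = 2.
    RTCGeometry geom = rtcNewGeometry(device, RTC_GEOMETRY_TYPE_TRIANGLE);
    float* v = (float*)rtcSetNewGeometryBuffer(
        geom, RTC_BUFFER_TYPE_VERTEX, 0, RTC_FORMAT_FLOAT3, 3 * sizeof(float), 3);
    v[0] = 0; v[1] = 0; v[2] = 2;
    v[3] = 1; v[4] = 0; v[5] = 2;
    v[6] = 0; v[7] = 1; v[8] = 2;
    unsigned* idx = (unsigned*)rtcSetNewGeometryBuffer(
        geom, RTC_BUFFER_TYPE_INDEX, 0, RTC_FORMAT_UINT3, 3 * sizeof(unsigned), 1);
    idx[0] = 0; idx[1] = 1; idx[2] = 2;
    rtcCommitGeometry(geom);
    rtcAttachGeometry(scene, geom);
    rtcReleaseGeometry(geom);
    rtcCommitScene(scene);  // builds the acceleration structure

    // One primary ray from the origin along +z (one per pixel in a renderer).
    RTCIntersectContext ctx;
    rtcInitIntersectContext(&ctx);
    RTCRayHit rh = {};
    rh.ray.org_x = 0.2f; rh.ray.org_y = 0.2f; rh.ray.org_z = 0.0f;
    rh.ray.dir_z = 1.0f;
    rh.ray.tfar = 1e30f;
    rh.ray.mask = ~0u;
    rh.hit.geomID = RTC_INVALID_GEOMETRY_ID;
    rtcIntersect1(scene, &ctx, &rh);

    printf("hit = %s, t = %f\n",
           rh.hit.geomID != RTC_INVALID_GEOMETRY_ID ? "yes" : "no", rh.ray.tfar);

    rtcReleaseScene(scene);
    rtcReleaseDevice(device);
    return 0;
}
```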

I would put the horses back in the stable, but well, hype is contagious...

 

6 hours ago, Lightness1024 said:

So, is this a fanboy effect or historical revisionism? [...]

Fully agree.

6 hours ago, Lightness1024 said:

why all of a sudden

For the simple reason that it seems feasible now. It's often not just about having some tech demo, like Quake 3; it's about making it useful and accessible in a real-world scenario. You have been able to do RT with NVidia OptiX for quite a while, and it works reasonably well, but until now you could not just add it to your everyday rendering code.

OpenRL is no different from OptiX: you write your dedicated camera/shading/etc. shaders and watch the system hand you some results after a while, but not ad hoc from your HLSL/GLSL code.

 

That's the big difference. Of course, it's hard to predict whether this will be a failure like geometry shaders or the several incarnations of tessellation (which mostly got removed). Yet from a graphics programmer's point of view, it's a much more valuable feature than adding 16K or HDR16 @ 300 Hz rendering, where 99% of users don't even really know what to look at, even if you show them the results side by side.

 

Now the question is how fast it will run. Path tracing usually converges at a rate of 4:1 of sample count vs. noise: four times the samples for half the noise.
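(That 4:1 ratio is just the standard Monte Carlo convergence rate; with N independent samples per pixel and per-sample variance σ², the noise falls as 1/√N:)

```latex
% Monte Carlo error of an N-sample average with per-sample variance \sigma^2:
\mathrm{RMSE}(N) = \frac{\sigma}{\sqrt{N}}
% Quadrupling the sample count therefore halves the noise:
\frac{\mathrm{RMSE}(4N)}{\mathrm{RMSE}(N)}
  = \frac{\sigma/\sqrt{4N}}{\sigma/\sqrt{N}}
  = \frac{1}{2}
```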

