
I have been reading about how to get the ray direction vector in raycasting, for example on this site: http://www.daimi.au.dk/~trier/?page_id=98

They first render the mesh with front-face culling and then with back-face culling, and then subtract the front-face positions from the back-face positions to get a direction vector for each pixel.

But isn't this too much work just to get the direction vector? Wouldn't it be simpler and faster to take the vertex position (in world coordinates) and subtract the camera position in the fragment shader to get the direction vector? This should give exactly the same answer, but we skip the back-face and front-face rendering passes.
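Expressed as plain vector math, this is what I mean (Python only to illustrate; in practice it would be a couple of lines in the fragment shader, and the variable names are just made up):

```python
import numpy as np

# The idea, per fragment: direction = vertex/world position - camera position.
def ray_direction(world_pos, camera_pos):
    d = np.asarray(world_pos, dtype=float) - np.asarray(camera_pos, dtype=float)
    return d / np.linalg.norm(d)

camera_pos = np.array([0.5, 0.5, -2.0])
world_pos = np.array([0.3, 0.7, 0.0])   # interpolated surface position of the mesh
print(ray_direction(world_pos, camera_pos))
```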


1 Answer


The issue here is that there is no "vertex position" to compare against the camera position. The article you mention describes a raycasting implementation that doesn't feed model vertices and triangles to a shader in the usual way. Instead, it feeds the GPU just one big 2D quad covering the whole screen, and the rendering of the model data is then performed within the pixel shader applied to that single quad. (In this case, the model data is provided as a large translucent 3D texture for the rays to sample through.)
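As a rough illustration of that last point (my own sketch, not the article's code, with a random numpy array standing in for the 3D texture): the work that would normally be done by rasterizing the model becomes a loop in the pixel shader that steps along the ray and samples the volume.

```python
import numpy as np

volume = np.random.rand(32, 32, 32).astype(np.float32)   # stand-in 3D texture, values in [0, 1]

def march(entry, exit_, steps=64):
    """Sample the volume at evenly spaced points between the entry and exit
    positions (both in [0, 1]^3 texture space) and composite front to back."""
    colour, alpha = 0.0, 0.0
    for t in np.linspace(0.0, 1.0, steps):
        p = entry + t * (exit_ - entry)
        ix, iy, iz = (np.clip(p, 0.0, 0.999) * volume.shape).astype(int)
        sample = volume[ix, iy, iz]
        a = sample * (1.0 / steps)            # crude opacity per step
        colour += (1.0 - alpha) * a * sample  # emission-absorption compositing
        alpha += (1.0 - alpha) * a
    return colour, alpha

# One pixel's ray through the unit cube: entry on the front face, exit on the back face.
print(march(np.array([0.3, 0.7, 0.0]), np.array([0.6, 0.4, 1.0])))
```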

The approach described seems to be a clever method of generating the direction of a ray through every pixel of the screen, something that ordinarily requires a bit of trigonometry per pixel. With this approach, those same rays can apparently be obtained from a couple of pre-calculated renders and a simple vector subtraction per pixel. I haven't actually tried it to verify that it works, but it seems plausible to me.
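To make that concrete, here is a rough CPU-side sketch (my own, not code from the article): the two off-screen renders are emulated by analytically intersecting one pixel's ray with the unit cube, and normalizing the back-face position minus the front-face position gives back the same direction that the usual per-pixel trigonometry produces.

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def pixel_ray(px, py, width, height, cam_forward, cam_up, fov_y_deg):
    """The 'trigonometry' way: build the ray through pixel (px, py) from the camera basis."""
    forward = normalize(np.asarray(cam_forward, dtype=float))
    right = normalize(np.cross(cam_up, forward))
    up = np.cross(forward, right)
    half_h = np.tan(np.radians(fov_y_deg) / 2.0)
    half_w = half_h * width / height
    x = (2.0 * (px + 0.5) / width - 1.0) * half_w   # pixel centre mapped to [-1, 1]
    y = (1.0 - 2.0 * (py + 0.5) / height) * half_h
    return normalize(forward + x * right + y * up)

def cube_entry_exit(origin, direction, lo=np.zeros(3), hi=np.ones(3)):
    """Stand-in for the two pre-calculated renders: the world positions where the ray
    enters (front faces) and leaves (back faces) the unit cube, via the slab method.
    Assumes no direction component is exactly zero, to keep the sketch short."""
    inv = 1.0 / direction
    t0, t1 = (lo - origin) * inv, (hi - origin) * inv
    t_near = np.max(np.minimum(t0, t1))
    t_far = np.min(np.maximum(t0, t1))
    if t_near > t_far or t_far < 0.0:
        return None, None                     # this pixel's ray misses the volume
    return origin + t_near * direction, origin + t_far * direction

cam_pos = np.array([0.5, 0.5, -2.0])
direction = pixel_ray(300, 200, 640, 480,
                      cam_forward=[0.0, 0.0, 1.0], cam_up=[0.0, 1.0, 0.0], fov_y_deg=45.0)
front, back = cube_entry_exit(cam_pos, direction)
if front is not None:
    print(normalize(back - front))   # the per-pixel subtraction from the article...
    print(direction)                 # ...matches the trig-built direction (up to float error)
```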

