Depth as distance to camera plane in GLSL


Solution 1

You're really trying to do this the hard way. Simply transform things to camera space, and work from there.

varying float distToCamera;

void main()
{
    // transform the vertex into camera (eye) space
    vec4 cs_position = gl_ModelViewMatrix * gl_Vertex;
    // the planar distance to the camera is the negated eye-space z
    distToCamera = -cs_position.z;
    gl_Position = gl_ProjectionMatrix * cs_position;
}

In camera space (the space where everything is relative to the position/orientation of the camera), the planar distance to a vertex is just the negation of its Z coordinate (more negative Z means farther away).

So your fragment shader doesn't even need eyePosition; the "depth" comes directly from the vertex shader.
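A matching fragment shader can then turn that distance straight into a gray value. A minimal sketch, assuming zNear/zFar uniforms of your own (they are not part of the original shaders) for the normalization:

varying float distToCamera;

uniform float zNear;  // illustrative: set to your near clip distance
uniform float zFar;   // illustrative: set to your far clip distance

void main(void) {
    // remap the planar camera distance to [0, 1] for display
    float depth = clamp((distToCamera - zNear) / (zFar - zNear), 0.0, 1.0);
    gl_FragColor = vec4(vec3(depth), 1.0);
}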

Solution 2

The W component after projection contains the orthogonal depth into the scene. You don't even need separate modelview and projection matrices for this:

varying float distToCamera;

void main()
{
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    // for a perspective projection, clip-space w is the eye-space
    // distance to the camera plane
    distToCamera = gl_Position.w;
}
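
The reason this agrees with Solution 1 is that the bottom row of a standard perspective projection matrix is (0, 0, -1, 0), so the clip-space w is exactly the negated eye-space z. A small illustrative vertex shader that exposes both values so you can compare them:

varying float distToCamera;  // clip-space w, as in this solution
varying float distCheck;     // eye-space -z, as in Solution 1; should match

void main()
{
    vec4 cs_position = gl_ModelViewMatrix * gl_Vertex;
    gl_Position = gl_ProjectionMatrix * cs_position;
    distToCamera = gl_Position.w;
    distCheck = -cs_position.z;
}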

Solution 3

What you are hinting at is an orthographic projection; you're currently using a perspective projection. Strictly speaking you don't have the true distance from the pixel to the camera, but rather the pixel's z position within the frustum. What ends up in the rendered texture is the final z depth in the range [-1, 1], i.e. the pixel's z component in NDC space.

As you understand, with a perspective projection all your points are projected 'towards' the camera (really onto the near plane). What you want is to project them orthographically onto the near plane instead. This link describes both projection matrices in detail, and the final form of the matrices is given at the bottom. Your shaders should handle the new projection matrix just fine; just make sure your MVP matrix is calculated as suggested above.
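For reference, the glOrtho-style orthographic matrix can also be written out directly. A sketch as a GLSL helper (normally you would build this on the CPU and upload it as a uniform; the parameters are the usual left/right/bottom/top/near/far planes):

// builds the same matrix glOrtho(l, r, b, t, n, f) would produce
// (GLSL mat4 constructors are column-major)
mat4 orthoMatrix(float l, float r, float b, float t, float n, float f)
{
    return mat4(
        2.0 / (r - l), 0.0,           0.0,            0.0,
        0.0,           2.0 / (t - b), 0.0,            0.0,
        0.0,           0.0,          -2.0 / (f - n),  0.0,
        -(r + l) / (r - l), -(t + b) / (t - b), -(f + n) / (f - n), 1.0);
}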

Note that an orthographic projection probably does not represent the depth of your rendered scene, though. If you are rendering your scene onto the screen with a perspective projection and you want the depth of each pixel, you should use the same perspective projection for the depth pass as well. Rendering your depth with an orthographic projection is only useful if your scene uses that same projection, or if some algorithm needs depth information unrelated to the scene as seen on screen.

Furthermore, I suggest taking a look at the core OpenGL profiles (3.x), as you seem to be using deprecated functionality (gl_Vertex, gl_ModelViewProjectionMatrix and the like). It is slightly more work to set up all the buffers and shaders yourself, but it pays off in the end.
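
For illustration, the vertex shader from Solution 1 would look roughly like this in a 3.x core profile; the uniform and attribute names here are placeholders you would bind yourself:

#version 330 core

uniform mat4 modelView;   // your own modelview matrix
uniform mat4 projection;  // your own projection matrix

in vec3 vertexPosition;
out float distToCamera;

void main()
{
    vec4 cs_position = modelView * vec4(vertexPosition, 1.0);
    distToCamera = -cs_position.z;
    gl_Position = projection * cs_position;
}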

EDIT

Actually, after your comment, I understand what you want. It wasn't clear that you wanted to render both in the same call; for that I suggest something like this in your fragment shader:

uniform mat4 orthographicMatrix;
varying vec3 position;

void main(void) {
    // project the interpolated position with the orthographic matrix
    vec4 clipSpace = orthographicMatrix * vec4(position, 1.0);
    // write the orthographic depth to all color channels
    gl_FragColor = vec4(clipSpace.zzz, 1.0);
}

Note that you don't have to do the divide by w, since the orthographic projection is already linear (w stays 1).
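
For the orthographic depth to be camera-relative, the position varying should carry the eye-space position (or orthographicMatrix must already include the modelview transform). A sketch of a matching vertex shader under the first assumption:

varying vec3 position;

void main(void) {
    // pass the eye-space position; the fragment shader then applies only
    // the orthographic projection
    position = (gl_ModelViewMatrix * gl_Vertex).xyz;
    // the on-screen result is still rendered with the perspective MVP
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}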


Comments

  • Lucian
    Lucian almost 2 years

    I have a pair of GLSL shaders that give me the depth map of the objects in my scene. What I get now is the distance from each pixel to the camera. What I need is to get the distance from the pixel to the camera plane. Let me illustrate with a little drawing

       *          |--*
      /           |
     /            |
    C-----*       C-----*
     \            |
      \           |
       *          |--*
    

    The 3 asterisks are pixels and the C is the camera. The lines from the asterisks are the "depth". In the first case, I get the distance from the pixel to the camera. In the second, I wish to get the distance from each pixel to the plane.

    There must be a way to do this by using some projection matrix, but I'm stumped.

    Here are the shaders I'm using. Note that eyePosition is camera_position_object_space.

    Vertex Shader:

    varying vec3 position;

    void main() {
        position = gl_Vertex.xyz;
        gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    }
    

    Pixel Shader:

    uniform vec3 eyePosition;
    varying vec3 position;

    void main(void) {
        vec3 temp = vec3(1.0, 1.0, 1.0);
        float depth = (length(eyePosition - position * temp) - 1.0) / 49.0;
        gl_FragColor = vec4(depth, depth, depth, 1.0);
    }
    
    • Ani
      Ani about 11 years
      What kind of projection are you using? If you use an orthographic projection when rendering the depth map, you should get what you need without any calculations at all.
    • Lucian
      Lucian about 11 years
      I'm doing perspective projection when rendering, but I need the depth values to come from the orthographic projection.
    • pailhead
      pailhead almost 6 years
      The diagram on the left is not correct, I think. The distances are on a plane. The line with dashes should have about half the dashes.
  • Lucian
    Lucian about 11 years
    Thanks for the detailed response. I am trying to get the depth corresponding to orthogonal projection while rendering according to perspective projection. This may seem odd but it's what I have to do.
  • Invalid
    Invalid about 11 years
    Oh I see, I think I understand what you meant. Edited my post.
  • Lucian
    Lucian about 11 years
    You're right, I was kind of doing it the hard way. Also, I assume -cs.z is -cs_position.z. Thanks for the code snippet!
  • RecursiveExceptionException
    RecursiveExceptionException almost 7 years
    Does it? Isn't it gl_Position.z / gl_Position.w?
  • Caleb Miller
    Caleb Miller about 2 years
    Awesome - this works for me! Here I thought I had to use two matrices because gl_Position.z / gl_Position.w wasn't working (as another comment suggested doing), but it turns out the w component is indeed the orthogonal depth and I can keep the performance of a single matrix multiplication. I am clearly still lacking intuition of homogenous coordinates. Thanks for the tip!! (9 years later, lol)