How to calculate Tangent and Binormal?


Solution 1

The relevant input data to your problem are the texture coordinates. Tangent and binormal are vectors locally parallel to the object's surface, and in the case of normal mapping they describe the local orientation of the normal map texture.

So you have to calculate the direction (in the model's space) in which the texture coordinate axes point. Say you have a triangle ABC with texture coordinates H, K, L. This gives us the vectors:

D = B-A
E = C-A

F = K-H
G = L-H

Now we want to express D and E in terms of the tangent space vectors T and U, i.e.

D = F.s * T + F.t * U
E = G.s * T + G.t * U

This is a system of linear equations with 6 unknowns and 6 equations; it can be written as

| D.x D.y D.z |   | F.s F.t | | T.x T.y T.z |
|             | = |         | |             |
| E.x E.y E.z |   | G.s G.t | | U.x U.y U.z |

Inverting the FG matrix yields

| T.x T.y T.z |           1         |  G.t  -F.t | | D.x D.y D.z |
|             | = ----------------- |            | |             |
| U.x U.y U.z |   F.s G.t - F.t G.s | -G.s   F.s | | E.x E.y E.z |
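
Written out as code, the solved system above looks like the following minimal sketch (GLSL syntax, to stay consistent with the snippets further down, though the same arithmetic is normally run per triangle on the CPU; the function and variable names are purely illustrative):

    // Sketch only: solve the 2x2 system above for one triangle.
    // A, B, C are the triangle's corner positions, H, K, L the matching
    // texture coordinates; T and U come out unnormalized.
    void triangleTangentSpace(in vec3 A, in vec3 B, in vec3 C,
                              in vec2 H, in vec2 K, in vec2 L,
                              out vec3 T, out vec3 U)
    {
        vec3 D = B - A;                     // position deltas
        vec3 E = C - A;
        vec2 F = K - H;                     // texture coordinate deltas
        vec2 G = L - H;
        float det = F.s * G.t - F.t * G.s;  // determinant of the FG matrix
        T = ( G.t * D - F.t * E) / det;     // tangent direction
        U = (-G.s * D + F.s * E) / det;     // binormal direction
    }

Averaging these per-triangle results over all triangles sharing a vertex (the same way vertex normals are averaged) gives the per-vertex vectors that are orthonormalized below.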

Together with the vertex normal N, T and U form a local basis, called the tangent space, described by the matrix

| T.x U.x N.x |
| T.y U.y N.y |
| T.z U.z N.z |

This matrix transforms from tangent space into object space. To do the lighting calculations one needs the inverse of this. With a little bit of exercise one finds:

T' = T - (N·T) N
U' = U - (N·U) N - (T'·U) T'

Normalizing the vectors T' and U' (note that T' must already be a unit vector when it is used in the projection for U') and calling them tangent and binormal, we obtain the matrix transforming from object into tangent space, where we do the lighting:

| T'.x T'.y T'.z |
| U'.x U'.y U'.z |
| N.x  N.y  N.z  |

We store T' and U' together with the vertex normal as part of the model's geometry (as vertex attributes), so that we can use them in the shader for lighting calculations. I repeat: you don't determine tangent and binormal in the shader, you precompute them and store them as part of the model's geometry (just like the normals).

(Everything between the vertical bars above is a matrix, never a determinant, even though determinants are normally the ones written with vertical bars instead of brackets.)
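
To make the "precompute and store" step concrete, here is a hedged sketch of how such attributes might be consumed in a legacy-style vertex shader; the attribute, uniform and varying names are invented for illustration and are not part of the original answer:

    attribute vec3 a_position;
    attribute vec3 a_normal;    // N
    attribute vec3 a_tangent;   // T' (precomputed, orthonormalized)
    attribute vec3 a_binormal;  // U' (precomputed, orthonormalized)
    uniform vec3 u_lightPosObj; // light position in object space
    varying vec3 v_lightDirTS;  // light direction in tangent space

    void main()
    {
        // rows T', U', N form the object -> tangent space matrix from above
        mat3 objToTangent = mat3(a_tangent.x, a_binormal.x, a_normal.x,
                                 a_tangent.y, a_binormal.y, a_normal.y,
                                 a_tangent.z, a_binormal.z, a_normal.z);
        v_lightDirTS = objToTangent * (u_lightPosObj - a_position);
        gl_Position  = gl_ModelViewProjectionMatrix * vec4(a_position, 1.0);
    }

The fragment shader then samples the normal map and does the lighting math directly in tangent space, without any further per-fragment matrix work.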

Solution 2

Generally, you have 2 ways of generating the TBN matrix: off-line and on-line.

  • On-line = right in the fragment shader, using derivative instructions. Those derivatives give you a flat TBN basis for each point of a polygon. In order to get a smooth one we have to re-orthogonalize it against a given (smooth) vertex normal. This procedure is even heavier on the GPU than the initial TBN extraction.

    // compute derivatives of the world position
    vec3 p_dx = dFdx(pw_i);
    vec3 p_dy = dFdy(pw_i);
    // compute derivatives of the texture coordinate
    vec2 tc_dx = dFdx(tc_i);
    vec2 tc_dy = dFdy(tc_i);
    // compute initial tangent and bi-tangent
    vec3 t = normalize( tc_dy.y * p_dx - tc_dx.y * p_dy );
    vec3 b = normalize( tc_dy.x * p_dx - tc_dx.x * p_dy ); // sign inversion
    // get new tangent from a given mesh normal
    vec3 n = normalize(n_obj_i);
    vec3 x = cross(n, t);
    t = cross(x, n);
    t = normalize(t);
    // get updated bi-tangent
    x = cross(b, n);
    b = cross(n, x);
    b = normalize(b);
    mat3 tbn = mat3(t, b, n);
    
  • Off-line = prepare the tangent as a vertex attribute. This is more difficult to get right, because it does not just add another vertex attribute: it also requires re-composing all the other attributes. Moreover, it will not automatically give you better performance, as you pay an additional cost for storing/passing/animating(!) an extra vec3 vertex attribute.

The math is described in many places (google it), including @datenwolf's post above.

The problem here is that two vertices may have the same normal and texture coordinate but different tangents. That means you cannot just add a vertex attribute to a vertex; you'll need to split the vertex in two and specify different tangents for the clones.

The best way to get a unique tangent (and the other attributes) per vertex is to do it as early as possible, i.e. in the exporter. There, at the stage where you sort unique vertices by their attributes, you just need to add the tangent vector to the sorting key.

As a radical solution to the problem, consider using quaternions. A single quaternion (vec4) can successfully represent a tangent space of a pre-defined handedness. It's easy to keep orthonormal (including when passing it to the fragment shader), to store, and to extract the normal from if needed. More info on the KRI wiki.
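
As an illustration of the quaternion idea, here is a hedged sketch (not necessarily how KRI does it) of expanding a unit quaternion vertex attribute back into a TBN matrix in the shader; the function name, component layout and column convention are assumptions:

    // q = (x, y, z, w) is a unit quaternion encoding the rotation from
    // tangent space to object space; t and b are the first two columns of
    // the equivalent rotation matrix, n is reconstructed by a cross product.
    mat3 quatToTbn(vec4 q)
    {
        vec3 t = vec3(1.0 - 2.0 * (q.y * q.y + q.z * q.z),
                      2.0 * (q.x * q.y + q.w * q.z),
                      2.0 * (q.x * q.z - q.w * q.y));
        vec3 b = vec3(2.0 * (q.x * q.y - q.w * q.z),
                      1.0 - 2.0 * (q.x * q.x + q.z * q.z),
                      2.0 * (q.y * q.z + q.w * q.x));
        vec3 n = cross(t, b); // a stored handedness flag could flip b or n here
        return mat3(t, b, n);
    }

Since q and -q encode the same rotation, the sign of one component is effectively free and can be used to carry the handedness bit.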

Solution 3

Based on the answer from kvark, I would like to add more thoughts.

If you are in need of an orthonormalized tangent space matrix, you have to do some work anyway. Even if you add tangent and binormal attributes, they will be interpolated during the shader stages, and at the end they are neither normalized nor perpendicular to each other.

Let's assume that we have a normalized normal vector n, and that we either already have the tangent t and the binormal b, or that we can calculate them from the derivatives as follows:

// derivatives of the fragment position
vec3 pos_dx = dFdx( fragPos );
vec3 pos_dy = dFdy( fragPos );
// derivatives of the texture coordinate
vec2 texC_dx = dFdx( texCoord );
vec2 texC_dy = dFdy( texCoord );
// tangent vector and binormal vector
vec3 t = texC_dy.y * pos_dx - texC_dx.y * pos_dy;
vec3 b = texC_dx.x * pos_dy - texC_dy.x * pos_dx;

Of course an orthonormalized tangent space matrix can be calculated by using the cross product, but this would only work for right-handed systems. If a matrix was mirrored (left-handed system) it will be turned into a right-handed system:

t = cross( cross( n, t ), n ); // orthonormalization of the tangent vector
b = cross( n, t );             // orthonormalization of the binormal vector
                               //   may invert the binormal vector
mat3 tbn = mat3( normalize(t), normalize(b), n );

In the code snippet above the binormal vector is reversed if the tangent space is a left-handed system. To avoid this, you have to go the hard way:

t = cross( cross( n, t ), n ); // orthogonalization of the tangent vector against the normal vector
b = cross( cross( n, b ), n ); // orthogonalization of the binormal vector against the normal vector
b = cross( cross( t, b ), t ); // orthogonalization of the binormal vector against the tangent vector
mat3 tbn = mat3( normalize(t), normalize(b), n );

A common way to orthogonalize a set of basis vectors is the Gram–Schmidt process:

t = t - n * dot( t, n );               // orthogonalization of the tangent vector against the normal vector
b = b - n * dot( b, n );               // orthogonalization of the binormal vector against the normal vector
b = b - t * dot( b, t ) / dot( t, t ); // orthogonalization of the binormal vector against the (unnormalized) tangent vector
mat3 tbn = mat3( normalize(t), normalize(b), n );

Another possibility is to use the determinant of the 2x2 matrix formed by the texture coordinate derivatives texC_dx, texC_dy to take the direction of the binormal vector into account. The idea is that the determinant of an orthogonal matrix is 1, while the determinant of an orthogonal mirror matrix is -1.

The determinant can either be calculated by the GLSL function determinant( mat2( texC_dx, texC_dy ) ) or by its formula texC_dx.x * texC_dy.y - texC_dy.x * texC_dx.y.

For the calculation of the orthonormalized tangent space matrix, the binormal vector from the derivatives is then no longer required, and calculating the unit vector (normalize) of the binormal vector can be avoided.

float texDet = texC_dx.x * texC_dy.y - texC_dy.x * texC_dx.y;
vec3 t = texC_dy.y * pos_dx - texC_dx.y * pos_dy;
t      = normalize( t - n * dot( t, n ) );
vec3 b = cross( n, t );                      // b is normalized because n and t are orthonormal unit vectors
mat3 tbn = mat3( t, sign( texDet ) * b, n ); // take the direction of the binormal vector into account

Comments

  • user464230
    user464230 about 3 years

    Talking about bump mapping, specular highlights and these kinds of things in the OpenGL Shading Language (GLSL).

    I have:

    • An array of vertices (e.g. {0.2,0.5,0.1, 0.2,0.4,0.5, ...})
    • An array of normals (e.g. {0.0,0.0,1.0, 0.0,1.0,0.0, ...})
    • The position of a point light in world space (e.g. {0.0,1.0,-5.0})
    • The position of the viewer in world space (e.g. {0.0,0.0,0.0}) (assume the viewer is in the center of the world)

    Now, how can I calculate the binormal and tangent for each vertex? I mean, what is the formula to calculate the binormals, and what do I have to use from this information? And what about the tangent?

    I'll construct the TBN matrix anyway, so if you know a formula to construct the matrix directly from this information, that would be nice!

    Oh yes, I have the texture coordinates too, if needed. And as I'm talking about GLSL, a per-vertex solution would be nice, I mean, one which doesn't need to access more than one vertex's information at a time.

    ---- Update -----

    I found this solution:

    vec3 tangent;
    vec3 binormal;
    
    vec3 c1 = cross(a_normal, vec3(0.0, 0.0, 1.0));
    vec3 c2 = cross(a_normal, vec3(0.0, 1.0, 0.0));
    
    if (length(c1)>length(c2))
    {
        tangent = c1;
    }
    else
    {
        tangent = c2;
    }
    
    tangent = normalize(tangent);
    
    binormal = cross(v_nglNormal, tangent);
    binormal = normalize(binormal);
    

    But I don't know if it is 100% correct.

    • nkint
      nkint about 13 years
      maybe a question for gamedev.stackexchange.com?
    • datenwolf
      datenwolf about 13 years
      @user464230: No, it's okay here, too. 3D graphics isn't limited to games.
    • datenwolf
      datenwolf about 13 years
      That "solution" assumes you'll apply your normal map texture on a planar face with the normal = x. This will not work if applied to an arbitrary model. What you really need to do is solve the system of equations I gave you down there for each face and use the mean values for the vertices. If you want to do serious 3D programming you'll have to learn, how to translate linear algebra - like I showed below - into source code.
    • user464230
      user464230 about 13 years
      "If you want to do serious 3D programming you'll have to learn, how to translate linear algebra - like I showed below - into source code."... Man, really, You don't meet me!!! Don't tell me "you'll to learn"... You don't have any idea of what I've done... Take care with your words!
    • datenwolf
      datenwolf about 13 years
      @user464230: No, I've no knowledge of what you've actually done. But you asked about how to calculate tangent and binormal. So what you can expect is a mathematical description of the process. If you just want some source code you can copy and paste, well, there's plenty of it out there. But to understand how to use it effectively you must understand what it does on a mathematical level. Which requires you to learn to read and understand linear algebra. The way you ask your question(s) and commented on my post clearly shows me, that you've not learned how to properly read math, yet.
    • Mike Weir
      Mike Weir over 9 years
      Is the condition necessary in the sample code? And what's v_nglNormal versus a_normal?
    • RecursiveExceptionException
      RecursiveExceptionException almost 8 years
      From the code it looks like it's implemented in GLSL. For the love of god, precalculate it.
  • datenwolf
    datenwolf about 13 years
    @user464230: It is not very elegant. It's a brute force approach utilizing the abundant computing power of a modern GPU, kind of the "if I don't grok the math, do it the tedious way"-method. @kvark already told that it will perform worse.
  • kvark
    kvark about 13 years
    @datenwolf. I didn't say the GLSL approach is slower, even if it is :). I like it for the universality: you don't need to do your own exporter (or mesh attributes re-computation), you don't need to care about tangents during skeletal animations. It just works.
  • datenwolf
    datenwolf about 13 years
    @kvark: You write "it's more heavy on the GPU", and yes it is. Extracting the partial derivatives is really hard work for the GPU (it boils down to the fact that for each fragment the computations of its neighbours have to be emulated, or it must wait for them, not always with a result if the fragment gets killed or another code branch is executed). Yes, it just works, but at the price of reduced fill rate.
  • kvark
    kvark about 13 years
    @datenwolf. Citing myself "This procedure is even more heavy on GPU than initial TBN extraction" - I meant not what you read :). Nevertheless, I agree with your sentence.
  • datenwolf
    datenwolf about 13 years
    @kvark: Given the fact that the TBN precalculation method could be used on fixed function register combiner pipeline hardware (TNT, GeForce2), it is really very lightweight. I still have a copy of that paper on normal mapping on the TNT with this nice sentence: "performing a per-pixel computation of tangent space is computationally too complex". How the times have changed :)
  • user464230
    user464230 about 13 years
    @kvark Hey man, I saw your engine and your Quaternion approach seems great! I don't want to make all that quaternion operations directly in the GPU, but the idea seems very good. Can you tell me more about? Please? How you create these quaternions with light and camera?
  • datenwolf
    datenwolf about 13 years
    @user464230: Those quaternions are just a more compact representation of the TBN matrix. Tangent space can be understood as the local transformation required to align (rotate) the surface-local tangent space with the (global) object space. I.e. the TBN matrix is a rotation matrix. Now, rotations can be expressed as a quaternion, too. So the way to get this tangent quaternion (@kvark: nice idea BTW!) is to determine the TBN matrix and derive its equivalent quaternion, which is an eigenvalue problem. Wikipedia has the math: en.wikipedia.org/wiki/Quaternions_and_spatial_rotation
  • datenwolf
    datenwolf about 13 years
    For clarification: The TBN matrix may form just a complete basis, not an orthogonal one, and in most texturing cases this is the case, in which case a quaternion does not deliver enough information. OTOH non-orthogonality will be noticeable only for strongly distorted textures, so assuming an orthonormalized TBN should be fine.
  • kvark
    kvark about 13 years
    @user464230. I'm glad you liked it. I'll tell you about it with pleasure, but bear in mind that the KRI wiki probably contains most of the answers. For light and camera, quaternions are created in the same way as for any other spatial node. The difference is that these objects require a projection to be performed after the spatial transformation. I'm doing projections manually (through simple GLSL library functions like project/unproject, of course).
  • Gottfried
    Gottfried over 11 years
    Your matrix inversion is wrong. The inverse of a matrix is not a scalar but a matrix. So it should be (1/(F.s G.t - F.t G.s)) ((G.t, -F.t)(-G.s, F.s))
  • datenwolf
    datenwolf over 11 years
    @Gottfried: Where do you see a scalar there? What you see there is the determinant method for inverting a matrix. The denominator in the scaling factor is the determinant of the matrix [F G] (F, G are column vectors). You could use Gauss-Jordan elimination as well, yielding the very same result.
  • datenwolf
    datenwolf over 11 years
    BTW, you can find the very same derivation as I did it here in most 3D programming textbooks
  • Gottfried
    Gottfried over 11 years
    I know that you are doing inversion via determinant, but if you look at your post you'll see that you've forgotten the matrix part. You only divide by the determinant which is a scalar. Compare e.g. with link I hope I could clear this up and I'm sure it was just an oversight.
  • datenwolf
    datenwolf over 11 years
    @Gottfried: Ah, I was looking at the wrong side of the equation. Yes, I totally missed that. Thanks
  • fishfood
    fishfood over 10 years
    @datenwolf: your answer starts with the assumption that you know three points on a triangle. I agree that your point holds for static models. But when simulating water, tangent and bitangent vectors change over time and therefore have to be calculated in the shader. So how can tangent and bitangent be computed when you don't have access to other vertex position data (e.g. in the pixel shader) ?
  • datenwolf
    datenwolf over 10 years
    @fishfood: Shader operations are inherently localized. If you're modifying the mesh on the CPU (say for water), then it's most efficient to perform that tangent space calculation in that step as well. In theory you could do it in the fragment shader, but with a certain performance penalty. And of course it makes sense to look at the actual problem from a larger distance. For example for some wavy surface you'd use a Fourier kind of approach to animate it. But having the Fourier coefficients at hand you can give the tangent space for a point directly without evaluating helper points.
  • datenwolf
    datenwolf over 10 years
    @fishfood: Anyway, if you're animating a mesh on the CPU, you do the tangent space calculation then and there as well. For a given vertex you already have the data in the cache so you have great locality. And if you want to use the GPU for animating the mesh, you'd do it in a compute shader, where you also can do the tangent space calculation easier and more efficient, than when rendering.
  • fishfood
    fishfood over 10 years
    Modifying the mesh (especially when dealing with tessellation) occurs on the GPU, not the CPU!
  • datenwolf
    datenwolf over 10 years
    @fishfood: I don't want to lose myself in technophilosophical discussions here. But the fact is that you can modify the mesh just as well on the CPU and then upload the updated data to the GPU. And depending on the actual application this may (or may not) give better performance. If your program is largely GPU bound, for example because you're using a lot of fill rate, but the CPU has a lot of computing time left over (very likely the case with today's multicore CPU systems), then performing mesh modification on the CPU will yield better performance.
  • datenwolf
    datenwolf over 10 years
    @fishfood: If you think of the vertex shader as "mesh modification", well, it's not; practically speaking, any transformation which is made on vertex normals also applies to the rest of the tangent space base vectors (of which the normal is one). The programmable tessellation stage also doesn't modify the mesh but refines it. By using transform feedback you can in fact perform mesh modification. But even then you'd precalculate the changed tangent space at the time of modifying the mesh, instead of doing it as an afterthought.
  • Rabbid76
    Rabbid76 over 6 years
    @AlexGreen Thanks a lot, fixed
  • felipeek
    felipeek over 3 years
    @datenwolf sorry for reviving this thread, but I wanted to ask how you would pre-compute the tangents without basically duplicating all vertices of a mesh? What I mean is that usually each vertex of a mesh is used to generate N triangles, and the tangent vector bound to that vertex is different depending on the triangle that is being rendered... So I don't see any other way of pre-computing the vector offline without ensuring each vertex builds only one triangle.
  • datenwolf
    datenwolf over 3 years
    @felipeek in the case where the triangulated mesh is an approximation of a smooth surface with a smoothly embedded tangent space, you may assume that the tangent space basis is the (weighted) average of the bases of the adjacent triangles. – In essence this is the same approximation/assumption we make, when we're calculating vertex normals by averaging the normals of the adjacent triangles; as a matter of fact the normal is one base vector of the tangent space basis, so extending this to the other base vectors is well founded.