
Per-pixel lighting equations on a smooth shaded mesh

I'm having some trouble with math behind per-pixel lighting when using normal maps.

This is what I understand so far (please correct me if I'm mistaken):
In the case of a single triangle, the T, B, N vectors for each vertex are the same and equal to the T, B, N vectors of the face. To light the face, the light vector is transformed into tangent space (by the matrix defined by the face's T, B, N vectors), and then the standard Np dot L lighting equation is applied. The normal Np of any point on the triangle is given in tangent space by the normal map.
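Here is a minimal sketch of how I picture that single-triangle case (plain Python/numpy rather than shader code; the basis vectors, light direction and normal-map texel are made-up values, so please correct this too if it's wrong):

    import numpy as np

    def normalize(v):
        return v / np.linalg.norm(v)

    # Face tangent basis (T, B, N), assumed already built from the
    # triangle's positions and UVs; made-up values for illustration.
    T = normalize(np.array([1.0, 0.0, 0.0]))
    B = normalize(np.array([0.0, 1.0, 0.0]))
    N = normalize(np.array([0.0, 0.0, 1.0]))
    TBN = np.array([T, B, N])                  # rows are T, B, N

    # Light direction in the same space as T, B, N.
    L_world = normalize(np.array([0.3, 0.4, 0.85]))

    # Transform the light vector into tangent space.
    L_tangent = TBN @ L_world

    # Np comes straight from the normal map (already in tangent space),
    # decoded from [0,1] texel values to [-1,1].
    texel = np.array([0.5, 0.5, 1.0])          # example normal-map texel
    Np = normalize(texel * 2.0 - 1.0)

    diffuse = max(np.dot(Np, L_tangent), 0.0)  # standard Np dot L
    print(diffuse)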


This is what I don't quite get:
In the case of a smooth shaded mesh, the T, B, N vectors for each vertex are averaged across the connected faces. This means any given triangle will have 3 different light vectors, each in its own tangent space. How do you light a point on the triangle in this case? Calculate Np dot L for each light vector and then average the results? Or interpolate the light vectors (like interpolating vertex normals in Phong shading) to get the light vector at that specific point?


There are some resources online about the subject, but most seem to simplify the case to face/surface T, B, N vectors instead of per-vertex T, B, N vectors. Others mention that a light vector is computed for each vertex in per-pixel lighting, but the actual calculations are not given.

Replies

  • keres
    A smooth shaded mesh where all vertices are in the same smoothing group just means that the vertex normals are averaged, not that they take on the averaged normal of the face. If all three vertex normals are pointing in the opposite direction of the light, the "smoothed" vertex normal will be too; it will not reorient toward the averaged surface normal upon smoothing. You are correct in saying that every triangle will have three different light vectors.

    If you have a look at an ASE model file, the normals are listed with the face indices. DirectX or OpenGL will require a new, unique vertex if any component of a vertex is different (so a model can store a single vertex with two normals, but DX/GL will have to create two unique, unwelded vertices). DX/GL never use face normals. If you want faceted (hard) shading, you have exactly three vertices for every triangle; no vertices are shared. I say this for clarity, but you probably already knew it.

    The light vector for the surface, when lit just by vertex normals, is lerped according to the barycentric coordinates of any given point on the surface; likewise, the normal at the very center of the triangle is the average of all three vertex normals.

    Shaders take care of finding the normal at any given point on a triangle. If you want to work out how to do this yourself, find the barycentric/UV coordinates of the point on the triangle and lerp the vertex normals to get the result.
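    In code, that barycentric lerp might look something like this rough sketch (Python/numpy, with made-up vertex normals):

        import numpy as np

        def normalize(v):
            return v / np.linalg.norm(v)

        # Three vertex normals of one triangle (made-up values).
        n0 = normalize(np.array([ 0.0, 0.2, 1.0]))
        n1 = normalize(np.array([ 0.3, 0.0, 1.0]))
        n2 = normalize(np.array([-0.2, 0.1, 1.0]))

        def interpolated_normal(w0, w1, w2):
            """Lerp the vertex normals by barycentric weights (w0+w1+w2 == 1)
            and renormalize, since the lerp shortens the vector."""
            return normalize(w0 * n0 + w1 * n1 + w2 * n2)

        # At the centroid the weights are all 1/3, so the result is the
        # (renormalized) average of the three vertex normals.
        print(interpolated_normal(1/3, 1/3, 1/3))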

    I'm a little unsure about the tangent, binormal, and normal vectors for per-pixel lighting. I believe what the online resources are saying is that the TBN vectors are not considered until you're done messing with vertices and vertex normals (i.e. you're in the pixel shader by then). It's easier to think of the TBN vectors as being on a per-fragment/per-pixel basis rather than a per-triangle basis (at least if I am understanding this correctly).
  • TimS
    Thanks for the clarification and additional info, keres. I do appreciate it.

    I came across some old slides by nVidia:
    http://www.cs.cmu.edu/~djames/15-462/Fall03/notes/CassEveritt-MathematicsOfPerPixelLighting.pdf

    On page 35 it describes how the per-vertex T, B, N vectors can be used:
    In the previous case, we considered transforming the light into the surface-local space of each vertex and interpolating it for the per-pixel light vector -- this is what we would do for GeForce2.

    For GeForce3, we can interpolate the 3x3 matrix over the surface and transform the normals by it. For this case, if the tangent and binormal are not well-behaved, other anomalous behavior will result:
    • Normal “twisting”
    • Incorrect bump scale/smoothing
    • The interpolated matrix should be “nearly orthonormal”
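    A rough sketch of the first approach the slides describe (transform the light into each vertex's tangent space, interpolate it across the triangle, then do Np dot L per pixel). The per-vertex TBN matrices and the light direction below are placeholder values:

        import numpy as np

        def normalize(v):
            return v / np.linalg.norm(v)

        L_world = normalize(np.array([0.2, 0.5, 0.84]))

        # Per-vertex tangent bases (rows T, B, N); placeholder values that
        # stand in for the averaged per-vertex T, B, N of a smooth mesh.
        TBN0 = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
        TBN1 = np.array([[0.98, 0, 0.2], [0, 1, 0], [-0.2, 0, 0.98]])
        TBN2 = np.array([[0.98, 0, -0.2], [0, 1, 0], [0.2, 0, 0.98]])

        # "Vertex shader": transform the light into each vertex's tangent space.
        L0, L1, L2 = TBN0 @ L_world, TBN1 @ L_world, TBN2 @ L_world

        # "Rasterizer": interpolate those three tangent-space light vectors by
        # barycentric weights; "pixel shader": renormalize and dot with the
        # normal-map normal at that pixel.
        def shade(w0, w1, w2, Np):
            L = normalize(w0 * L0 + w1 * L1 + w2 * L2)
            return max(np.dot(Np, L), 0.0)

        print(shade(1/3, 1/3, 1/3, np.array([0.0, 0.0, 1.0])))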
  • Kurt Russell Fan Club
    From the sounds of it you're a bit stuck on how shaders work and the difference between vertex and pixel shaders.

    Vertex shaders are applied to each vertex. They can be used to set values for each vertex (including things like vertex colours, UVs, position, vertex normal). They don't know about triangles or pixels. Because there's only one calculation per vertex, vertex shaders are usually fast, especially with low poly meshes.

    Pixel shaders are applied to each pixel. Values that we calculate per vertex in the vertex shader (UVs, vertex colours, position, vertex normal) can be sent to the pixel shader. In that case, all of these values are interpolated between the triangle's three vertices: the point at the barycentric centre of the triangle will receive the average of the three vertex colours, the average UV, and the average vertex normal. Pixel shaders don't know about vertices, and they don't know about triangles either. They only know about their inputs, which are the blended results of the vertex shader outputs. Because they run once per pixel, the more pixels you draw (a big mesh, a high resolution, or lots of overdraw), the more their cost will affect performance.
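    Very roughly, the flow between the two stages could be sketched like this toy CPU imitation (not real shader code; the function names and values are made up):

        import numpy as np

        def vertex_shader(position, normal, uv):
            # One vertex in, one set of outputs out; no knowledge of triangles.
            return {"position": position, "normal": normal, "uv": uv}

        def interpolate(outputs, weights):
            # What the hardware does between the stages: blend the three
            # vertex-shader outputs by the pixel's barycentric weights.
            return {key: sum(w * np.asarray(o[key]) for w, o in zip(weights, outputs))
                    for key in outputs[0]}

        def pixel_shader(inputs):
            # One pixel in; it only sees the blended values, not the vertices.
            n = inputs["normal"] / np.linalg.norm(inputs["normal"])
            L = np.array([0.0, 0.0, 1.0])        # made-up light direction
            return max(float(np.dot(n, L)), 0.0)

        # Made-up triangle data and a pixel at the centroid.
        verts = [vertex_shader([0, 0, 0], [0.0, 0.2, 1.0], [0, 0]),
                 vertex_shader([1, 0, 0], [0.3, 0.0, 1.0], [1, 0]),
                 vertex_shader([0, 1, 0], [-0.2, 0.1, 1.0], [0, 1])]
        print(pixel_shader(interpolate(verts, (1/3, 1/3, 1/3))))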

    The only difference between a smooth shaded mesh and a free-floating triangle is that the smooth shaded mesh will usually have vertex normals pointing all over the place, while the three vertex normals of a single triangle will often point in the same direction. But a smooth shaded flat plane will have all of its vertex normals pointing in the same direction, and you can manually set the normals of a free-floating triangle to point wherever you want.

    In either case, smoothed mesh or single triangle, the vertex shader only sees and operates on a single vertex, regardless of what else is going on around it. And the pixel shader will only see a single set of values, which have been calculated by blending the outputs of the vertex shader, from the three vertices that make up the pixel's triangle.

    Edit: so back to your original question: if you're looking at a light vector variable in the vertex shader, it's the light vector for that vertex. If you're looking at a light vector variable in the pixel shader, it's the light vector for that pixel. The same applies to normals, vertex colours, and everything else. So if an equation says to take the surface normal and the light vector and dot them, doing that calculation in the vertex shader gives you Gouraud lighting (lighting per vertex), and doing it in the pixel shader gives you Phong lighting (lighting per pixel). You don't need to worry about blending between vertices at all.
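    As a toy comparison of those two cases (the hardware interpolation is faked here with a plain barycentric lerp, and all the vectors are made up):

        import numpy as np

        def normalize(v):
            return v / np.linalg.norm(v)

        L = normalize(np.array([0.3, 0.6, 0.74]))

        # Vertex normals of one triangle (made-up values).
        normals = [normalize(np.array(n)) for n in
                   ([0.0, 0.3, 1.0], [0.4, 0.0, 1.0], [-0.3, -0.2, 1.0])]

        w = (1/3, 1/3, 1/3)   # barycentric weights of the pixel we shade

        # "Gouraud": dot per vertex, let the rasterizer blend the results.
        per_vertex = [max(np.dot(n, L), 0.0) for n in normals]
        gouraud = sum(wi * d for wi, d in zip(w, per_vertex))

        # "Phong": let the rasterizer blend the normals, dot per pixel.
        n_pixel = normalize(sum(wi * n for wi, n in zip(w, normals)))
        phong = max(np.dot(n_pixel, L), 0.0)

        print(gouraud, phong)   # close but not identical in general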

    I hope this makes sense :)
  • TimS
    Your post does make sense, and clears some things up. :thumbup:

    Initially, I viewed the problem of lighting a triangle in the most general way (i.e. given a light vector, 3 vertices that define a triangle in space, and a normal map, how do you find the color value of every point in camera space that makes up the triangle?).

    There are quite a few vertex shader and pixel/fragment shader code examples online, but not a lot of code on what the rasterizer does (i.e. interpolated data is sent to the pixel shader, but how is that data interpolated before it is sent?).

    Perhaps it was obvious/intuitive, but I didn't know that barycentric interpolation was used across the surface of the triangle until keres pointed it out. And from the nVidia slides, if the per-pixel N dot L calculation is done in tangent space, an interpolated light vector is used.
  • CrazyButcher
    If you want to get more into the details:

    implementation of a rasterizer (perspective texture mapping series; a tiny sketch of the core idea is below the links)
    http://chrishecker.com/Miscellaneous_Technical_Articles

    hardware overview
    http://www.cs.virginia.edu/~gfx/papers/pdfs/59_HowThingsWork.pdf

    more hardcore on the hardware
    http://c0de517e.blogspot.com/2008/04/gpu-part-1.html
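    For what it's worth, the core trick that rasterization/perspective texture mapping material revolves around is, as far as I recall, interpolating attribute/w and 1/w across the triangle instead of the attribute itself. A small sketch with made-up values (the helper below is hypothetical, not taken from the articles):

        import numpy as np

        # Attribute (e.g. a UV) at the three vertices, and each vertex's
        # clip-space w; all made-up values.
        attr = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])   # per-vertex UVs
        w_clip = np.array([1.0, 2.0, 4.0])                      # per-vertex w

        def perspective_correct(b0, b1, b2):
            """Interpolate with screen-space barycentric weights b0..b2.
            Lerp attr/w and 1/w, then divide; a plain lerp of attr alone
            would be wrong under a perspective projection."""
            b = np.array([b0, b1, b2])
            one_over_w = np.dot(b, 1.0 / w_clip)
            attr_over_w = (b[:, None] * (attr / w_clip[:, None])).sum(axis=0)
            return attr_over_w / one_over_w

        print(perspective_correct(1/3, 1/3, 1/3))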

    ---

    Typically one would do lighting in world space at the pixel level: pass a world-transformed TBN matrix to the pixel shader and then transform the normal map's normal into world space. You also pass the world-space position.
    The benefit is that you can then loop over lights inside the pixel shader, whereas if you computed the "to light" vector per vertex, you would need one interpolant for every light source, which doesn't scale well.

    Your TBN matrix will be an interpolation of the TBNs of the three vertices involved. That's why it's so crucial that the normal-map baker's and the realtime renderer's TBN calculations match; otherwise you get shading errors, sometimes more and sometimes less obvious.
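    A sketch of that world-space approach (the interpolated TBN, position, normal-map texel and light positions are all placeholder values):

        import numpy as np

        def normalize(v):
            return v / np.linalg.norm(v)

        # Interpolated inputs the pixel shader would receive (made-up values):
        # a world-space TBN (rows T, B, N) and the world-space position.
        TBN_world = np.array([[1.0, 0.0, 0.0],
                              [0.0, 1.0, 0.0],
                              [0.0, 0.0, 1.0]])
        pos_world = np.array([0.0, 0.0, 0.0])

        # Normal-map sample, decoded from [0,1] to [-1,1], then taken to
        # world space with the (transposed) TBN.
        texel = np.array([0.55, 0.5, 0.95])
        n_tangent = normalize(texel * 2.0 - 1.0)
        n_world = normalize(TBN_world.T @ n_tangent)

        # Because the normal is now in world space, we can loop over any
        # number of lights without needing one interpolant per light.
        lights = [np.array([2.0, 3.0, 1.0]), np.array([-1.0, 0.5, 2.0])]
        diffuse = 0.0
        for light_pos in lights:
            L = normalize(light_pos - pos_world)
            diffuse += max(np.dot(n_world, L), 0.0)
        print(diffuse)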