
View Full Version : Normal Map Woes - UDK and Performance - How to make one properly


Ace-Angel
09-06-2011, 02:23 PM
Hey to all,

I'm having some issues with UDK and have been banging my head against them for a while now with no solution in sight, but here goes. Also, apologies if this should be in the UDK section, but I felt it would be better served here, since it includes other questions that aren't related to UDK per se, just generally.

I want to know the proper way to bake a normal map for UDK and apply it to an optimized mesh. I have been scouring the internet for the past few days for this sole purpose and none of it makes sense so far, so excuse my long post.

Short Version for those people who want to get the gist of it:

1) - 90* smoothing groups or a single smoothing group, control loops, UV island splits, and hard edges: what, when, and why is the question.

Basically, there are too many conflicting opinions (i.e. some say you need to keep edges at 90*, others say UV island splits are more important), so the question is what gives, and how can I make the best use of each one without stressing my normal maps too much.

2) - Using photos/textures to make normal maps: what is acceptable about them?

Opinions range from 'you should model everything' to 'you should use whatever is most straightforward'. The question is: is there any downside to using textures for normal maps? Do they not transfer correctly, especially for dirt maps?

3) - Compressing my textures and their channels: how come some channels end up suffering more than others?

My blue channels get chunky while the others are fine, and uncompressed normals seem to work best.

4) - Mesh triangulation and tangent basis: no need to ask me what I mean, because I don't know the idea behind them. Please explain.

First Issue: Splitting Vertices and UV's

One of the first things I learned was that a single smoothing group is what you need for your models, the reason being that it's simply better for performance, since you're not creating extra information with split vertices.

Also, if you choose a smoothing group setup, you need to stick with it: baking and importing into the game engine need to use the same one.

This is also true in programs like Max: if I put a single smoothing group on large, heavy mesh pieces, my viewport performance jumps by several frames or even doubles, while hard edges at 90* slow it down.

Now here is my problem with the previous statement: if one smoothing group is needed, then how do 90* splits and/or UV island splits keep the 'detail' information on, say, a chamfered high-poly bake, while still being performance-friendly?

Many say a 90* split is needed to avoid weird shading transitions right off the bat, but then there is the UV island split, which in some cases is also needed. I couldn't find any information on this: would you need to create both 90* splits and UV splits TOGETHER on the same mesh if the case calls for it, or do you limit yourself to one or the other?

To top it off, we also have the support-loop method, which is universally praised, but unless you have the proper loops in place to begin with, it can be a long process. That's not the issue for me, though; the issue is, where does it stand when put next to the other two?

Also, another conflicting message I keep getting is that extra UV splits along smoothing group breaks are fine and don't add extra vertex splits, but extra UV splits inside a single smoothing group do add up. This confused me even more, since everything needs to be harmonized when exporting to the game engine, and I'm not sure whether it would be performance-friendly at all.

However, what really confuses me is how this information transfers to game engines in the final export. Wouldn't having a model with extra splits go against the very idea of saving performance? Could I bake my model with 90* splits and UV breaks, but then apply one smoothing group in the game engine? Would it still look correct with the normal map applied?

I guess to summarize my first point: how can one know which method does what compared to the others? I'm having a hard time understanding this, since I'm getting so many conflicting opinions and statements about which one should be used for UDK that I feel like I'm suffocating. I also spent some time testing these methods, but that only confused me more, since I couldn't see how they affect performance, or whether the information I'm transferring is 'correct enough'.


Second Issue: Image Based Normal Maps

One of the things I do a lot when creating tiling textures is go directly to photo sources rather than modeled ones. For example, if I have large 'panel'-like buildings or pavement tiles, I use an image and tools like ShaderMap or CrazyBump to make the normal maps; if the texture is inorganic, I take the time to create black-and-white height information so the normal map is generated correctly.

Problem is, again, I have been getting mixed messages on this point, so, generally, what is the best way to go about it? Is there any adverse effect in not modeling everything? I would assume the time saved by using textures instead of modeling dirt would make it a no-brainer, but apparently there are entire debates of 'model everything' vs. 'just use the best solution', so what's the deal here?

Also, apparently, texture-based normals are the bane of half these issues, since if you have a model with many hard edges, you need to define those edges with smoothing groups and not with the normal map, since picture-based normals are not 'correct' when it comes to defining edges.

The solution would be to bake out a large enough normal map that contains the 'hard edge' detail, but as I said above, this confuses me a lot. Would I then have to 'overlay' the detail map on my large map for the tiling effect? Wouldn't this eat away at my details?


Third Issue: Compressions and Channels

One of the things that bothers me is the compression that comes with normal maps; in engines like UDK, you can choose to have them uncompressed. Generally, this is a good idea for demo reel pieces since it doesn't produce artifacts.

However, I recently read that you can downscale a normal map to half size, keep it uncompressed, and it would be more beneficial than a compressed full-size one. My question is: is it really? I find it hard to understand the math behind it.

What takes my confusion one step further is that different channels are compressed differently. An example of what I mean: I was using my RGB channels as temporary homes for spec and gloss maps, but it came to my attention that my blue channels were causing severe artifacts compared to my other channels.

My blue channels literally produced large, chunky, square artifacts. I had to get rid of them by blurring them in the game engine, but this raises the question of how this makes any sense! What if someone like me had four different 'types' of parts on a model (rubber, metal, skin and lights), and wanted to use the gloss map as a mask map for those areas?

I would like to note I use Targa image files, but many people keep referring (about texture compression) in DDS format lingo, like V8U8, DXT numbers and other mumbo-jumbo, which, last I looked, I didn't see any of in the Targa format.


Fourth Issue: Formats for the Mesh and Triangulation

OK, last question I promise, and this so far has been the bane of my existence.

So far, each game engine or baking solution has required certain file formats, case in point UDK and xNormal.

Now, the first issue is the format itself, which plays a large role in what it does: from recent discussions I got the feeling that FBX is the best solution, since it apparently keeps the vertex normal weights and the triangulation correct.

Which brings me to my second point: how is this data interpreted? We have two ways of triangulating a mesh or correcting the normal weights on a mesh, either manually or automatically. My question is, how does this work?

An example of what I mean: the FBX format is said to be the best format both for engine export and normal baking, since it keeps the correct normal weights compared to other formats. In the case of UDK, it's apparently better than ASE, and for xNormal, better than OBJ. All this because it supposedly preserves the corrections made by the user when exporting, but does it?

I'm really confused as to how a model format can keep the triangulation orientation and normal weights better, and what this contributes in general. This is also a problem, since some claim that triangulating your mesh on the low poly, and using that to bake from your high poly, is the correct way to go, since it causes fewer issues and is faster.

Again, apparently it all has to do with the format, and I'm not sure what to make of this.

Sorry for the long post. I'm really frustrated with normals and would like some help from the pros in the area, since I'm getting so much conflicting noise on the subject that it's vexing.

AlecMoody
09-06-2011, 02:59 PM
You should be using UV/smoothing splits to control shading errors. Splits cost verts, but so does everything else in the model, and you just need to build that into your workflow/budgeting. Ubervert count for Max makes tracking verts easy, and if you are concerned with optimization you should use it. If Unreal rendered normal maps in a way that matched any baking software, then you could use fewer splits. But it doesn't, so you have to choose between shading errors and vertices.

Your goal with adding these splits is to make the normal map do as little work as possible. Meaning: smoothing in the low poly creates gradients in your normal maps; flat surfaces on your low poly should render out as 128,128,255.

Don't reassign smoothing in any way after you bake (especially in unreal).
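The "flat = neutral blue" rule above can be sketched in a few lines (an illustration, not any baker's actual code): each normal component in [-1, 1] is remapped to an 8-bit value in [0, 255], so the straight-up normal (0, 0, 1) encodes as roughly 128,128,255.

```python
# Illustrative sketch: encoding a tangent-space normal as 8-bit RGB.
# Each component in [-1, 1] is remapped to [0, 255]; the "flat" normal
# (0, 0, 1) therefore encodes as (128, 128, 255), the familiar
# normal-map blue.

def encode_normal(nx, ny, nz):
    """Remap a unit normal from [-1, 1] per axis to 8-bit RGB."""
    to_byte = lambda c: int(round((c * 0.5 + 0.5) * 255))
    return (to_byte(nx), to_byte(ny), to_byte(nz))

print(encode_normal(0.0, 0.0, 1.0))  # flat surface -> (128, 128, 255)
```

Any low-poly smoothing gradient bends the interpolated normal away from (0, 0, 1), which is why the advice amounts to keeping as much of the map at neutral blue as possible.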

cptSwing
09-06-2011, 04:16 PM
Isn't supporting geometry used primarily to fix skewed detail on normal maps? I don't see this as an issue of a certain tangent basis or rendering engine, but more as a natural consequence of an averaged baking cage (rays hitting the greeble detail at an angle). Correct me if I'm wrong, though!

As for performance, since both UV shell borders and smoothing groups/hard edges cause vertex splits, it isn't more costly to have both if they correspond. I usually just lay out UVs quick and dirty, convert shells to smoothing groups, and then check the model for any needed supporting loops or possibly different UV breaks.
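The "no extra cost if they correspond" point can be illustrated with a toy count (an assumption about how GPUs index vertices, not something stated in this thread): a vertex is duplicated wherever any of its attributes differ between faces, so a UV seam that lies on a hard edge reuses the split the hard edge already paid for.

```python
# Sketch (assumption): the GPU duplicates a vertex wherever any attribute
# differs, so the real vertex count is the number of unique
# (position, normal, uv) combinations.

def gpu_vertex_count(corners):
    """corners: list of (position, normal, uv) tuples, one per face corner."""
    return len(set(corners))

# One position shared by faces. A hard edge splits the normal; a UV seam
# splits the uv. If both splits fall on the same edge, the vertex is
# duplicated once, not twice:
p = (0.0, 0.0, 0.0)
aligned = [(p, "n_a", "uv_a"), (p, "n_b", "uv_b")]       # seam on hard edge
separate = [(p, "n_a", "uv_a"), (p, "n_b", "uv_a"),      # hard edge here...
            (p, "n_b", "uv_b")]                          # ...uv seam elsewhere
print(gpu_vertex_count(aligned))   # 2
print(gpu_vertex_count(separate))  # 3
```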

As far as I know, FBX just uses Max's internal triangulation (invisible to the user). It also exports your tangent basis as of a couple of updates ago.

Quack!
09-06-2011, 10:27 PM
Third Issue: Compressions and Channels

One of the things that bothers me is the compression that comes with normal maps; in engines like UDK, you can choose to have them uncompressed. Generally, this is a good idea for demo reel pieces since it doesn't produce artifacts.

However, I recently read that you can downscale a normal map to half size, keep it uncompressed, and it would be more beneficial than a compressed full-size one. My question is: is it really? I find it hard to understand the math behind it.

What takes my confusion one step further is that different channels are compressed differently. An example of what I mean: I was using my RGB channels as temporary homes for spec and gloss maps, but it came to my attention that my blue channels were causing severe artifacts compared to my other channels.

My blue channels literally produced large, chunky, square artifacts. I had to get rid of them by blurring them in the game engine, but this raises the question of how this makes any sense! What if someone like me had four different 'types' of parts on a model (rubber, metal, skin and lights), and wanted to use the gloss map as a mask map for those areas?

I would like to note I use Targa image files, but many people keep referring (about texture compression) in DDS format lingo, like V8U8, DXT numbers and other mumbo-jumbo, which, last I looked, I didn't see any of in the Targa format.

http://udn.epicgames.com/Three/NormalMapFormats.html

UDK imports Targa, then compresses them to DXT on import. The "DeferCompression" checkbox is a good way to take beauty shots but can't be used in game.

Also, setting the normal map to NormalMapUncompressed does help quite a bit on hard surface models where you need nice smooth surfaces. You will just have to try this setting with your models to see if it benefits you. I tried this with a first person weapon that I completed, and NormalMapUncompressed at 1024x1024 was leaps and bounds better looking than a compressed normal map at 2048x2048.

Ace-Angel
09-06-2011, 10:38 PM
You should be using UV/smoothing splits to control shading errors. Splits cost verts, but so does everything else in the model, and you just need to build that into your workflow/budgeting. Ubervert count for Max makes tracking verts easy, and if you are concerned with optimization you should use it. If Unreal rendered normal maps in a way that matched any baking software, then you could use fewer splits. But it doesn't, so you have to choose between shading errors and vertices.

Your goal with adding these splits is to make the normal map do as little work as possible. Meaning: smoothing in the low poly creates gradients in your normal maps; flat surfaces on your low poly should render out as 128,128,255.


I understand, which is unfortunate; especially in UDK, these shading errors are what drive me insane! Thanks for the info; I didn't know that stressing the normal map had to be taken into account and that flat panels must be neutral normal-map blue.

Would it be beneficial to bevel/chamfer my 90* edges, so as to lessen the shading issues? Especially on visible parts that are hard to hide?

From what I gathered, a new school of thought is that polys cost less in terms of performance than large materials and texture sizes. I'm not sure if that's true; I'm just curious whether correcting the shading errors poly-wise would be a better investment than normals.


Don't reassign smoothing in any way after you bake (especially in unreal).

OK, understood, thanks!

Isn't supporting geometry used primarily to fix skewed detail on normal maps? I don't see this as an issue of a certain tangent basis or rendering engine, but more as a natural consequence of an averaged baking cage (rays hitting the greeble detail at an angle). Correct me if I'm wrong, though!

That is what I thought too, but I have seen people use cages only and get correct bakes without the need for any support loops, even on 'cylindrical' shapes with loads of fancy-schmancy lines, which only confused me even more, because I thought you needed them.

I think it's called 'flushing': they take any 90* edge and essentially position the flat surfaces of the cage just barely on top of where they're supposed to be. From what I gathered, the closer the plane/face of the cage is to the position of the mesh, the less warping you get from your bakes.

Again, I'm not sure if it's really needed or not; all I know is that I'm more confused.

As for performance, since both UV shell borders and smoothing groups/hard edges cause vertex splits, it isn't more costly to have both if they correspond. I usually just lay out UVs quick and dirty, convert shells to smoothing groups, and then check the model for any needed supporting loops or possibly different UV breaks.

So as long as they stay on the same group/UV split, there is no extra cost? Nice, thanks.

Also, how do you stop your normals from breaking when adding support loops? I always make a copy of my mesh, one with and one without support loops. I use the supported one to bake, but when I take out the loops or apply the normal map to my original mesh, the normals break and I get a heavy loss of information (i.e. my mesh looks like it has shading problems where it should be a flat surface).

As far as I know, FBX just uses Max's internal triangulation (invisible to the user). It also exports your tangent basis as of a couple of updates ago.

Ah, understood, although I'm not sure how the tangent basis translates on export; is it related to the U and V orientation of the normals?

Ace-Angel
09-06-2011, 10:42 PM
http://udn.epicgames.com/Three/NormalMapFormats.html

UDK imports Targa, then compresses them to DXT on import. The "DeferCompression" checkbox is a good way to take beauty shots but can't be used in game.

Also, setting the normal map to NormalMapUncompressed does help quite a bit on hard surface models where you need nice smooth surfaces. You will just have to try this setting with your models to see if it benefits you. I tried this with a first person weapon that I completed, and NormalMapUncompressed at 1024x1024 was leaps and bounds better looking than a compressed normal map at 2048x2048.

Ah, that makes sense, and I understand now: half size uncompressed is the key. Is there a reason it's not used more often?

Vailias
09-07-2011, 01:09 AM
DXT has a number of different compression modes. It's not a single encoder algorithm, but a small family of them. Some of the types get better quality than others at the expense of file size/RAM footprint. The normal map compression option is a type of DXT compression which should be less aggressive, so as to leave the normal data intact.

My guess on your blue channel issue is that there isn't a lot of differing data in your blue channel, so it's getting represented in larger chunks by the compressor.

As far as normal mapping goes, it sort of depends on the implementation, but generally speaking you want flat surfaces to be 127,127,127 rather than 127,127,255, as the data is used as an offset to the interpolated normal rather than taken explicitly. So offsetting it with a +1 in Z may cause unwanted shading errors.

The reason for half size uncompressed vs. double size compressed is how the DXT algorithm works. Basic DXT compression samples groups of pixels in 4x4 blocks and basically creates a small lookup table and some color values. The compression ratio depends on the particular implementation, and it is lossy regardless of which type you go with. So a half-size texture (1/4 the area) will have the same memory footprint as the original texture with DXT compression.

So why is this better? Because your normal data is explicit, rather than effectively being re-sampled by the engine and introducing errors. Depending on the asset and its map, the compression artifacts may not be very noticeable due to the content of the map, and so having MORE available data points with some margin of error winds up looking more detailed on the mesh. In other cases the errors strongly take away from the asset, again due to the data content of the map.
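The footprint claim checks out numerically; here is a minimal sketch (assuming DXT5's 16 bytes per 4x4 block and uncompressed 32-bit RGBA, which are my assumptions, not figures from the thread):

```python
# Sketch: comparing the memory footprint of a half-resolution uncompressed
# normal map against a full-resolution DXT5-compressed one.
# DXT5 stores each 4x4 pixel block in 16 bytes (1 byte/pixel);
# uncompressed RGBA8 costs 4 bytes/pixel.

def dxt5_bytes(w, h):
    return (w // 4) * (h // 4) * 16   # 16 bytes per 4x4 block

def rgba8_bytes(w, h):
    return w * h * 4                  # 4 bytes per pixel

full_compressed = dxt5_bytes(2048, 2048)      # 4 MiB, lossy
half_uncompressed = rgba8_bytes(1024, 1024)   # 4 MiB, exact
print(full_compressed == half_uncompressed)   # True: same footprint
```

Same memory budget either way; the half-size map simply spends it on exact data instead of compressed approximations.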


The reason that being able to export tangents with the model matters comes down to how models are shaded in the first place.

Normally, on import, Unreal will either recalculate the mesh's normals based on vertex ordering or use normals embedded in the vertex data, and by extension it will either calculate or use the tangent and binormal data.

The normal, tangent, and binormal essentially form a coordinate space per vertex of the object. Each vector is orthogonal (perpendicular) to the others, but as you can imagine, there are infinitely many ways for a tangent to project from a normal. Having consistent tangent data helps keep shading consistent: to transform any vector, say the light vector, into tangent space, that space has to be defined by a matrix representing the orientation of its basis directions (normal, tangent, binormal) in some mutually agreed-upon parent space, namely world space.

So if you let your tangents be recalculated from what you have in your DCC application, your normal mapping in engine may (or will) suffer.
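That per-vertex basis can be sketched directly (illustrative only, not Unreal's actual shader code): the tangent, binormal, and normal act as the rows of a rotation that carries a world-space vector into tangent space, which is why the baker and the engine must agree on the same basis for the map to shade correctly.

```python
# Sketch: transforming a world-space vector into a vertex's tangent space.
# The (T, B, N) vectors form an orthonormal basis; projecting onto each
# of them is equivalent to multiplying by the TBN rotation matrix.

def to_tangent_space(v, tangent, binormal, normal):
    """Project world-space vector v onto the (T, B, N) basis."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    return (dot(v, tangent), dot(v, binormal), dot(v, normal))

# With an axis-aligned basis the transform is just a relabeling:
light = (0.0, 0.0, 1.0)
t, b, n = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)
print(to_tangent_space(light, t, b, n))  # (0.0, 0.0, 1.0)
```

If the engine rebuilds tangents differently from the baker, this basis rotates, and the baked normal map is decoded in the wrong frame.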

... and yes, this is on the developing syllabus for that tech-art math tutorial set... just gotta finish Comic-Con.

Xoliul
09-07-2011, 04:18 AM
My guess on your blue channel issue is that there isn't a lot of differing data in your blue channel, so it's getting represented in larger chunks by the compressor.


It's because of a thing called swizzling, plus blue channel discarding.
The blue channel is left out because it can be mathematically reconstructed from just the red and green channels: b = sqrt(1 - (r*r + g*g)).
With DXT compression (and some JPG algorithms) the red channel is compressed the most, so they move the actual red channel data out of it and into the blue channel, 'swizzling' the values so as to put the important data in the more precise channels.
The whole issue should stem from the slight loss of precision when reconstructing the blue channel from lossily compressed data.
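That reconstruction can be written directly (a sketch; the max() clamp is my addition, to guard against lossy r/g values whose squares sum past 1):

```python
# Sketch of blue-channel reconstruction from a unit normal's r and g
# components (each in [-1, 1]). Any compression error in r or g leaks
# straight into the rebuilt b, which is where the chunky artifacts
# come from.
import math

def reconstruct_blue(r, g):
    """Rebuild b so that (r, g, b) is again a unit-length normal."""
    return math.sqrt(max(0.0, 1.0 - (r * r + g * g)))

print(reconstruct_blue(0.0, 0.0))            # flat normal -> 1.0
print(round(reconstruct_blue(0.6, 0.0), 3))  # tilted normal -> 0.8
```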

A trick you could try (only for beauty, not for actual ingame use and performance) is to import your texture at double the actual size, but with NormalMapUncompressed.

Btw Ace-Angel, you really need to try to condense your questions into fewer words; a lot more people would bother to reply and help you. Right now it's pretty much tl;dr...

Ace-Angel
09-07-2011, 11:07 AM
-Thanks Vailias, that makes a lot of sense. I think I've got it now, and cheers with CC :)

One question:

-Xoliul, believe me mate, I tried, but I didn't know how to compress the questions. The first section I wrote should be clue enough as to why it needed to be so long: there are so many conflicting statements, sometimes made in the same wiki, which don't address previous statements (90* splits and UV island splits are one such case; I don't think I ever read a single paragraph telling me whether they should work together, or whether it's case dependent, or even mesh dependent).

I'll take your advice, condense my previous questions into quotes, and see how that goes :)

Also, out of curiosity, I didn't know NormalMapUncompressed would be beneficial for other maps. Nice to know!

I guess my problem is that I always watch tutorials which don't go into detail about how normal maps themselves are an art form that needs to be learned and handled with care. They always make a model, bake, and call it a day, sometimes with great results in UDK; but when I try it like that, I get at least 25 shading issues on one mesh, and even nasty lines which shouldn't be there.

Cheers guys.

tristamus
09-07-2011, 09:05 PM
I guess my problem is that I always watch tutorials which don't go into detail about how normal maps themselves are an art form that needs to be learned and handled with care. They always make a model, bake, and call it a day, sometimes with great results in UDK; but when I try it like that, I get at least 25 shading issues on one mesh, and even nasty lines which shouldn't be there.

Cheers guys.

The joys of learning :D Yes, normal maps definitely seem like a very intricate art form sometimes, lol!

This thread is actually pretty helpful. A compilation of ideas.

Ace-Angel
09-09-2011, 12:09 AM
Hehe, so true.

Found this video for dummies like me; it should be useful, because it actually helped me understand how Max calculates them: http://www.youtube.com/watch?v=PMgjVJogIbc&feature=player_embedded

http://www.youtube.com/watch?v=O_7tHqgqpvw