
Hardware tessellation and topology

Swizzle polycounter lvl 15
Since tessellation seems to be a pretty hot topic right now with the Samaritan tech demo and the March UDK, I realized I had some questions about it. That probably means other people do as well.

Specifically, what does tessellation mean for modelers?

Obviously it's adding more geometry to what's already there, but how do the algorithms behave when subdividing meshes?

Is it necessary to add support loops to sharp edges on your low-poly model, or is this something that the displacement map can usually take care of?

Will tessellation start to make low-poly modeling look more like high-poly stuff so that things subdivide nicely?

If any of the Epic folks or any people from other studios and projects using tessellation would care to chime in, that'd be great. I've never worked with tessellation before and I'm sure there are several other people around who've never touched the stuff either.

Replies

  • Mark Dygert
As far as I know (and it's very limited knowledge), it can work off of texture maps: basically you have a low-poly mesh, you feed it a normal map or something like it, and it tessellates as it needs to.

    Being curious myself and doing a quick search I found this:
    http://blogs.msdn.com/b/chuckw/archive/2010/07/19/direct3d-11-tessellation.aspx
Looks like lots to read, and it's probably not all that artist-friendly. I'm sure it will kick off a new round of tools and tech to go with it. But we're talking about extremely high-end PCs, and it might not show up on the next round of consoles... /sadface
  • oglu
    oglu polycount lvl 666
Just take a look at the Stone Giant demo... there is a wireframe mode to check out the tessellation:
    http://www.stonegiant.se/
    [ame]http://www.youtube.com/watch?v=5TLlichIP9Y[/ame]
  • commander_keen
    commander_keen polycounter lvl 18
From something I read a long time ago, I remember that DX11 tessellation is done with a "tessellation program" similar to vertex and fragment programs in shaders. It goes in between the vertex and fragment program; I guess in there you tell it how much to tessellate and what you want to do with the extra geometry that's created.

From seeing DX11 tessellation screenshots, the tessellated geometry does not modify the shape of the mesh by default, so you would use a displacement map (or vector displacement map) rendered the same way you render normal maps. You then sample that in the tessellation program and offset the verts accordingly.

So from an artist's view it would generally be the same as using parallax mapping, but it would just look much better.

From a technical standpoint it also allows much more than just displacing surfaces. Here's a volumetric lighting implementation using hardware tessellation:
    http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.167.5056&rep=rep1&type=pdf
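The steps commander_keen describes above (tessellate, sample a displacement map, offset the verts) can be sketched in plain Python. This is an illustrative model of what the domain-shader stage does per generated vertex, not actual shader code; all names and the scalar height-map convention are assumptions:

```python
# Hypothetical sketch of the displacement step after the hardware tessellator
# emits new vertices: interpolate the triangle's attributes at the tessellator's
# barycentric coordinates, sample the height map, push out along the normal.

def lerp3(a, b, c, bary):
    """Barycentric interpolation of three per-vertex attribute tuples."""
    u, v, w = bary
    return tuple(u * x + v * y + w * z for x, y, z in zip(a, b, c))

def displace_vertex(tri_pos, tri_nrm, tri_uv, bary, height_map, scale=1.0):
    pos = lerp3(*tri_pos, bary)                       # interpolated position
    nrm = lerp3(*tri_nrm, bary)                       # interpolated normal
    u, v = lerp3(*[(x, y, 0.0) for x, y in tri_uv], bary)[:2]
    h = height_map(u, v)                              # scalar displacement sample
    return tuple(p + n * h * scale for p, n in zip(pos, nrm))
```

In a real HLSL domain shader the same steps would be the barycentric interpolation of the patch control points, a texture sample, and an offset along the normal.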
  • Bigjohn
    Bigjohn polycounter lvl 11
    Also looks like it's completely killing the fps. It basically gets cut in half.
  • r_fletch_r
    r_fletch_r polycounter lvl 9
Quote:
From a technical standpoint it also allows much more than just displacing surfaces.

Shouldn't it be possible to generate mass amounts of procedural geometry like trees, hair, or feathers? Seems like a bit of a holy grail feature.
  • claydough
    claydough polycounter lvl 10
Sure would be nice if there was a thread without any noise dismissing performance and practical application on current hardware. Assume something Maxwell-like will be in a console within 4 years, and you're dedicating research and development to pipeline possibilities and discoveries right now. Please sign me up for that thread, if only to proof-of-concept our own stone giants for the next 3 years. Sounds like 3 years of fun to me.

    Even if not ready for prime time, shouldn't the wiki evolve simultaneously with the artist as the tech matures?

There seems to be next to nothing (really nothing yet) in the way of a pipeline description. It appears you can't just turn on tessellation and get ideal complexity throughout all geometry. The variation the surface takes at differing levels of tessellation is kind of scary. (Can that be controlled by an edge density map? I'm trying to wrap my head around how tessellation is used to control LOD. Seems there should be parallel handling; otherwise, if bare low topology is used (without traditional contributing levels of complexity), "control" of the silhouette would be chaotic. Is this what's happening when the silhouette "dances" around as you up the tessellation level?)
All I've tried so far is dissecting demo assets in the DirectX SDK and Unigine Heaven. I don't know yet if UDK works the same way; it appears from previous versions it might. I decided to tackle tessellation and began researching recently, and there seems to be very little in the way of an artist-centric pipeline and techniques. (If any "involved" pipeline discussion already exists, I'd sure appreciate a pointer in that direction.)

What I have tried so far (an often-used strategy with new tech) is to just substitute my assets into the whitepaper demo sample files and eyeball the results.

However, when trying the same with the June 2010 MSDN DirectX SDK, it appears that such a pipeline may not be as straightforward as many have assumed, as density-based and LOD-distance methods are what make adaptive strategies "fast". The mapping to support these techniques (within the SDK samples at least) involves two parts:
• a "height" map alongside the normal map carrying that information
Saint_NM_height.jpg
• an edge definition map, mapping edge integrity as red on black (the edge "density" in red)
Saint_NM_height_density.jpg

In similar pipelines (Unigine? seen on YouTube?) I have noted similar workflows where the red map seems to contain both height and edge information.


Not sure if the samples in the SDK support subdivision surface displacement, vector displacement, or PN triangles. However, this method seems to be reinforced by the DirectX 11 tessellation features recently exposed in UDK, where:

    Quote:
    Ok, I am judging this based on 5 minutes of quick inspection as I haven't had a chance to find out about this feature officially.
    So take it with a grain of salt and rest assured we will have complete documentation of it in the near future on UDN.

    That input takes in a 2-component vector I believe (X and Y, or R and G...however you want to look at it).
    The first component defines the tessellation factor for the edges and the second component defines the tessellation factor for the interior.
    It should also require the D3D11TessellationMode property of the base Material node (in the D3D11 section)
    to be set to something other than MTM_NoTessellation. Most importantly, it only works if your video card supports DirectX 11.
In which case I imagine this might be a good direction to start? If I knew that the edge density map was something that could easily be "baked" and subsequently previewed within a baking application's DX viewer (along with all its tessellated glory)... perhaps in xNormal 4?

um... please?

Seems like there should be lots of room for discovery, and I imagine in the end much of the pipeline will really be discovered/evolved here?

    In which case the sooner the better?
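For what it's worth, the two-component tessellation input described in the quote above (first component for the edges, second for the interior) can be modeled roughly like this. This is only a guess at the behavior (scale and clamp into the hardware's factor range), not UDK's actual code; the function name and the use of 64 as the Direct3D 11 maximum factor are assumptions:

```python
# Rough, speculative model of a 2-component (R, G) tessellation multiplier:
# R scales the per-edge tessellation factors, G scales the interior factor.
# Factors are clamped to [1, max_tess] so a patch never disappears.

def patch_tess_factors(rg, max_tess=64.0):
    r, g = rg
    edge = max(1.0, min(max_tess, r * max_tess))    # factor used for all 3 edges
    inside = max(1.0, min(max_tess, g * max_tess))  # factor for the interior
    return {"edges": (edge, edge, edge), "inside": inside}
```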
  • EarthQuake
The easiest thing to do would simply be to think of it as a basic displacement, or as a "real" parallax shader: instead of faking the look, it's actually subdividing and displacing the geometry.

    Sure you can get more complex with directional maps and such, but for the basic implementation all you would need is a displacement map, which contains the difference between your high and low.

    [edit] Claydough: this isn't specifically a reply to you, just for anyone who is struggling with the concept.

As far as combating completely random silhouettes, I think that starts with content. If your art content has a bunch of random noise in it, it's going to result in messy displacement. This is true of a basic parallax shader as well. I tend to blur my displacement map a bit so only the larger forms come through; this way you have less chance of a random pixel giving a totally random height value.

To get really fine detail out of this sort of tech, you need a 1:1 pixel-to-quad ratio (or higher), which isn't a good plan to depend on. I think the better use of this sort of tech is broad shapes, leaving the noise-type detail in the basic 2D maps: diff/spec/normal/etc.
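EarthQuake's blur advice above can be illustrated with a minimal box blur over a height map stored as a 2D list. This only shows the principle (neighborhood averaging stops a single noisy texel from spiking the displaced surface); a real pipeline would use a Gaussian blur in an image editor:

```python
# Minimal box blur for a height map: each texel becomes the average of its
# neighborhood, so isolated noise pixels are flattened and only broad forms
# survive to drive the displacement.

def box_blur(height, radius=1):
    h, w = len(height), len(height[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total, n = 0.0, 0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:  # clamp at the map border
                        total += height[yy][xx]
                        n += 1
            out[y][x] = total / n
    return out
```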
  • TDub
    TDub polycounter lvl 13
I just hope they start producing more documentation on the subject. After working with Unigine for about 8 months, it was always a very tedious thing to achieve without some horrible error appearing. Though I'm sure it is getting more fine-tuned.

As far as topology, I don't think it should really change the workflow of character artists (except maybe creating a new bake, be it height or displacement, depending on the software/engine), unless you want to go for procedural changes like in the new UDK demo. That's beyond me. :poly122: And for environment stuff it was usually just a matter of subdividing your model a bit.
  • teaandcigarettes
    teaandcigarettes polycounter lvl 12
Thanks for all the info so far, guys. I love reading about this stuff.

One thing bothers me, however. What about using a tangent-space normal map and a displacement map at the same time? Wouldn't tessellation destroy the vertex normal data used by the normal map?
  • commander_keen
    commander_keen polycounter lvl 18
It probably works the same way as vertex programs: if you modify the vertex position, it does nothing to the vertex normal, and you would need to manually modify the vertex normal if needed. So you still need a normal map to get proper shading, unless you were doing something crazy and editing the normals along with the vertex positions.
  • EarthQuake
You'd lock the vertex normals pre-tessellation; the normals wouldn't be re-normalized after the displacement was added, so in theory it wouldn't be an issue.

    You could skip the normal map and smooth the normals again in the shader, but I imagine this would be very slow.
  • commander_keen
    commander_keen polycounter lvl 18
Well, the point is that you don't need to recalculate the normals, because the baked tangent-space normals will work correctly.
  • ScoobyDoofus
    ScoobyDoofus polycounter lvl 19
I'm keeping my eye on this topic. I've downloaded the new UDK, but my fancy-pants Quadro card doesn't do DX11. So until I get capable hardware, it's going to remain research/theoretical for me.

I would imagine that in order to achieve optimal results you'd need some kind of vector displacement map, as a standard disp map will only push verts in the direction of the surface normal, right?
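The difference ScoobyDoofus is pointing at can be shown with two toy functions: a scalar displacement can only move a vertex along its normal, while a vector displacement map stores a full 3-component offset (assumed here to be in tangent space), so it can push geometry sideways into overhangs like ears or mushroom caps. Illustrative Python, not shader code; the names are made up:

```python
# Scalar displacement: the vertex can only travel along its own normal.
def displace_scalar(pos, normal, h, scale=1.0):
    return tuple(p + n * h * scale for p, n in zip(pos, normal))

# Vector displacement: the map stores a (tangent, bitangent, normal) offset,
# so the vertex can move in any direction relative to the surface frame.
def displace_vector(pos, tangent, bitangent, normal, tbn_offset, scale=1.0):
    t, b, n = tbn_offset  # per-texel offset sampled from the VDM
    return tuple(
        p + (t * tv + b * bv + n * nv) * scale
        for p, tv, bv, nv in zip(pos, tangent, bitangent, normal)
    )
```

With a flat surface, the scalar version can only raise or lower the point, while the vector version can also slide it sideways, which is exactly what an overhang needs.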
  • nekked
Amazing new Nvidia tessellation vid:

    http://www.vimeo.com/17593021

I really don't see the in-game/untessellated mesh being much different from the ones we make today for most characters. But I think meshes with multiple disp maps, like in the video, would need a more generic, evenly quaded base mesh, since any part can morph into many different shapes; long, thin, or weird polys wouldn't tessellate well, or better put, they'd need to over-tessellate in order to displace well. My brain hurts...
  • HAL
    HAL polycounter lvl 13
What I've asked myself for quite some time now: would something like realtime sub-d hard-surface stuff with control loops work, or does that conflict with the recalculated-normals stuff mentioned earlier?
  • cman2k
    cman2k polycounter lvl 17
    I think part of the reason this is confusing is because everyone is treating "tessellation" as a single set feature, when really it is just a shader function and can be used in different ways. Developers are creating different tessellation algorithms that work in different ways and are better or worse for different applications. That leads to a lot of confusion as to how it 'works', because it's working differently for everyone.

    Here is a really great PDF from Nvidia that discusses a lot of these techniques, and is especially good at listing the pros and cons of each of the techniques. I think this gives a pretty good idea of the types of problems we could be looking at facing, but it's just as important to note that "tessellation" is not a singular hardware feature that only works one way....I think we'll see different algorithms evolve that serve different purposes well as we move forward.

    Sweet PDF on this stuff -> http://www.nvidia.com/content/PDF/GDC2011/John_McDonald.pdf

Looking at the problems, they seem very similar to the problems you might see when subdividing or tessellating stuff in 3ds Max. Have you ever tried tessellating a textured low-poly model in Max and seen lots of problems? Same type of stuff. Want to test displacement mapping? Very easy to do in Max; just try it and see how well it works for you. I think you'll see a lot of the same problems that you'll end up seeing from the various tessellation algorithms.
  • NordicNinja
    nekked wrote: »
Amazing new Nvidia tessellation vid:

    http://www.vimeo.com/17593021

    ^ This is the coolest thing I've seen in a long while. Very cool! :thumbup:

    At first I was all :|
    Then I was all :O
  • Bigjohn
    Bigjohn polycounter lvl 11
    That video was awesome.

I've got a few questions, though. First, how do you get texture coordinates for the new vertices? When you displace a surface, does it just use whatever texture was already there? That would mean that for a displacement that goes well beyond the silhouette of the original object (like the spikes in the demo), you would need a pretty high-res diffuse to support all those details.

Also, were there 3 different base meshes in that demo? That skeleton one looks vastly different, to the point where I don't think a displacement map could do that.
  • cman2k
    cman2k polycounter lvl 17
Yeah, it could... it's just going in the opposite direction, making the mesh extremely skinny. I'm fairly certain that's all done with a single mesh.
  • Snader
    Snader polycounter lvl 15
r_fletch_r - In theory, yes, but I reckon you'd need a whole lot of tessellation to get a decent number of polygons to dynamically grow an entire tree, and I think hair would be even more expensive (I think using splines will be feasible in the near future, though). It could, however, be used in combination with a simple base mesh; off the top of my head this should work:
-base mesh with a couple of branches
-noise displacement to make trees and branches wobbly
-tessellation to get rid of the
But most of the work would really be done by a displacement map. It also sounds fairly restrictive. But hey, I'm no pro at technical details and coding kick-assery.


BigJohn - one part of solving that is to have multiple unwraps, so you can redistribute detail to where you need higher texel density, because as you can see in the demo they blend between textures (diffuse/spec) as well. Though I reckon for most applications (such as bumpy rocks or tree bark).

What might be even more awesome is that you could stack displacements like one would in ZBrush. So you could dynamically grow mushrooms, for instance:
-tessellate the ground surface, displace center polygons upward
-load/blend a secondary unwrap that puts higher texel density on those polygons
-tessellate and displace again, creating the 'hat' of the mushroom
-load/blend tertiary unwrap and textures

What I wonder is, how flexible is tessellation? Is it bound to whole objects, or could you tessellate one arm but not the other (without using floats)? Are there fixed iterations like 2x, 3x, or can you say "okay, give this 50% more triangles in the places you think need them most" (or will the latter actually be less efficient than simply 2x-ing everything)?

Perhaps in the next-next gen we'll have tessellation-density maps? White spots get tessellated faster/with higher priority than dark spots?
  • cman2k
    cman2k polycounter lvl 17
From the Nvidia demo (look at the terrain wireframe in the alien video) and what I've seen, I kind of figured tessellation density was already a feature (not using a map, just putting more triangles where needed, like auto-detecting where the most variation is between displacement and base mesh and adding more polygons there...?)

    UDK is also changing terrain density based on distance...which I figured was done in the shader using a depth buffer or something.
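The distance-based density cman2k describes can be sketched as a per-patch tessellation factor that falls off with camera distance, so nearby terrain gets dense triangles and distant terrain stays coarse. The constants and the function name are illustrative, not anything from UDK:

```python
# Distance-adaptive tessellation factor sketch: full tessellation at or inside
# `near`, no extra tessellation at or beyond `far`, linear falloff in between.

def adaptive_tess_factor(dist, near=5.0, far=100.0, max_tess=15.0):
    # Interpolation parameter: 1.0 at/inside `near`, 0.0 at/beyond `far`.
    t = max(0.0, min(1.0, (far - dist) / (far - near)))
    return 1.0 + t * (max_tess - 1.0)
```

A real implementation would compute this per patch in the hull shader's constant function from the patch-to-camera distance.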
  • jogshy
    jogshy polycounter lvl 17
    In my opinion, "tessellation for modellers" just means you should work as follows:

    1. You create a LP mesh.
    2. Subdivide the LP mesh to turn it into a HP mesh
    3. Sculpt the HP mesh.
    4. Extract a displacement / vector displacement map.

    At realtime, the LP mesh will be subdivided with the DX11's hull/domain shader and the displacement/VDM map will be applied to reconstruct the HP mesh.

I really prefer to use vector displacement maps (VDMs) because they are much faster than displacement maps (they don't really require subdivision) and also because they allow you to create mushrooms/ears, which is impossible with a scalar displacement map.

Oh, btw, you don't really need DX11 to perform tessellation. See xN's DX10 spiked ball example (although DX11 will render it much faster and will make distance-adaptive tessellation possible):

    [ame]http://www.youtube.com/watch?v=SYU3YU3d7GI[/ame]

    And, yep, the UDK demo is awesome :poly142:
  • claydough
    claydough polycounter lvl 10
    Thanks Jogshy!
    Vector Displacement in XN sounds awesome as well.


The link to the updated tessellation documentation in the April 2011 beta is broken...
    The correct link is:
    http://udn.epicgames.com/Three/TessellationDX11.html



    DyanmicDisplacement.jpg

Hope to have time to dive in soon and see what's good vs what needs werk.