
Physically Informed Spec Value Library

Replies

  • jeffdr
    jeffdr polycounter lvl 11
    h3r2tic wrote: »
    It is pretty confusing to provide linear RGB in the 0-255 scale, as you typically don't store linear RGB in 8-bit formats -- if you do, you suffer badly from quantization artifacts. Linear RGB is most commonly used internally in shader math, and constant parameters you provide to materials; it will be in the 0-1 range. sRGB on the other hand usually comes from textures, and will typically be in the 0-255 range.

    Yes, this. Under no circumstances should linear values be represented as 0-255, or 'hex'. Safe to record sRGB either way.

    Also, the sRGB/linear equations EQ posted (which came from me) were in fact incomplete. The difference is small, but here are corrected links (just edit L or S to compute what you want; ignore the graphs, which are a mess):

    sRGB to linear: http://www.wolframalpha.com/input/?i=Piecewise%5B+%7B%7BS%2F19.92%2CS+%3C%3D+.04045%7D%2C+%7B%28%28S%2B0.055%29%2F1.055%29%5E2.4%2CS%3E.04045%7D%7D+%5D+where+S+%3D+0.5
    linear to sRGB: http://www.wolframalpha.com/input/?i=Piecewise%5B%7B+%7B12.92*L%2CL%3C%3D.0031308%7D%2C+%7B1.055*%28L%5E%281%2F2.4%29%29+-+0.055%2CL%3E.0031308%7D+%7D%5D+where+L+%3D+0.5
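
    If you'd rather have these as code than click through Wolfram Alpha, here's a minimal Python sketch of the same piecewise curve, using the standard constants (12.92, 0.04045, 0.0031308), with everything in the 0-1 range. Treat it as illustration rather than a drop-in for any particular engine.

        # Piecewise sRGB <-> linear conversion, values in the 0-1 range.
        def srgb_to_linear(s):
            return s / 12.92 if s <= 0.04045 else ((s + 0.055) / 1.055) ** 2.4

        def linear_to_srgb(l):
            return 12.92 * l if l <= 0.0031308 else 1.055 * l ** (1.0 / 2.4) - 0.055

    For example, srgb_to_linear(0.5) comes out to about 0.214, and linear_to_srgb(0.5) to about 0.735.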
  • Joopson
    Joopson quad damage
    That'd be a nice little program. Basically just a drop-down menu where you select a material, and it shows that material's properties, like albedo, specular, and an approximate range of roughness. Maybe a second drop-down too, for further differentiation, like "shiny plastic" vs. "rough plastic", et cetera.
    Maybe have it read from a database that tries to grab updates from a server to make sure the list is up to date as new values are added, and writes it to the hard drive. And you could choose how it displays the data: hex, sRGB numbers, linear numbers, or color swatches.

    Of course, I would have no idea how to make such a thing, but I like the idea.

    And maybe include a big disclaimer about how the numbers are approximations, and represent an average, which can and does vary in real life.

    a man can dream
  • EarthQuake
    Not to spoil the party or anything, but a more advanced tool like this is basically dDo 2 + Megascans... except you can make textures with it too. =P
  • Gestalt
    Gestalt polycounter lvl 11
    I just checked, and it doesn't appear that UE4 takes roughness into account for the diffuse; I'm pretty sure it remains Lambertian regardless of the value. Oren-Nayar models more complex changes in the diffuse based on roughness (including masking, shadowing, and interreflections), but I don't believe that's in UE4's model right now.
  • Joopson
    Joopson quad damage
    EarthQuake wrote: »
    Not to spoil the party or anything, but a more advanced tool like this is basically dDo 2 + Megascans... except you can make textures with it too. =P

    My dreams are coming true!

    [edit: but seriously, I'm super excited for the quixel suite, and everything it has to offer, especially now that I've finally started using dDo and nDo2 a little more often.]
  • Xoliul
    Xoliul polycounter lvl 14
    Just wanna mention some things regarding this:

    I feel the 'spreadsheet approach' to PBR really isn't the way to go. A few reference values here and there are fine, but I'd never rely on that. In fact at work I actively discourage our artists from doing that, and rather try to get everybody to fully understand the rules and guidelines, so they can think for themselves. I really urge everybody to do this too, make sure you realize the difference between roughness/gloss and specular reflectance, that you know what energy conservation is, that you understand how and why a metal behaves differently, etc...
  • Bubba91873
    Bubba91873 polygon
    Off topic, kind of:
    I'd like to see solid values for something like cold/hot rolled steel with military camo paint applied over it for armored vehicles, from WW2 up to the present, for PBR work. WW2 vehicles in particular had very rough, pitted, scratched surfaces due to fast production; nobody worried about smoothing the surface to make it look pretty like today's military vehicles.

    I understand from reading that the specular would reflect the color of the paint, I think, but the roughness/gloss value escapes me.

    Xoliul has said in the past that he's researched a lot of these types of objects by going to museums and such, and hopefully he and others can shed some light on this.
  • [Deleted User]
    [Deleted User] insane polycounter
    The user and all related content has been deleted.
  • MFN
    Ok... I've recently started learning PBR and I understand how albedo, specular reflection and roughness/gloss work, but I don't quite get how to use these numbers. I'm using UE4 and it's my understanding that everything is in a 0-1 linear space. How do we input these numbers on a texture sheet inside Photoshop? What are the differences between linear, sRGB and RGB? Are there specific color settings that should be changed within Photoshop?
  • EarthQuake
    Xoliul wrote: »
    Just wanna mention some things regarding this:

    I feel the 'spreadsheet approach' to PBR really isn't the way to go. A few reference values here and there are fine, but I'd never rely on that. In fact at work I actively discourage our artists from doing that, and rather try to get everybody to fully understand the rules and guidelines, so they can think for themselves. I really urge everybody to do this too, make sure you realize the difference between roughness/gloss and specular reflectance, that you know what energy conservation is, that you understand how and why a metal behaves differently, etc...

    I don't see the two ideas as mutually exclusive in any way. I've learned a lot about material properties by looking at scan data. Eyeballing it alone, even if you understand a lot about material properties, is still prone to a high degree of error.

    I think it's important to conceptually understand materials, and observation and research are both equally valid methods of learning more.

    I don't think it's a good idea to stick rigidly to charts and graphs for all your values, and I also don't think it's a good idea to stick rigidly to observation either. Both have their place.

    Edit: Perhaps I've misread your post. Are you saying you actively discourage using measured values at all, or that you actively discourage using ONLY measured values? If the former, I personally think that's a mistake. If the latter, I agree very much; I think sticking only to measured values would be just as bad as ignoring them when you can find them. It's always going to take some amount of logic and observation to extrapolate values for materials that you don't have data for, so sticking only to measured values can be a real crutch.
  • stevston89
    stevston89 interpolator
    @MFN - With UE4 you will be using the metalness workflow, so the only values you really need to be strict about are your metal reflectance/spec values. Many of those are covered for you in this link: http://seblagarde.wordpress.com/. As for albedo, you can use measured values as a guide, but you really just need to make sure you stay away from baked-in lighting and colors that are too bright or dark. Also, the numerical values you see are just a representation of neutral light and dark values; they don't include color (I think; I'm not quite clear on that). Roughness/gloss fluctuates quite a bit, so the values you see there are just a guide. You are going to mostly be eyeballing this.
  • Gestalt
    Gestalt polycounter lvl 11
    What I think would be helpful for scan data is to have a standard reference material captured by the setup as well, so that there's a baseline for comparing values, like using those color reference cards for photography. https://en.wikipedia.org/wiki/ColorChecker

    @MFN RGB are the three colors (Red, Green, and Blue) that make up a display. They are each typically described by an 8-bit number, so you'd have integer values from 0 to 255 (black to full color) for each color channel (8 bits give you 256 values). A value of 0,0,0 would be black and a value of 255,255,255 would be white. In a linear color space a value halfway (127,127,127) would be 50% grey, right between black and white. If we're using values from 0-1 rather than integer values from 0-255, then 0,0,0 is black, 1,1,1 is white and .5,.5,.5 is 50% grey in a linear color space.

    The problem is that our monitors do not display a value of .5,.5,.5 (or integer values of 127,127,127 if we have 8 bits per channel) as 50% grey. If you gave that value to your monitor it would display a darker shade of grey than 50% grey. The values are skewed towards black, and you would actually have to use a value of about 188,188,188 if you wanted your monitor to display 50% grey properly.

    This skewed display space is sRGB; it has a standard gamma curve of roughly 2.2 (which describes how much it's skewed). Anything made to be displayed properly on your monitor will be in sRGB, so if you're drawing something by eye in Photoshop, for example, you're probably painting values that are correct as sRGB values. If we want to do actual math with our painted values, then we need to shift them back into a linear space, so that each value from 0-1 means what it should when it's multiplied (0.5 should represent 50% of something).
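
    To put numbers on that, here's a quick Python check, assuming the standard piecewise sRGB curve (the same equations as in jeffdr's links above):

        def srgb_to_linear(s):
            return s / 12.92 if s <= 0.04045 else ((s + 0.055) / 1.055) ** 2.4

        def linear_to_srgb(l):
            return 12.92 * l if l <= 0.0031308 else 1.055 * l ** (1.0 / 2.4) - 0.055

        print(srgb_to_linear(127 / 255))         # ~0.21: 127 displays as only ~21% grey in linear terms
        print(round(linear_to_srgb(0.5) * 255))  # ~188: the 8-bit sRGB value that displays as 50% grey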
  • MFN
    Awesome, thanks for the answers stevston89 and Gestalt. The chart from S
  • iniside
    iniside polycounter lvl 6
    Gestalt wrote: »
    I just checked, and it doesn't appear that UE4 takes roughness into account for the diffuse; I'm pretty sure it remains Lambertian regardless of the value. Oren-Nayar models more complex changes in the diffuse based on roughness (including masking, shadowing, and interreflections), but I don't believe that's in UE4's model right now.

    The ON model is in the engine, but you have to change some defines in the shaders and recompile the engine from source (including shaders).

    But according to the developers, using ON over Lambert was not worth the performance cost for a real-time application.
    MFN wrote: »
    Ok... I've recently started learning PBR and I understand how albedo, specular reflection and roughness/gloss work, but I don't quite get how to use these numbers. I'm using UE4 and it's my understanding that everything is in a 0-1 linear space. How do we input these numbers on a texture sheet inside Photoshop? What are the differences between linear, sRGB and RGB? Are there specific color settings that should be changed within Photoshop?

    I would really discourage using a texture as a direct input for Roughness and Specular.
    Instead, treat those textures as masks and use a Lerp node with two scalars.
    You will be able to tweak your values at run time, instead of trying to guess how the hell R 200 translates to floating point...
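
    In pseudocode terms (a rough Python sketch of the idea, not actual UE4 node or HLSL syntax, and the numbers are purely hypothetical), the mask-plus-lerp setup amounts to:

        def lerp(a, b, t):
            return a + (b - a) * t

        # The physical range comes from two scalars you tweak in the editor;
        # the texture only supplies per-pixel variation.
        rough_min, rough_max = 0.35, 0.8   # hypothetical values
        mask = 0.6                         # sampled from the variation/mask texture, 0-1
        roughness = lerp(rough_min, rough_max, mask)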
  • Xoliul
    Xoliul polycounter lvl 14
    EarthQuake wrote: »
    I don't see the two ideas as mutually exclusive in any way. I've learned a lot about material properties by looking at scan data. Eyeballing it alone, even if you understand a lot about material properties, is still prone to a high degree of error.

    I think it's important to conceptually understand materials, and observation and research are both equally valid methods of learning more.

    I don't think it's a good idea to stick rigidly to charts and graphs for all your values, and I also don't think it's a good idea to stick rigidly to observation either. Both have their place.

    Edit: Perhaps I've misread your post. Are you saying you actively discourage using measured values at all, or that you actively discourage using ONLY measured values? If the former, I personally think that's a mistake. If the latter, I agree very much; I think sticking only to measured values would be just as bad as ignoring them when you can find them. It's always going to take some amount of logic and observation to extrapolate values for materials that you don't have data for, so sticking only to measured values can be a real crutch.

    I mean the latter: relying only on value sheets and scans. That would render you unable to texture properly or correctly as soon as there are no values available in the chart. Of course you need observed data to be able to establish the rules, but it seems like people have difficulty extrapolating rules/guidelines even with the facts in front of them. Hence my worry.

    Just to give you some examples:
    -Scanned albedo values show that the darkest (non-metal) materials only go down to about 50/255 sRGB, yet you see plenty of people drop their albedo below 30, down to 10 even.
    -These charts include roughness values, which is a bit arbitrary by itself, let alone dependent on the exact implementation. And if you understand the difference between roughness/gloss and specular/reflectance, you should realize that having different reflectance values for two degrees of roughness of the same material doesn't make any sense at all (as in the charts).

    Also, I feel like I should mention this (even though I'm perhaps not even allowed to do that): I'd really like to share some of my knowledge, do an article, video, whatever, but I have to be super careful due to clauses in my contract. I'm trying to work something out with them, however.
    So apologies for saying things are incorrect and then not contributing much.
  • marks
    marks greentooth
    Xoliul wrote: »
    Also, I feel like I should mention this (even though I'm perhaps not even allowed to do that): I'd really like to share some of my knowledge, do an article, video, whatever, but I have to be super careful due to clauses in my contract. I'm trying to work something out with them, however.
    So apologies for saying things are incorrect and then not contributing much.

    I know them feels bro :|
  • cman2k
    cman2k polycounter lvl 17
    Hey everyone,

    Unfortunately I've decided to take down the spreadsheet and database stuff. This seemed to spread around pretty quickly, and was largely being taken at face value as legit information, when really it was just a rough first pass at aggregating a bunch of disparate data that I was hoping the community would help improve over time.

    I never meant to misrepresent the data, or misinform people, so I apologize for that. I think if I personally get back to this, it will be much more carefully labeled/disclaimer'd at least. Ultimately, I fear that trying to quickly mock up something that the community would help build was being taken at face value as final/legit...and I never meant for that.

    It's pretty disheartening to me to feel like people I look up to are scolding me for misrepresenting things I never meant to misrepresent in the first place. Seems like I should have been more careful with this.
  • Gestalt
    Gestalt polycounter lvl 11
    I think there could still be use for a table with values, but there would need to be some standardization for how the values are derived and what they mean (a defined set of values for a standard black material and white material, for example).

    Different people can take a picture of the same material under different lighting conditions and get different values. If someone uses a different exposure or light source, then the data wouldn't be comparable. It's not a simple matter either; spectral color also factors in, and the CRI of the light source will have an effect (some people use CFLs, some use halogen, and others use natural light; every light is unique). Using a color checker under the same conditions could help with calibration.

    Roughness values could be provided, but the model they refer to should be documented with them. Overall I think it would be very helpful to have a standard library.
  • almighty_gir
    almighty_gir ngon master
    marks wrote: »
    I guess this is the conversion math you want.

    completely useless without either input fraction btw.
  • marks
    marks greentooth
    That's the value you want to convert (in the 0-1 range); "result" is the converted value (also in the 0-1 range).
  • almighty_gir
    almighty_gir ngon master
    Thanks beautiful =]
  • almighty_gir
    almighty_gir ngon master
    So, here's a question:

    How reliable is scanned data?

    The reason I ask this question is due to an observation I've made in Toolbag. I'm working on a UE4 project, and I'm not entirely happy with their skin shader (I'm working on changing that), but I also wanted to do some quick iterative work, and having to constantly reimport textures is annoying. So I modified Toolbag's shaders to match up to the UE4 shaders; specifically, Toolbag now uses the UE4 specular BRDF.

    I noticed that roughness gives a hugely different result: if all settings (lights, sliders, etc.) remain the same, and the input maps (including roughness) are all the same, the final result is very different. Here are some examples:

    Default toolbag: [image]

    Modified toolbag: [image]

    Default toolbag (using quixel scanned data): [image]

    Modified toolbag (using quixel scanned data): [image]


    Now, the reason this is such an important question is that as a surface becomes rougher, it appears duller; it's not actually less reflective, it just reflects light more broadly, and therefore less light reaches the eye, giving the illusion of lower reflectivity. But a camera lens is tricked by this just as much as a human eye... so doesn't that mean that the scanning equipment needs to use the same BRDF math as the renderer in order for it to be physically correct?
  • Gestalt
    Gestalt polycounter lvl 11
    I don't know exactly how they're deriving their scan data but if they're getting the normals, if they know the location of the camera and the light source, and if they're able to isolate the base color, then they should be able to figure out decent values based on that.

    I tried to do something similar a while ago but I didn't have a good enough setup for it (you need accurate normals). But if you think about it, a BSDF is just a function of what values you should get given the orientation of the camera and light relative to the surface. So if, for each pixel of the image, they know the base color, the light location, and the normal for that pixel (relative to the camera), then they know enough to get some data for a BSDF. If they can assume the entire image is of the same material, then they have a data point for every pixel and new data for each new normal direction accounted for. If they can assume the BSDF is isotropic, then they can do even more with less.

    This isn't to say that their data is reliable (haven't seen it yet), but given the information/maps they're getting, if their setup is right they should be able to get something useful. The exact model they're making values for could be different from UE4's, so that's something to keep in mind. Roughness is a pretty arbitrary thing unless you're talking about a specific model.
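
    To sketch the idea in Python (this is a toy, not anyone's actual pipeline: it assumes a distant light and camera, ignores Fresnel, shadowing, and absolute intensity calibration, and only fits a GGX-style lobe width to per-pixel specular samples):

        import math

        def dot(a, b):
            return sum(x * y for x, y in zip(a, b))

        def normalize(v):
            n = math.sqrt(dot(v, v))
            return tuple(x / n for x in v)

        def ggx_ndf(n_dot_h, alpha):
            # GGX / Trowbridge-Reitz normal distribution function
            a2 = alpha * alpha
            d = n_dot_h * n_dot_h * (a2 - 1.0) + 1.0
            return a2 / (math.pi * d * d)

        def fit_alpha(samples, light_dir, view_dir):
            # samples: list of (normal, measured specular intensity) pairs, one per scan pixel
            h = normalize(tuple(l + v for l, v in zip(light_dir, view_dir)))  # half vector
            best_alpha, best_err = 1.0, float("inf")
            for i in range(1, 100):  # brute-force 1D search over alpha
                alpha = i / 100.0
                err = sum((ggx_ndf(max(dot(n, h), 0.0), alpha) - s) ** 2 for n, s in samples)
                if err < best_err:
                    best_alpha, best_err = alpha, err
            return best_alpha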
  • iniside
    iniside polycounter lvl 6
    I must ask: could you guys tell me your reasoning behind plugging textures directly into Roughness/Specular and trying to map how the 0-255 space relates to the 0.0000000-1.0000000 space, instead of using them as masks for a lerp?

    I honestly can't see any benefit, and you effectively cut out a way to tweak your inputs at run time.
  • marks
    marks greentooth
    Because that's needlessly adding math to the shader, making it more expensive to render. Which can add up to quite a bit when it's done on every shader in your game.
  • EarthQuake
    So, here's a question:

    How reliable is scanned data?

    The reason I ask this question is due to an observation I've made in Toolbag. I'm working on a UE4 project, and I'm not entirely happy with their skin shader (I'm working on changing that), but I also wanted to do some quick iterative work, and having to constantly reimport textures is annoying. So I modified Toolbag's shaders to match up to the UE4 shaders; specifically, Toolbag now uses the UE4 specular BRDF.

    I noticed that roughness gives a hugely different result: if all settings (lights, sliders, etc.) remain the same, and the input maps (including roughness) are all the same, the final result is very different. Here are some examples:

    Default toolbag:


    Modified toolbag:


    Default toolbag (using quixel scanned data):


    Modified toolbag (using quixel scanned data):



    Now, the reason this is such an important question is that as a surface becomes rougher, it appears duller; it's not actually less reflective, it just reflects light more broadly, and therefore less light reaches the eye, giving the illusion of lower reflectivity. But a camera lens is tricked by this just as much as a human eye... so doesn't that mean that the scanning equipment needs to use the same BRDF math as the renderer in order for it to be physically correct?

    I think there are too many variables here to really come to any concrete conclusions. You've got a user-created/hacked shader in TB2 trying to emulate another engine (how close is the visual result of your shader to UE4?). Afaik the Megascans content is calibrated to the default TB2 shaders. Some of the Megascans content was created with early scan tech too, and we plan on getting updated values at some point.

    I think roughness values will always need to be created specifically/tweaked for your target engine to some extent, because most engines/shaders map the values a little differently. Roughness/gloss is tricky because the values can go to infinity, but most shaders remap them to a saner range to make it easier to create content for.

    You could probably tweak the base gloss value in TB2 to match up to UE4, say set gloss to around .9 or so (I think someone was telling me at GDC they set it to around 0.88 to match UE).
  • iniside
    iniside polycounter lvl 6
    marks wrote: »
    Because that's needlessly adding math to the shader, making it more expensive to render. Which can add up to quite a bit when it's done on every shader in your game.

    Maybe. But I see Epic is doing it in every shader anyway.
    I don't think one or two more lerps will make any big difference, when you will have more complex effects like water/snow accumulation, for example.

    In the meantime you lose the huge benefit of tweaking everything in the editor and doing it very fast.

    edit:
    Just checked. One lerp with a mask and two scalars is two more PS instructions. I'd say that is meaningless for optimization.
  • EarthQuake
    iniside wrote: »
    Maybe. But I see Epic is doing it in every shader anyway.
    I don't think one or two more lerps will make any big difference, when you will have more complex effects like water/snow accumulation, for example.

    In the meantime you lose the huge benefit of tweaking everything in the editor and doing it very fast.

    I'm not sure anything or anyone in this thread has suggested there is only one way to use a set of values. What we've been discussing is base measured values. Exactly how you use that data in your shader is entirely up to you.
  • marks
    marks greentooth
    iniside wrote: »
    edit:
    Just checked. One lerp with a mask and two scalars is two more PS instructions. I'd say that is meaningless for optimization.


    2000 drawcalls in an example frame: You just added 4000 more shader instructions to your shading pass. Not insignificant on old hardware.
  • Froyok
    Froyok greentooth
    I'm working on a UE4 project, and I'm not entirely happy with their skin shader (I'm working on changing that), but I also wanted to do some quick iterative work, and having to constantly reimport textures is annoying. So I modified Toolbag's shaders to match up to the UE4 shaders; specifically, Toolbag now uses the UE4 specular BRDF.

    Wow, that indeed sounds like a very good change for working efficiently.
    Do you have any plans to release the modified shaders?
  • iniside
    iniside polycounter lvl 6
    EarthQuake wrote: »
    I'm not sure anything or anyone in this thread has suggested there is only one way to use a set of values. What we've been discussing is base measured values. Exactly how you use that data in your shader is entirely up to you.

    I know. My point is that you don't need accurate data in textures when you can just set it in the editor; in that case the textures act only as masks for variation.
    2000 drawcalls in an example frame: You just added 4000 more shader instructions to your shading pass. Not insignificant on old hardware.
    Just a theoretical question: are you really going to target older hardware using an engine that is not old-hardware friendly to begin with?
  • EarthQuake
    iniside wrote: »
    I know. My point is that you don't need accurate data in textures when you can just set it in the editor; in that case the textures act only as masks for variation.

    But none of that negates the need for accurate data, which is what the thread is about. Even if you set it in the shader, you still want an accurate value. Whether it's a constant in a shader or an RGB value in a texture doesn't make much difference, and there are resources in this thread that list both/show how to convert.
  • Gestalt
    Gestalt polycounter lvl 11
    On the topic of whether to use your textures as alphas for lerping, that's what I'd personally do. It's really not that expensive and you can adjust things as needed, which saves time (rather than trying to set things up perfectly in Photoshop).

    You can also use an alpha in multiple ways. A roughness map doesn't have to have details in precisely one way, so you can get more creative with tiling and combining your detail textures, and at the end set your lerp values. You can use the same alphas in multiple locations since you don't need to set them at a specific value for anything. Heck, I'm even considering doing something similar for my base colors. Most good tiling textures for materials (if you were to take out the bright/dark variation) are pretty much one color or a gradient between two similar colors.
  • almighty_gir
    almighty_gir ngon master
    Froyok wrote: »
    Wow, that indeed sounds like a very good change for working efficiently.
    Do you have any plans to release the modified shaders?

    Sure thing:
    - DISCLAIMER -

    Neither I nor the good folks at Marmoset.co are responsible for anything that could happen to your computer as a result of using this shader. It is released "as is", with limited support. It will work with any existing content or materials you've created, and only changes the specular BRDF to match UE4.

    What you need to do (in this exact order):
    1. browse to: "Marmoset Toolbag 2\data\shader\mat\reflection"
    2. rename "directPhong.frag" to "directPhong_backup.frag"
    3. drop "GGX_code.frag" and "directPhong.frag" from the zip file into the folder
    4. start/restart Toolbag 2.

    If you wish to revert back, browse to the above folder, delete "directPhong.frag", and rename "directPhong_backup.frag" back to "directPhong.frag".

    http://crazyferretstudios.com/public/Toolbag_GGX.zip
  • almighty_gir
    almighty_gir ngon master
    @EQ - the shader matches UE4 100%. That said, my question could have been worded better:

    If roughness is calculated differently in each engine, and roughness + reflectivity have to be measured together as they're so closely linked (bear in mind that inaccurate reflectivity readings will also affect your albedo reading), then how can you make sure that your scanned data is physically accurate in all engines?

    The short answer is: you can't.

    The longer answer is:
    think of it the same way as why we artists calibrate our monitors so we're all (hopefully) using the same reference standard: it's so that when other people view it, the variances are minimized.

    This is a similar situation: if everyone uses the same calibration for their scanning data, it means variances across engines are minimized.

    With that said, Ryan Hawkins (of Quixel) told me over Skype that they can actually calibrate their scanners for different engines, which is pretty badass.
  • marks
    marks greentooth
    roughness + reflectivity have to be measured together as they're so closely linked
    Not true. Scuffed plastic still has the same reflectance value as shiny plastic, reflected light is just scattered more. The two values are pretty independent from what I understand.
    how can you make sure that your scanned data is physically accurate in all engines?

    If your scandata is measured correctly, and the rendered result isn't the same in your engine as every other PBR renderer - you've done something wrong. Standardization is the cornerstone of PBR.

    HOWEVER:
    Sure different engines can calculate roughness differently - for example UE4's is pretty perceptually linear whereas the roughness curve we're using on A:I skews more towards rough for more precision on the bottom end of the scale (as that's what suited our needs best). Different strokes for different folks...
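
    To make the "different curves" point concrete: many implementations remap the artist-facing roughness value to the alpha used in the microfacet math, and that remap is where engines diverge. A hedged Python sketch (the squaring remap is the one Karis describes for UE4; the second curve is purely made up, just to show what a rough-biased remap could look like):

        def alpha_ue4_style(roughness):
            # UE4-style remap (per Karis): alpha = roughness^2, which makes the
            # artist-facing value feel roughly perceptually linear.
            return roughness * roughness

        def alpha_rough_biased(roughness, exponent=0.65):
            # Hypothetical curve that pushes values toward the rough end of the scale,
            # to illustrate how another engine might choose to spend its precision.
            return roughness ** exponent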
  • Froyok
    Froyok greentooth
    Sure thing:
    Awesome! Thank you very much! <3
  • almighty_gir
    almighty_gir ngon master
    marks wrote: »
    Not true. Scuffed plastic still has the same reflectance value as shiny plastic, reflected light is just scattered more. The two values are pretty independent from what I understand.



    If your scandata is measured correctly, and the rendered result isn't the same in your engine as every other PBR renderer - you've done something wrong. Standardization is the cornerstone of PBR.

    HOWEVER:
    Sure different engines can calculate roughness differently - for example UE4's is pretty perceptually linear whereas the roughness curve we're using on A:I skews more towards rough for more precision on the bottom end of the scale (as that's what suited our needs best). Different strokes for different folks...

    standardisation is the cornerstone of PBR... but let's not because it doesn't suit our needs.

    :P
  • EarthQuake
    @EQ - the shader matches UE4 100%. That said, my question could have been worded better:

    If roughness is calculated differently in each engine, and roughness + reflectivity have to be measured together as they're so closely linked (bear in mind that inaccurate reflectivity readings will also affect your albedo reading), then how can you make sure that your scanned data is physically accurate in all engines?

    The short answer is: you can't.

    The longer answer is:
    think of it the same way as why we artists calibrate our monitors so we're all (hopefully) using the same reference standard: it's so that when other people view it, the variances are minimized.

    This is a similar situation: if everyone uses the same calibration for their scanning data, it means variances across engines are minimized.

    With that said, Ryan Hawkins (of Quixel) told me over Skype that they can actually calibrate their scanners for different engines, which is pretty badass.

    Yeah, it's a complex issue, and I think calibration on the content creation side, like Quixel is doing, is a good way to deal with it. Or you could petition the makers of all game engines to use the exact same shaders. :poly124:

    Every PBR system is only an approximation of what is realistic; there is no "one true way" to write PBR shaders, so there will always be some variance in specific implementations.

    Actually, on our end (Marmoset), there is nothing stopping us from adding shader models specifically to match up with common game engines (well, nothing besides time and resources, of course). This is something we may look into at some point.
  • Gestalt
    Gestalt polycounter lvl 11
    Speaking of different models, I finally got around to looking in the UE4 source for Oren-Nayar, and there's a nice file in the Shaders folder called BRDF.usf. It has a bunch of different models you can switch between for pretty much everything.

    For the diffuse model you can choose between Lambert(default), Burley, and Oren-Nayar.
    For the physical specular (microfacet distribution function) you have Blinn, Beckmann, and GGX(default).
    For geometric attenuation or shadowing you have Implicit, Neumann, Kelemen, Schlick(default), and Smith (matched to GGX).
    For fresnel you have None, Schlick(default), and Fresnel.

    The code for the models is also in the file if people are wondering how they're implemented. You could change things to your liking as well or just see what math they're doing (for their GGX roughness for example).
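
    For anyone curious what the Lambert vs. Oren-Nayar switch actually changes, here's a rough Python sketch of the two diffuse terms. These are the textbook forms (the simplified "qualitative" Oren-Nayar), not code copied from BRDF.usf, so treat them as illustration only:

        import math

        def lambert_diffuse(albedo, n_dot_l):
            # Lambertian diffuse: roughness has no effect, just albedo/pi scaled by N.L
            return albedo / math.pi * max(n_dot_l, 0.0)

        def oren_nayar_diffuse(albedo, sigma, theta_i, theta_r, phi_diff):
            # Simplified Oren-Nayar. sigma is the surface roughness parameter;
            # theta_i / theta_r are the light and view angles from the normal;
            # phi_diff is the azimuthal angle between them.
            # With sigma = 0 this collapses back to the Lambert term.
            s2 = sigma * sigma
            a = 1.0 - 0.5 * s2 / (s2 + 0.33)
            b = 0.45 * s2 / (s2 + 0.09)
            alpha = max(theta_i, theta_r)
            beta = min(theta_i, theta_r)
            return (albedo / math.pi) * math.cos(theta_i) * (
                a + b * max(0.0, math.cos(phi_diff)) * math.sin(alpha) * math.tan(beta))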
  • oks2024
    You can also check Brian Karis's blog post where he listed all the equations he used when implementing PBS in UE4.
    And if you want to try those BRDFs in a lightweight environment where you can set your textures and tweak parameters, you can try my (very WIP) viewer.
  • artquest
    artquest polycounter lvl 13
    marks wrote: »
    Not true. Scuffed plastic still has the same reflectance value as shiny plastic, reflected light is just scattered more. The two values are pretty independent from what I understand.



    If your scandata is measured correctly, and the rendered result isn't the same in your engine as every other PBR renderer - you've done something wrong. Standardization is the cornerstone of PBR.

    HOWEVER:
    Sure different engines can calculate roughness differently - for example UE4's is pretty perceptually linear whereas the roughness curve we're using on A:I skews more towards rough for more precision on the bottom end of the scale (as that's what suited our needs best). Different strokes for different folks...


    While it is true that scuffed and shiny plastic both have the same reflectance value... how accurate is our method of capturing the percentage of light reflected? For instance, look at how much difference micro geometry has on the visual look of reflectivity on skin.

    Figure 1: (a) Rendering with scanned mesostructure (4K displacement map). (b) Rendering with synthesized microstructure (16K displacement map). (c) Photograph under flash illumination.


    Referenced from here:
    http://gl.ict.usc.edu/Research/Microgeometry/

    Specular reflection is entirely view dependent and based on the micro-structure of an object. It makes little sense to me to try and scan a single viewpoint of an object and use that for all view cases.

    I guess my real question is, is it even worth it to scan specular reflectance values at all? Why not move entirely to using the index of refraction to calculate the F0 term (reflectivity) of an object?

    That being said... we don't have the resolution to do it based on geometry in games, so I guess treating specular maps as a micro-level cavity map approximation isn't a bad idea (to account for light that gets trapped inside the microsurface and decays with each bounce).
    On a side note... I imagine trading a procedural V term for micro geometry based on real surfaces will add quite a bit to the realism of any given surface.
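
    For reference, the standard relationship between IOR and F0 is simple enough to write down; a quick Python sketch, assuming an interface with air and ignoring the extinction term (fine for dielectrics, not for metals):

        def f0_from_ior(n):
            # Normal-incidence reflectance from the index of refraction
            return ((n - 1.0) / (n + 1.0)) ** 2

        print(f0_from_ior(1.5))   # ~0.04, the familiar default F0 for common dielectrics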
  • Gestalt
    Gestalt polycounter lvl 11
    What would be nice is if scanned value libraries had comparisons to some actual photographs of the subject. At least then if someone were to use them they'd have some confidence in how the values compare in their engine of choice, maybe give them ideas for things they could adjust.
  • EarthQuake
    Gestalt wrote: »
    What would be nice is if scanned value libraries had comparisons to some actual photographs of the subject. At least then if someone were to use them they'd have some confidence in how the values compare in their engine of choice, maybe give them ideas for things they could adjust.

    That sounds cool, but it wouldn't really work; there are too many variables. First off, you would need to maintain the exact same lighting in every shot, which means photographing everything in controlled studio lighting; that in itself is very difficult with objects that vary in size, not to mention objects that would be difficult to get into the studio in the first place. Secondly, you would have to duplicate the lighting accurately in your game engine to verify accuracy.