The wrinkle maps are entirely engine dependent; in Max you can do it by using a composite map and animating the opacity of each layer/wrinkle map. Each layer has its own blending options like Photoshop's, as well as a mask for each layer. Typically the opacity is wired to the same controller moving the morphs/bones around, so you're not stuck animating half a dozen buried parameters.
The composite map (not material) goes in the bump slot of a standard material. If you also have spec and diffuse wrinkle maps, you can wire them the same way.
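The wiring itself is just a remap from the morph's driver value to the layer's opacity. Here's a minimal Python sketch of that relationship (the function name and remap range are my own illustration, not anything from Max; in Max you'd express the same thing with wire parameters or a script controller):

```python
def wrinkle_opacity(morph_weight, lo=0.0, hi=1.0):
    """Map a morph/bone driver value to a composite-layer opacity.

    morph_weight: the same 0..1 value that drives the morph target.
    lo/hi: remap range, so the wrinkle can fade in over just part of
    the morph's travel instead of tracking it one-to-one.
    """
    t = (morph_weight - lo) / (hi - lo)
    t = max(0.0, min(1.0, t))   # clamp so over-driven morphs don't break it
    return t * 100.0            # Max's composite-map opacity runs 0-100
```

So at half the morph's travel the wrinkle layer sits at 50% opacity, and anything outside the remap range clamps cleanly instead of going negative or over 100.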
This thread goes over the max method in pretty good detail.
As for bone rotations, it's probably the most unfriendly way to animate the face, but you normally place the root face node as far back in the head as possible to give you as flat an arc of motion as possible. I don't think you can key the rotation of bones that have look-at constraints applied, so you end up using the animated align trick again: you get an unconstrained set of bones to snap to and key against the bones that are looking at and following the points on the face.
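The "place the root as far back as possible" advice is just circle geometry: the deeper the pivot, the larger the radius, and the less a point on the skin bulges off a straight line for the same travel. A quick sketch of that (the function name and the example distances are mine, purely illustrative):

```python
import math

def arc_bulge(pivot_depth, travel):
    """Sagitta: how far a point deviates from a straight line when it
    travels `travel` units along an arc whose pivot sits `pivot_depth`
    units behind it. Smaller result = flatter, more linear motion."""
    r = pivot_depth
    half = travel / 2.0
    return r - math.sqrt(r * r - half * half)
```

For 1 unit of skin travel, a pivot 10 units back in the skull bulges roughly a fifth as much as a pivot only 2 units back, which is why burying the root deep in the head makes rotation-driven skin motion read as nearly flat.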
UDK, like most modern engines, supports floating bones in addition to vertex animation for faces, so typically you don't have to deal with bone-rotation-based rigs. It's mostly just the extremely old engines, low-tech platforms like mobile, or very simplified home-brew engines that use rotation-based facial rigs.
Then there's the method Team Bondi used (which I'm not sure is going to take off or not), where you throw out the bones and floating points entirely: you scan the facial performance in 3D and play that scan back on the head.
I gag a bit when they say things like "animators don't like doing that facial animation stuff so we automated it." Well... they aren't talking to the right animators, because there is an entire film industry and schools dedicated to developing those skills, and the people that do it are EXTREMELY passionate and happy to do their job. Or on their past projects the rigging and technical problems were never really cleared up so animators could move quickly and freely. If you give animators 3 bones for the face and say "give me facial scan quality," of course they're going to throw their hands up and say "screw you, it isn't happening." But I think they shot too far the other way with facial scanning; we'll see where it goes.
There are also other methods of capturing facial acting that give the animators much more flexibility to clean up and enhance the performance which aren't as heavy on the hardware or systems.
So... there are so many ways to get it done, and they all depend on a lot of factors surrounding the game, so it's hard to point to one method and say "this is how to do it." But hopefully this gets you rolling.