Quote: Also, you said something about a GPU-enabled pipeline threshold. I wonder what you mean by that?
Even if all your geometry deformers are GPU enabled, the geometry still has to have enough vertices for the GPU computation to kick in.
There was a whitepaper from Autodesk about parallel mode in Maya 2016, where they explain that below a certain complexity, uploading geometry to the GPU for deformation shows very little benefit, if any at all.
The threshold is not the same for AMD cards and NVIDIA cards.
The numbers are probably not very accurate, but I think it's 500 for AMD and 2000 for NVIDIA.
There is an environment variable for modifying it, but we prefer not to touch it; the Autodesk folks must have chosen those defaults for a reason.
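For what it's worth, here is a minimal Python sketch of how the GPU override (the "deformer" evaluator) can be queried or toggled per session; the threshold itself lives in an environment variable whose exact name I don't have at hand, so that part is only a placeholder:

    from maya import cmds

    # Query whether the GPU override (the "deformer" evaluator) is currently enabled.
    gpu_enabled = cmds.evaluator(name="deformer", query=True, enable=True)
    print("GPU override enabled:", gpu_enabled)

    # Toggle it for the session, e.g. to compare CPU vs GPU deformation timings.
    cmds.evaluator(name="deformer", enable=False)
    cmds.evaluator(name="deformer", enable=True)

    # The per-vendor vertex-count threshold is controlled by an environment variable
    # that must be set before Maya starts (Maya.env or a wrapper script). The name
    # below is a placeholder -- check the Autodesk whitepaper/docs for the exact one.
    # MAYA_GPU_OVERRIDE_MIN_VERTS=2000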
Quote: I am curious about your experience with parallel eval and the GPU mode.
So far, for me it has been almost entirely a no-go for various reasons.
Can you talk a bit about what kind of assets and complexity you guys deal with over there?
So far our first step in using it at full steam is in the rough layout department, where they have simpler rigs. They also want to be able to play with shadows and DOF, so they want to use VP2.0.
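(Just for context, switching a session to parallel evaluation and forcing a panel to VP2.0 is roughly this from Python; nothing studio-specific here:)

    from maya import cmds

    # Switch the evaluation manager to parallel for the current session.
    cmds.evaluationManager(mode="parallel")

    # Make sure the panel under focus uses Viewport 2.0 ("vp2Renderer").
    panel = cmds.getPanel(withFocus=True)
    if cmds.getPanel(typeOf=panel) == "modelPanel":
        cmds.modelEditor(panel, edit=True, rendererName="vp2Renderer")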
We took this opportunity to remake their rigs from scratch, with a few goals:
- Rigs should be as fast to load as possible
- There may be 10-20 rigs evaluated at once in real time
- Have a real head with a reduced set of facial expressions (blendShapes only)
- Still have the possibility to show/hide parts of the geometry
But our animation rigs are way more complex, with many proprietary nodes, mainly for deformation.
Some of them are problematic to write a GPU kernel for, because the GPU deformation pipeline has restrictions on the dependency graph (fanned-out connections for the output geometry being the most annoying).
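As an illustration, this is the kind of quick check we could run from Python to spot a fanned-out output geometry plug; it assumes a standard geometryFilter-derived deformer with an outputGeometry attribute, and the node name at the end is hypothetical:

    from maya import cmds

    def output_geometry_fans_out(deformer):
        # List every destination plug fed by the deformer's outputGeometry attribute.
        dests = cmds.listConnections(deformer + ".outputGeometry",
                                     source=False, destination=True, plugs=True) or []
        # More than one destination means the output fans out, which in our experience
        # keeps the GPU deformation path from being used for that deformer.
        return len(dests) > 1

    # Hypothetical example:
    # print(output_geometry_fans_out("skinCluster1"))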
The animators there have to deal with fewer characters on screen, and they switch to a gpuCache display (in-house) for rigs they are not focused on animating.
On each rig you can switch between 3 different geometry LODs, but all nodes are evaluated whether they are hidden or not, which is becoming a problem for us right now. And because loading times can be long, having 3 different rigs is not something we can impose on animators. (They have power ^^)
We usually have one set composed of arbitrary subparts.
This set can be huge, and they often use a gpuCache for it.
As for props, we usually have many of them, varying in complexity from a single constraint to some elaborate deformation.
So for animation it's an entirely different scenario, and we are still experimenting.
We still use your frameCacher, which is a lifesaver; animators love it!
I remember trying your GPU-enabled frameCacher a few months ago, but I couldn't get shading on the cached geometry, so I did not explore it further.
We did consider developing one ourselves, but that was prior to Maya 2016 and its parallel evaluation manager, which can make the solution tricky. Autodesk even seems to have problems with their own vertex animation cache option under VP2.0, which indicates it's not a trivial problem. But I should probably ask our R&D guys about it.
In summary, parallel mode + GPU shows real benefit on simple scenes, if you can control what ends up in the assembled scene.
If you have more questions, I'd be happy to answer, within the limits of NDA of course.