combi
First taste
Posts: 19
#1
Hi,
I'm beginning my journey into the Maya API, and decided to start with a simple node.
The goal is to manage invisible faces on a mesh, and to have the possibility to mix different lists of face indices to register as invisible faces.
It should allow me to manage the visibility of parts of a mesh without modifying the shader assignments/parameters...
 
Here is a gist of what I have so far:
 
I try to override the invisible faces on the incoming mesh, if there are any, between lines 82-88, but I can't figure out why this is failing.
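
To make the intent concrete, here is a simplified sketch of the pattern I'm attempting inside compute() (not the exact gist code; it uses the maya.api.OpenMaya 2.0 API, and the getInvisibleFaces/setInvisibleFaces calls should be double-checked against the MFnMesh docs for your Maya version):

```python
import maya.api.OpenMaya as om

def apply_invisible_faces(mesh_mobject, face_ids, pass_through_incoming):
    """Set the invisible-face list on the output mesh data (simplified)."""
    fn_mesh = om.MFnMesh(mesh_mobject)

    incoming = list(fn_mesh.getInvisibleFaces())
    if pass_through_incoming:
        # Keep the faces already flagged on the incoming mesh and add ours.
        face_ids = sorted(set(incoming) | set(face_ids))
    elif incoming:
        # Clear the incoming invisible faces first (the part that fails for me):
        # makeVisible=True should re-show them before the new list is applied.
        fn_mesh.setInvisibleFaces(om.MIntArray(incoming), True)

    # Flag the requested faces as invisible.
    fn_mesh.setInvisibleFaces(om.MIntArray(list(face_ids)), False)
```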
 
I should mention that this is one of my first ventures into the Maya API to create dependency nodes, and I may be overlooking something quite obvious.
 
Any pointer would be very welcome...
 
Thx
pshipkov
SOuP Jedi
Posts: 4,639
#2
Hi Combi,

I tried it quickly and cannot get any change in the viewports.
To be honest I never noticed these methods in MFnMesh, and as far as I am aware there is nothing that uses them in Maya.
Not sure this actually works, or at least if it works for what you are looking for. Do you know an existing tool or workflow in Maya that uses this technique? We will need a starting point - proof that this is indeed working as expected - and then take it from there.
combi
First taste
Posts: 19
#3
Hi Peter,
Thanks for taking the time to look at it.

Actually, you don't notice any change because of lines 80 to 88 in the code.
If you comment them out, it should work.

What you'll see is faces "disappearing" from the viewport, depending on which "hide" attributes on the invisibleFacesMixer1 node are set.
They describe rings around the torus.

But you should also notice an extra invisible face, and that one comes from the input mesh.
I was trying to give the user the option to let those incoming invisible faces pass through or not, hence lines 80 to 88.

We use this Maya feature a lot in our pipeline, as we can send those "invisible faces" to our in-house renderer, and it simply does not render them.

We also use them during layout stage, to allow a camera to be "behind" a wall, but see through it, etc...

Nicolas

pshipkov
SOuP Jedi
Posts: 4,639
#4
Nope, nothing works here.
I tried sticking in a bunch of commands myself, but still nothing.
Looks like I am missing something.
Can you send me a scene where this works?
This will give me a good starting point.
Which renderer do you use over there?
Thanks.
combi
First taste
Posts: 19
#5
Here is a working scene.
For the expected result, you need to comment out lines 80 to 88 in the code from the gist.
It should work in the legacy viewport as well as VP2.

I tested the scene under 2014/2015/2016, so you should definitely see something...

Thanks for having a look, Peter.

Nicolas

 
Attached Files: plugInTests_invFaces.ma.zip (22.99 KB)

combi
First taste
Posts: 19
#6
Oh, and be sure the viewport is not in "default shading mode", as that bypasses the invisible-faces feature of the meshes.
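
In case it helps, this turns off the "Use Default Material" viewport toggle - which I assume is the mode in question - for the panel with focus:

```python
import maya.cmds as cmds

panel = cmds.getPanel(withFocus=True)
if cmds.getPanel(typeOf=panel) == 'modelPanel':
    cmds.modelEditor(panel, edit=True, useDefaultMaterial=False)
```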

pshipkov
SOuP Jedi
Posts: 4,639
#7
Ah, I see.
Actually I do know about this.
It is a replica of the `polyHole -assignHole 1` command.
There are some problems with your code.
Will take a look today, but here is a question - do you really need this as part of the DG?
Usually hiding/showing data in the scene belongs to the command engine (and is exposed in the GUI as buttons, dialogs, etc.).
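
For example, a command-based version can be as simple as this (just a sketch - the face list here is made up and would normally come from a selection, a set, or an attribute in the rig):

```python
import maya.cmds as cmds

faces = ['pTorus1.f[0:19]', 'pTorus1.f[40]']

cmds.polyHole(faces, assignHole=True)    # hide the faces
# cmds.polyHole(faces, assignHole=False) # show them again
```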
combi
First taste
Posts: 19
#8
Thanks Peter,
The goal is to allow animators to have a character with a single combined mesh for performance purposes, and to be able to hide, say, a hat or a coat, without reference-editing the meshes' properties... which can be a real nightmare!

At first I tried to make a node that outputs componentLists, but I was unable to create this kind of attribute in the API without having a mesh input.

But if there is another way to reach the goal, without a DG node, I'm interested.

Nicolas

pshipkov
SOuP Jedi
Posts: 4,639
#9
I personally wouldn't do it as a DG node.
Python nodes as part of the DG slow things down. We do not want this in animation rigs.
Triggering visibility for rig components can be done with a command, script, etc.
What I would do is bake the face IDs into a string or intArray attribute on the top node (or another predefined location in the rig), and then have a little shelf button that reads it and runs the polyHole command with these IDs as an argument.
The downside is that it is not fully integrated into the rig.
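
Something along these lines (a rough sketch - the attribute name and format are just examples):

```python
import maya.cmds as cmds

def bake_hidden_faces(top_node, face_ids):
    """At rig build time: store the face IDs as a string on the top node."""
    if not cmds.attributeQuery('hiddenFaces', node=top_node, exists=True):
        cmds.addAttr(top_node, longName='hiddenFaces', dataType='string')
    cmds.setAttr(top_node + '.hiddenFaces',
                 ' '.join(str(i) for i in face_ids), type='string')

def apply_hidden_faces(top_node, mesh):
    """Shelf button: read the baked IDs and run polyHole on them."""
    ids = (cmds.getAttr(top_node + '.hiddenFaces') or '').split()
    faces = ['{0}.f[{1}]'.format(mesh, i) for i in ids]
    if faces:
        cmds.polyHole(faces, assignHole=True)
```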

Hiding polygons this way does not improve rig performance because the mesh nodes themselves are still visible and that triggers DG evaluation. If we have to have all geometry as one node then the alternative is to just cluster parts of it and then scale the clusters to 0 - it has the same effect.

The ideal solution is to split the geometry into pieces and trigger their visibility - this will speed things up.
combi
First taste
Posts: 19
#10
Hi Peter,


What you said makes total sense!
Actually, we already explored the option of setting the polyHole componentList via some baked face IDs.
We would install this polyHole node during rig creation, and put the face IDs onto those nodes directly as string arguments.

But in the past we had so much trouble with reference edits that we wanted to explore the "encapsulated in a DG node" option. We are aware this may be the wrong solution, but I thought it would be a good exercise for learning on something concrete.


So, doing it as a Python node is just for prototyping, as well as learning/tinkering on my side.

A little background for this node:

The targeted rigs are for the rough layout department, and are derived from the animation rigs.
They are piped through scripts that decimate the geometry as well as the rig itself, keeping only the controllers needed by the rough layout guys, to be as minimalist as possible with regard to their needs.

Those rigs conform strictly to parallel mode + GPU deformers (with great success so far), which implies we don't want to break the character model up into many parts, because they would fall below the GPU-enabled pipeline threshold, so we keep them all combined.

Also, the goal for this node is not to increase rig speed, but to allow visibility management of some parts of the character. As I said, the rigs are already very fast, thanks to parallel + GPU. And in the end, anyway, under this mode, hiding geometries is not an option for improving performance (which is pretty annoying!!! Let's hope it changes someday).

For now, we are quite crunched by the start of new projects, but I will definitely come back to it when things settle down a little, at least to understand why it's failing.
In the meantime, if any of you has a view on why emptying the invisible faces before setting them is failing, I'm all ears! ^^

Anyhow, Peter, thanks for the time you dedicate to all the users on this forum, it's truly amazing!
(Even if I am mainly a lurker, I read each and every post, and I learn a ton of things here!)
I wish they had the same attitude on the Maya beta rigging forum (they do in the modeling one, and he is doing a really great job, imho).

Nicolas

pshipkov
SOuP Jedi
Posts: 4,639
#11
I see that my comment was not very good. You obviously cannot see through my eyes. Sorry about that. Let me explain better.

When you run the polyHole command on an object with construction history, it creates a node called polyHoleFace. It has an inputComponents attribute that defines which faces are hidden.

Setup multiple group nodes that contain the IDs of the different sets of faces you want to hide at any given time.
Create a "choice" type node and connect the outComponents of the group nodes to its multi input attribute. Then connect its "selector" attribute to a control attribute or something in your rig that will change its value - effectively switching between the different inputs.
Connect the output of the "choice" node to the polyHoleFace.inputComponents.
Done.
No reference edits.
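
A rough sketch of that network (node and attribute names are just examples, and I keep the face sets as componentList attributes on a control here - double-check that the connections accept the data types in your Maya version):

```python
import maya.cmds as cmds

ctrl = 'visibility_CTL'        # some control in the rig (example name)
hole_node = 'polyHoleFace1'    # created by running polyHole with history on

face_sets = {
    'hat':  ['f[100:149]'],
    'coat': ['f[200:399]', 'f[420]'],
}

# Selector attribute that switches between the face sets.
cmds.addAttr(ctrl, longName='holeSet', attributeType='long',
             minValue=0, maxValue=len(face_sets) - 1, keyable=True)

choice = cmds.createNode('choice', name='invisibleFaces_choice')

for index, (name, comps) in enumerate(sorted(face_sets.items())):
    attr = name + '_faces'
    cmds.addAttr(ctrl, longName=attr, dataType='componentList')
    # componentList setAttr syntax: component count first, then the components.
    cmds.setAttr('{0}.{1}'.format(ctrl, attr), len(comps), *comps,
                 type='componentList')
    cmds.connectAttr('{0}.{1}'.format(ctrl, attr),
                     '{0}.input[{1}]'.format(choice, index))

cmds.connectAttr(ctrl + '.holeSet', choice + '.selector')
cmds.connectAttr(choice + '.output', hole_node + '.inputComponents', force=True)
```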

What I mentioned before about not getting better performance is still valid - the shape objects are still visible and evaluated entirely.

If you still want me to take a closer look at your code - let me know.
pshipkov
SOuP Jedi
Posts: 4,639
#12
Here is a screenshot.

Attached Images: a.png (42.50 KB)

pshipkov
SOuP Jedi
Posts: 4,639
#13
Thanks for the kind words btw.
Yes, the modeling tools are shaping up nicely. Hope the rest of the Maya features will catch up too.

I am curious about your experience with parallel eval and the GPU mode.
So far for me it has been almost entirely a no-go for various reasons.
Can you talk a bit about what kind of assets and complexity you guys deal with over there?

Also, you said something about a GPU-enabled pipeline threshold. I wonder what you mean by it?
combi
First taste
Posts: 19
#14
Sorry, I realized my previous post was also unclear.
Yes, we are definitely going the polyHoleFace node route.
Each time I said polyHole, I should really have written "polyHoleFace node", as I meant the Maya node, and not the mesh attribute.

But as we are looking to emulate show/hide functionality for mesh parts, we don't want to use the choice node, because it would force us to plan every possible combination, which feels fragile to us.

Our actual (simplified) workflow right now is:
- List all meshes to be in the roughLayout rig (each mesh deformed only by GPU-supported nodes)
- Make a copy of them, put their vertex IDs/face IDs into sets
- Save out deformer weights of non-skinCluster deformers
- Copy skin weights from the originals to the copies
- Merge them all with the skinning option turned on
- Reapply the other deformers on the merged geo
- Re-weight the merged geo using the vertex ID sets
- Create a polyHoleFace node upstream
- Convert each face ID set's membership into an attribute on the polyHoleFace node
- Delete the original geos

The result is a single mesh, deformed like the originals, and we can still show/hide parts of it as if it were separate meshes. A separate command then triggers setAttrs on the polyHoleFace node according to the extra attributes we added to it.

So right now, one of my todo tasks is to create this polyHoleFace node, feeding into the upstream shapeOrig, and write those face ID lists onto it. I'll do this in a few days, after I finish other tasks.
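
Roughly something like this per part (a sketch only - the set and attribute names are placeholders):

```python
import maya.cmds as cmds

def store_faces_from_set(hole_node, part_name, face_set):
    """Copy a face set's membership onto a componentList attribute of the node."""
    members = cmds.sets(face_set, query=True) or []
    # Keep only the component part, e.g. 'body_geo.f[0:9]' -> 'f[0:9]'.
    comps = [m.split('.', 1)[-1] for m in members if '.f[' in m]

    attr = part_name + '_faces'
    if not cmds.attributeQuery(attr, node=hole_node, exists=True):
        cmds.addAttr(hole_node, longName=attr, dataType='componentList')
    cmds.setAttr('{0}.{1}'.format(hole_node, attr), len(comps), *comps,
                 type='componentList')

# A separate command can later read these attributes and setAttr
# polyHoleFace1.inputComponents with whichever parts should be hidden.
```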

combi
First taste
Posts: 19
#15

Quote:
Also, you said something about a GPU-enabled pipeline threshold. I wonder what you mean by it?

Even if all of a geometry's deformers are GPU-enabled, the mesh still has to have enough vertices to enable the GPU computation.
There was a whitepaper from Autodesk about parallel mode in Maya 2016, where they explain that below a certain complexity, uploading geometry to the GPU for deformation shows very little benefit, if any.
The threshold is not the same for AMD and NVIDIA cards.
The numbers are probably not very accurate, but I think it's 500 for AMD and 2000 for NVIDIA.

There is an environment variable for modifying it, but we prefer not to touch it; the AD folks must have established those values for a reason.

Quote:
I am curious about your experience with parallel eval and the GPU mode.
So far for me it has been almost entirely a no-go for various reasons.
Can you talk a bit about what kind of assets and complexity you guys deal with over there?


So far we have made a first step in using it full steam in the rough layout department, where they have simpler rigs. They also want to be able to play with shadows and DOF, so they want to use VP2.0.
We took this opportunity to remake their rigs from scratch, with a few goals:
- Rigs should be as fast to load as possible
- There may be 10-20 rigs evaluated in real time at once
- Have a real head with some reduced facial expressions (only blendShapes)
- Still have the possibility to show/hide parts of the geometries

But our animation rigs are way more complex, with many proprietary nodes, mainly for deformation.
Some of them are problematic to write GPU kernels for, because the GPU deformation pipeline has restrictions on the dependency graph (fanned-out connections for output geometry being the most annoying).
They have to deal with fewer characters on screen, and they switch to a GPU-cache display (in-house) for rigs they are not focused on animating.
On each rig you can switch between 3 different geometry LODs, and we face evaluation of all nodes whether they are hidden or not, which is becoming a problem for us right now. And because loading times can be long, having 3 different rigs is not something we can impose on animators. (They have power ^^)

We usually have one set composed of arbitrary subparts.
This set can be huge, and they often use a GPU cache for it.
As for props, we usually have many of them, varying in complexity from a single constraint to some elaborate deformation.

So for animation, that's an entirely different scenario, and we are still experimenting.

We still use your frameCacher, which is a lifesaver - animators love it!
I remember trying your GPU-enabled frame cacher a few months ago, but I couldn't get shading on it, so I did not explore it further.

We did consider developing one, but that was prior to Maya 2016 and its parallel evaluation manager, which can make the solution tricky. AD even seems to have problems with their own vertex animation cache option under VP2.0, which indicates it's not a trivial problem. But maybe I should ask our R&D guys about it.

In summary, parallel mode + GPU shows real benefit on simple scenes, if you can control what ends up in the assembled scene.


If you have more questions, I'd be happy to answer, within the limits of NDA of course.
