Anyone have any insight as to how Maya's emitters calculate their velocity? (omni, and the more complicated case of surface emission)

I'm looking to replicate this functionality so I can create an emitter that is capable of emitting particles that "inherit" the emitter / surfaceEmitter motion.

I can of course track the emitter's "last position", or possibly look for animation connections, get the transforms at different timesteps, and use those, but I'd just like to know if there's some other magic I'm overlooking.

Peter, without giving away too many of your secrets: your velocity node, I'm assuming, is just doing a "trail SOP"-style lookup of previous-frame information somehow? On a vertex level, possibly?

Any information would be greatly appreciated.
thanks :-)


You are correct. The computeVelocity guy looks at the previous/current or current/next step by cooking the requested part of the graph through MDGContext at the specified time.
One thing to look out for is making sure you work in world space.
This can happen by connecting the worldMesh/worldSpace attributes to your inputGeometry attribute and then getting the data handle with .as*Transformed().
This, I believe, will still return local-space data if you pipe the MObject to any of the component iterator classes.
So the safer bet, I think, is to obtain the node on the other side by tracing the input connection and getting the dagPath to it. Then all the MFnMesh/Curve/etc. and MIt* classes will respect world space.
One thing I noticed is that if you operate on geometry data through dagPaths within a node, something goes wrong: memory gets consumed and is not released after we exit the compute method.
This is clearly noticeable if you work with heavy geometry and lots of nodes that use dagPaths to it.

Btw, you can make the particles inherit the source geometry's velocity with SOuP without relying on the default emitters.
Another person (noizefactory) asked today in another thread how to make particles inherit point colors from some geometry, and I pointed him at the very last video on the examples page and the scene related to it.
You can start from there too, but instead of reading the point colors, modify it to generate point velocities and read those inside the particle shape.

Awesome, thanks!

I hope you don't mind me asking so many questions.

I just looked through the devkit and couldn't really find a decent example of MDGContext using the time parameter.

Would you have a simple snippet showing how to use MDGContext to access an attribute at a specific supplied time?

Do I define a context at a time and then grab the block as in the "rockingTransform" example, using forceCache(*(MDGContext *)&context), and then just operate on the plug/handle that I've got to that node?

thanks :-)

No, I don't mind.
Basically you need a few things: an input time attribute (it can be of type time or double), a timeOffset value, and optionally a "steps" feature, but usually one step before and/or after is enough.

// read the current time and the offset from the node's attributes
double dTime;
MPlug(oThisNode, aTime).getValue(dTime);
double dTimeOffset;
MPlug(oThisNode, aTimeOffset).getValue(dTimeOffset);

// evaluation contexts at the two sample times
MTime timeA(dTime);
MTime timeB(dTime + dTimeOffset);
MDGContext dgCtxA(timeA);
MDGContext dgCtxB(timeB);

// pull the input geometry evaluated at both times
MPlug plugInGeo(oThisNode, aInGeo);
MObject oInGeoA = plugInGeo.asMObject(dgCtxA);
MObject oInGeoB = plugInGeo.asMObject(dgCtxB);

From that point you initialize the function set that operates on the input data type.
For each point you will get two positions.
To calculate velocity you subtract one from the other. If you want, you can account for the frame rate as well, or just expose a global multiplier for the velocity vector.

Hope this makes sense.
