Hi everybody,
Recently I was considering how hard it would be to implement a cache reader.
Currently, I'm considering this just as an exercise, but I have in mind a few specific scenarios this might be used for.

I have a decent (but still limited) understanding of the steps I might go through to implement this, but I have a few doubts about which I'd like to hear some experts' opinions.
I hope I won't ask any silly questions.

1. Should the node be an emitter or can it be a simple MPxNode?
I'm wondering what's the correct approach for the node's core.
I was thinking of a very basic MPxNode: it would read the files and output particles (and their attributes) as data arrays, directly into the particle system node (similar to what I was doing here).
Is there any drawback in not basing the node on an MPxEmitter?

2. Should I emit 1-frame-lifespan particles? Is this what Maya's native cache nodes do?
This is the most basic solution I can think of; I was wondering if there's any better (and not-too-complicated) one.
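In plain Python, I imagine the per-frame arrays would look something like this (a hypothetical sketch, no Maya API involved; the attribute names just mimic Maya's per-particle attributes):

```python
def frame_arrays(cached_positions, fps=24.0):
    """Build per-particle arrays for one cached frame.

    Every particle gets a lifespan of exactly one frame, so the
    particle system is fully repopulated from the cache each step.
    (Hypothetical sketch -- nothing here touches the Maya API.)
    """
    count = len(cached_positions)
    one_frame = 1.0 / fps  # lifespan in seconds
    return {
        "position": list(cached_positions),
        "lifespanPP": [one_frame] * count,
    }
```

So each evaluation would just be "read frame, hand over positions, set lifespanPP so everything dies before the next frame".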

3. Any hint about performance optimization?
I would like to consider performance, at least to avoid pointlessly slow implementations that would be unusable in a real working scenario.
Is there any built-in solution in Maya's API to handle what/how/when is cached in memory for a node's computation?
Is it something I'd better implement myself from scratch, to fit the specific requirements of the node? Is it hard to achieve, given that memory is mainly managed by Maya?
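If I had to roll it myself, I guess it would be a small LRU cache of recently read frames, something like this plain-Python sketch (where `loader` stands in for whatever function actually reads a frame from disk -- that part is assumed):

```python
from collections import OrderedDict

class FrameCache:
    """Keep the N most recently used frames in memory (simple LRU).

    `loader` is whatever function actually reads a frame from disk;
    it's an assumption of this sketch, not a Maya API call.
    """
    def __init__(self, loader, max_frames=8):
        self.loader = loader
        self.max_frames = max_frames
        self._frames = OrderedDict()

    def get(self, frame):
        if frame in self._frames:
            self._frames.move_to_end(frame)  # mark as most recently used
            return self._frames[frame]
        data = self.loader(frame)
        self._frames[frame] = data
        if len(self._frames) > self.max_frames:
            self._frames.popitem(last=False)  # evict least recently used
        return data
```

That would at least keep scrubbing back and forth over nearby frames cheap without holding the whole cache in memory.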

4. Any existing basic example source?
Obviously, it'd be great to have an example to look at.
I'm currently looking at but it's taking a while to figure out where to look (@redpawfx if you read this, any hint is super-appreciated :) ).
But I must have chosen the wrong keywords while googling it, as I couldn't find anything useful so far.

I hope this post was not too long, and I didn't repeat concepts discussed elsewhere in this forum.
I tried searching for "cache" and found a few interesting discussions, but nothing specifically in the direction I'm aiming at.

Thank you,
s  t  e
if you're the least noob in the room, you're in the wrong room
@mapofemergence on twitter
Either way will be fine.
If you do it as an emitter, the particles will not conform entirely to the cached data if other emitters are attached to them.
If you do it as a custom node, just match the attribute names, types, and data, and it will effectively be your custom cache node.
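The matching step is really just a rename pass over the cached arrays. A plain-Python sketch (the name mapping here is made up for illustration; real cache files will have their own naming):

```python
# Hypothetical mapping from names stored in the cache file to the
# per-particle attribute names the particle system expects.
CACHE_TO_PARTICLE = {
    "position": "position",
    "velocity": "velocity",
    "color": "rgbPP",
    "id": "particleId",
}

def remap_attributes(cached, mapping=CACHE_TO_PARTICLE):
    """Rename cached attribute arrays; drop anything with no target."""
    out = {}
    for name, values in cached.items():
        target = mapping.get(name)
        if target is None:
            continue  # attribute the particle system doesn't know about
        out[target] = values
    return out
```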
Thank you Peter for the hints, and the reassurance above all :)

As you put it, it sounds fairly easy (at least to start with).
I'll try to implement a sketchy Python version and, if that works, move to proper C++.

I'm still wondering about the performance issues (even with C++, if cache files are big), and asking myself whether I'll need to take care of memory handling or not.
You will have to manage the resources - that's for sure.
There is a devkit example showing how to read particle caches - I can't remember right now if it was for PDC caches or the new format - but it will give you a good idea of how it's done and you can follow suit.

Performance and a small memory footprint are always good, so the more you can do there the better.
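In case it helps while you hunt down the devkit example: reading a binary particle cache generally boils down to struct-unpacking a header and then the per-attribute arrays. The layout below is an invented, simplified one purely for illustration - it is NOT the real PDC format, so check the devkit example for the actual field order:

```python
import struct

def read_simple_cache(path):
    """Read a toy big-endian particle cache.

    Assumed (invented) layout, NOT the real PDC format:
      4-byte magic b'PCH1', int32 particle count, int32 attribute count,
      then per attribute: int32 name length, name bytes,
      count * 3 float64 values (every attribute is a vector here).
    """
    attrs = {}
    with open(path, "rb") as f:
        magic = f.read(4)
        if magic != b"PCH1":
            raise ValueError("not a PCH1 cache: %r" % magic)
        count, n_attrs = struct.unpack(">ii", f.read(8))
        for _ in range(n_attrs):
            (name_len,) = struct.unpack(">i", f.read(4))
            name = f.read(name_len).decode("ascii")
            flat = struct.unpack(">%dd" % (count * 3), f.read(count * 24))
            attrs[name] = [flat[i:i + 3] for i in range(0, len(flat), 3)]
    return count, attrs
```

The real formats add version fields, byte-order flags, and scalar vs. vector attribute types, but the overall shape of the reader is the same.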
Hi, so the closest thing in the partio tools to what you are looking to do is the partioEmitter,

so you'll want to have a look at that one.. it was not written to be a caching system exclusively,
but as an emitter specifically, so that you'd have the option to take an existing sim and "re-sim" it
to add more data or do something else with it after the cache ran out, for example.

There's some stuff in there for remapping cache attributes to new particle system attributes;
it gets a bit confused with lifespan issues if you try to kill the particles after the emitter runs...

I would not go this way if you're just trying to make a "reader only" type of node. I would probably go with a more simplistic approach like the one you and Peter have been discussing.

hope this helps a little..
