Coverage Map for Matte Paint in Maya
Thank you all.

As a matter of fact, Peter, your solution uses vertex color, so it might be easier to bake it into a texture file (if there is a UV set) rather than use the scatter node as output. Of course, a point cloud gives more flexibility. My only concern is that the scatter node becomes quite slow with the larger point counts that are necessary to describe the coverage area precisely.

Does the proposed arrayAppend node make sense? What do you think? I'm thinking about starting my R&D career, and that node would be the first step :) But I don't know whether it is possible to write, or whether it would be a useful node in general.

Plars:
I can't recreate your solution from scratch. Could you attach a test scene for me?
Coverage Map for Matte Paint in Maya
There is the so-called coverage map, known in connection with (The Foundry's) Nuke. It basically means we project a white color from the moving camera, with the aspect ratio of the rendered image, onto a proxy geometry representing the 3D set, landscape or full background. It is useful for determining the area that the camera sees or covers during its movement. Based on the coverage map and the render output size, we can calculate how big the matte paint has to be. To do that we usually create a projection camera that sees the whole coverage map.
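As a rough illustration of that size calculation (the numbers below are made up, just to show the idea):

# Back-of-the-envelope matte paint sizing: if the coverage map spans about
# 3x the horizontal frustum of a single rendered frame, the matte paint
# needs to be about 3x the render width so no projected pixel is magnified.
render_width = 2048        # rendered image width in pixels
coverage_ratio = 3.0       # coverage extent / single-frame frustum extent
matte_width = render_width * coverage_ratio
print(matte_width)         # 6144.0 -> roughly a 6K matte paint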

So my intent was to create that coverage map in Maya. I used the scatter node because it has an option to scatter only onto the area the camera sees. Unfortunately I could not manage to keep the points that had already been scattered onto the geometry: as the camera moves, the scattered points disappear and new points appear.
I tried the arrayDataContainer node because I expected it to do something like that, but it did not. As I understand it, it works on a fixed-size array and only stores the value changes to that array. So we should have a node (maybe we already do) which appends array items to an existing array, as sketched below.
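To make the idea concrete, here is a minimal sketch (Maya Python API 2.0) of what such an arrayAppend node could look like. The node name, attribute names and the instance-level cache are all my assumptions, not an existing SOuP node; note that keeping state on the node instance is not strictly DG-correct (the accumulated history is lost on file open).

import maya.api.OpenMaya as om

def maya_useNewAPI():
    # tells Maya this plug-in uses Python API 2.0
    pass

class ArrayAppend(om.MPxNode):
    kTypeName = 'arrayAppend'            # hypothetical node name
    kTypeId = om.MTypeId(0x0007F7F7)     # placeholder id, local testing only
    inArray = None
    outArray = None

    def __init__(self):
        super(ArrayAppend, self).__init__()
        self._accum = om.MVectorArray()  # grows on every evaluation

    @staticmethod
    def creator():
        return ArrayAppend()

    @staticmethod
    def initialize():
        tAttr = om.MFnTypedAttribute()
        ArrayAppend.inArray = tAttr.create('inArray', 'ia', om.MFnData.kVectorArray)
        ArrayAppend.outArray = tAttr.create('outArray', 'oa', om.MFnData.kVectorArray)
        tAttr.writable = False           # output is read-only
        om.MPxNode.addAttribute(ArrayAppend.inArray)
        om.MPxNode.addAttribute(ArrayAppend.outArray)
        om.MPxNode.attributeAffects(ArrayAppend.inArray, ArrayAppend.outArray)

    def compute(self, plug, dataBlock):
        if plug.attribute() != ArrayAppend.outArray:
            return
        inObj = dataBlock.inputValue(ArrayAppend.inArray).data()
        if inObj.isNull():
            return
        for p in om.MFnVectorArrayData(inObj).array():
            self._accum.append(p)        # append, never replace
        outData = om.MFnVectorArrayData().create(self._accum)
        dataBlock.outputValue(ArrayAppend.outArray).setMObject(outData)
        dataBlock.setClean(plug)

def initializePlugin(obj):
    om.MFnPlugin(obj).registerNode(ArrayAppend.kTypeName, ArrayAppend.kTypeId,
                                   ArrayAppend.creator, ArrayAppend.initialize)

def uninitializePlugin(obj):
    om.MFnPlugin(obj).deregisterNode(ArrayAppend.kTypeId)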

So I used the pointCloudParticle emitter to emit from the scattered points during the animation, and it works fine. nParticles have the .emissionOverlapPruning attribute, so particles emitted in the same place get deleted, which helps to decrease the particle count. At the end of the camera animation I could bake the particles as an initial state and delete the whole setup. So I ended up with an nParticle node which basically kept the coverage map (points).
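For reference, a minimal script-level sketch (maya.cmds) of the same bake, stepping through the timeline and accumulating the scattered points by hand. The scatter node name and its output attribute name are assumptions:

import maya.cmds as cmds

accumulated = []
start = int(cmds.playbackOptions(q=True, minTime=True))
end = int(cmds.playbackOptions(q=True, maxTime=True))

for frame in range(start, end + 1):
    cmds.currentTime(frame)
    # 'scatter1.outPositionPP' is an assumed node/attribute name
    points = cmds.getAttr('scatter1.outPositionPP') or []
    accumulated.extend(points)

# crude overlap pruning: quantize to a grid and keep one point per cell
cell = 0.1   # cell size in scene units, tweak to taste
unique = {}
for p in accumulated:
    unique[tuple(int(c / cell) for c in p)] = p

# bake the surviving points into a particle object (the coverage map)
cmds.particle(p=list(unique.values()))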

Any idea how to do it fully procedurally?
USD
Hi Peter

Can you tell us a bit more about that?
What does "public API" mean here?
Is your hint that Pixar's approach is too big an undertaking for a smaller studio (20-50 people)? I would also be interested in what you wrote about it being fully portable. What does that mean in particular?


pshipkov wrote: Last year I built a similar system - a generic/unified data container + function set that can store and track everything across the board. It has a public API, command interface and GUI (asset browser) that work in standalone, Maya, Houdini, etc. in a completely non-destructive manner.
Everybody in production plays in a single sandbox that works across multiple studio locations.
My approach is simpler than what Pixar is doing, but I believe it is also more flexible and scalable. For example we had scenes with hundreds and thousands of elements and the resolve was very fast.
Among other things like caching mechanisms, representations and version control, there is a complete undo/redo stack that allows the artists to walk forward and back through the history of any asset or group of assets and see interactively what happened to them from their "birth" to "this" moment.
Also I like things to be fully portable, not rooted in a particular pipeline or workflow, so one can plop it into another studio, modify/override the "source" module to interface with whatever directory structure (optional database, etc.) and move on.

:)
Copier improvements
Hi SOuPers and Alex Smolenchuk

With the copier node, life is easy when we are talking about particle effects. Usually the main problem is that I have a cool setup but it is hard to render, especially if we have to pass the effect across different apps or different render engines. Copier is an easy way to generate geometry from particles (skipping the instancer by baking it into a geometry sequence). So we can say that with copier we can do particle FX in a render-agnostic way. And since the alembic format is supported by all the major 3D apps, we can produce render- and application-agnostic output from particle effects. That is my conclusion about using the copier node; a sketch of the handoff is below.
Thanks for sharing that!
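For what it's worth, a minimal sketch of that handoff with Maya's stock AbcExport command, once the copier output is a plain mesh (the path and node name are assumptions):

import maya.cmds as cmds

cmds.loadPlugin('AbcExport', quiet=True)
# -writeColorSets bakes the vertex colors (e.g. the rgbPP data) into the cache
job = ('-frameRange 1 120 -uvWrite -writeColorSets '
       '-root |copierOutputMesh -file /tmp/particleFx.abc')
cmds.AbcExport(j=job)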

My idea for further development (based on what I wrote before) is a so-called multi-vertex-color workflow. That means it would be nice to have multiple rgba inputs (up to, let's say, 99) and outputs as well.
So for example, I create an rgbPP (basically a vector) within the particle node and create an expression which colors the particles based on their speed. Then I create another rgbPP and use it to output the age of the particles (for example, white as the birth color and black as the death color). And I create another rgbPP to output the world coordinates of the particles. And so on. If I had multiple rgb inputs in copier and could output those channels to a mesh as different colorSets, I could render them as separate AOVs (layers / passes) and refine the particle effect like hell in the compositing package. A sketch of the particle-side setup is below.
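A minimal sketch (maya.cmds) of the particle-side setup I have in mind; the shape name and the custom attribute names are just examples:

import maya.cmds as cmds

shape = 'nParticleShape1'   # assumed particle shape name
# every custom per-particle vector attribute needs a matching ...0 start attr
for name in ('speedColPP', 'ageColPP', 'worldPosPP'):
    cmds.addAttr(shape, ln=name, dt='vectorArray')
    cmds.addAttr(shape, ln=name + '0', dt='vectorArray')

# runtime expression filling the three channels on every step
cmds.dynExpression(shape, rbd=True, s='''
float $spd = min(mag(velocity) / 10.0, 1.0);
speedColPP = <<$spd, $spd, $spd>>;
float $life = (lifespanPP > 0) ? age / lifespanPP : 0;
ageColPP = <<1.0 - $life, 1.0 - $life, 1.0 - $life>>;  // white at birth, black at death
worldPosPP = position;
''')

If copier (or the alembic export) could then pick these channels up as separate colorSets, each one could be rendered out as its own AOV.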
mergeArrays issue
Thank you Peter

It helps a lot!
mergeArrays issue
It might be that my understanding of array data is limited. Is there a good reference (the Maya API docs?) about what the data types used in connection with SOuP mean? Array, multi, doubleArray, dynamicArray, etc. I think we should describe them on the SOuP wiki.
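In the meantime, the data type of any plug can at least be queried from script; the node and attribute names below are just examples:

import maya.cmds as cmds

# prints the attribute's data type, e.g. 'vectorArray' or 'doubleArray'
print(cmds.getAttr('mesh2Arrays1.outPositionPP', type=True))
print(cmds.getAttr('mergeArrays1.inArrays[0].inArray', type=True))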

Back to the current problem. The graph below shows that I could plug arrayToArray nodes into the mergeArrays, but the instancer does not work as it did before the mergeArrays node was added.

MergeArrays_graph.png 

Another interesting thing is that the arrows from mesh2Arrays display in a different color (gray) than the arrows from arrayToArray, although they should carry the same data type (array).

I attached the scene file as well (Maya 2015).
mergeArrays issue
Hi SOuPers

I have two pieces of geometry. I use mesh2Arrays to generate a position array from the vertices. My intent was to combine the two geometries' position arrays into one array. But when I tried to plug mesh2Arrays.positionArray into mergeArrays, the .inArrays[0].inArray attribute was grayed out.

Mesh2Arrays_mergeArrays.png
bake arbitrary deformations to skinCluster - the SOuP way
Hi Peter and SOuP-ers

I think it would be nice to have an option added to the bake arbitrary deformations tool. It places the joints in a kind of random pattern to bake the deformation. I suspect there are situations (see below) where manually placed joints would produce a better result, or at least one more useful for further keyframe animation.

JointArrangement_Auto.png 

JointArrangement_Manual.png 

So the tool could have an option to take a list of joints instead of generating them.
(A joint per vertex would be another nice extra option.)

Alembic Info in JSON
Easy as 1-2-3 :)

First of all, you should decide whether to store the data in a shader-centric or a geometry-centric way. I mean, you can store geo -> shaders or shader -> geos.

I presume the goal is to reconnect shaders to an alembic cache. I developed a shader-geometry connection method which was shader-centric. The data model looks like this:

{
  "Geometries": [
    "geoName1",
    "geoName2",
    "geoName3.f[4:30]",
  ],
  "Shaders": {
    "displacementShader": null,
    "surfaceShader": "ShaderName",
    "volumeShader": null
  },
  "ShapeAttrs": {
    "attrName1" : "value1",
    "attrName2" : "value2"
  }
}


So the shader is saved as a .ma file, and the .json file stores the shader name and the connected geometries. I found it useful to store the displacement, surface and volume shaders separately. It is also useful to store shape node attributes and their values, because render engines like Arnold have a lot of settings on the shape node.
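A minimal sketch (maya.cmds + json) of re-applying such a record to an imported alembic cache; the file path and the <shader>SG shadingEngine naming are my assumptions:

import json
import maya.cmds as cmds

with open('/path/to/shader_record.json') as f:   # hypothetical path
    record = json.load(f)

surface = record['Shaders']['surfaceShader']
if surface:
    sg = surface + 'SG'   # assumes the shadingEngine is named <shader>SG
    members = [g for g in record['Geometries'] if cmds.objExists(g.split('.')[0])]
    if members:
        cmds.sets(members, e=True, forceElement=sg)

# restore the stored shape attributes (assumes simple numeric values)
for geo in record['Geometries']:
    node = geo.split('.')[0]
    for attr, value in record['ShapeAttrs'].items():
        if cmds.objExists(node) and cmds.attributeQuery(attr, node=node, exists=True):
            cmds.setAttr('%s.%s' % (node, attr), value)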
inverting weightPP
Yes, the effect I want to achieve is to flatten the snake's body on the ground, but I also have to make the tail stiffer so that part of the body deforms less (and the head, of course).

Obviously the gif animation I attached previously is not appropriate because it intersects with the ground, but this deformation setup (+ a cluster deformer) works fine for a rigged, animated snake. The only problem is that I have to animate the boundingObject by hand.

I hope it is clear and makes sense.
ArrayToTexture2D with scatter node
Awesome! Thanks.

Just for the record, here is the result:

scatterOnBoundingObj_v003.gif 

And the graph:

scatterOnBoundingObj_v003_graph.jpg
inverting weightPP
So the scene looks like this right now:

conntactMap_masked2StreamArray_arrayBlendArrayExp_v001.gif

As you can see, I masked out the tail to prevent deformation on that part of the geometry. But I need to animate or constrain the boundingObject to follow the moving geometry, so I would like to use a painted vertex color as a mask instead.

vertexColorMask.jpg 

When I painted the geometry with the Paint Vertex Color Tool, it created a polyColorPerVertex node. I presume that node can be plugged into the graph. As far as I understand, vertex color information can be extracted with attributeTransfer (with the color checkbox on) and output as an array for arrayBlend, or combined with arrayExpression. But I got that warning message...
inverting weightPP
I tried to manage what you described. I had success with the node graph below. I guess it is not the easiest way, but I found that one arrayExpression was not enough because I had two inputs: the original deformation (based on a boundingObject) and the one intended to mask the deformation (also based on a boundingObject). The arrayBlend node can deal with two array inputs and can also do multiplication.

ArrayBlend.jpg

I would like to go further, but I am stuck. In a standard Maya workflow I could paint an attribute weight map for a deformer. My idea is to use vertex color to mask out the deformation, but I have no clue how to plug that polyColorPerVertex node into this graph. I tried different ways and usually got a warning message:
// Warning: Both arrays need to be of an equal length. //
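For what it's worth, the painted colors can be pulled out as a plain per-vertex list from script, which should at least make the array lengths match; the mesh name is assumed:

import maya.cmds as cmds

mesh = 'snakeShape'   # assumed mesh shape name, with a painted color set
count = cmds.polyEvaluate(mesh, vertex=True)
# query the red channel per vertex -> one float per vertex, the same
# length as the deformer's weight array
mask = cmds.polyColorPerVertex('%s.vtx[0:%d]' % (mesh, count - 1), q=True, r=True)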

I attached the current version of the file.
ArrayToTexture2D with scatter node
Thanks Peter, now I understand what you meant by the hard and slow way.

But :)

As far as I understand, in this case the attributeTransfer passes along the geometry itself, extended with vertex color information thanks to the bounding object. However, I could not find a way to use arrayDataContainer to keep the trace of the bounding object.

scatterOnBoundingObj_v002.gif

I would like to have all the scattered points on the path of that moving object. I attached the file.

***

I could solve my initial problem with a manually painted contact map, which I referenced as a texture (file node); the scatter node could deal with that. It would have been nice to have a fully procedural setup (generating the contact map on the fly), but I guess it would have been extremely slow.
inverting weightPP
As far as I understand, it should work like 2D masking. With multiple bounding objects we can achieve something like plus/minus operations, but masking means multiplying: we have a weight value on a part of the mesh and we have to multiply it by 0.
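A tiny numeric illustration of that multiplication:

# per-vertex weights multiplied by a painted 0/1 mask
weights = [0.2, 0.8, 1.0, 0.5]
mask    = [1.0, 1.0, 0.0, 0.0]   # zero over the tail vertices
masked  = [w * m for w, m in zip(weights, mask)]
print(masked)   # [0.2, 0.8, 0.0, 0.0] -> the masked part no longer deforms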

I also tried to solve this problem without success.