arichter (#31)
Following your example I have created something that looks like global directions.
Not exactly what I wanted, but we are getting there.
What would be the final touch to get object-based tension directions?

By the way: which format does vertexColorToTexture write? So far XnView is the only program that can open and convert the images (not Nuke, Adobe Bridge, Photoshop, etc.).

Attached image: global_direction.PNG

arichter (#32)
This looks better. With your explanation and the example I came to this result.
Thanks! I will have a look at how much of the data I need is in there.

Attached image: global_direction.PNG

pshipkov (#33)
Seems like the right data, but I would display the point normals to see if they transform in UV space.

The vertexColorToTexture node can output jpg, png, tiff, and iff formats as an image sequence.
Pizzaman (#34)

With Arnold, you just have to enable the "Export Vertex Colors" option on the objectShape -> Arnold tab. Once done, you can create an aiUserDataVector node which can read the values per point and use them as a texture in your shading graph: no need to bake to textures. You just need to fill the "Color Attr Name" slot of the node with the name of the colorSet containing the values.

Works like a charm with progressive rendering...

... but I think the hardest part awaits you there. You'll need to convert your vector direction to an angle rasterized in UV space to drive the angle attribute of your shader. Can't help you there, so good luck.
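A minimal script sketch of that setup, assuming Maya with MtoA loaded; the shape and colorSet names are hypothetical, and the exact attribute names may differ between MtoA versions:

```python
import maya.cmds as cmds

shape = 'myMeshShape'      # hypothetical deforming mesh shape
color_set = 'tensionSet'   # hypothetical colorSet holding the per-point values

# "Export Vertex Colors" on the shape's Arnold tab (aiExportColors in MtoA)
cmds.setAttr(shape + '.aiExportColors', 1)

# Reader node that exposes the colorSet values to the shading graph
reader = cmds.shadingNode('aiUserDataVector', asUtility=True)
cmds.setAttr(reader + '.colorAttrName', color_set, type='string')

# The reader's output can now drive any shader input without baking to textures.
```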

arichter (#35)

Thanks for the answers!

Since I would like to process the direction vector further, it would be very helpful to have the exact mathematical definition of the vectors output by "...out variable...". Could you share this definition, or the procedure you use for the computation?

So far I can only guess that the principal deformation axes are derived from the singular value decomposition of the matrix that maps the triangles between static and deformed tangent-space positions. These directions (or the matrix itself), computed per triangle, are then averaged to obtain the per-vertex direction vector.
Is this true? Or do you use a different approach?

Best, Alex

pshipkov (#36)
No triangles are used in the computation.
Everything is derived from existing point attributes.

arrayExpression - calculates the overall squashAndStretch vector
pointsOnMeshInfo - converts the 3D deformation to UV space (as a vector per point)
point - calculates the angle between the squashAndStretch and UV-offset vectors in 2D (UV) space
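For the last step, a minimal sketch of the 2D angle computation outside of Maya, assuming plain numpy arrays; the vectors are illustrative values, not the node's actual attributes:

```python
import numpy as np

def signed_angle_2d(a, b):
    """Signed angle in radians from 2D vector a to 2D vector b."""
    return np.arctan2(a[0] * b[1] - a[1] * b[0], a[0] * b[0] + a[1] * b[1])

uv_offset = np.array([1.0, 0.0])        # example per-point UV offset vector
squash_stretch = np.array([0.3, 0.1])   # example overall squashAndStretch vector in UV space

print(signed_angle_2d(uv_offset, squash_stretch))
```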

This scene can be simplified quite a bit, but for now we use it the way it is.

Let me know if you need an even more detailed explanation.
I have the feeling you are struggling with the node network and the overall concept.
Don't hesitate to ask more questions - we are here to help.
arichter (#37)

Do you know an "easy" way to convert the 3D direction data into a 2D direction map in UV space?

If I used vertexColorToTexture it would create a 3D direction texture, but I need it in 2D to make the image processing work with the tension-map data.
Does SOuP maybe already have a function for that?

pshipkov (#38)
This topic here is exactly about that.
The example scene you were looking at does exactly that.
rolfcoppter (#39)
Hey guys,

I just read through this entire thread and there is some awesome information in here! My question is: what exactly is this technique used for? I understand the process, but I can't figure out what problem the output solves. Sorry for the noob question!

Thank You,

pshipkov (#40)
One possible application is mentioned in the very first message of this thread.
Pizzaman (#41)

One possible solution (thanks for the challenge, by the way!):

- Compute the "stretching" vector (length and direction)
- Displace the mesh in the stretch direction, with points projected on the tangent/binormal plane
- Use pointsOnMeshInfo to grab the UV values of the old mesh at the position of the new one
- Calculate the difference between the new UVs and the old ones - you get the direction in UV space :)
- Create a mesh in world space from UV space (mapToMesh)
- Calculate the angle between the X direction and the mesh vertices
- Export the directions and amount of stretch to the vertices... after a little blur to avoid the crap I can't understand.

Although the theory seems to work, the way Arnold deals with anisotropy makes it a nightmare to set up.
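A minimal numpy sketch of the UV-difference steps above (difference, angle, and stretch amount); the arrays are illustrative stand-ins for the values pointsOnMeshInfo would sample:

```python
import numpy as np

# UVs sampled on the old (rest) mesh and at the displaced positions (example values)
uv_old = np.array([[0.50, 0.50], [0.52, 0.48]])
uv_new = np.array([[0.53, 0.51], [0.55, 0.47]])

uv_dir = uv_new - uv_old                         # per-point direction in UV space
angle = np.arctan2(uv_dir[:, 1], uv_dir[:, 0])   # angle against the U (X) axis
stretch = np.linalg.norm(uv_dir, axis=1)         # amount of stretch per point
```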

 
Attached file: testDirStretch.mb (136.20 KB)

rolfcoppter (#42)
When I clicked the link to the first post of the thread it didn't seem to work for me, but I managed to get it working by removing part of the end of the URL. This is a lot clearer now. Thanks for that explanation, Pizzaman!
MrX (#43)
Hello Everyone,

thanks for this inspiring topic and exchange. I am also very interested in a fair solution for classifying the deformation of an object w.r.t. a rest pose, although not necessarily in Maya, so I would like to understand the needed computations in detail.

I very much like the recent approach by Pizzaman and appreciate his template.
Nevertheless, there is one point that gives me a hard time redoing the computation myself, and that is the output of the tension node.
From the given description, I am just not able to figure out the exact definition of the tension and direction attributes.
  • Is there clean documentation about this somewhere?
  • Does someone know the mathematical definition of these two output values?
  • Or is there some kind of standard algorithm for such computations?
Maybe it's a stupid question since the definition of these two quantities is obvious to you guys, but since I would like to do this outside of Maya, I would very much appreciate it if someone could help me understand the output of the tension node better.

Bests

pshipkov (#44)
The setup we are discussing here can be simplified, I think. Also, we should have this as a standard option built into the tensionMap node. But that's for later.

To answer your question - the tension node has multiple modes.
The one used here does something like this:
for each point:
    find all connected points (through edges)
    compute the distance from this point to the neighbor points
    compare the result with the neutral frame (a pose, not a time)
    output a value that describes the difference between the deformed and neutral geometry

outTensionPP returns the raw values. outRgbaPP provides encoded information: if the difference is negative (the geometry contracts) it goes into one of the RGB channels, if it is positive it goes into another, and there is a neutral channel. So with GRB encoding, G is neutral, R is compression, B is stretch.
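A hedged numpy re-implementation of that per-point measure, outside of Maya; the edge-list input and the exact normalization are assumptions, not the node's actual internals:

```python
import numpy as np

def tension_per_point(rest, deformed, edges):
    """rest, deformed: (n, 3) point positions; edges: iterable of (i, j) index pairs."""
    n = len(rest)
    neighbors = [[] for _ in range(n)]
    for i, j in edges:
        neighbors[i].append(j)
        neighbors[j].append(i)

    tension = np.zeros(n)
    for p in range(n):
        if not neighbors[p]:
            continue
        # average edge length around the point, in the neutral and deformed frames
        rest_len = np.mean([np.linalg.norm(rest[p] - rest[q]) for q in neighbors[p]])
        def_len = np.mean([np.linalg.norm(deformed[p] - deformed[q]) for q in neighbors[p]])
        # negative -> contraction, positive -> stretch (like outTensionPP)
        tension[p] = (def_len - rest_len) / rest_len
    return tension

def encode_grb(t):
    """GRB encoding per the post: G neutral, R compression, B stretch."""
    r = np.clip(-t, 0.0, 1.0)
    b = np.clip(t, 0.0, 1.0)
    g = 1.0 - np.clip(np.abs(t), 0.0, 1.0)
    return np.stack([r, g, b], axis=-1)
```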

Hope this clarifies it.
MrX (#45)
Dear pshipkov,

thank you very much for this in-depth explanation of the tension node. It fills the gap that prevented me from understanding your procedure. It is really interesting to see what different approaches are possible.

Maybe some follow-up questions:
  • You write that you iterate over the points and their connected neighbors. Is there a particular reason for doing this? For me it would be more natural to iterate over the triangles.
  • Is this computation executed in object or tangent space (compare post #30)? If the latter, how do you cope with the fact that the neighbors are not necessarily all within the same plane?
  • Once you have the distance vectors to the neighbors and compare them to the neutral-pose distances (I guess the absolute value of the difference), do you average them to get per-point data? Could we not use these difference vectors to extract the stretching direction more directly? All we need is the singular value decomposition (SVD) of the matrix mapping between two adjacent distance vectors (assuming quad geometry) of the deformed and the neutral pose; that SVD would give us stretch direction and strength. To get per-point data, one would average these SVD results from the four triangles. What do you think? (See the sketch below.)
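A minimal numpy sketch of that SVD idea, with illustrative 2D edge vectors standing in for the two adjacent distance vectors:

```python
import numpy as np

# two adjacent edge vectors in the neutral pose and after deformation (example values)
e1_rest, e2_rest = np.array([1.0, 0.0]), np.array([0.0, 1.0])
e1_def, e2_def = np.array([1.3, 0.1]), np.array([0.0, 0.9])

E_rest = np.column_stack([e1_rest, e2_rest])
E_def = np.column_stack([e1_def, e2_def])

F = E_def @ np.linalg.inv(E_rest)   # deformation gradient mapping rest edges to deformed edges
U, S, Vt = np.linalg.svd(F)

stretch_dir = Vt[0]   # principal stretch direction (in the rest frame)
stretch_amt = S[0]    # principal stretch factor (>1 stretch, <1 compression)
print(stretch_dir, stretch_amt)
```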


I really appreciate the open and helpful discussion in this thread.
Bests
