Thursday, January 4, 2018

Heading toward a standard VFX pipeline - MaterialX

As I discussed earlier, open-source solutions have a huge impact on the VFX/CGI industry; think of the OpenEXR or Alembic formats, for example.

Pixar's USD took open-sourced technology to a new level, because it lets us adopt a high-profile pipeline solution originally developed by one of the biggest 3D animation studios on the globe.

MaterialX could be another milestone for the VFX/CGI industry, though it is not yet clear to me how it will evolve. A quote from its description:
"MaterialX provides a schema for describing material networks, shader parameters, texture and material assignments, and color-space associations in a precise, application-independent, and customizable way."

If we look at USD from a production viewpoint, it is mainly a tool for 3D layout (whatever that means), and of course it is connected to modelling, animation, and FX. So it basically deals with geometry (and maybe volumes as well) in a software-agnostic way.

But MaterialX is proposed as a standard for look-development data transfer. That means both 2D data (like textures) and 3D data (materials/shaders) in a render-engine-agnostic way. In the long term it could mean that every rendering solution would understand MaterialX data... we will see.

MaterialX was used on the feature film Star Wars: Rogue One, for example.


Friday, October 27, 2017

Photogrammetry Workflow by Unity

A comprehensive document about photogrammetry. I think we are witnessing the beginning of a new era of computer graphics, because photogrammetry is a way to capture surface and color at the same time. Previously we could digitize only the surface and had to collect color information separately. The future of the process, I think, is to capture information about both the surface and the material (meaning not only RGB values).

Unity Photogrammetry Workflow

By the way, I tested Nuke as a photogrammetry tool and it was impressive.

Wednesday, October 4, 2017

Industry news - Maya 2018 and Nuke 11

Okay, this is old news by now, but I think it is worth mentioning a couple of things. Nuke and Maya announced their latest releases at almost the same time. I can't say that huge development is going on. I miss the feeling when a software release hits me and I say, "Waaoooo, amazing! I can't wait to try the new features."


Maya 2018

It's a little bit strange that there is no official announcement video. Maya 2018 was released very quietly. We might think that Autodesk knows there is no reason for hype. So here is what I found:



The devil is in the details. I guess they were too tired to create fancy feature videos, but if we go through the user guide we can find a lot of useful stuff, especially in connection with modelling. But still no breakthrough with Bifröst: there is no sign of a general procedural workflow inside Maya. The (new) render setup system still seems not to work with references, so it is kind of worthless.

Nuke 11

And the Nuke announcement is also a bit woozy (or something like that, and the music is shitty as well).


So there is a new Lens Distortion node, the Precomp node has a new name, LiveGroup, and there is background rendering. What else? I guess the VFX Reference Platform is interesting for big studios and pipeline TDs, but not for artists.
I guess the main problem with Nuke is that Nuke is a complete piece of software. It is probably the best choice for film compositing (are there any competitors? Maybe Fusion?). There is no obvious direction in which to develop Nuke, because it would overlap with other Foundry software like Modo and Mari. From a business point of view there is no reason to develop, for example, a better modelling toolkit for Nuke for a better photogrammetry, re-modelling, projection, and texturing workflow; there is Modo and Mari for that. But for an artist it is always better to use one piece of software for related tasks, saving "tons of time" on exporting/importing, naming, versioning, and converting files.



Friday, June 2, 2017

Making of Adam - The VFX

You have certainly seen this short film:



What you may not have seen is this article about the making of the VFX. It is a quite comprehensive article (I wish I could write something like this):

Adam – VFX in the real-time short film


Saturday, May 20, 2017

Maya Tricks - Maya Pivot

If you want to know everything about pivot points in Maya, you should check out this video:




Maya Tricks - Measure rgba value range

Let's say we have a file texture and we want to adjust its color. We have plenty of options for that. I think the most sophisticated is the ColorRemap node, which is similar to Photoshop's or GIMP's Curves tool or Nuke's ColorLookup node, although Maya's ColorRemap is less user-friendly, I think.


Maya has an option to create a node graph that recolors the picture based on its luminance. We can create the node graph simply by clicking the Color Remap button under the file node's Effects rollout.


The node graph is going to look like this.


The ramp node has a default black-to-white gradient. We can change the black to blue, for example, and it will change the darkest colors of the picture to blue. A picture like this:


with a ramp like this:


will result in a picture like the one below:



The simplest and most obvious way to change colors is to use the file node's built-in Color Balance options.


These options have limited capabilities, of course. We can use textures on Color Offset or Color Gain, for example: Color Gain multiplies the texture file's color, and Color Offset adds to or subtracts from the input color. Maya also has nodes for the basic mathematical operations: multiplyDivide and plusMinusAverage.
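The gain/offset arithmetic above can be sketched in a few lines of plain Python (outside Maya; simple tuples stand in for a pixel's RGB values):

```python
# Color Gain multiplies the input color, Color Offset adds to it:
# out = in * gain + offset (per channel)
def apply_gain_offset(rgb, gain=(1.0, 1.0, 1.0), offset=(0.0, 0.0, 0.0)):
    return tuple(c * g + o for c, g, o in zip(rgb, gain, offset))

# A mid-grey pixel with a gain of 3.0 ends up outside the 0-1 range,
# which is exactly why measuring the resulting range matters.
print(apply_gain_offset((0.5, 0.5, 0.5), gain=(3.0, 3.0, 3.0)))  # (1.5, 1.5, 1.5)
```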

But when we adjust the color of the original texture, especially when we use Color Gain (multiplication) or Color Offset (addition), we modify the color range of the picture. As far as I know, Maya has no tool to measure the color range of a picture. It would be useful when we use the modified map to emit fluid, for example, and especially when we use animated maps. We have the setRange node, but there should be an option to copy the current range values into the oldMin and oldMax fields.
There might be a Python (PyMEL or API) trick to achieve that. Let me know if there is a method to query the color range of a (texture) node's output.
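One way such a trick could look, using the colorAtPoint command to sample the texture on a grid: the helper below runs anywhere, while the commented Maya part is an untested sketch ("file1" and "setRange1" are just example node names).

```python
def channel_range(flat_vals, stride=4):
    """Per-channel (mins, maxs) from a flat [r, g, b, a, r, g, b, a, ...] list,
    which is the shape colorAtPoint returns its samples in."""
    pixels = [flat_vals[i:i + stride] for i in range(0, len(flat_vals), stride)]
    mins = [min(ch) for ch in zip(*pixels)]
    maxs = [max(ch) for ch in zip(*pixels)]
    return mins, maxs

# Inside a Maya session (hedged sketch; "file1"/"setRange1" are example names):
# import maya.cmds as cmds
# flat = cmds.colorAtPoint("file1", output="RGBA", samplesU=64, samplesV=64)
# mins, maxs = channel_range(flat)
# cmds.setAttr("setRange1.oldMinX", mins[0])
# cmds.setAttr("setRange1.oldMaxX", maxs[0])
```

Note that this only samples a grid of UV points, so very small bright or dark spots between samples can be missed; raising samplesU/samplesV trades speed for accuracy.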

I found an easy solution to visualize the color range of a picture using the SOuP plug-in. It provides the textureToArray node. With that we can sample a texture and plug it into SOuP's peak deformer, so basically we can create a displacement modifier. Unfortunately the (new) textureDeformer has limited options to achieve that effect, and heightField has the same weakness.

To display our color range with SOuP nodes we can use a polyPlane as the base geometry. We have to set up the following network:


It is important to check the accurateSampling option on the textureToArray node. That ensures the whole network is sampled through.


In this case I use luminance sampling via the luminance node, and the output in this case is the alpha channel. So the peak deformer uses the luminance values to offset the vertices.
If we apply the color as a surfaceShader to the plane, the result will look like this.




Of course, the amount of the peak is adjustable on the peak deformer.



This setup helps to visualize the RGBA values, or in this case the luminance values. Since we use color management in Maya, we can sample the real color value (not only 0-1) in the Maya viewport (or anywhere else). So this setup gives us a hint about which areas have the highest or lowest values.


We can go further if we query the list of the vertices' Y values and get the color at the lowest and highest points using the colorAtPoint MEL (or Python) command.
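A sketch of that idea: the pure helper below runs anywhere, while the commented Maya part is an untested assumption ("pPlane1" is an example mesh name).

```python
def extreme_indices(values):
    """Return the indices of the lowest and the highest value in a list."""
    lo = min(range(len(values)), key=values.__getitem__)
    hi = max(range(len(values)), key=values.__getitem__)
    return lo, hi

# Inside a Maya session (hedged sketch; "pPlane1" is an example mesh name):
# import maya.cmds as cmds
# count = cmds.polyEvaluate("pPlane1", vertex=True)
# ys = [cmds.pointPosition("pPlane1.vtx[%d]" % i, world=True)[1]
#       for i in range(count)]
# lo, hi = extreme_indices(ys)
# ...then query those vertices' UVs and sample the texture there
# with cmds.colorAtPoint() to get the darkest and brightest colors.
```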

Wednesday, April 26, 2017

Blender 2.8 Project - Eevee Roadmap

I can't wait to try Blender's new viewport render engines, especially the so-called Eevee. It stands for Extra Easy Virtual Environment Engine, and it basically means we can render good quality in real time. Of course, it depends on our graphics card.
I really don't understand why we should use Cycles to achieve a cartoon-style look, as covered in this video. I guess the reason is that the current OpenGL viewport render is quite limited. But ray tracing is not an alternative for me when we're talking about cartoon style; the new Eevee engine could be.

Check this out here

This video also covers the topic