Vampiro WIP III : Prepping Models for a Color Pass

Retaining Quality and System Integrity

As the models in this image will be used for a still frame, their topology will not need to deform for animation. As a result, the geometry can be reduced with the Decimate modifier, rather than retopologized, to retain Realtime Viewport interactivity during texturing.


The above image is a test render depicting the textures applied to certain models for a Color Pass. Ambient Occlusion, Transparency, Sub-Surface Scattering, Glossy and Material Properties have been excluded from this Pass. Working on an image such as this in multiple Passes reduces system load and keeps Viewport interactivity as close to Realtime as possible.

Further reductions in system overhead can be achieved by referencing the High Resolution geometry externally.
This is done by Decimating a duplicate of the model, then using a Shrinkwrap modifier to re-target a Subdivided version of the Decimated model to the original High Resolution model created from sculpting.

To look at that statement more practically in Blender terms and pose it as a question:

How Do We Get a High Resolution Static Model to Retain its Detail but Still Provide Realtime Viewport Interactivity for Texturing?

1. Complete Sculpting and Save Separate Files


The above image depicts the Vampiro character’s top, which was created by sculpting and consists of approximately 4 million triangles.


Of course, many modern computers used for Sculpting can still handle a load of this type, but bear in mind that this is only one of many components that make up the Render as a whole. When the rest of the components are added to the scene, Viewport interactivity (Panning, Dollying, Tumbling the camera, etc.) will drop to unusable and unstable levels.

It’s also worth noting that when this character is assembled, the file size, as physically read by Ubuntu, is approximately 717 MB.


Working with a file this large has several implications. It consumes physical disk space rapidly, particularly as a result of incremental saving. It slows down Realtime interactivity by consuming system RAM, which matters during texturing because multiple applications need to be open simultaneously. In severe cases, it can leave an unstable system falling back to Swap Space just to compute simple user requests such as Panning, Dollying or Tumbling the 3D Viewport, which make up only a small part of the texturing process and exclude cloning, painting and various other system-intensive tasks.

Once Sculpting on the model is complete at this stage, the model can be saved in a separate file. This keeps the working file from growing any further, while still allowing the High Resolution model to be referenced from another, optimised file.

2. Duplicate, Decimate and Apply


The above image depicts a duplicate of the sculpted model that has been saved in another separate file. This model has then had its geometry reduced with the Decimate modifier, resulting in close to a 99% reduction in polygons.
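As a rough sanity check, here is what a ~99% reduction means for the 4 million triangle sculpt from step 1 (the exact Decimate ratio used is an assumed example value):

```python
# Estimate the triangle count left after a Decimate (Collapse) reduction.
# 4,000,000 triangles is the sculpt count from step 1; the 0.01 ratio
# (i.e. a 99% reduction) is an assumed example value.
sculpt_tris = 4_000_000
decimate_ratio = 0.01  # Blender's Decimate "Ratio" setting

low_res_tris = int(sculpt_tris * decimate_ratio)
print(low_res_tris)  # triangles left in the working copy
```

A mesh in the tens of thousands of triangles is light enough to Pan, Dolly and Tumble smoothly alongside the rest of the scene.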


However, this reduction results in detail loss, particularly in areas where surface curvature appears smooth on the High Resolution model. Blender can compensate for this loss, without reducing system performance, by referencing the High Resolution model externally. This has the added benefit that the optimised working file does not grow by the size of the High Resolution file, allowing incremental saving of the working file with reasonable results.
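The duplicate-and-decimate step can also be scripted. The following is a minimal `bpy` sketch of the same settings found in the Modifier Properties panel; it assumes the duplicated sculpt is the active object in its own file, and the 0.01 ratio is an example value:

```python
import bpy

# Assumes the duplicated sculpt is the active object in its own .blend file.
obj = bpy.context.active_object

# Add a Decimate modifier in Collapse mode; a ratio of 0.01 keeps
# roughly 1% of the original polygons (a ~99% reduction).
dec = obj.modifiers.new(name="Decimate", type='DECIMATE')
dec.decimate_type = 'COLLAPSE'
dec.ratio = 0.01

# Once the result looks acceptable in the Viewport, apply the modifier
# so the reduced geometry becomes the object's real mesh data.
bpy.ops.object.modifier_apply(modifier=dec.name)
```

This runs inside Blender's Python environment; the ratio is worth adjusting interactively in the Viewport before applying.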

3. Link Externally


In the file containing the Decimated model, the Decimate modifier can now be Applied.
On a new layer, the High Resolution model can then be brought into the optimised file using the Link command. This does not physically import the model into the current scene but adds a reference to it instead. As a result of this operation, the High Resolution file cannot be moved from its current location on disk, or the Link between the two files will be broken.


Saving this file, with both the High and Low Resolution models in a single scene, results in a much smaller file size.
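In script form, the Link operation corresponds to loading the external library with `link=True`. This sketch is written for current Blender versions (2.8+, where layers became collections); the file path and object name are hypothetical placeholders:

```python
import bpy

# Hypothetical path to the file holding the High Resolution sculpt.
# Moving this file on disk afterwards will break the Link.
HI_RES_BLEND = "/path/to/vampiro_top_hires.blend"
HI_RES_NAME = "VampiroTopHiRes"  # assumed object name

# link=True references the data externally instead of copying it in,
# so the working file's size barely grows.
with bpy.data.libraries.load(HI_RES_BLEND, link=True) as (data_from, data_to):
    data_to.objects = [n for n in data_from.objects if n == HI_RES_NAME]

# Make the Linked object visible in the current scene.
for ob in data_to.objects:
    if ob is not None:
        bpy.context.scene.collection.objects.link(ob)
```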


4. Subdivide and Shrinkwrap for Render-time

With the High Resolution model referenced within the current scene, it will now be visible in the Outliner and can be used as a Target for the Low Resolution model.


A Multires modifier can then be added to the Low Resolution model in order to subdivide the model at Render-time only, and not during the Preview stage, so as to retain Realtime Viewport interactivity. It has also been my observation that this method reduces render times significantly.
Following the Multires modifier is the Shrinkwrap modifier, which is used to target the High Resolution model.
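The modifier stack for this step can be sketched as follows (again for Blender 2.8+; the object names are assumptions, and two subdivision levels is an example value):

```python
import bpy

low = bpy.data.objects["VampiroTopLowRes"]   # assumed name of the Decimated model
high = bpy.data.objects["VampiroTopHiRes"]   # assumed name of the Linked model

# Multires comes first in the stack, so the Shrinkwrap acts on the
# subdivided surface.
multi = low.modifiers.new(name="Multires", type='MULTIRES')

# Build two subdivision levels, then show none of them in the
# Viewport: only the renderer sees the subdivided mesh.
bpy.context.view_layer.objects.active = low
for _ in range(2):
    bpy.ops.object.multires_subdivide(modifier=multi.name)
multi.levels = 0         # Preview level: Realtime Viewport interactivity
multi.render_levels = 2  # Render-time level: full subdivision

# Shrinkwrap pulls the subdivided surface back onto the sculpt.
shrink = low.modifiers.new(name="Shrinkwrap", type='SHRINKWRAP')
shrink.target = high
shrink.wrap_method = 'NEAREST_SURFACEPOINT'
```

Keeping the Preview level at 0 is what preserves interactivity; the Shrinkwrap only has to recover the sculpted detail at Render-time.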


5. Conclusion

At this point there is no need to work on the High Resolution model further, nor does Blender permit object- or sub-object-level editing of the Linked model. Such edits should be performed in the original file, and the changes will be reflected in the optimised file.

The result of this preparation is that the Low Resolution model can be UV unwrapped, Textured and Rendered with Realtime performance and a reasonable geometry count, and used in combination with other models and components without reducing system stability.
