
An Introduction to 3D with Lyndon Daniels

A Free, Software-Agnostic 3D Course Launches at lyndondaniels.com


A new course covering the fundamental constructs that bind all high-end 3D applications has just launched at lyndondaniels.com/education/3D.

Although this course is geared towards newcomers to 3D content creation, advanced users faced with the challenge of transitioning to another 3D application may also find that the software-agnostic approach this course takes to modelling, sculpting, texturing and rendering provides a sense of familiarity within a seemingly daunting task.


The course is also equipped with 8 quizzes for you to monitor your progress throughout the learning experience, and consists of extensive documentation covering topics such as Planning a Shot, Setting up a Scene, Polygon Modelling, Setting up UVs, Proxies and High Resolution Models, Texturing and Shading, and finally touches on Rendering.

You’ll also find a generous list of free assets scattered throughout the course that provide practical implementations of the topics being discussed. Among these assets you’ll also find the 2D and 3D files for the cover image, all of which are distributed under a Creative Commons license that permits non-restrictive free reuse.

If you’ve been gearing up to delve into game development, animation or 3D art, you’ll find all you need to get started right here. So jump right in and have fun learning something new :)




New Free 3D Course Coming Soon


“An Introduction to 3D with Lyndon Daniels” is a new online course for beginners to 3D Content Creation that will be coming out soon.

The course includes extensive documentation, quizzes and downloadable assets all for free (no registration required).

This course utilizes a research-based approach and encourages Self-Regulated Learning. You’ll be introduced to the main concepts that bind all modern-day, major 3D Content Creation Suites, whether commercial or free. The course utilizes the character created for the cover image as an example to guide your learning process through practical implementation, and all assets used in the course are provided for free.

You can view a short sample of the course at https://lyndondaniels.com/education/3D/



DepsGraph Granularity and Expected Bone Evaluations (in Blender)

Consistency through Usability

The Dependency Graph in Blender currently evaluates an Armature as a whole, rather than evaluating each Bone within the scene individually.

Although this works effectively for simpler rigs, when utilizing IK Splines within a rig setup this system can become somewhat tedious and less forgiving.

It can be argued that usability should become an even more important consideration when dealing with greater scene complexity. Learning a new system while creating a complex setup could in many ways contradict the principles of good usability. Of course, the expectations of the user are always a contestable topic in scenarios such as this, but in this particular case we already have an existing database of knowledge sources to draw from.

Abstraction of Low-Level Complexities

Blender, like many high-end 3D packages, presumably takes existing, habitual human-computer interactions into consideration when implementing features that propagate to the end user.

After all, there is no need to re-invent the wheel, particularly when the outcome of the interaction is not isolated in its scope of application.

However, the lack of finer granularity within the DepsGraph means that each bone constituting an Armature does not have its own node for individual evaluation at runtime. This is fine in itself, but the limitations of the system often become exposed to animators when existing transitional knowledge (from other 3D applications) contradicts how complex relationships of a similar type are correctly set up in Blender.

Let’s consider one of the most obvious artifacts resulting from this functionality, which can be seen through the Outliner. To be clear, the Outliner is not a visualization of the DepsGraph, but it is nonetheless about as close to one as you would ever want an animator to get (notwithstanding Aligorith’s proposal for a Rigging Dashboard, which would be far more ideal). Before such a feature can be implemented, however, consideration needs to be given to how a user visualizes an animation rig. A simple IK Spline setup, as demonstrated in this file, can result in 3 Armatures populating the scene. This kind of visualization might not intuitively represent what an animator expects or wants. So before visualizing the scene, let’s have a look at why 3 armatures are necessary.

Dependency Cycles Are More Likely To Occur In Complex Rigs

There will invariably be more benefit for an animator in the ability to transform multiple bones with a single controller associated with multiple complex relationships, as opposed to the conventions of a directed acyclic graph relationship. Setting up an IK Spline with intuitive controls in Blender attests to this sentiment.

The unfortunate side-effect of the conventional approach is that a bone being controlled by another object, which in turn has dependencies on another bone within the original armature, will likely result in a dependency cycle, and evaluation will be terminated unexpectedly. To rectify this, multiple armatures can be constructed with the aforementioned dependencies. However, visualizing this setup in the Outliner does not clearly represent the relationships between the different Armatures, let alone their relationship to the character they are controlling. Furthermore, exposing the complexity of this relationship to an animator could also be unnecessary.
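To make the failure mode concrete, here is a minimal bpy sketch of the kind of setup described above; the object and bone names (‘Armature’, ‘Controller’, ‘Bone_A’, ‘Bone_B’) are hypothetical:

import bpy

arm = bpy.data.objects['Armature']
ctrl = bpy.data.objects['Controller']

# The external controller follows Bone_A...
con = ctrl.constraints.new('COPY_LOCATION')
con.target = arm
con.subtarget = 'Bone_A'

# ...while Bone_B follows the controller. Bone_A and Bone_B never
# reference each other, but because the whole Armature is evaluated
# as a single node, Blender reports a dependency cycle regardless.
pcon = arm.pose.bones['Bone_B'].constraints.new('COPY_LOCATION')
pcon.target = ctrl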

Abstracting the complexity of a character, including its geometry, rig, transform and animation data, under a single node within the scene’s hierarchy (certainly by means of the Outliner) has the obvious benefit of maintaining a cleaner scene structure. Having a node for each bone within the DepsGraph could lead to a more intuitive approach for rigging a character, particularly for those transitioning from other 3D packages, as this would address the dependency cycle issue.

In conclusion, here are some examples of how a rig with a dependency cycle will not evaluate as expected. The image on the right has been forced to update, causing the transforms on the bones to match the animator’s expectations; the image on the left, however, depicts what occurs at runtime. As the scene is re-evaluated when a transform is invoked, the bones are forced to move to their expected positions; subsequently, the keyframe data cannot be corrected by conventional means.

Of course, these issues can be avoided if the user is aware of how Blender addresses more complex dependencies utilizing an IK Spline controller; the problem, however, is that this approach contradicts the premises on which good usability is based.




A Basic Guide To GitHub for Web Developers

Introduction

If you have been developing websites for a while, you might have felt at some point that there could be a better way of making changes to your site non-destructively, or of keeping a remote backup of your project from which you could restore your local working files in the event that something went horribly wrong.

Well, if dealing with these scenarios in a manual and somewhat laborious fashion has been irking you, then GitHub might have a solution that fits your needs.

What is GitHub?

GitHub is first and foremost a remote hosting service with an integrated set of tools for assisting developers via a version control system.
We’ll talk a little more about version control later but, for now, what is important is that we recognize that at its core GitHub provides developers with a remote location to store the projects they are working on. However, this differs substantially from a trivial cloud storage facility in that GitHub is built on Git. This raises the question: what is Git, and how does it make GitHub better?

Git

Git comprises much of the underlying functionality that GitHub presents in an easy-to-use Web-based interface.
Aside from the web hosting facility that GitHub offers its members (in free and paid versions), it also creates a harmonious relationship between local development leveraging Git functionality and the remote extension of that functionality to the web using GitHub.

Tools for Developers


The process of developing non-trivial software is generally not linear; the approach will often be iterative. For example, when developing a website you might like to try styling certain elements of your site differently. However, actually seeing these changes implemented in your site might require replacing your existing CSS with the modified version without knowing whether you are going to like the changes you make. This could be particularly destructive to your site, especially when the changes require modifications to the site’s structure or DOM, that is to say, multiple files would be affected by the change. You might make these changes nonetheless, only to realize that you preferred the site before the changes were implemented, but would still like to keep some other unrelated changes you made while updating the styling. Restoring your site to its previous state while keeping the other changes could in this case be extremely time-consuming, as you manually sift through file after file, one line of code after another, trying to identify and decide upon what should be kept and what needs to be reverted.

Git, and subsequently GitHub, implement a unique Revision Control System that addresses this problem. Simply put, for the purposes of this article, the Git Revision Control System (RCS) provides us with automation tools for:

  • storing, organizing and working with data locally,
  • identifying changes between different data states,
  • providing us with means of addressing these differences,
  • uploading data to GitHub,
  • and retrieving data from GitHub.

Distributed Revision Control

Consider this scenario: you are developing an application for a website and would like to experiment with integrating someone else’s code into your app, but you don’t want to risk breaking the working version of the app. At the same time, you would like to make sure that you can retain a remote backup of your working app while still permitting other developers the opportunity to work on or assist you with your project’s source code, documentation and so on.

Not all of the above criteria apply to every web app or website project, but at least one or more will likely be applicable to any software development project. This is where Git and GitHub can help to meet your requirements.

Each user that joins GitHub becomes a member who is responsible for maintaining their own repositories. These repositories are storage facilities for different versions of your project’s codebase.
Along with all the development tools you need for your project you will also need to have Git installed on your local development workstation.
Git is a command line tool with a very simple set of directives to get you up and running with a Distributed Revision Control (or Version Control) System project. Revision Control refers to managing your project’s different versions, for example version 1 with the original styling and version 2 with the styling modified (as per the example noted above). Git and GitHub’s type of Revision Control is Distributed because each user is responsible for their own repositories, both locally and remotely. Subsequently, each version of the project will mirror its previous version, with additional changes where necessary. This alleviates reliance on accessing remotely stored data, or on data that might have become corrupt locally.

Solutions for Different Software States

Git refers to the software comprising our projects as being in one of three different states: Modified, Staged or Committed.

Initially, when we start a project, it is in the Modified state; continuing to work on the codebase does not change its Git-related state, i.e. the software remains in the Modified state. However, in order for Git to track changes to the project and provide us with the functionality that allows us to revert to previous versions of the project (among other features), we need to make a commit, which is simply a directive issued to Git telling it to track the current status of the project. But before making a commit we need to tell Git which files we would like to add to the commit. Files that are added to the queue to be committed will be analysed for changes and placed in the Staging Area.


Let’s Get Started…

When our software is in its initial state we have just begun working on our project; subsequently, we will be working locally and our project will remain in the Modified state. Once you are ready to make a commit (and it is recommended that you start committing from the very beginning of the development process, and regularly thereafter), navigate to the root directory of your project, e.g.

cd myProject

Then initialize it for usage with Git.

git init

At this stage all you’ve done is start the development process and set up your project for usage with Git, which has subsequently created a .git sub-directory in your project’s root folder. This directory is where Git stores all of the pertinent information relating to the different versions of your project.

Assuming you already have files in your project’s root directory, you might want to add them to the Git Staging Area so that they can later be committed.

To add all existing files and those within sub-directories of your main project to the Staging Area, simply run the following command,

git add .

This command will examine the contents of your project and check for files that are in a different state to those represented within a committed state. If this is the first time you are running the command, you will subsequently have no committed data. Therefore all the files within your project will be added to the Staging Area, ready to be committed.

To make a commit, simply run the following command,

git commit -m 'Type Your Message Here'

Making a commit must follow the sequence of directives as stipulated above. In the git commit command, the -m flag has been used followed by a string. Replace the text within the string with something relevant to your commit, for example ‘ver 0.01’.

At this stage you can run a status check to ensure that the commit was successful.

git status

Conclusion

In this post we looked at what Git and GitHub are and how they can assist us with building websites and other software. We also setup a local repository and made our first commit. When I next return to this topic we’ll have a look at how to examine the differences between versions of the same file and how to resolve those differences. We’ll also look at branching and merging and finally uploading our project to GitHub.



Vampiro WIP VII: Fire and Smoke in Cycles

Creating a Cycles Fire and Smoke Shader

Currently (as of Blender 2.71) the Cycles Rendering Engine lacks a default fire and smoke Shader for volumetric rendering. As a result, this post covers a simple setup for a Shader that can control various useful properties determining the look of rendered fire and smoke. This methodology is based on .

Properties of the Shader

  • The Shader can logically be divided into a component that addresses Smoke and another that addresses Fire, both of which are composited onto a single volumetric entity.
  • At its simplest level, the Density of both the Fire and the Smoke can be controlled individually, with a single value representing each respective component.
  • The color of both components can be adjusted individually.
  • Fire has the ability to illuminate surrounding elements within a scene, utilizing Cycles’ physically accurate renderer.


Scene Setup


Select the Default Cube and go to

Object -> Quick Effects -> Quick Smoke

A Domain is created around the Cube, which subsequently becomes the Emitter object in the simulation.

Scale the Domain up to encompass a larger area, big enough to engulf the volume of the Fire and Smoke effect.

Select the Emitter.

In the Physics View, under Flow Type, choose Fire and Smoke.
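For those who prefer to script the scene setup, here is a rough bpy equivalent of the steps above, assuming Blender 2.7x defaults (Quick Smoke names the emitter’s modifier ‘Smoke’ and leaves the new Domain as the active object):

import bpy

# Run Quick Smoke on the selected Default Cube.
bpy.ops.object.quick_smoke()

# Scale the resulting Domain up to engulf the fire and smoke volume.
domain = bpy.context.active_object
domain.scale = (2.0, 2.0, 3.0)

# Set the Emitter's Flow Type to Fire and Smoke.
emitter = bpy.data.objects['Cube']
emitter.modifiers['Smoke'].flow_settings.smoke_flow_type = 'BOTH'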


Shader Setup


Playing through the animation at this point should render fire and smoke in the 3D View.

Switch to the Cycles Rendering Engine, usually located at the top of the screen in Blender’s Info View.


Select the Domain and, in its Material Panel under Surface, click the Use Nodes button.


Volume Shading

Blender will have created several default nodes (in the previous step) for rendering a default solid surface type. This is not applicable to volumetric rendering and subsequently needs to be adjusted.


With the Domain selected, in the Node Editor View delete the Diffuse BSDF Node.

Add 3 new Shader Nodes: Volume Absorption, Volume Scatter and Add.

Composite the two Volume Shaders within the Add Shader, which is subsequently output to the Volume Channel of the Material Output Node.
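As a scripted sketch of the same node setup (assuming the Domain kept Blender’s default object and node names):

import bpy

mat = bpy.data.objects['Smoke Domain'].active_material
nt = mat.node_tree

# Delete the default Diffuse BSDF Node.
nt.nodes.remove(nt.nodes['Diffuse BSDF'])

# Add the three Shader Nodes and composite them for volume output.
absorb = nt.nodes.new('ShaderNodeVolumeAbsorption')
scatter = nt.nodes.new('ShaderNodeVolumeScatter')
add = nt.nodes.new('ShaderNodeAddShader')
out = nt.nodes['Material Output']

nt.links.new(absorb.outputs['Volume'], add.inputs[0])
nt.links.new(scatter.outputs['Volume'], add.inputs[1])
nt.links.new(add.outputs['Shader'], out.inputs['Volume'])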


Switching the 3D View to Rendered will reveal the effects of utilizing the Domain object’s Volume Channel for Material rendering.


Smoke Mapping

As you will have noticed, the Domain renders as a volume but the Shader is currently still mapped to the original coordinates of the Cube object. In order to map the Shader to the Smoke and Fire Volume, add an Attribute Node (found under the Input Nodes Group) and set its Name field to “density”.


Connect the Node’s Factor (Fac) Output to both Volume Nodes’ Density Inputs.
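In script form, continuing the previous sketch (default node names assumed):

import bpy

nt = bpy.data.objects['Smoke Domain'].active_material.node_tree

density = nt.nodes.new('ShaderNodeAttribute')
density.attribute_name = 'density'

# Drive both Volume Nodes' Density Inputs from the smoke simulation.
nt.links.new(density.outputs['Fac'], nt.nodes['Volume Absorption'].inputs['Density'])
nt.links.new(density.outputs['Fac'], nt.nodes['Volume Scatter'].inputs['Density'])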


Playing through the animation from the first frame will now render the Volume as expected.


Creating and Mapping Fire

As the fire will emit light, an Emission Shader Node will be used to simulate this effect.

Create another Add Shader Node and an Emission Shader Node.


Connect the Output of the Emission Shader Node to the first Input of the new Add Shader Node. Disconnect the Output of the Add Shader Node created in Step 3 from the Material Output Node and composite this Output within the remaining Channel of the newly created Add Shader Node. Then connect the Output of the newly created Add Shader Node to the Volume Channel of the Material Output Node.


This will revert the Domain to erroneous mapping. Creating the correct mapping for the Fire is a similar process to that previously covered with regards to Smoke.

Create another Attribute Node and enter “flame” in its Name input field. Connect this Node’s Factor (Fac) Output to the Emission Node’s Strength Input Channel.

Fire and Smoke can now be controlled as two separate entities within the same Shader.
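The equivalent scripted steps, again assuming the default node names from the earlier sketches:

import bpy

nt = bpy.data.objects['Smoke Domain'].active_material.node_tree
out = nt.nodes['Material Output']

flame = nt.nodes.new('ShaderNodeAttribute')
flame.attribute_name = 'flame'
emit = nt.nodes.new('ShaderNodeEmission')
add2 = nt.nodes.new('ShaderNodeAddShader')

# Fire strength is driven by the simulation's flame data.
nt.links.new(flame.outputs['Fac'], emit.inputs['Strength'])
nt.links.new(emit.outputs['Emission'], add2.inputs[0])

# Re-route the smoke Add Shader through the new Add Shader, then to Volume.
nt.links.new(nt.nodes['Add Shader'].outputs['Shader'], add2.inputs[1])
nt.links.new(add2.outputs['Shader'], out.inputs['Volume'])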


Color, Density and Intensity

Tweaking settings through the 3D View at this stage will be useful to determine an approximation of what the final rendering will look like as a quick preview. However, for best results use Blender’s Render option (F12 on the Keyboard by Default) to get an accurate representation of the final render.

Color


Control the color of the fire by adding a ColorRamp Node.


Connect the “flame” Attribute Node’s Color Output to the ColorRamp Node’s Fac input and connect the ColorRamp’s Color Output to the Emission Node’s Color Input.

The Smoke’s color can be controlled with a similar setup.

Intensity

Use a Gamma Node to control the intensity of the Fire by intercepting the color throughput between the ColorRamp and the Emission Nodes.


Controlling Density

The Density of either the Fire or the Smoke can be controlled with a Brightness/Contrast Node by adjusting the Contrast value.


For the Smoke, connect the Fac Output of the “density” Attribute Node to the Color Input Channel of the Bright/Contrast Node.

Connect the Color Output Channel of the Bright/Contrast Node to the corresponding Density Inputs of the Volume Nodes.
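A scripted sketch of all three controls follows; the node names ‘Attribute’ (density) and ‘Attribute.001’ (flame) assume the auto-naming from the earlier sketches:

import bpy

nt = bpy.data.objects['Smoke Domain'].active_material.node_tree

# Color and Intensity: flame -> ColorRamp -> Gamma -> Emission.
ramp = nt.nodes.new('ShaderNodeValToRGB')   # the ColorRamp Node
gamma = nt.nodes.new('ShaderNodeGamma')
nt.links.new(nt.nodes['Attribute.001'].outputs['Color'], ramp.inputs['Fac'])
nt.links.new(ramp.outputs['Color'], gamma.inputs['Color'])
nt.links.new(gamma.outputs['Color'], nt.nodes['Emission'].inputs['Color'])

# Density: density -> Bright/Contrast -> both Volume Density Inputs.
bc = nt.nodes.new('ShaderNodeBrightContrast')
bc.inputs['Contrast'].default_value = 2.0
nt.links.new(nt.nodes['Attribute'].outputs['Fac'], bc.inputs['Color'])
nt.links.new(bc.outputs['Color'], nt.nodes['Volume Absorption'].inputs['Density'])
nt.links.new(bc.outputs['Color'], nt.nodes['Volume Scatter'].inputs['Density'])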


Conclusion

Many more adjustments could be added to the Fire and Smoke to complement various rendering styles. This setup for a Fire and Smoke Shader in Cycles provides a basic component upon which many additional Nodes can be implemented; experimentation is key in this case.

Download the file used in this post here.



Minotaur XVI: Control Rig

Creating Controls for an Animation Rig

Control rigs are used to abstract the complexities of an Animation rig, which in turn transforms a Pose rig. By simplifying the controls of a character’s rig, the animator benefits from learning how to use the rig quickly while Blocking Out a Sequence, and from applying this knowledge when Refining Poses with controls that follow the same logical set-up.

Unified Logic for Animation Controls

It’s important that an animator does not spend a lot of time figuring out how to effectively use a rig to target Key Poses for a specific character, particularly given that each character’s rig will need to differ (to some degree) for believable Mesh Deformation.

As a result, a unified, logical approach when determining how to set up a character’s controls can assist the animator with understanding how to pose a character from Blocking Out to Refining/Finessing Sequences. It is therefore more significant for a rig to bias a single, specific transformation for animation purposes than to utilize a combination of animatable Translations, Rotations and Scales that will likely keep the animator guessing while becoming accustomed to the rig.


One Control for all Controllers

The visual representation of the Control rig is the animator’s first point of contact when creating animation for a character. It is therefore essential that the controls do not require an explanation, a tutorial or a verbose software manual. The intent of each Control should be clearly conveyed in its appearance and that appearance should be consistent and unobtrusive for the animator.

  • Isolating controls into groups, with Layers that logically cascade into more detailed, smaller groupings, can be used to communicate rig structure and hierarchy clearly to an animator.
  • Simply utilizing the location of a Control can communicate Controller Functionality to an animator, i.e. what part of the mesh is affected by that specific control. This can create a sense of familiarity for any animator that is aware of how a marionette works.
  • Relying on representing control sets with artistic icons can easily lead to subjective interpretation from one animator to the next, and should subsequently be avoided.

A single Control Type on a rig that is Translation-biased is effectively the only explanation an animator should require.

A Controller for a Translation-biased Rig

Animation Rigs consist of IK controllers, Constraints and Dummy bones. They are used to drive the Rotations of the Bones that comprise the Pose rig, which in turn causes the character’s geometry to deform.
Animating with IK (Inverse Kinematics) is preferable for many animators as multiple Bones can be controlled with a single Controller and (more importantly) it is easier to establish contact points with a surface during animation.
However, the bones used in an IK chain can often end up obscuring a mesh’s visibility. This is also often the case with most rigs consisting of general Animation Constraints, such as Tracking and Property Duplication Constraints, which are essential parts of any medium-to-complex Animation Rig.

A Control Rig built on top of an Animation Rig can be used to reduce the viewport clutter that prevails when setting Constraints for an Animation rig.

A Translation-biased Control Rig communicates to the animator that the rig favours Translations as opposed to other transforms for animation purposes.


The following section outlines the setup procedure required in Blender to reproduce the main Control used throughout the Minotaur’s Rig. When the Armature is not selected, the Control’s visibility is virtually non-existent, keeping the animator’s view of the character unobstructed. Subsequently, the animator is not forced to hide the rig during Playblasts or OpenGL Preview Renders.
As a rig should never be exclusively Translation-based, this setup does not affect animating between IK and FK, as the animator would normally expect the ability to overwrite Animation and Control Rig precedence when necessary.

Setup for a Translation-biased, Unobtrusive and Persistent Control for Control Rigs

Setup consists of two simple controls that make up the Custom Shapes required throughout the rig. The shapes resemble a Plus and a Minus sign. They should have very low poly counts, between 2 and 6 faces. Adjacent planes work best, as the screen-space they require will be minimal.

The Minus shape is used to point to the area that is affected by the Translation, while the Plus is used as the Selection Handle and should be the main component the animator will subsequently need to interact with. Keyframes for Translation will typically be set on the Selection Handle (AKA the Plus shape).

An animation rig can be set up as per usual, making use of IKs, constraints, expressions and drivers. This example uses a two-chain IK constraint. The component selected in the image will become the Selection Handle; it will therefore not require a Parent.

With the Target bone selected, change its Display Property to the custom Selection Handle (Plus shape). The option to use a Custom Shape in place of a Bone’s default representation can be found in the Bone’s Object Properties View, under the Display section.

Now it’s time to add the Bone that will be used to point to the area affected by the transform. In Edit Mode, duplicate the Bone used as the Selection Handle. Then Parent it to the Bone that Deforms the area affected by the Transform. Use the “Connected” Parenting option.

NB. Currently Blender (ver 2.70) requires that the setup follow the order of Steps as noted in this guide. Having the Bone that was created in Step 4 (Minus Shape) connected prior to setting up an IK Constraint will often yield undesirable Rotations in the chain.

Display the duplicate Bone as the Minus Shape. Use the same techniques noted in Step 3 to set the new Bone’s Display Properties to the Custom Shape resembling a Minus sign.

Make the Minus sign point and stretch to the Selection Handle. Add a Stretch To Constraint to the Minus Bone and target it to the Selection Handle (Plus Bone). Adjust the Rest Length Property and set Volume Preservation to None.
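For reference, Steps 3 to 6 can also be sketched in bpy; every object and bone name here (‘MinotaurRig’, ‘ctrl_plus’, ‘ctrl_minus’, ‘hand_IK’, ‘hand_pointer’) is hypothetical:

import bpy

rig = bpy.data.objects['MinotaurRig']
plus = bpy.data.objects['ctrl_plus']    # low-poly plus-sign mesh
minus = bpy.data.objects['ctrl_minus']  # low-poly minus-sign mesh

# Step 3: display the IK target bone as the Selection Handle.
rig.pose.bones['hand_IK'].custom_shape = plus

# Step 5: display the duplicated, connected bone as the Minus shape.
pointer = rig.pose.bones['hand_pointer']
pointer.custom_shape = minus

# Step 6: make the Minus sign point and stretch to the Selection Handle.
st = pointer.constraints.new('STRETCH_TO')
st.target = rig
st.subtarget = 'hand_IK'
st.rest_length = 1.0
st.volume = 'NO_VOLUME'  # Volume Preservation: None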

Finally, move the Bones cluttering the view to other Layers.

Use the Properties View to move Bones that are not needed for animation purposes out of sight.

The final Setup should only require that the Control Rig and Deformable Mesh are visible for the animator.




Vampiro WIP VI : Masking in Cycles

Creating Semi-Transparent, SSS, Veiny Wing-Skin

The Cycles Rendering Engine in Blender has many unique qualities that tend to set it apart from other non-realtime 3D renderers, such as the ability to visualize a Render within the 3D Viewport, an intuitive visualization of a Material Node Network that also extends into Post-processing, and a carefully considered balance between Biased and Non-biased rendering characteristics. However, what really seems most intriguing about Material/Shader set-ups in Cycles is probably one of its most commonly underrated features: its unique approach to isolating Surface Geometry for Shader set-ups.


Masks in Cycles serve the purpose of isolating parts of an image during Compositing or, as in the case of the image above, isolating parts of the geometry that make up a model’s surface.
If you have ever worked with Layer Masks in Photoshop or the GIMP, the concept might be easy to imagine. Nonetheless, the process of setting up a Mask in Cycles is so trivial that it warrants very little explanation. This could account for why this approach, whose simplicity disguises a full-featured arsenal of infinite, pixel-accurate Shader combinations, seems only to be reserved for passing asides in numerous online tutorials discussing Shader setups.


The above image depicts a typical set-up using masking with Cycles Nodes. Although this set-up results in a simple Shader that fades from one Diffuse Color to another, the simplicity and level of control involved are principally similar when generating Shaders with Cycles for more complex Networks.
For example, the following image depicts how this masking technique can be used to isolate the veins on the Vampiro’s wings, which require a different Shader to the rest of the wing.


In this case a Mask is used to balance a seemingly paradoxical requirement:

  • In order to separate the Veins from the rest of the wing a different color is used to boost their presence.
  • However, the deviation from the rest of the wing’s main color palette has the side-effect of causing the veins to look unintegrated in their default state.

As a result masking provides an ideal solution which allows for blending one Shader with another and controlling the effect with varying levels of grey, while maintaining modular editability within the Shading Network.


Although this mask might take some time to create, its application within the Shading Network allows a great deal of control over the final result and can be used in varying combinations within other sub-shader networks.
As mentioned earlier, the simplicity of this masking technique seamlessly underlies the complexity of its ability to create a relationship between potentially unrelated physical properties, such as Transparency and SSS (Sub-Surface Scattering), Glossiness and NPR (Non-Photo-Realistic renders), to name a few of the infinite possibilities of Shading Network combinations. It is subsequently worth bearing in mind that although the Shading Network being discussed combines two SSS Shaders, blending Shaders of a similar type is certainly not an inherent limitation of the technology; it is used here simply as an approach to an aesthetic.
In this case the mask’s prominent white areas will boost the presence of the veins’ SSS Shader, and the smaller veins (in varying scales of grey) serve the purpose of reintegrating the veins back into the SSS Shader consisting of the wing’s main color palette.


The above image depicts a single Texture node that is used for the Color input on both SSS Shaders; however, the color is modified for the veins.
A Mask is then used to create a blend between the differing color palettes and physical properties of the two Shaders via a Mix Shader. This unique Shader, in itself, provides an output that can subsequently be used for combining the resultant Shader within other sub-shader networks.
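As a minimal bpy sketch of this network (the material, image and node names here are assumptions, not the production file’s values):

import bpy

mat = bpy.data.materials['WingSkin']  # hypothetical material name
mat.use_nodes = True
nt = mat.node_tree

tex = nt.nodes.new('ShaderNodeTexImage')       # shared wing colour texture
tex.image = bpy.data.images['wing_color.png']  # assumed already loaded
mask = nt.nodes.new('ShaderNodeTexImage')      # greyscale vein mask
mask.image = bpy.data.images['wing_mask.png']

curves = nt.nodes.new('ShaderNodeRGBCurve')    # re-colours the veins
sss_wing = nt.nodes.new('ShaderNodeSubsurfaceScattering')
sss_vein = nt.nodes.new('ShaderNodeSubsurfaceScattering')
mix = nt.nodes.new('ShaderNodeMixShader')
out = nt.nodes.new('ShaderNodeOutputMaterial')

# One texture feeds both Shaders; the vein branch modifies its colour first.
nt.links.new(tex.outputs['Color'], sss_wing.inputs['Color'])
nt.links.new(tex.outputs['Color'], curves.inputs['Color'])
nt.links.new(curves.outputs['Color'], sss_vein.inputs['Color'])

# White mask areas favour the vein Shader; greys re-integrate it.
nt.links.new(mask.outputs['Color'], mix.inputs['Fac'])
nt.links.new(sss_wing.outputs['BSSRDF'], mix.inputs[1])
nt.links.new(sss_vein.outputs['BSSRDF'], mix.inputs[2])
nt.links.new(mix.outputs['Shader'], out.inputs['Surface'])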

In conclusion, the benefit of this method, besides the simplicity of the approach, is an editable Mask that both boosts and blends the offset color from and into the main texture colors. Furthermore, editing the color of the Veins through the RGB Curves Node does not require external software, and as the process is internal there is no tedious reintegration required within the Shading Network.



Vampiro WIP V : Material Nodes

Creating a Generic, Night-time Skin Shader

The obvious advantage of a Generic Shader is the ability to re-use the Shader while still maintaining applicable customization properties, such that a variety of effects can be achieved from a single Shader to simulate various natural phenomena.


The Node Editor in Blender provides an efficient interface for creating Material Shaders and Post-Processing Renders.
In this article I will address the techniques used to create a Skin Shader that can easily be transferred from one model to another, modified, and still retain consistency for properties related to the scene’s entirety.
The above image demonstrates a Generic Skin Shader applied to a character that will be rendered within a night-time scene. The same Skin Shader has been applied to the character’s head and limbs, while masking (within the Material Node Network) provides the ability to customize unique properties on a per model/component basis.
This Generic Skin Shader consists of properties that make it unique to this image, such as a blue glow that will invariably prevail in most highlights due to the final output being a low-light scene captured at night. Factors such as this limit the Shader’s scope of usage outside of night-time renders, but building them into the Shader’s development extends a consistent property across characters and ultimately serves the purpose of unifying the look of skin at night.
There are several main components of this Shader that should be editable as individual sets, each consisting of a single-Node or multi-Node group. For example, the Normal Map Group consists of a single Image Texture Node, as opposed to the Sub-Surface Scattering Group, which consists of several SSS Nodes mixed with other Shader Nodes to produce the final result.
Relevant groups are listed below:

  • Color Texture
  • Normal Map
  • Ambient Occlusion
  • Glossiness
  • Reflection
  • Sub-Surface Scattering
  • Masking

Material Node Editor for a Base Shader


Primarily, a Base Shader should remain simple but flexible.
The above image indicates how the Shader is easily broken up into smaller, manageable groups that can be re-combined in ways applicable to the specific needs of the model. For example, the order in which the groups are mixed into the Shader does not have to remain consistent; the Fresnel group used to simulate reflections can typically be mixed into the network towards the end, for a less subtle effect as occurs at night.

In this particular case, simple yet prominent changes elicited within Groupings consisting of Texture Maps (at the beginning of the Network) will yield the most visible and relevant results. The creation of the Normal, Color and Ambient Occlusion Texture Maps for each model, as noted in previous posts, is reduced to single Nodes within the depicted Node Network. With these maps in place, applying the Generic Skin Shader to another model is simply a case of duplicating the Shader and swapping out the applicable Maps (Color, Normal, AO) for the target model’s unique texture maps. The benefit of this method is a very quick setup for a Rendering that will produce a close simulation of the final result.


In Blender terms this is simply a case of selecting the target model you would like to transfer the Shader to, then selecting the Shader from the 1.  Browse Material to be Linked list and clicking the 2. Add Material button in the Material Properties View.


With the Base Shader copied to the new model/component, replacing the applicable textures (in this case Color, Normal and AO) can be achieved by opening the Node Editor View, locating the Texture Node, and clicking the Single User Copy button in order to create a new data block that inherits the previous Node’s settings. Then browse to locate the applicable Texture file for the target model/component.
This simply ensures that the changes you make within this Node are not reflected in previously used Shaders.
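The same transfer can be sketched in bpy; the material, object, node and file names below are assumptions for illustration:

import bpy

src = bpy.data.materials['GenericSkinNight']
target = bpy.data.objects['Damsel_arms']

# Duplicate the Shader (the scripted equivalent of a Single User Copy)
# so that edits do not propagate back to previously used Shaders.
new_mat = src.copy()
target.active_material = new_mat

# Swap the Color map for the target model's texture; repeat for Normal and AO.
tex = new_mat.node_tree.nodes['Color Map']  # hypothetical node name
tex.image = bpy.data.images.load('//textures/damsel_arms_color.png')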




Vampiro WIP IV: High Frequency Detailing

Creating Normal Maps

High Frequency Detailing differs from High Resolution Sculpting in that the process will generally require a completed Medium to High Resolution Sculpt whose topological structure can be retained. Whereas High Resolution Sculpting is used to produce details that require geometric displacement (such as wrinkling and lower-level detail), High Frequency Detailing serves the purpose of displacing light rays (such as with pores in a character’s skin), which makes the effect more subtle as it will only be visible in sufficiently illuminated areas. Retaining a model’s topological structure while creating High Frequency Details should also preserve the underlying High Resolution Sculpt detail and the model’s UV Layout.


The image above depicts High Frequency Detail (HFD) applied at a character’s skin level. This level of detail has been achieved by sculpting with a modifier that retains the geometry loops that comprise the model’s topology, despite geometric subdivision. As the model’s High Resolution Sculpting should be completed at this stage, the purpose of using this modifier is multi-part: it will not only preserve details from High Resolution Sculpting (HRS) but also preserve UV layouts that were performed on the underlying geometry. It is therefore significant that geometry displacement deeper than wrinkling is not targeted for High Frequency Detailing, as this will potentially result in distortion of UVs and HRS detail loss.


In the case of Blender, model topology preservation while sculpting can be achieved with a Multires Modifier.
Closer inspection of the detailing in the above image will reveal excessively boosted details in some areas (such as around the eyes and under the nose). It is important that details are boosted in these regions during High Frequency Detail Sculpting, as much of the detail produced at this level is prone to becoming washed out, either by a lack of sufficient illumination or by Sub-Surface Scattering of light rays during render-time. Boosting details at this point increases the chances of their visibility in the final render; it is nonetheless recommended that this technique is used sparingly, in combination with the Multires and Normal Map method for HFD as noted below.

Multires in Conjunction with Normal Mapping for Rendering HFD

The creation of a set of custom-made brushes for sculpting HFD should be determined by the requirements of the specific models. Nonetheless, there are certain brushes I’ve found to be consistent for many human-based characters, such as skin pore brushes, cracked lip brushes etc. Below are several custom brushes used for sculpting this character’s High Frequency Details; you are free to download and use these brushes in your own projects, as permitted by CC0 (Creative Commons Zero) licensing.


Skin on Forehead and wrinkles around eyes (above) with finger skin (below).


Detail context (below)


When sculpting of High Frequency Detail is completed, a Normal Map can be Baked from the result.

A Normal Map should, however, not replace sculpted HFD but provide an additional level of control over it, with the HFD ultimately controlled by the Multires modifier in conjunction with the baked Normal Map.
When the Normal Map is applied to the appropriate texture channel (typically the Normal Map Channel) of the model’s material, and with the model’s modifier stack retaining the HFD in a Multires modifier, an additional level of control can be used to balance the model’s render times against the details required for the final shot.
In other words, this method provides an additional degree of control over the model’s Level Of Detail (LOD), whereby mixing the results can decrease render times (by reducing Multires subdivisions at render-time) for areas where this detailing does not prevail in the final render.
Use of the Multires modifier at this stage is also beneficial in terms of keeping viewport interactivity at realtime speeds, as the modifier’s Preview setting can be set to 0.
Normal mapping can also render significantly faster than the excessively sub-divided geometry required for HFD.
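As a small bpy sketch of this LOD balance (the object name and level values are assumptions):

import bpy

head = bpy.data.objects['DamselHead']
mr = head.modifiers['Multires']  # holds the sculpted HFD

mr.levels = 0          # Preview: base resolution for realtime interactivity
mr.render_levels = 4   # full HFD subdivision only at render-time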

This set-up is particularly useful during animation, when the same model is required for Wide to Extreme Close-ups (ECU), but also useful when the effects of the Multires Modifier, Normal Mapping and High Resolution Sculpting are mixed for static imagery.



Vampiro WIP III : Prepping Models for a Color Pass

Retaining Quality and System Integrity

As the models in this image will be used for a still frame, the topology contributing to their appearance will not require deformation for the purposes of animation. As a result, the geometry that determines each model’s topology can be Decimated, as opposed to retopologized, to maintain Realtime Viewport interactivity during texturing.


The above image is a test render depicting the textures applied to certain models for a Color Pass. Ambient Occlusion, Transparency, Sub-Surface Scattering, Glossy and Material Properties have been excluded from this Pass. Working on an image such as this in multiple Passes will reduce system load and keep Viewport interactivity as close to Realtime as possible.

Further improvements in reducing system overheads can be achieved by referencing High Resolution geometry externally.
This can be achieved by Decimating a duplicate of the model, then using a ShrinkWrap modifier to externally re-target a Subdivided version of the Decimated model to the original High Resolution model created from sculpting.

If we were to have a look at that statement more practically in Blender terms and propose it in the form of a question,

How Do We Get a High Resolution Static Model to Retain its Detail But Still Provide Realtime Viewport Interactivity for Texturing?

1. Complete Sculpting and Save Separate Files


The above image depicts the Vampiro character’s top, which was created by sculpting and consists of approximately 4 million triangles.


Of course, many modern computers used for Sculpting can still handle a load of this type, but bear in mind that this is only one component of many that make up the Render as a whole. When the rest of the components are added to the scene, interactivity in the Viewport (such as Panning, Dollying and Tumbling the camera) will drop to unusable and unstable levels.

It’s also worth noting that when this character is assembled the file size, as it is physically read by Ubuntu, equates to approximately 717MB.


The implication of working with a file this large is that it will consume physical disk space rapidly (particularly as a result of incremental saving) and slow down realtime interactivity by consuming system RAM, which is important when texturing as multiple applications need to be open simultaneously. In severe cases it could result in an unstable system reverting to Swap Space in order to compute simple user requests such as Panning, Dollying or Tumbling the 3D Viewport, which make up only a small part of the texturing process and exclude cloning and painting, amongst various other system-intensive tasks.

If Sculpting on the model is completed at this stage, the model can be saved in a separate file. This is useful in terms of not increasing the file size any further while still being able to reference the high resolution model from another optimised file.

2. Duplicate, Decimate and Apply


The above image depicts a duplicate of the sculpted model that has been saved in another, separate file. This model has then had its geometry reduced with the Decimate modifier, resulting in close to a 99% reduction in polygons.
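In script form, this step might look as follows (assuming the duplicate is the active object):

import bpy

dup = bpy.context.active_object
dec = dup.modifiers.new('Decimate', 'DECIMATE')
dec.ratio = 0.01  # keep roughly 1% of the original polygons
bpy.ops.object.modifier_apply(modifier='Decimate')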


However, this reduction has resulted in detail loss, particularly in areas where surface curvature appears smooth in the High Resolution model. Blender can compensate for this loss, without reducing system performance, by referencing the High Resolution model externally. This has the added benefit of not increasing the optimised working file’s size by a factor of the High Resolution file’s size, thereby allowing for incremental saving of the working file with reasonable results.

3. Link Externally


From the file containing the Decimated model, the Decimate Modifier can then be Applied.
On a new layer, the High Resolution model can then be imported into the optimised file using the Link command. This does not physically import the model into the current scene but adds a reference to it instead. A result of this operation is that the High Resolution file cannot be moved from its current location on disk, or the Link between the two files will be broken.


Saving this file with both High and Low Resolution models in a single unit will result in a much smaller file size.


4. Subdivide and ShrinkWrap for Render-time

With the High Resolution model referenced within the current scene, you will now be able to see the model in the Outliner and subsequently use this model as a Target for the Low Resolution model.


A Multires Modifier can then be added to the Low Resolution model in order to subdivide the model at Render-time, and not during the Preview stage, so as to retain Realtime Viewport interactivity. It has also been an observation of mine that this method reduces render times significantly.
Following the Multires modifier is the Shrinkwrap Modifier that is used to target the High Resolution model.
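A rough bpy sketch of this step, assuming the 2.7x API and hypothetical object names:

import bpy

low = bpy.data.objects['vamptop_low']
high = bpy.data.objects['vamptop_high']  # the Linked High Resolution model

mr = low.modifiers.new('Multires', 'MULTIRES')
bpy.context.scene.objects.active = low
for _ in range(3):
    bpy.ops.object.multires_subdivide(modifier='Multires')
mr.levels = 0          # Preview stays at base resolution for realtime speed
mr.render_levels = 3   # subdivide only at Render-time

sw = low.modifiers.new('Shrinkwrap', 'SHRINKWRAP')
sw.target = high       # re-target the subdivided mesh to the sculpted detail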


5. Conclusion

Subsequently there is no need to work on the High Resolution model further, nor does Blender permit further object- or sub-object-level editing of the Linked Model. Such edits should be performed on the original file, and the changes will be reflected in the optimised file effectively.

The result of this preparation is that the Low Resolution model can be UV unwrapped, Textured and Rendered with Realtime performance and a reasonable geometry count, and used in combination with other models and components without reducing system stability.
