Minotaur XVI: Control Rig

Creating Controls for an Animation Rig

Control rigs are used to abstract the complexities of an Animation rig, which in turn transforms a Pose rig. By simplifying the controls of a character’s rig, the animator can learn how to use the rig quickly while Blocking Out a Sequence and apply this knowledge when Refining Poses with controls that follow the same logical set-up.

Unified Logic for Animation Controls

It’s important that an animator does not spend a lot of time figuring out how to effectively use a rig to target Key Poses for a specific character, particularly given that each character’s rig will need to differ (to some degree) for believable Mesh Deformation.

As a result, a unified, logical approach to setting up a character’s controls can assist the animator in understanding how to pose a character from Blocking Out to Refining/Finessing Sequences. It is therefore more effective for a rig to bias a single, specific transformation for animation purposes than to utilize a combination of animatable Translations, Rotations and Scales that will likely keep the animator guessing while becoming accustomed to the rig.


One Control for all Controllers

The visual representation of the Control rig is the animator’s first point of contact when creating animation for a character. It is therefore essential that the controls do not require an explanation, a tutorial or a verbose software manual. The intent of each Control should be clearly conveyed in its appearance and that appearance should be consistent and unobtrusive for the animator.

  • Isolating controls into groups with Layers that logically cascade into smaller, more detailed groupings communicates rig structure and hierarchy clearly to an animator.
  • Simply utilizing the location of a Control can communicate Controller Functionality to an animator, i.e. what part of the mesh is affected by that specific control. This can create a sense of familiarity for any animator who is aware of how a marionette works.
  • Representing control sets with artistic icons can easily be interpreted subjectively, differing from one animator to the next, and should therefore be avoided.

A single Control Type on a rig that is Translation-biased is effectively the only explanation an animator should require.

A Controller for a Translation-biased Rig

Animation Rigs consist of IK controllers, Constraints and Dummy bones. They are used to drive the Rotations of the Bones that comprise the Pose rig, which in turn causes the character’s geometry to deform.
Animating with IK (Inverse Kinematics) is preferable for many animators as multiple Bones can be controlled with a single Controller and (more importantly) it is easier to establish contact points with a surface during animation.
However, the bones used in an IK chain can often end up obscuring a mesh’s visibility. This is also often the case with most rigs consisting of general Animation Constraints, such as Tracking and Property Duplication Constraints, which are essential parts of any medium-to-complex Animation Rig.

A Control Rig built on top of an Animation Rig can be used to reduce the viewport clutter that results from setting up Constraints for an Animation rig.

A Translation-biased Control Rig communicates to the animator that the rig favours Translations as opposed to other transforms for animation purposes.


The following section outlines the setup procedure required in Blender to reproduce the main Control used throughout the Minotaur’s Rig. When the Armature is not selected the Control’s visibility is virtually non-existent, keeping the animator’s view of the character unobstructed. Subsequently the animator is not forced to hide the rig during Playblasts or OpenGL Preview Renders.
As a rig should never be exclusively Translation-based, this setup does not affect animating between IK and FK, as the animator would normally expect the ability to override Animation and Control Rig precedence when necessary.

Setup for a Translation-biased, Unobtrusive and Persistent Control for Control Rigs

Step 1. Setup consists of two simple controls that make up the Custom Shapes required throughout the rig. The shapes resemble a Plus and a Minus sign. They should have very low poly counts, between 2 and 6 faces. Adjacent planes work best as the screen-space they require will be minimal.

The Minus shape is used to point to the area that is affected by the Translation, while the Plus is used as the Selection Handle and should be the main component the animator will subsequently need to interact with. Keyframes for Translation will typically be set on the Selection Handle (AKA the Plus shape).

Step 2. An animation rig can be set up as per usual, making use of IKs, constraints, expressions and drivers. This example uses an IK constraint with a chain length of two. The component selected in the image will become the Selection Handle; it will therefore not require a Parent.

Step 3. With the Target bone selected, change its Display Property to the custom Selection Handle (the Plus shape). The option to use a Custom Shape in place of a Bone’s default representation can be found in the Bone’s Object Properties View, under the Display section.
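For those who prefer scripting this step, here is a minimal sketch using the Blender Python API (2.7x era); the object and bone names (“Rig”, “WGT-plus”, “handle_target”) are hypothetical placeholders for this example:

```python
import bpy

# Hypothetical names used for illustration only.
rig = bpy.data.objects["Rig"]             # the armature object
plus = bpy.data.objects["WGT-plus"]       # the low-poly Plus mesh from Step 1

# Custom Shapes are a property of pose bones, not edit bones.
target = rig.pose.bones["handle_target"]  # the IK Target bone from Step 2
target.custom_shape = plus                # draw the bone as the Plus shape
```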

Step 4. Now it’s time to add the Bone that will be used to point to the area affected by the transform. In Edit Mode, duplicate the Bone used as the Selection Handle, then Parent it to the Bone that Deforms the area affected by the Transform. Use the “Connected” Parenting option.

NB. Currently Blender (ver 2.70) requires that the setup follow the order of Steps noted in this guide. Having the Bone created in Step 4 (the Minus shape) connected prior to setting up an IK Constraint will often yield undesirable Rotations in the chain.

Step 5. Display the duplicate Bone as the Minus shape. Use the same technique noted in Step 3 to set the new Bone’s Display Properties to the Custom Shape resembling a Minus sign.

Step 6. Make the Minus sign point and stretch to the Selection Handle. Add a Stretch To Constraint to the Minus Bone and target it at the Selection Handle (the Plus Bone). Adjust the Rest Length Property and set Volume Preservation to None.
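This step can also be scripted; a hedged sketch reusing the hypothetical names from the previous sketch, plus a hypothetical “pointer” bone for the Minus shape:

```python
import bpy

rig = bpy.data.objects["Rig"]
minus = rig.pose.bones["pointer"]  # hypothetical Minus bone from Steps 4-5

con = minus.constraints.new('STRETCH_TO')
con.target = rig
con.subtarget = "handle_target"    # stretch towards the Selection Handle
con.rest_length = 1.0              # adjust so the Minus just reaches the handle at rest
con.volume = 'NO_VOLUME'           # Volume Preservation: None
```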

Step 7. Finally, move the Bones cluttering the view to other Layers.

Use the Properties View to move Bones that are not needed for animation purposes out of sight.
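Moving the clutter out of sight can likewise be scripted; a small sketch under the same hypothetical naming:

```python
import bpy

rig = bpy.data.objects["Rig"]

# Move every bone except the two controls to (hidden) bone layer 16.
hidden = [i == 15 for i in range(32)]  # bone layers are a 32-slot boolean mask
for bone in rig.data.bones:
    if bone.name not in {"handle_target", "pointer"}:
        bone.layers = hidden
```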

The final Setup should only require that the Control Rig and Deformable Mesh are visible for the animator.

 


Minotaur XV : Animation Rigs

Animation Rig: Categorization

Up until now we have been working with the Minotaur’s pose rig in order to test how the geometry deforms during weight painting and skinning. Pose rigs are not intended for animation as they are generally comprised of overly simplified controls; for example, my Minotaur’s pose rig is controlled only by FK rotations.


Animation rigs are also intended to “pose” a character from one keyframe to the next, while your 3D package interpolates the difference between each pose and creates what are known as tweens or inbetweens: frames that smooth the movement of the character or object from one keyframe to the next. However, animation rigs differ substantially from pose rigs, particularly in their setup, which will usually involve the addition of controller bones. These bones do not have to be a special, software-specific type of bone; they can be the same type of bone used throughout the rest of the rig. The main purpose of controller bones is to simplify the process of achieving a pose by manipulating chains of bones with a single controller. Controllers can also be other types of non-bone objects depending on the software you use, such as locators, empties or even polygon geometry.


When I set out to create an animation rig there are several objectives I try to always keep in mind:

  • All components of the skinned model should, at their most basic level, be controllable with FK (Forward Kinematics).
  • Use IK (Inverse Kinematics) chains and Constraints to control multiple FK bones and other IK chains, and to avoid base-level transforms.
  • An intuitive rig will be driven with translations and very few other transform types (rotate, scale).**

**However, when working with certain games engines a combination of translation and rotation controls may work out to be more effective (especially when it comes to re-targeting animations, which may involve an FK rig as a controller).

Base FK Rig 

Animation rigs can start looking pretty complicated early in the creation process, so I like to divide my character rigs into at least two main categories which subsequently fall under the same armature/skeleton hierarchy.

The first category consists of the base components of the rig. These are generally all deformer bones that, when rotated, cause the skinned model to deform. Only the parent of this set of bones should be translatable, and these bones are not intended to be animated directly.

The second major category consists of all controllers (bones and non-bones). This includes all components intended for animation, as well as those that are neither intended for animation nor part of the base FK category.

Grouping components like this helps reduce clutter in the viewport and keeps the rig looking simple. Further categorization of controller components is also necessary, but will differ from one rig to the next. Creating these categories does not necessarily make use of a specific software feature; it can be done by placing groups of bones in the same bone layers in Blender or creating multiple character sets in Maya. The purpose is simply to reduce the number of bones and controllers visible in the viewport at any time. As skeletons can generally be displayed in X-ray mode in many 3D animation packages, the character’s geometry might get obscured by bones. Hiding bones in the viewport and only working on specific sets at a time can prevent occluding the geometry in the viewport.


To further reduce viewport clutter, bones can be proxied by other objects. Bear in mind, however, that bone proxies are simply a visual aid in setting up armatures and should ultimately assist in simplifying the design of your rig, not complicate it with obscure symbols whose meaning even you, as the creator, can tend to forget over time.


Minotaur XIV : FK and Pose Rigs

FK Rigging

If you have been following the other posts on the Minotaur you might have noticed that this is not the first post that mentions rigging. In fact, the very first post for the Minotaur (“Minotaur”) mentions rigging, and another post on Skinning (“Minotaur XI”) does too. As you can imagine, rigging is not reserved exclusively for animating a character, though it is often used for that purpose.

In the first post on the Minotaur, a rig was used to pose large portions of the mesh by deforming the mesh in such a way as to avoid geometry intersecting itself; ultimately this rig was used as a tool for modelling.

In the post on skinning, a rig was used to pose the final version of the modelled character for a wide action shot. A rig like this is not suitable for animation and is intentionally kept simple, as once the character is in the desired pose, the deformation is baked into the geometry and the rig is discarded. Deformations that would usually be done with weight painting (and would be visible for that particular pose) can then be added with lattice deformers, sculpting and modelling tools.

In the following posts we will discuss creating a Forward Kinematics Rig and a Controller Rig, and skinning the Minotaur to the FK rig, all for the purpose of creating animations.

The above video demonstrates the Minotaur attached to an FK rig and posed for a turntable. The render is of the realtime model (<7k polys) as seen from the 3D viewport, and has incomplete textures.

One of the most important technical qualities of a character set up for animation is having multiple levels of detail, of which at least one of the mid to low levels provides a close-to-final representation of the character’s deformed geometry while still maintaining realtime playback in the 3D viewport. Relying on non-realtime rendered viewport previews (also known as Playblasts) can hinder the process of creating animations significantly; maintaining a responsive 3D environment in which to create your animations is crucial.

The above video is a demonstration of the Minotaur moving from one pose to another driven by an FK rig, while scrubbing the playback head in Blender. Note the character’s realtime responsiveness to the timeline (at the bottom of the frame) as the mouse moves back and forth.

When the Minotaur was bound to its armature, which comprised solely an FK rig, almost every bone in the rig was enabled for deformation. The controller rig is then built on top of the FK rig once weight painting has reached a reasonable representation of what the finished product will look like.

As mentioned, the FK rig is then posed in various ways to test how the mesh deforms; it is in these poses that weight painting occurs. It should never be necessary to paint weights on a character in its default/bind pose.

Although the term weight painting implies a superficial task related to the surface of the mesh, I prefer to think of weight painting as an extension of the modelling process. It is true that weight painting is performed only on the surface or “skin” of the mesh, but the objective of the task is to modify the volume of skin, muscle tissue, flesh etc. that is affected by the bones that are rotated to create that deformation. As a result, we are simulating the deformation of a volume by means of a tool that addresses the surface of the model, which effectively displaces vertices by moving them towards or away from the area of deformation.

In Blender, we paint vertices red if we would like them to be more affected by a bone’s deformation; in Maya we would paint them white. Regardless of what software you use the principle remains the same: we are effectively modelling what we would like the areas surrounding the armature’s/skeleton’s joints to look like when those bones are rotated into a position other than their rest positions. We do this to ensure that every time a bone is rotated into a particular position, the volume of geometry surrounding that bone and its joints will fold, wrinkle and deform the same way each time.
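To make the data behind weight painting concrete: in Blender, painted weights are simply per-vertex values (0.0 to 1.0) stored in vertex groups named after the deforming bones. A minimal sketch, with a hypothetical mesh name, bone name and vertex indices:

```python
import bpy

mesh = bpy.data.objects["Minotaur"]               # hypothetical mesh object

# Each deforming bone reads its influence from a vertex group of the same name.
group = mesh.vertex_groups.new(name="forearm.L")  # hypothetical bone name
group.add([120, 121, 122], 0.9, 'REPLACE')        # vertices deep inside the volume
group.add([123, 124], 0.3, 'REPLACE')             # falloff at the edge of the deformation
```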

This character has two layers of FK bones, one used to deform the Minotaur and another used to deform the Minotaur’s armour.

A rig like this is far too complex and cumbersome to animate with the simple rotations and a single translatable parent that FK would allow for, so in order to make the animation process more intuitive a controller rig will be created from the FK rig.

 


Minotaur XIII : Texturing, Materials and UV’s

Texturing

The Minotaur consists of several textures and materials that are composited together; this render depicts the current painting status of the Minotaur’s color texture channel.

The above image took approximately 6 minutes to render using the Blender Internal (BI) Renderer. This includes 3 Sub-Surface Scattering (SSS) passes (each with its own material), a color map, a normal map and 28K polys subdivided 3 times at render-time. Although there is still a lot of work that needs to be addressed, particularly regarding the specularity/reflections pass and completion of texture painting for the color and normal maps, I find the render times and quality from BI to be very reasonable and certainly something I am pleased to work with.

Materials

The main reason for having multiple materials composited for the Minotaur is so that three layers of sub-surface scattering can be addressed independently. These layers represent the epidermal skin layer, the subdermal skin layer and the back scattering layer.

The Epidermal skin layer is the outermost layer of skin and as such will tend to be the layer that most prominently shows the color texture map currently being painted (seen in the previous rendering).

The Subdermal layer is used to represent the fatty tissue that exists under the epidermal layer. Its texture map will differ most significantly in that it will also include the color of the Minotaur’s veins. The material’s primary function is to create the impression of the character having volume, as opposed to appearing like an empty shell.

The Back Scatter layer is the SSS layer that is most discernible, as it adds a reddish tinge simulating blood vessels within the Minotaur’s body. This will be particularly noticeable in areas where the Minotaur’s volume is significantly less, so that it is easier for light to pass through, such as his ears.
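As a rough sketch of how three such layers could be set up through Blender Internal’s Python API (2.7x era); the names, colors and scales here are illustrative assumptions, not my actual settings:

```python
import bpy

# Illustrative values only; tune color and scale per layer.
def sss_material(name, color, scale):
    mat = bpy.data.materials.new(name)
    sss = mat.subsurface_scattering  # Blender Internal SSS settings
    sss.use = True
    sss.color = color
    sss.scale = scale
    return mat

epidermal   = sss_material("SSS_epidermal",   (0.85, 0.72, 0.60), 0.05)
subdermal   = sss_material("SSS_subdermal",   (0.70, 0.45, 0.35), 0.10)
backscatter = sss_material("SSS_backscatter", (0.80, 0.15, 0.10), 0.20)
```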

The following two images demonstrate the three materials composited together, with SSS properties. The first image is of a low resolution render followed by the same material and lighting setup on a high resolution model.

As you can see, the SSS material properties affect the renderings with significant differences based on mesh density. This is yet another benefit of using actual geometry displacement and not relying on normal or bump textures for surface variation (as would be the case in the first of the two renderings). Fortunately, rendering high density geometry that is subdivided at render time (see the Minotaur XI Skinning post for more details) is a feasible option in Blender.

Multiple UV Layouts

As I mentioned in a previous post, this character will require multiple UV layouts so that more texture space can be allocated to certain areas of the model that will be featured in close-up shots.

One of the downsides of multiple UV layouts at this stage is having to re-bake the character’s normals with the new UV layout. Although this is not a problem for me, as I save my working files incrementally, it does mean having to revisit previously saved files, which some people might find problematic (depending on your file saving habits). As the character’s UV’s are adequately laid out, I will only need to add one additional UV layout.

The following image shows my current progress with the color texture channel. As you can see, I prefer to work on one side of the character as I lay down a base for the details that follow, then mirror and modify this base in an image editor before painting more detail and variation into the texture. I use composites of photographs laid down in an image editor (the GIMP in this case), then export the image as a PNG and paint over it with the clone and texture paint tools in Blender.

This texture is approximately 10% completed in this image. I hope to have more posts of this map as it develops.


Minotaur XII : Optimizing a Poly Count for Rendering

Optimization Testing

The above video is the first multi-angle, animation test with the Minotaur character.

Synopsis

The model and armour take about 4.5min/fr to render. Using the rendering technique outlined in my previous post on skinning (“Minotaur XI”), the Minotaur in this animation is subdivided 3 times at render-time, then targeted to a high resolution sculpt which was baked at level 03 to displace the subdivided geometry (see “Minotaur XI” for details). The high-res sculpt is saved externally and linked to the current render file; this has reduced the file size for this particular character from approximately 0.5GB to 100MB. A smaller file size also helps to clear up some RAM usage, which has been reduced from 8GB (RAM) + 3.5GB (Swap Space) to a current usage of 4.1GB at render time and 2.1GB when loaded (Blender startup uses about 1.3GB of RAM on my system; NB. the values noted are gross totals). This reduction in RAM usage accounts for the reduced render time, previously 30min/fr compared to the current 4.5min/fr.

Testing Criteria

Only two separate passes, 1) character and 2) armour, were used in this render. No texture maps have been completed yet, as this render is mainly used to gather data on three main categories:

  • how geometry is being displaced at render-time over the entire mesh
  • how normal mapping affects the displaced geometry
  • and render timings on optimized models.

Armour Geometry Targeting Displacement and Normal Mapping

Several basic renders were also created testing the same criteria on the armour; the results follow.

The above image is of the Minotaur’s right shoulder guard. The lighting is particularly “unflattering” in these images as certain areas of the geometry are highlighted for consideration. Any areas that indicate stretching of the normal map will need to be addressed with multiple UV layouts, but this will likely only happen at a much later stage when the camera has been locked down for the final shots.

The above image is a shot of the right shoulder guard from the back of the character. It’s evident from this test that geometry displacement did not recess the polygons comprising the holes in the strap adequately, as was the case in the sculpt data. Custom transparency maps will need to be used to compensate for this lack of displacement on the character’s armour straps.

The above image is of the lower body area with the toga armour between the legs. The sculpt data on this geometry was exceptionally high and as a result is a serious consideration with regards to detail loss during optimization. However, the geometry displaced considerably well when the Simple algorithm for calculating subdivisions was chosen (as opposed to the standard Catmull-Clark method). Subsequently, the toga armour straps only required a single level of subdivision (the lowest of all the character’s components). The toga is also planned to be a hairy surface in the final render, so a large amount of detail would have been wasted with more subdivisions.
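For reference, switching a Subdivision Surface modifier to the Simple algorithm and restricting it to render time can be scripted as follows (a sketch; the object name is hypothetical):

```python
import bpy

straps = bpy.data.objects["toga_straps"]  # hypothetical object name

mod = straps.modifiers.new("Subsurf", 'SUBSURF')
mod.subdivision_type = 'SIMPLE'           # displace without Catmull-Clark smoothing
mod.levels = 0                            # keep the viewport realtime
mod.render_levels = 1                     # a single level at render time
```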


Minotaur XI : Proxy Model Setup

Skinning

Skinning is the process of attaching geometry to an underlying skeleton or armature. We then use the armature to pose the model and simplify the process of animating it. There were several issues that had to be considered when skinning the Minotaur character, the main one being inconsistent normals. This post covers a method I discovered for correcting this issue without having to resort to a lattice/cage deformer, while still allowing for the (very important) proxy object method of rigging.

What’s so bad about inconsistent normals?

As all polygons have two surfaces, the software you are using needs to know which of those two surfaces points away from the mesh. Geometry’s normals should be perpendicular to the surface of the polygon, pointing away from the outer surface of the mesh. This is not only important for skinning but also for sculpting.

Easy Fix

The best way to ensure consistent normals in a 3D package is to Apply or Bake all transforms of the model, then select the model’s faces, normals or vertices (depending on your 3D software) and recalculate the direction of the selected components’ normals.
In Blender this is really easy, as ctrl-n automates this process for your selection. In other software you might need to turn on “show surface normals” to ensure that the object’s normals are pointing in the correct direction, and if not, select the erroneous component and choose “flip normal”. This will reverse the direction of the selected component’s normal.
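The same fix can be scripted in Blender; this sketch (with a hypothetical object name) runs the operator behind ctrl-n on the full selection:

```python
import bpy

obj = bpy.data.objects["Minotaur"]      # hypothetical mesh name
bpy.context.scene.objects.active = obj  # 2.7x way of setting the active object

bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='SELECT')
bpy.ops.mesh.normals_make_consistent(inside=False)  # the operator behind ctrl-n
bpy.ops.object.mode_set(mode='OBJECT')
```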

You might come across inconsistent normals when sculpting with a tool that translates geometry away from or into the surface of the mesh. For example, a stroke along the surface of your mesh (with a sculpt tool such as “Draw”) could start as concave but end as convex. In this case the normals are possibly inconsistent and need to be addressed.

If this problem is not addressed and the same model is used in skinning, that model is likely to suffer from poor deformations.

Problems Arising From The Modifier Stack

Although this problem might be trivial at times and fixing it is simply a matter of performing the steps outlined above (see Easy Fix), sometimes the above method will not be practical as it does not respect an object’s modifier stack. If you are using one or more modifiers in your object’s stack, such as multi-resolution or another modifier that deforms the object at a component level, reversing the normals of the base mesh will affect everything in the stack above it.

Secondly, if you are using a multi-resolution mesh you will probably know that working with a mesh at its highest level of subdivision is simply not practical. The problem is that in order to recalculate the object’s normals you need to bake the modifiers into the object’s stack first, and recalculating the normals of a high resolution mesh baked from a multi-resolution modifier is not practical and sometimes not even possible (for lack of system resources).

Normal and Displacement Maps Method

If you have come across this problem, one of the most common solutions seems to be to bake out a normal and a displacement map from the sculpt data, although I found that this method produces results that are in some ways vastly different from renderings that include the highest level of sculpt data. However, as you can see in the image below, the results are not completely unusable, but they warranted too much of a compromise on quality to be used as a sole solution.

The above image demonstrates the results of this method. It is a single subdivision from the sculpt data baked into the mesh; this means that the mesh being used for the final render is a realtime mesh. Since the multires has been applied/baked into the mesh, the normals can be recalculated safely. The model then has a Normal map and a Displacement map (both previously baked from the highest level of sculpt data) applied to it. A subdivision surface modifier is then applied to the model and its levels are increased for render time (the viewport model remains realtime).

Cons

As you can see, the results are not too bad, but substantial detail is lost in the lower body and the outline of the model is exceptionally smooth in an undesirable way.

Pros

The main benefit of this method is that it computes optimally for realtime viewport interactivity and has a relatively short render time. If your character is not a focal point and nothing closer than a wide shot is required, you’ll probably be able to achieve useful results with this method.

Weight Transfer and Mesh Deform Method

One of the methods I am familiar with for dealing with this problem is to create two models: a realtime model and a high resolution model from the sculpt data. The realtime model is skinned and animated; the high resolution model is then bound to an identical (or the same) rig before render time, and the weights of the realtime model are transferred to the high res model. As a result, the only process-intensive task performed on the high resolution model is rendering. No manual production tasks need be performed on the high resolution model, which would be impractical. This tool set has existed in Maya since version 6.5 (if I’m not mistaken).
I was expecting to use this method in Blender; however, it slowly became undeniably apparent that Blender does not (as of version 2.63) have a transfer weights option that matches the usability I’d previously been accustomed to.

Fortunately, this issue is being addressed by user 2d23d and you can read about it at this post on blenderartists.org.
The addon looks very promising and I sincerely hope it continues to be developed, as at present it is unable to address exceptionally high levels of geometry, which made it unusable as a solution in this particular situation.
Other methods are suggested in the above thread, such as the use of a “mesh deform” modifier, which I think was added to Blender during the Apricot project that resulted in the open source game Yo Frankie!
Unfortunately, the mesh deform modifier proved to be the most cumbersome and difficult method (particularly as weight transfer only takes a couple of minutes in Maya). Creating the deformation cage took a total of 10 hours, and the results were unfortunately still unusable. I would recommend that anybody attempting this method create a cage from scratch and not try to convert a base mesh into a cage, especially if your model is not symmetrical or has a lot of sharp edges.

If I had been able to apply the weight transfer method I would have ended up with a result similar to the one below.

The above image is a rendering of the actual sculpted model at its highest level of resolution. As you can imagine this is a mammoth-sized polygonal model; for example, the cracks in the skin are geometry and not a normal map. Looking at this rendering and the final render below, it’s difficult to tell them apart; however, the most notable difference is in the character’s tail, which you can see faintly behind the character’s left calf. The highest level sculpt render (above) shows protrusions extending from the end of the tail; the same protrusions created from the realtime model (below) do not extend as far. This could, however, be corrected by using a level 05 sculpt for the shrinkwrap target (method explained below) and increasing the subdivision surface levels at render time. But in this case it would not warrant the additional time that would be added to the render, as the end of the tail will mainly be covered in fur ;)

ShrinkWrap Method

The method that I finally settled on turned out to be exceptionally simple and takes only a few minutes to setup.

  1. Bake a Normal Map from the highest level of sculpt data (and a Displacement Map if desired).
  2. Create a realtime model and a high resolution model to be used as a reference at render time. The high res model does not need to be baked at the highest level of sculpt data. I chose level 3 of 5 because at level 3 all indentations in the mesh are visible, thereby breaking up the smooth outline problem mentioned in earlier render tests (see Normal and Displacement Maps Method).
  3. After ensuring that the normals are facing the correct direction for both models (see Easy Fix), place the models on different layers and hide the high res model so as to speed up viewport interactivity, then apply the Normal and Displacement maps to the realtime model as per usual.
  4. Select the realtime model and apply a Subdivision Surface modifier (to increase the poly count at render time, as this will be the final model used for rendering), then add a Shrinkwrap modifier and target it at the high res model. Order is important here, as the surface needs to be subdivided at render time before it can be shrinkwrapped to the high res model.
  5. Bind/Parent the realtime model to an armature, with the Armature modifier completing the stack in the setup (see the sketch after this list). Once again, order must be respected; this is so that the armature deforms a representation of the high res model (by means of the other two modifiers) at render time.
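A sketch of the modifier stack from steps 4 and 5, built through the Python API (2.7x era) with hypothetical object names; modifiers are appended in order, so the Subdivision Surface, Shrinkwrap and Armature modifiers end up in the required sequence:

```python
import bpy

realtime = bpy.data.objects["minotaur_rt"]  # hypothetical names
highres  = bpy.data.objects["minotaur_hi"]
rig      = bpy.data.objects["Rig"]

# Order matters: subdivide, shrinkwrap, then deform.
sub = realtime.modifiers.new("Subsurf", 'SUBSURF')
sub.levels = 0                              # realtime in the viewport
sub.render_levels = 3                       # dense mesh at render time only

wrap = realtime.modifiers.new("Shrinkwrap", 'SHRINKWRAP')
wrap.target = highres                       # pull the subdivided surface onto the sculpt

arm = realtime.modifiers.new("Armature", 'ARMATURE')
arm.object = rig                            # the armature completes the stack
```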

As you can see, using this method the high res model remains hidden and out of the way, as it requires no manual, process-intensive work such as weight painting or binding (to cages or armatures), and no proxy mesh data transfer is required either. The longest part of this setup is baking the Normal and Displacement maps.

The above image is the final result. As you can see, the highest level of sculpt detail is retained; for example, the bumpy and folded knee area in this render can be compared with the initial rendering for the Normal and Displacement Maps Method, where the knees are extremely smooth. Also note that the model’s outline is no longer smooth, either.

Another benefit of using this method, as opposed to rendering high resolution geometry from a viewport (such as in the weight transfer method), is reduced render times. This model takes approximately 5 minutes to render, compared to the image shown in the “Weight Transfer and Mesh Deform Method” section, which takes approximately 30 minutes to render (as my system has to resort to swap space in an effort to cache 8GB (RAM) + 3.6GB (Swap Space) of polygonal data).

PS. You might have noticed that the Minotaur suddenly has some color, but I’ve not mentioned anything about it until now. That’s because I’ve started texturing him, though as you can see this is not yet complete. You can expect a post on materials, textures and rigging soon!


Minotaur X : Sculpting Armour

Armour

This image is of the highest level multi-res sculpt which, at level 5, is about 1.4 million polys for this model. Since this is just a test render, mainly to see what the character currently looks like when wearing the armour, the armour is rendered at a low resolution. The Minotaur’s lower-body draping clothing goes up to level 7; it is rendered here at level 4 in the front to level 1 at the back. The depth of field effect that this creates is merely coincidental, and no post production was done on this image (as it is for testing purposes only).

The Minotaur sculpt data is about 95% complete in this picture; attention still needs to be directed towards the hands and mouth area (where removal of symmetry has begun). The final stage of the Minotaur’s sculpt can only be completed when the armour sits flush against the character’s skin, which would subsequently cause indentations in the flesh and possible abrasions.

The highest level sculpt is not created with symmetry, but in order to emphasise the effect, lower level sculpts often need to be readdressed so as not to create the impression that the highest level data has been painted onto the model, but is instead solidly integrated into the character’s anatomy.


Minotaur VIII : Sculpting and System Stability

Sculpting

I generally lay out UV’s before sculpting commences…

This may not be an ideal solution in some cases, as vertex translation of the base mesh is likely to occur when the higher level sculpt data is propagated to the base level. If you have a texture map applied using the current UV layout, stretching will occur as a result of vertices being translated.

However, I won’t apply a texture map to the model until the sculpt can be applied to the base level mesh, in order to avoid this side-effect. So why not just do the UV layout after sculpting is done and you’re ready for texturing?

Well, you could, but I prefer having an idea of what my UV layout will look like before completing the sculpt; I can then bake out normal map tests while sculpting. This gives me a clear indication of whether my UV’s are spaced out enough to provide adequate room for sculpt details at a reasonable texture size. My UV’s can then be tweaked accordingly before arriving at a final layout.

System stability is also a big issue for me.

The system I am using to create this model has a limited amount of RAM: only 8GB. Although this might be adequate in many circumstances, it requires that I adopt a different approach for the level of skin detail needed with this model. Basically, it means having to cut the model up into smaller components, or “dice” the model, in order to sculpt micro-level details and maintain a workable, interactive 3D environment.

Typically, this might mean having to separate the head, the torso, the lower body etc. into separate files, but I don’t like doing this because it could result in hard edges in areas where the components of the model were separated. Instead I keep the model unified and use the system’s resources for rendering the model in the realtime OpenGL viewport, which is particularly important for sculpting. In this case performance might be compromised at the highest multi-resolution (or subdivision) level, which can be counteracted by hiding enormous amounts of geometry and concentrating only on small portions of the mesh. Of course, the purpose of the highest subdivision level is to create micro-level details, so there is no problem with hiding two thirds of a model, thereby reducing the viewport vertex count from hundreds of millions to 1 million or less. This hiding (or masking) re-establishes realtime viewport interaction.

You might be aware that in order for such a high level mesh to be usable it will need to be baked to produce a normal map. However, normal map baking is a product of rendering, and rendering requires additional RAM. The Minotaur at its highest subdivision level uses about 2GB to 3GB of total RAM (depending on OS configuration) to open and display the file; rendering the model in this state is not an option, as the amount of RAM required would increase by three to four times that amount. This would exceed my system’s available RAM, at which point swap space (or virtual memory) would be used. This makes the system unstable as other software and services compete for available resources. Keeping your 3D program’s RAM usage below 50% of your total system’s RAM provides a much more stable environment, where crashing during a render (and wasting time in the process) can be avoided.

  • With the model’s UV’s laid out, I am free to jump back into edit mode once all highest level sculpting is completed.
  • In edit mode I can delete entire portions of the model, such as everything but the head, return to object mode and render a normal map for the head without compromising system stability, as the amount of object data has been substantially reduced by dicing (see the bake sketch after this list).
  • Since the UV’s are already in place, I can repeat this process for the other model components (arms, legs, torso etc.) until I have several high resolution maps with the model’s components already in their correct positions.
  • As long as all the maps are rendered with the same image aspect ratio and pixel aspect ratio, the files can easily be imported into a single multilayer document and exported as a single high resolution normal map that retains the model’s micro-level details. This can then be applied to the original model, which can then be collapsed to the base level for furthering production.
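The per-component bake described above can be set up like this in Blender Internal (2.7x era); a sketch that assumes an image has already been assigned to the active UV layout of the component being baked:

```python
import bpy

scene = bpy.context.scene

# Blender Internal bake settings; bake one diced component at a time.
scene.render.bake_type = 'NORMALS'
scene.render.bake_margin = 4        # pixels of bleed past the UV seams
scene.render.use_bake_clear = True  # start from a clean image

bpy.ops.object.bake_image()         # bakes into the image assigned in the UV editor
```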

As you can see, using this method the model’s vertex order is retained, no additional vertices are added or merged (which would have modified the UV layout), and you have the benefit of working in a stable 3D environment.

SIDE LOWER BODY Level 2 sculpt

BACK LOWER BODY Level 2 Sculpt

FRONT LOWER BODY Level 2 Sculpt

FRONT MINOTAUR Level 2 Sculpt

And here you can see the start of a higher level sculpt

RENDERED MINOTAUR FRONT Sculpt Level 5

The final image is the start of the current highest level sculpt. As you can see, veins are starting to appear at this level, and so too are pores, which will only become clearer in later renders.


Minotaur VII : Laying out UV’s

UV Layout

Students I’ve worked with have often asked me why the term UV is used, and the answer I give them is the same answer I was given 15 years ago when I was learning about UV’s for the first time: “The term UV is used because it is not XYZ.”

As ambiguous as that may sound, it is probably the most fitting description of UV’s that I’ve ever heard. As with the term XYZ in 3D, UV’s also relate to dimensions, but as you can imagine U and V relate to only two dimensions. Although it is not a given rule, most 3D applications will use the U dimension to represent width and V to represent height. But U and V dimensions are not simply 1 to 1 pixel matches relating to the width and height of a bitmap image. They are in fact used to quantify “texture space”, which comprises 2 dimensions. As you are aware, textures are 2 dimensional bitmaps, procedural maps or other map types that are wrapped around a 3D object. UV’s provide the crucial method by which texturing and mapping artists translate these 2 dimensional maps into a 3D environment.

In much the same way that a vertex represents a point on a 3D model in 3 dimensional XYZ space, a UV represents a point on a 3D model translated into a 2 dimensional texture space.

When you view a bitmap used as a texture for a model in a 3D application’s UV editor, it will be forced to fit into a square-shaped editing area; this area is often referred to as 0 to 1 texture space. This simply means that the square area is used to measure from a starting value of 0 (at the UV texture space origin) to a value of 1 in both the horizontal and vertical axes, in floating point numbers. As the amount being measured is the same in both dimensions (i.e. 0 to 1) the area forms a square shape, and is as such referred to as 0 to 1 texture space. The bitmap that you create to use as a texture, and subsequently (with the aid of UV’s) intend to wrap around your 3D model, must fit within this texture space. Various 3D applications have different methods for achieving this, and as such it is important that you try to avoid letting your 3D software decide how to make your bitmap fit into this square space. The most obvious way to achieve this is to create bitmaps that are square; in other words, the bitmap’s width must match its height. Furthermore, in order to make efficient use of the machine on which the rendering (real-time or pre-rendered) of these textures will be done, the dimensions should be powers of 2, for example 16 x 16, 32 x 32, 64 x 64, 128 x 128, 256 x 256, 512 x 512, 1024 x 1024, 2048 x 2048, 4096 x 4096 etc. Using bitmaps with power-of-2 dimensions will also be particularly useful for graphics displays that use the mipmap technique for displaying textures. A quick check for this rule is sketched below.
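A small Python sketch for verifying the power-of-2 rule on any texture dimension:

```python
def is_power_of_two(n):
    """True when n is a positive power of 2 (exactly one bit set)."""
    return n > 0 and (n & (n - 1)) == 0

for size in (512, 1000, 1024, 2048):
    print(size, is_power_of_two(size))  # 512 True, 1000 False, 1024 True, 2048 True
```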

You can read more about UV mapping in my “Understanding UV’s” page.

UV unwrapping should only be attempted after the modeling phase is completed. Edge loops need to represent the model’s target topology, as the key to creating a good UV layout is creating a layout that:

  1. matches the model’s form as closely as possible
  2. does not have overlapping UV’s
  3. minimizes stretching
  4. uses texture space as efficiently as possible

UV editing has come a long way since its first implementations. Certain areas of the mesh need to be isolated as they will require extra detail or form separate pieces of the same model. One of the major advancements in UV editing is automatic unwrapping. The first implementation of this that I used was a 3D Studio MAX plugin called “Pelt”. As you can see, the map on the left of the above image is starting to resemble what the pelt of this Minotaur might look like. This is what is meant by “the UV layout should be as true to the original model’s form as possible”. From this layout we can tell by looking at it that it is from a character that has two legs, arms with fingers and a torso. These isolated UV components floating around in texture space are called UV shells.

The red lines that flow through the Minotaur on the right represent the seams along which the UV shells will be separated to form the flat shells you see on the left. In other words, the outer edges of the shells are the red lines you see on the 3D model.

The above image shows the UV shells laid out to match the model’s form as closely as possible, but the shells are outside of 0 to 1 texture space (represented here as the light grey square area). These shells then need to be proportionately scaled to utilize the 0 to 1 texture space as efficiently as possible.

The image above shows the completed UV layout. The shells are as follows, corresponding to the appropriate side of the model (i.e. left ear on the left side, right hoof on the right side): starting from the bottom, the ears and hooves, with the tail in the middle; the main shell consisting of limbs, torso and neck; the head to the left (the second largest island); and rows of molar teeth (lower jaw, upper jaw) on either side of the main shell. The buccal cavity (mouth interior) is in the upper left corner, with the major canines at the top of the layout and the tongue in the middle.

The red area on the right is an empty space for the eyes, which will eventually be joined to this mesh, but only after sculpting is completed; this is to reduce the amount of geometry that will be subdivided when sculpting the face.

 
