Maya Shaders

Tuesday, March 07, 2006

more examples


Finally!





Friday, January 13, 2006

Smeagol!!!

Crimson Red Gollum
Jade Cool Blue Gollum

Oily and not so oily (after prata) Smeagol

Tuesday, January 03, 2006

Water





Some new images generated with the water shader.

Tuesday, December 06, 2005

Wet Reptile



Frog Skin's finally done.
Yes! Finally it's done!
Comments?

Wednesday, September 28, 2005

Stripes


The Stripes function.

This node implicitly gets (x,y,z) coordinates from the Maya API and, using noise for randomness, draws a stripey effect on the surface.

The stripes are basically a sinusoidal function whose input is perturbed ("hashed") by turbulence summed over octaves of noise.

//turbulence function: sum octaves of noise, halving the amplitude
//and raising the frequency each octave
double noise3::turbulence(double x, double y, double z, int octaves)
{
    double turb = -0.5;  // bias so the result is roughly centered on zero
    double s = 1.0;      // amplitude of the current octave
    while (octaves--)
    {
        turb += pnoise3((float)x, (float)y, (float)z) * s;
        s *= 0.5;                      // halve the amplitude
        x *= 1.5; y *= 1.5; z *= 1.5;  // raise the frequency
    }
    return turb;
}

//stripes function: sine wave in x at frequency f, squared and
//shifted so the result lies in [-0.5, 0.5] with sharpened bands
double noise3::stripes(double x, double f)
{
    double t = 0.5 + 0.5 * sin(f * 2 * M_PI * x);
    return t * t - 0.5;
}

//the final implementation: perturb the x coordinate with turbulence
//before evaluating the stripes
float val = 0.5f * stripes(worldPos[0] + 2 * float(turbulence(worldPos[0], worldPos[1], worldPos[2], 1)), 0.3);

Wednesday, September 21, 2005

Samples

Top: Chameleon skin (with leather and cloud nodes / noise with turbulence - effects reduced)

Bottom: Waves and water (noise function)

1. Made changes to the Noise function.

- Implemented it as a node plugin (3D texture).
- Still in the process of improving the visuals.
- Experimenting with different equations to produce effects (such as wrinkled/marble) as described in GPU Gems.
- Inclusion of a fifth-order (quintic) Hermite interpolant for better bump mapping, sketched below (but considering not implementing it, because if ray tracing is used, the shadows of the bumps will not be seen).
- Also included turbulence functions in it.
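
A minimal sketch of the quintic Hermite curve in question, assuming it is the fade function from improved Perlin noise (the function name is my own):

// quintic Hermite fade: 6t^5 - 15t^4 + 10t^3. Unlike the cubic
// 3t^2 - 2t^3, its first and second derivatives are both zero at
// t = 0 and t = 1, so the noise gradient stays smooth - which is
// what bump mapping needs to avoid faceting artifacts.
double fade(double t)
{
    return t * t * t * (t * (t * 6.0 - 15.0) + 10.0);
}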

2. Currently trying to write a Stripes plugin for the frog skin

- done!






Tuesday, September 06, 2005

Navier-Stokes Eqn on Fluid Behaviour

The Navier-Stokes (NS) equations define a model of fluid (liquid) behaviour.

Limitations of NS

  • continuous volume of fluid on a 2D rectangular domain
  • no interaction between air and water (no sloshing free surface)

Maths Background

Behaviour of fluids : velocity equations -> determine how the fluid moves itself and the things carried along with it.

Nature of fluids : velocity varies in both time and space -> represent it using a 'vector field'.

Vector Field

Assumptions : a 2D Cartesian grid of M by N cells (M columns by N rows)

  • For every position vector x = (x, y) on the vector field
  • there is an associated velocity at time t -> u(x, t) = [u(x, t), v(x, t)] (two components, since the domain is 2D)

Key is to think in time steps -> if we are able to calculate the velocity field u at the current time step, then we can use it to move objects and smoke densities along the flow.

We assume an incompressible (volume constant in time), homogeneous (density rho constant in space) fluid.

Variables

x = (x, y) : 2D coordinate vector (Cartesian grid)
u(x, t) : velocity field
p(x, t) : pressure (scalar) field

At time t = 0, if u and p are known,

Fluid behaviour = Advection (the velocity field carrying itself and other quantities along the flow) + Pressure (molecules interacting with each other) + Diffusion (viscosity - how resistive a fluid is to flow; resistance = diffusion of momentum) + External Forces (e.g. gravity)
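
For reference, this decomposition corresponds term by term to the incompressible NS momentum equation as given in GPU Gems (with nu the kinematic viscosity and F the external forces), together with the incompressibility constraint:

\[
\frac{\partial \mathbf{u}}{\partial t}
  = -(\mathbf{u} \cdot \nabla)\,\mathbf{u}
    - \frac{1}{\rho}\,\nabla p
    + \nu\, \nabla^{2} \mathbf{u}
    + \mathbf{F},
\qquad
\nabla \cdot \mathbf{u} = 0
\]

The right-hand-side terms are, in order, advection, pressure, diffusion, and external forces.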

Reference: GPU Gems.

Wednesday, August 31, 2005

Water Shader


Rolling waves

Abstract:
Probably the best way to make convincing sea water in Maya is to use an animated displacement map on a watery-textured plane. While this method produces highly realistic results, the long render times required for displacement make it impractical for many situations.
The method outlined in the title URL does not produce ultra-real water waves. Instead, it is intended as a shortcut for shots where low, rolling waves are acceptable. Because it uses Maya's particle system, it makes tweaking and previewing wave motion fast and simple.

Limitations : this method produces no realistic fluid dynamic effects such as breaking wave tops or wakes.

Progress Update

FrogSkin Plug-in

I have started work on writing the first FrogSkin shading node. The node is classified under "Texture/2D". With reference to some of the ideas used in the previous blog entry, I came up with the following strategy (a skeleton sketch follows the list):

  • Create a simple skin shader first without the lighting effects (i.e., the frog on the left), using two simple input colors (one main and one alpha channel) and one output channel.

  • Bump-mapping to be introduced at a later date

  • Some of the nodes I will specifically look into are 'skin_bitmap', 'skin', 'incand_bitmap' and 'incand_blinn'.
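
A minimal sketch of the node skeleton this strategy implies, using the standard Maya API MPxNode pattern; the class name, node name, node ID, attribute names, and the placeholder blend in compute() are all mine, not the actual plug-in:

#include <maya/MPxNode.h>
#include <maya/MFnPlugin.h>
#include <maya/MFnNumericAttribute.h>
#include <maya/MFloatVector.h>

class FrogSkinNode : public MPxNode
{
public:
    static void*    creator() { return new FrogSkinNode; }
    static MStatus  initialize();
    virtual MStatus compute(const MPlug& plug, MDataBlock& block);

    static MTypeId id;          // placeholder ID; a real plug-in needs a registered one
    static MObject aMainColor;  // main input color
    static MObject aAlphaColor; // alpha-channel input color
    static MObject aOutColor;   // the single output channel
};

MTypeId FrogSkinNode::id(0x00000001);
MObject FrogSkinNode::aMainColor, FrogSkinNode::aAlphaColor, FrogSkinNode::aOutColor;

MStatus FrogSkinNode::initialize()
{
    MFnNumericAttribute nAttr;
    aMainColor  = nAttr.createColor("mainColor", "mc");
    aAlphaColor = nAttr.createColor("alphaColor", "ac");
    aOutColor   = nAttr.createColor("outColor", "oc");
    nAttr.setWritable(false);   // outColor is output-only
    addAttribute(aMainColor);
    addAttribute(aAlphaColor);
    addAttribute(aOutColor);
    attributeAffects(aMainColor, aOutColor);
    attributeAffects(aAlphaColor, aOutColor);
    return MS::kSuccess;
}

MStatus FrogSkinNode::compute(const MPlug& plug, MDataBlock& block)
{
    if (plug != aOutColor)
        return MS::kUnknownParameter;
    // Placeholder: average the two inputs; the real skin pattern goes here.
    MFloatVector& c1 = block.inputValue(aMainColor).asFloatVector();
    MFloatVector& c2 = block.inputValue(aAlphaColor).asFloatVector();
    MDataHandle out = block.outputValue(aOutColor);
    out.asFloatVector() = (c1 + c2) * 0.5f;
    out.setClean();
    return MS::kSuccess;
}

MStatus initializePlugin(MObject obj)
{
    MFnPlugin plugin(obj);
    // The "texture/2d" classification puts the node under Texture/2D in Hypershade.
    const MString classification("texture/2d");
    return plugin.registerNode("frogSkin", FrogSkinNode::id,
                               FrogSkinNode::creator, FrogSkinNode::initialize,
                               MPxNode::kDependNode, &classification);
}

MStatus uninitializePlugin(MObject obj)
{
    MFnPlugin plugin(obj);
    return plugin.deregisterNode(FrogSkinNode::id);
}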

Friday, August 26, 2005

Frog Skin Shader - A hypershade View





Frog skin shader (left to right):
Painted bitmap assigned to color only; same frog with custom shader;
Expanded custom shader - uses advanced connections and operations which create special ambience and incandescence based on scene lighting.

Wednesday, August 24, 2005

Types of shaders - as defined by MR

Material Shaders

Material shaders are the primary type of shaders. All materials defined in the scene must at least define a material shader. Materials may also define other types of shaders, such as shadow, volume, photon, and environment shaders, which are optional and of secondary importance.

When mental ray casts a visible ray, such as those cast by the camera (called primary rays) or those that are cast for reflections and refractions (collectively called secondary rays), mental ray determines the next object in the scene that is hit by that ray. This process is called intersection testing. For example, when a primary ray cast from the camera through the viewing plane's pixel (100, 100) intersects with a yellow sphere, pixel (100, 100) in the output image will be painted yellow. (The actual process is slightly complicated by supersampling and filtering, which can cause more than one primary ray to contribute to a pixel.)

The core of mental ray has no concept of 'yellow.' This color is computed by the material shader attached to the sphere that was hit by the ray. mental ray records general information about the sphere object, such as point of intersection, normal vector, transformation matrix etc. in a data structure called the state, and calls the material shader attached to the object. More precisely, the material shader, along with its parameters (called shader parameters), is part of the material, which is attached to or inherited by the polygon or surface that forms the part of the object that was hit by the ray. Objects are usually built from multiple polygons and/or surfaces, each of which may have a different material.

Material shaders normally do quite complicated computations to arrive at the final color of a point on the object:

  • The shader parameters usually include constant ambient, diffuse, and specular colors and other parameters such as transparency, and possibly optional textures that need to be evaluated to compute the actual values at the intersection point. If textures are present, texture shaders are called by using one of the lookup functions provided by mental ray. Alternatively, shader assignment may be used for texturing.

  • The illumination computation sums up the contribution from various light sources listed in the shader parameters. To obtain the amount of light arriving from a light source, a light shader is called by calling a light trace or sample function provided by mental ray. Light shaders are discussed in a separate section below, and a minimal illumination loop is sketched after this list. After the illumination computation is finished, the ambient, diffuse, and specular colors have been combined into a single material color (assuming a more conventional material shader).

  • If the material is reflective, transparent, or using refraction, as indicated by appropriate shader parameters, the shader must cast secondary rays and apply the result to the material color calculated in the previous step. (Transparency is a variation of refractive transparency where the ray continues in the same direction, while refraction rays may alter the direction based on an index of refraction.) Secondary rays, like primary rays, cause mental ray to do intersection testing and call another material shader if the intersection test hit an object. For this reason, material shaders must be reentrant. In particular, a secondary refraction or transparency ray will hit the back side of the same object if face both is set in the options and the object is a closed volume.
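
As a concrete illustration of the illumination loop described above, here is a minimal diffuse-only material shader sketch in the style of the mental ray base shaders; the shader name and parameter struct are hypothetical, not taken from an actual library:

#include "shader.h"

struct simple_mtl {
    miColor diffuse;
    int     i_light;    /* index of the first light in the instance list */
    int     n_light;    /* number of lights */
    miTag   light[1];   /* light list, as declared in the .mi file */
};

DLLEXPORT miBoolean simple_mtl(miColor *result, miState *state,
                               struct simple_mtl *paras)
{
    miColor *diff  = mi_eval_color(&paras->diffuse);
    int      n     = *mi_eval_integer(&paras->n_light);
    miTag   *light = mi_eval_tag(paras->light) + *mi_eval_integer(&paras->i_light);
    miColor  sum, color;
    miVector dir;
    miScalar dot_nl;
    int      i, samples;

    result->r = result->g = result->b = 0.0f;
    result->a = 1.0f;

    /* sum the diffuse contribution of each light in the light list */
    for (i = 0; i < n; i++, light++) {
        samples = 0;
        sum.r = sum.g = sum.b = 0.0f;
        /* mi_sample_light calls the light shader (possibly several
           times for area lights) and returns its color and direction */
        while (mi_sample_light(&color, &dir, &dot_nl, state,
                               *light, &samples)) {
            sum.r += dot_nl * color.r;
            sum.g += dot_nl * color.g;
            sum.b += dot_nl * color.b;
        }
        if (samples) {
            result->r += diff->r * sum.r / samples;
            result->g += diff->g * sum.g / samples;
            result->b += diff->b * sum.b / samples;
        }
    }
    return miTRUE;
}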


    Texture shaders

    Texture shaders evaluate a texture and typically return a color, scalar, or vector (but like any shader, return values can be freely chosen and may even be structures of multiple values). Textures can either be procedural, for example evaluating a 3D texture based on noise functions or calling other shaders, or they can do an image lookup. The texture shader needs to know which point on the texture to look up, as a vector assigned to the coord parameter. Coordinate lookups are a very flexible way to implement all sorts of projections, wrapping, scaling, replication, distortion, cropping, and many other functions, so this is also implemented as another shader. It could be done inside the texture lookup shader itself, but separating it out into a separate shader allows all those projections and other manipulations to be implemented only once, instead of in every texture shader.


    Volume Shaders

    Volume shaders may be attached to the camera or to a material. They modify the color returned from an intersection point to account for the distance the ray traveled through a volume. The most common application for volume shaders is atmospheric fog effects; for example, a simple volume shader may simulate fog by fading the input color to white depending on the ray distance. By definition, the distance dist given in the state is 0.0 and the intersection point is undefined if the ray has infinite length.
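
    A minimal sketch of the fog example just mentioned: fade the input color toward white using an exponential falloff in the ray distance (the shader name and density parameter are hypothetical):

    #include <math.h>
    #include "shader.h"

    struct simple_fog { miScalar density; };

    DLLEXPORT miBoolean simple_fog(miColor *result, miState *state,
                                   struct simple_fog *paras)
    {
        miScalar density = *mi_eval_scalar(&paras->density);
        /* state->dist is 0.0 by definition for rays of infinite
           length, which we treat as fully fogged (f = 0) */
        miScalar f = state->dist > 0.0
                   ? (miScalar)exp(-density * state->dist) : 0.0f;
        /* volume shaders modify the incoming result color in place */
        result->r = f * result->r + (1.0f - f);
        result->g = f * result->g + (1.0f - f);
        result->b = f * result->b + (1.0f - f);
        return miTRUE;
    }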

    Volume shaders are normally called in three situations. When a material shader returns, the volume shader that the material shader left in the state->volume variable is called, without copying the state, as if it had been called as the last operation of the material shader. Copying the state is not necessary because the volume shader does not return to the material shader, so it is not necessary to preserve any variables in the state.

    Unless the shadow segment mode is in effect, volume shaders are also called when a light shader has returned; in this case the volume shader in state->volume is called once for the entire distance from the light source to the illuminated point (i.e., to the point that caused the material shader that sampled the light to be called). In shadow segment mode, volume shaders are not called for light rays but for every shadow ray segment from the illuminated point towards the light source. Some volume shaders may decide that they should not apply to light rays; this can be done by returning immediately if the state->type variable is miRAY_LIGHT.

    Finally, volume shaders are called after an environment shader was called. Note that if a volume shader is called after the material, light, or other shader, the return value of that other shader is discarded and the return value of the volume shader is used. The reason is that a volume shader can substitute a non-black color even if the original shader has given up. Volume shaders return miFALSE if no light can pass through the given volume, and miTRUE if there is a non-black result color.

    Material shaders have two separate state variables dealing with volumes, volume and refraction_volume. If the material shader casts a refraction or transparency ray, the tracing function will copy the refraction volume shader, if there is one, to the volume shader after copying the state. This means that the next intersection point finds the refraction volume in state->volume, which effectively means that once the ray has entered an object, that object's interior volume shader is used. However, the material shader is responsible for detecting when a refraction ray exits an object, and overwriting state->refraction_volume with an appropriate outside volume shader, such as state->camera->volume, or a volume shader found by following the state->parent links.

    Since volume shaders modify a color calculated by a previous material shader, environment shader, or light shader, they differ from these shaders in that they receive an input color in the result argument that they are expected to modify.


    Environment shaders

    Environment shaders provide a color for rays that leave the scene entirely, and for rays that would exceed the trace depth limit.


    Light Shaders

    Light shaders are called from other shaders by sampling a light using the mi_sample_light or mi_trace_light functions, which perform some calculations and then call the given light shader, or directly if a ray hits a light source. mi_sample_light may also request to be called more than once if an area light source is to be sampled. For an example of using mi_sample_light, see the section on material shaders above. mi_trace_light performs less exact shading for area lights, and is provided for backwards compatibility only.

    The light shader computes the amount of light contributed by the light source to a previous intersection point, stored in state->point. The calculation may be based on the direction state->dir to that point, and the distance state->dist from the light source to that point. There may also be shader parameters that specify directional and distance attenuation. Directional lights have no location; state->dist is undefined in this case.

    Light shaders are also responsible for shadow casting. Shadows are computed by finding all objects that are in the path of the light from the light source to the illuminated intersection point. This is done in the light shader by casting "shadow rays" after the standard light color computation including attenuation is finished. Shadow rays are cast from the light source back towards the illuminated point (or vice versa if shadow segment mode is enabled), in the same direction of the light ray. Every time an occluding object is found, that object's shadow shader is called, if it has one, which reduces the amount of light based on the object's transparency and color. If an occluding object is found that has no shadow shader, it is assumed to be opaque, so no light from the light source can reach the illuminated point.
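
    A minimal point-light sketch of the shadow-casting pattern described above; the shader name and parameters are hypothetical, and directional/distance attenuation is omitted:

    #include "shader.h"

    struct simple_light { miColor color; miBoolean shadow; };

    DLLEXPORT miBoolean simple_light(miColor *result, miState *state,
                                     struct simple_light *paras)
    {
        *result = *mi_eval_color(&paras->color);  /* base light color */
        if (*mi_eval_boolean(&paras->shadow)) {
            /* mi_trace_shadow casts the shadow rays; each occluder's
               shadow shader reduces *result, and an occluder with no
               shadow shader is opaque, making the call return miFALSE */
            if (!mi_trace_shadow(result, state))
                return miFALSE;  /* fully occluded: no light arrives */
        }
        return miTRUE;
    }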
  • What are shaders?

    The word shader means different things to different people and in different software. I sorta like the definition that exists in the Advanced RenderMan book (however, I am somewhat biased):

    All shaders answer the question "What is going on at this spot"

    That might seem too general, but it's just about the only way to describe what shaders can do. For example, in prman, you have definitions for surface shaders, displacement shaders, light shaders, volume shaders and imager shaders.

    In Mental Ray, you have a whole list including material shaders, texture shaders, shadow shaders, photon shaders, etc.

    In 3dsmax, as yet another example, they have generalized their shading system into the concept of maps and materials; each material can be a different "shader". The line between a map and a material/shader in prman is somewhat vague, while in max it's been split into two distinct areas.

    Friday, August 19, 2005

    Fractals




    Chapter 7 of "Texturing and Modeling: A Procedural Approach" by David Ebert et al.

    Fractals

    What is it?

    a geometrically complex object, with the complexity arising through repetition of form over some range of scale (size)

    Definition

    Imagine the object as a kind of cottage cheese, with lumps all of a particular size. A fractal can be built from it by (1) scaling down the lateral size and height of the lumps, (2) adding them back onto the original lumps, and (3) repeating this several times - and we have a FRACTAL!

    Technically speaking:
    Lateral size : frequency of the function (spatial frequency)
    Height : amplitude
    Amount by which lateral size is changed : lacunarity (Latin for "gap") - think of octaves in music

    In computer graphics, all fractals must be band-limited, because unlimited detail (1) violates the Nyquist sampling limit and causes aliasing, and (2) would require an infinite loop to construct.
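
    A minimal fractal-sum (fBm) sketch of the recipe above, reusing the pnoise3 basis from the Stripes post (the function itself is my own illustration): each octave shrinks the lateral size (raises the spatial frequency) by the lacunarity and shrinks the height (amplitude) by a gain, and the fixed octave count keeps the function band-limited:

    // fractal sum: repeatedly add scaled-down copies of the noise "lumps"
    double fbm(double x, double y, double z,
               int octaves, double lacunarity, double gain)
    {
        double sum = 0.0;
        double amp = 1.0;                  // height of the current octave
        for (int i = 0; i < octaves; i++)  // fixed count keeps it band-limited
        {
            sum += amp * pnoise3((float)x, (float)y, (float)z);
            x *= lacunarity; y *= lacunarity; z *= lacunarity;  // shrink lateral size
            amp *= gain;                   // shrink height (typically gain = 0.5)
        }
        return sum;
    }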

    Good for?

    Natural phenomena, e.g. mountains (form repeating over a range of scales), triangular fractals for snowflakes, fire, water.
    Turbulent character: clouds, steam.

    Wednesday, August 17, 2005

    Hypershade Editor

    How to map textures onto Polygons

    There are three ways to project and place a texture onto a model:

    Planar Mapping projects a texture map onto an object using the projection plane of the texture map. This type of projection is best suited to flat objects.

    Choose Polygons > Texture > Planar Mapping to open the options window.

    Cylindrical and Spherical Mapping bend a texture map into a cylindrical shape over a cylinder-shaped object or a spherical shape over a sphere-shaped object.



    1. Press F11, or press the right mouse button and select Face from the marking menu, then marquee-select the entire object or click to select the faces you want to map.
    2. Click the option box beside Edit Polygons > Texture > Cylindrical or Spherical Mapping to open the options window.



    Note:

    1. Do not select As projection or As stencil. Interactive placement is for NURBS, not polygonal geometry.

    2. If there are no UVs on the object (see UVs and polygonal texture mapping for information about UVs), the texture will not display in shaded Hardware Texturing mode. The texture can only be seen in the 3D view after you select a mapping technique, such as Cylindrical Mapping, to place the texture. These mapping methods create UVs if there are none on the object.