#macro Trace(P, D, recLev)
If the ray-sphere intersection macro was the core of the raytracer, then the Trace macro is practically everything
else, the "body" of the raytracer.
The Trace macro takes the starting point of a ray, the direction of the ray and a recursion count
(which should always be 1 when calling the macro from outside; 1 could be its default value if POV-Ray supported
default values for macro parameters). It calculates and returns a color for that ray.
This is the macro we call for each pixel we want to calculate. That is, the starting point of the ray is our camera
location and the direction is the direction of the ray starting from there and going through the "pixel" we
are calculating. The macro returns the color of that pixel.
What the macro does is determine which sphere (if any) the ray hits, then calculate the lighting for that
intersection point (which includes calculating reflection), and return the color.
The Trace macro is recursive, meaning that it calls itself. More specifically, it calls itself when it
wants to calculate the ray reflected from the surface of a sphere. The recLev value is used to stop this
recursion when the maximum recursion level is reached (ie. it calculates the reflection only if recLev < MaxRecLev).
Let's examine this relatively long macro part by part:
1.3.10.8.1 Calculating the closest intersection
#local minT = MaxDist;
#local closest = ObjAmnt;

// Find closest intersection:
#local Ind = 0;
#while(Ind < ObjAmnt)
  #local T = calcRaySphereIntersection(P, D, Ind);
  #if(T>0 & T<minT)
    #local minT = T;
    #local closest = Ind;
  #end
  #local Ind = Ind+1;
#end
A ray can hit several spheres and we need the closest intersection point (and to know which sphere it belongs
to). One could think that calculating the closest intersection is rather complicated, needing things like sorting all
the intersection points. However, it is quite simple, as seen in the code above.
As we remember from the previous part, the ray-sphere intersection macro returns a factor value which tells us by how
much we have to multiply the direction vector in order to get the intersection point. What we do is simply call
the ray-sphere intersection macro for each sphere and take the smallest returned value (which is greater than zero).
First we initialize the minT identifier, which will hold this smallest value, to something big (this is
where we need the MaxDist value, although modifying this code to work around this limitation is trivial
and left to the user). Then we go through all the spheres and call the ray-sphere intersection macro for each one.
Then we check whether the returned value was greater than 0 and smaller than minT, and if so, we assign the
value to minT. When the loop ends, minT holds the smallest positive intersection factor.
Note: we also store the index of the sphere which the closest intersection belongs
to in the closest identifier.
Here we use a small trick, related to its initial value: ObjAmnt. Why did we initialize it
to that? The purpose was to initialize it to some value which is not a legal index to a sphere (ObjAmnt
is not a legal index as the indices go from 0 to ObjAmnt-1); a negative value would have worked as well,
it really does not matter. If the ray does not hit any sphere, this identifier is left unchanged, and so we can
detect that afterwards.
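The same closest-hit loop can be sketched in Python (not POV-Ray SDL; `ray_sphere` is a hypothetical stand-in for the calcRaySphereIntersection macro, and `None` plays the role of the ObjAmnt sentinel):

```python
import math

def ray_sphere(P, D, center, radius):
    """Smallest positive factor t with |P + t*D - center| = radius, or 0 on a miss.
    (Stand-in for the tutorial's ray-sphere intersection macro.)"""
    oc = tuple(p - c for p, c in zip(P, center))
    a = sum(d * d for d in D)
    b = 2 * sum(d * o for d, o in zip(D, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return 0
    for t in ((-b - math.sqrt(disc)) / (2 * a), (-b + math.sqrt(disc)) / (2 * a)):
        if t > 0:
            return t
    return 0

def closest_intersection(P, D, spheres, max_dist=1e9):
    """Mirror of the SDL loop: scan all spheres, keep the smallest positive t.
    Returns (max_dist, None) if the ray misses everything."""
    min_t, closest = max_dist, None          # None = "not a legal sphere index"
    for ind, (center, radius) in enumerate(spheres):
        t = ray_sphere(P, D, center, radius)
        if 0 < t < min_t:
            min_t, closest = t, ind
    return min_t, closest
```

As in the SDL version, a miss leaves the sentinel untouched, which the caller checks afterwards.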
1.3.10.8.2 If the ray doesn't hit anything
// If not found, return background color:
#if(closest = ObjAmnt)
#local Pixel = BGColor;
If the ray did not hit any sphere, we just return the background color (defined by the BGColor
identifier).
1.3.10.8.3 Initializing color calculations
Now comes one of the most interesting parts of the raytracing process: How do we calculate the color of the
intersection point?
First we have to precalculate a couple of things:
#else
// Else calculate the color of the intersection point:
#local IP = P+minT*D;
#local R = Coord[closest][1].x;
#local Normal = (IP-Coord[closest][0])/R;
#local V = P-IP;
#local Refl = 2*Normal*(vdot(Normal, V)) - V;
Naturally we need the intersection point itself (needed to calculate the normal vector and as the starting point of
the reflected ray). This is calculated into the IP identifier with the formula which I have been
repeating a few times during this tutorial.
Then we need the normal vector of the surface at the intersection point. A normal vector is a vector perpendicular
(ie. at 90 degrees) to the surface. For a sphere this is very easy to calculate: It is just the vector from the center
of the sphere to the intersection point.
Note: we normalize it (ie. convert it into a unit vector, ie. a vector of length 1)
by dividing it by the radius of the sphere. The normal vector needs to be normalized for lighting calculation.
Now a tricky one: We need the direction of the reflected ray. This vector is of course needed to calculate the
reflected ray, but it is also needed for specular lighting.
This is calculated into the Refl identifier in the code above. What we do is take the vector from
the intersection point to the starting point (P-IP) and "mirror" it with respect to the normal
vector. The formula for "mirroring" a vector V with respect to a unit vector (let's call it Axis)
is:
MirroredV = 2*Axis*(Axis·V) - V
(We could look at the theory behind this formula in more detail, but let's not go too deep into math in this
tutorial, shall we?)
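The three precalculated values can be sketched in Python with plain tuples (a hypothetical helper, not part of the scene code; the names mirror the SDL identifiers):

```python
def surface_vectors(P, D, t, center, radius):
    """Intersection point, unit normal and mirrored (reflected) vector
    for a ray P + t*D hitting a sphere (center, radius)."""
    IP = tuple(p + t * d for p, d in zip(P, D))                 # IP = P + minT*D
    Normal = tuple((ip - c) / radius for ip, c in zip(IP, center))  # normalized by R
    V = tuple(p - ip for p, ip in zip(P, IP))                   # V = P - IP
    ndotv = sum(n * v for n, v in zip(Normal, V))               # Axis·V
    Refl = tuple(2 * n * ndotv - v for n, v in zip(Normal, V))  # 2*Axis*(Axis·V) - V
    return IP, Normal, Refl
```

For a ray shot straight at a sphere, the mirrored vector points straight back, which is a quick sanity check of the formula.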
1.3.10.8.4 Going through the light sources
// Lighting:
#local Pixel = AmbientLight;
#local Ind = 0;
#while(Ind < LightAmnt)
#local L = LVect[Ind][0];
Now we can calculate the lighting of the intersection point. For this we need to go through all the light sources.
Note: L contains the direction vector which points towards the light
source, not its location.
We also initialize the color to be returned (Pixel ) with the ambient light value (given in the global
settings part). The goal is to add colors to this (the colors come from diffuse and specular lighting, and
reflection).
The very first thing to do for calculating the lighting for a light source is to see if the light source is
illuminating the intersection point in the first place (this is one of the nicest features of raytracing: shadow
calculations are laughably easy to do):
// Shadowtest:
#local Shadowed = false;
#local Ind2 = 0;
#while(Ind2 < ObjAmnt)
  #if(Ind2!=closest & calcRaySphereIntersection(IP,L,Ind2)>0)
    #local Shadowed = true;
    #local Ind2 = ObjAmnt;
  #end
  #local Ind2 = Ind2+1;
#end
What we do is go through all the spheres (we skip the current sphere; although that is not strictly necessary, a
little optimization is still a little optimization), take the intersection point as starting point and the light
direction as the direction vector, and see if the ray-sphere intersection test returns a positive value for any of
them (and quit the loop immediately when one is found, as we do not need to check the rest anymore).
The result of the shadow test is put into the Shadowed identifier as a boolean value (true
if the point is shadowed).
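The shadow test, sketched in Python (hypothetical names; the intersection test is passed in as a function `intersect(P, D, sphere)` assumed to return the smallest positive factor, or 0 on a miss):

```python
def is_shadowed(IP, L, spheres, skip_index, intersect):
    """True if any sphere other than the one we hit blocks the ray
    from the intersection point IP toward the light direction L."""
    for ind, sphere in enumerate(spheres):
        if ind != skip_index and intersect(IP, L, sphere) > 0:
            return True   # early exit, like setting Ind2 = ObjAmnt in the SDL
    return False
```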
The diffuse component of lighting is generated when a light ray hits a surface and it is reflected equally to all
directions. The brightest part of the surface is where the normal vector points directly in the direction of the
light. The lighting diminishes in relation to the cosine of the angle between the normal vector and the light vector.
#if(!Shadowed)
  // Diffuse:
  #local Factor = vdot(Normal, L);
  #if(Factor > 0)
    #local Pixel =
      Pixel + LVect[Ind][1]*Coord[closest][2]*Factor;
  #end
The code for diffuse lighting is surprisingly short.
There is an extremely nice trick in mathematics to get the cosine of the angle between two unit vectors: It is
their dot-product.
What we do is calculate the dot-product of the normal vector and the light vector (both have been normalized
previously). If the dot-product is negative it means that the normal vector points away from the
light vector; thus we are only interested in positive values.
We then add to the pixel color the color of the light source multiplied by the color of the surface of the sphere
multiplied by the dot-product. This gives us the diffuse component of the lighting.
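The diffuse term as a minimal Python sketch (hypothetical helper; colors and unit vectors are plain tuples):

```python
def diffuse_term(normal, light_dir, light_color, surface_color):
    """Lambertian diffuse: light * surface * max(0, N·L).
    Both normal and light_dir must be unit vectors."""
    factor = sum(n * l for n, l in zip(normal, light_dir))  # cosine via dot-product
    if factor <= 0:
        return (0.0, 0.0, 0.0)   # surface faces away from the light
    return tuple(lc * sc * factor for lc, sc in zip(light_color, surface_color))
```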
The specular component of lighting comes from the fact that most surfaces do not reflect light equally to all
directions, but they reflect more light to the "reflected ray" direction, that is, the surface has some
mirror properties. The brightest part of the surface is where the reflected ray points in the direction of the light.
Photorealistic lighting is a very complicated issue and there are lots of different lighting models out there,
which try to simulate real-world lighting more or less accurately. For our simple raytracer we just use a simple
Phong lighting model, which is more than enough here.
  // Specular:
  #local Factor = vdot(vnormalize(Refl), L);
  #if(Factor > 0)
    #local Pixel = Pixel + LVect[Ind][1]*
      pow(Factor, Coord[closest][3].x)*
      Coord[closest][3].y;
  #end
The calculation is similar to the diffuse lighting, with the following differences:
- We do not use the normal vector, but the reflected vector.
- The color of the surface is not taken into account (a very simple Phong lighting model).
- We do not take the dot-product as-is, but raise it to a power given in the scene definition (the "phong size").
- We use a brightness factor given in the scene definition to multiply the color (the "phong amount").
Thus, the color we add to the pixel color is the color of the light source multiplied by the dot-product (which is
raised to the given power) and by the given brightness amount.
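The specular term in the same Python sketch style (hypothetical helper; `phong_size` and `phong_amount` correspond to Coord[closest][3].x and .y):

```python
import math

def specular_term(refl, light_dir, light_color, phong_size, phong_amount):
    """Phong highlight: light * amount * max(0, R·L)^size.
    refl is normalized here; light_dir must already be a unit vector."""
    length = math.sqrt(sum(r * r for r in refl))
    factor = sum((r / length) * l for r, l in zip(refl, light_dir))
    if factor <= 0:
        return (0.0, 0.0, 0.0)
    return tuple(lc * phong_amount * factor ** phong_size for lc in light_color)
```

Note that unlike the diffuse term, the surface color does not appear anywhere in the product.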
Then we close the code blocks:
#end // if(!Shadowed)
#local Ind = Ind+1;
#end // while(Ind < LightAmnt)
1.3.10.8.8 Reflection Calculation
// Reflection:
#if(recLev < MaxRecLev & Coord[closest][1].y > 0)
  #local Pixel =
    Pixel + Trace(IP, Refl, recLev+1)*Coord[closest][1].y;
#end
Another nice aspect of raytracing is that reflection is very easy to calculate.
Here we check that the recursion level has not reached the limit and that the sphere has a reflection component
defined. If both conditions hold, we add the reflected component (the color of the reflected ray multiplied by the
reflection factor) to the pixel color.
This is where the recursive call happens (the macro calls itself). The recursion level (recLev) is increased by one
for the next call so that somewhere down the line, the series of Trace() calls will know to stop (preventing a ray
from bouncing back and forth forever between two mirrors). This is basically how the max_trace_level global setting
works in POV-Ray.
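The effect of the recursion cap can be shown with a toy scalar model (hypothetical, not the scene code): pretend every ray hits the same mirror-like surface, so the rec_lev check is the only thing stopping the recursion, and each level adds the next bounce scaled by the reflection factor:

```python
MAX_REC_LEV = 5  # counterpart of MaxRecLev / POV-Ray's max_trace_level

def trace(rec_lev, base_color, reflection):
    """Toy Trace: every bounce adds base_color, attenuated by 'reflection'.
    Without the rec_lev < MAX_REC_LEV guard this would recurse forever."""
    pixel = base_color
    if rec_lev < MAX_REC_LEV and reflection > 0:
        pixel += reflection * trace(rec_lev + 1, base_color, reflection)
    return pixel
```

With reflection 0.5 the contributions form a truncated geometric series (1 + 0.5 + 0.25 + ...), so the capped recursion converges on nearly the same color an infinite bounce count would give.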
Finally, we close the code blocks and return the pixel color from the macro:
#end // else
Pixel
#end