Adding lights is the first step toward a Whitted raytracer. For each ray that hits an object, several secondary rays can be cast: reflection, refraction and shadow rays. The last kind is the one generated by lights.
Lights and shadow rays
Without light, objects look bland and seem to have no depth. A scene can be lit by several light sources. When a camera ray hits an object, the intersection point can be illuminated by one or several of those sources, and each of them contributes to the color of the object. If the light direction is parallel to the normal of the object at the intersection point, the contribution is maximum; if it is orthogonal, the contribution is zero. The scalar product is the tool used to compute this quantity.
Each light source will be tested one after another. If there is no object between the light source and the intersection point, the light contribution will be added.
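As a rough sketch of this idea (written with the project's Vector3df type, which is assumed to provide the scalar product through operator*; the function name is illustrative), the diffuse factor for one light could be computed as:

// normal: surface normal at the intersection point (assumed normalized)
// toLight: direction from the intersection point toward the light (assumed normalized)
float diffuseFactor(const Vector3df& normal, const Vector3df& toLight)
{
  // 1 when the light direction is parallel to the normal, 0 when orthogonal
  float cosphi = normal * toLight;
  // a negative value means the light is behind the surface: no contribution
  return cosphi > 0.f ? cosphi : 0.f;
}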
First, in the raytracer, the main method will be modified to get the normal through the MaterialPoint structure.
void Raytracer::computeColor(const Ray& ray, Color& color, unsigned int level=0) const
{
  float dist;
  // find the closest primitive hit by the ray within [tnear, tfar]
  Primitive* primitive = scene->getFirstCollision(ray, dist, tnear, tfar);
  if(primitive == NULL)
    return;

  // retrieve the color and the normal at the intersection point
  MaterialPoint caracteristics;
  primitive->computeColorNormal(ray, dist, caracteristics);

  // let the scene accumulate the contribution of each light source
  color = scene->computeColor(ray.origin() + dist * ray.direction(), caracteristics, primitive);
}
Now, I add in the primitive class a property named diffuse, which indicates how much light is diffused by the object.
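As a sketch only, assuming the property is stored as a plain float member (setDiffuse and m_diffuse are illustrative names; getDiffuse is the accessor used below):

class Primitive
{
public:
  // fraction of the incoming light diffused by the object
  void setDiffuse(float diffuse)
  {
    m_diffuse = diffuse;
  }

  float getDiffuse() const
  {
    return m_diffuse;
  }

  // ... intersection and shading interface not shown ...

private:
  float m_diffuse;
};

With this property available, the scene can be modified: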
const Color SimpleScene::computeColor(const Point3df& center, const MaterialPoint& caracteristics, Primitive* primitive)
{
  Color t_color(0.);
  for(std::vector<Light*>::const_iterator it = lights.begin(); it != lights.end(); ++it)
  {
    // shadow ray from the intersection point toward the light source
    Vector3df path = (*it)->getCenter() - center;
    float pathSize = sqrt(norm2(path));
    path.normalize();
    Ray ray(center, path);
    // skip this light if another primitive blocks the path
    if(testCollision(ray, pathSize))
      continue;
    // scalar product between the light direction and the normal, scaled by the diffuse coefficient
    float cosphi = path * caracteristics.normal * primitive->getDiffuse();
    if(cosphi < 0.)
      continue;
    t_color += mult((caracteristics.color * cosphi), (*it)->computeColor(ray, pathSize));
  }
  return t_color;
}
bool SimpleScene::testCollision(const Ray& ray, float dist)
{
  for(std::vector<Primitive*>::const_iterator it = primitives.begin(); it != primitives.end(); ++it)
  {
    float t_dist;
    bool test = (*it)->intersect(ray, t_dist);
    // 0.0001f avoids counting the surface the shadow ray starts from as an occluder
    if(test && (0.0001f < t_dist) && (t_dist < dist))
    {
      return true;
    }
  }
  return false;
}
Now, there are several ways of computing the light color. You can simply say that a light broadcasts the same color everywhere, or you can use a more physical equation where the color fades as the object gets farther from the light. As the light emits in every direction, the decrease is proportional to the square of the distance. This is what I used at first (in the newest versions, I've used the non-physical equation for some comparison tests).
Color Light::computeColor(const Ray& ray, float dist)
{
return color * (1. / (dist * dist));
}
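For comparison, a sketch of the non-physical option mentioned above (the light broadcasts the same color everywhere, whatever the distance) would simply be:

Color Light::computeColor(const Ray& ray, float dist)
{
  // constant intensity: the distance is ignored
  return color;
}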
Result
Here is the result of this computation:

One constant is hardcoded: 0.0001f, the minimum distance a shadow ray must travel before a hit is counted. It is meant to stop the ray from re-intersecting the surface it starts from, and it is the reason why there are some artefacts in the result image (on the red and the green spheres). If this number is raised to 0.001, the image will be more accurate. But then, if the spheres are closer to each other, the shadows may become inaccurate...
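One way to keep this trade-off visible is to replace the literal with a named constant (the name shadow_epsilon is illustrative) so it can be tuned in a single place:

// minimum distance along a shadow ray before a hit counts, so that the
// ray does not immediately re-intersect the surface it starts from
static const float shadow_epsilon = 0.0001f;

bool SimpleScene::testCollision(const Ray& ray, float dist)
{
  for(std::vector<Primitive*>::const_iterator it = primitives.begin(); it != primitives.end(); ++it)
  {
    float t_dist;
    bool test = (*it)->intersect(ray, t_dist);
    if(test && (shadow_epsilon < t_dist) && (t_dist < dist))
    {
      return true;
    }
  }
  return false;
}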