
 
used in the system will be made in the subsequent 
section: 
3.1 Illumination Estimation 
The Illumination Estimation method, used to 
determine how the light of a given scene is 
configured, operates under a number of constraints 
and assumptions, which are listed here. These 
assumptions apply to the entire Augmented Reality 
system. 
The system is only usable outdoors during 
daytime. This follows from the assumption that the 
sun is the only major light source in an outdoor 
scene, and therefore the only direct light source that 
needs estimation, while the sky provides secondary 
lighting, which is estimated as ambient light. 
The system is further constrained to running only 
under conditions with no precipitation, as 
precipitation alters the reflectance properties of the 
surfaces in the scene.  
Furthermore, the scene to be augmented must 
contain diffuse surfaces, as these form the basis for 
estimating the scene lighting. 
For the system to be able to estimate the light of 
the scene, a 3D model of the scene is also required, 
as well as an HDRI environment map recorded at the 
centre of the scene. Finally, as the light is estimated 
from images of the scene recorded by a camera, the 
camera needs to be calibrated to the scene. The 3D 
model of the environment required by this system 
need only be a simple representation containing the 
main surfaces of the scene; e.g. a square building 
need only be represented as a box. 
During calibration of the system to the scene, the 
user is prompted to mark, on an environment map of 
the scene, which visible surfaces are considered 
diffuse and can be used for estimation. 
When the system has been calibrated, the 
Illumination Estimation is able to analyse the images 
of the scene taken by the camera and, from the 3D 
model, the environment map, and a sun model, 
determine the intensity of the direct light from the 
sun, as well as the intensity of the indirect lighting 
from the reflecting surfaces in the scene. 
The result of the estimation is passed on to the 
rendering pipeline as RGB intensities for direct and 
ambient lighting, and as a light vector giving the 
direction to the sun. The light parameters are 
compliant with the Phong shading model, as a 
variant of this model is used to derive the estimated 
light parameters. 
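This parameter handoff can be sketched as follows. The function and example values below are illustrative assumptions, not taken from the paper; the sketch shows only the diffuse part of the Phong model, with the estimated direct and ambient RGB intensities and the sun direction vector as inputs.

```python
def shade_phong_diffuse(albedo, normal, sun_dir, sun_rgb, ambient_rgb,
                        in_shadow=False):
    """Diffuse part of the Phong model with the estimated light parameters.

    albedo, sun_rgb and ambient_rgb are RGB triples; normal and sun_dir
    are unit 3-vectors, sun_dir pointing towards the sun.
    """
    n_dot_l = sum(n * l for n, l in zip(normal, sun_dir))
    direct = 0.0 if in_shadow else max(0.0, n_dot_l)
    return tuple(a * (amb + direct * sun)
                 for a, amb, sun in zip(albedo, ambient_rgb, sun_rgb))

# A horizontal surface lit by a sun 45 degrees above the horizon.
albedo = (0.6, 0.5, 0.4)
sun_dir = (0.0, 0.7071, 0.7071)   # direction vector towards the sun
sun_rgb = (1.0, 0.95, 0.9)        # estimated direct RGB intensity
ambient_rgb = (0.2, 0.25, 0.3)    # estimated ambient (sky) RGB intensity

lit = shade_phong_diffuse(albedo, (0.0, 1.0, 0.0),
                          sun_dir, sun_rgb, ambient_rgb)
shadowed = shade_phong_diffuse(albedo, (0.0, 1.0, 0.0),
                               sun_dir, sun_rgb, ambient_rgb, in_shadow=True)
```

A point in shadow receives only the ambient term, which is exactly the constant colour contribution discussed in Section 3.2.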
The Illumination Estimation analyses the images 
using 500 randomly selected pixel samples, from 
which the light parameters of the employed model 
are estimated. Under the assumption that a sun 
model provides the direction vector to the sun, the 
method is able to estimate the light intensity of both 
direct and indirect light in the scene, provided the 
camera has surfaces both in light and in shadow 
within its frame. For example, the method will 
estimate the RGB intensity of the sun as almost zero 
under heavy cloud cover, because it sees no 
noticeable difference between the areas in direct 
light and the areas that should be in shadow. 
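One way such a sample-based estimation can work is sketched below. This is not the paper's exact formulation; it assumes the diffuse model, with the albedo and the geometry term n·l of each sample known from the calibrated 3D model and sun model, so that the per-channel intensities become the solution of a linear least-squares problem.

```python
import numpy as np

def estimate_light(samples):
    """Least-squares estimate of (ambient, direct) intensity, one channel.

    Each sample is (pixel, albedo, n_dot_l, lit): albedo and the geometry
    term n.l come from the calibrated model, `lit` from a shadow test.
    Diffuse model:  pixel = albedo * (E_ambient + lit * n_dot_l * E_sun)
    """
    A = [[1.0, n_dot_l if lit else 0.0] for _, _, n_dot_l, lit in samples]
    b = [pixel / albedo for pixel, albedo, _, _ in samples]
    (e_ambient, e_sun), *_ = np.linalg.lstsq(np.array(A), np.array(b),
                                             rcond=None)
    return e_ambient, e_sun

# Synthetic check: build 500 samples from known light and recover it.
rng = np.random.default_rng(0)
E_AMB, E_SUN = 0.2, 1.0
samples = []
for _ in range(500):
    albedo = rng.uniform(0.2, 0.9)
    n_dot_l = rng.uniform(0.0, 1.0)
    lit = bool(rng.random() < 0.5)
    pixel = albedo * (E_AMB + (n_dot_l if lit else 0.0) * E_SUN)
    samples.append((pixel, albedo, n_dot_l, lit))

est_amb, est_sun = estimate_light(samples)
```

Under heavy cloud cover, lit and shadowed samples carry the same radiance, so the direct term in the least-squares fit collapses towards zero, which is consistent with the behaviour described above.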
In the current implementation the light parameters 
are estimated for every frame, and the system runs at 
10 fps. 
The estimation of light and the shading of the 
virtual objects are furthermore based on the 
assumption that the sunlight in an outdoor scene is 
purely directional. This is not entirely correct in 
reality, but the angular difference in incoming 
sunlight at two points in a scene that are e.g. 100 
metres apart is insignificant. The system therefore 
uses the light direction given by the light estimation 
throughout the entire scene, which also helps speed 
up all shading and shadowing calculations 
performed in real time. 
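The directional-light assumption is easy to justify with a back-of-the-envelope calculation; the Earth-sun distance below is an approximate physical constant, not a figure from the paper.

```python
import math

SUN_DISTANCE_M = 1.496e11   # approximate mean Earth-sun distance
separation_m = 100.0        # two scene points 100 metres apart

# Worst-case difference in sun direction between the two points.
angle_rad = math.atan2(separation_m, SUN_DISTANCE_M)
angle_deg = math.degrees(angle_rad)
```

The difference is on the order of 10^-8 degrees, far below anything a shading calculation could resolve, so a single light vector for the whole scene is safe.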
Another assumption of the project has been that 
outdoor environments with brick buildings and tiled 
stone surfaces are close to diffuse, which is used in 
deriving the illumination parameters. 
3.2 Basic Rendering 
This section describes how a virtual object is 
augmented into one frame when the local light 
parameters are known. 
When the lighting of the given scene has been 
estimated, it is used to place an object in the scene 
so that it subjectively appears to be part of the scene, 
rather than an object manipulated into the frame. 
The simplest way to do this is to place the virtual 
object within the scene and apply the Phong shading 
model, supported by any 3D hardware, in 
conjunction with the estimated light parameters. 
This results in an augmented virtual object which 
seemingly matches the lighting of the surrounding 
scene, except that the surfaces of the object not in 
direct sunlight will receive only a constant colour 
contribution from the surroundings. 
To maintain the illusion that the virtual object is 
an integral part of the real scene, shadows play as 
big a role as the shading itself. Real objects must 
cast shadows onto the virtual object; the virtual 
object must cast shadows onto the real environment. 
To cast shadows from the virtual objects onto the 
real environment and vice versa, the Shadow 
Volume algorithm is used. 
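The geometric core of the Shadow Volume algorithm can be sketched as follows: find the silhouette edges of an occluder (edges shared by one light-facing and one back-facing triangle with respect to the sun direction) and extrude them away from the light to form the sides of the volume. The mesh, winding convention and extrusion distance below are illustrative assumptions; in the actual algorithm these quads are rendered into the stencil buffer to mark shadowed pixels.

```python
def _cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def _dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def _sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def silhouette_edges(vertices, triangles, light_dir):
    """Edges lying between light-facing and back-facing triangles."""
    facing_by_edge = {}
    for tri in triangles:
        v0, v1, v2 = (vertices[i] for i in tri)
        normal = _cross(_sub(v1, v0), _sub(v2, v0))
        facing = _dot(normal, light_dir) > 0.0
        for a, b in ((tri[0], tri[1]), (tri[1], tri[2]), (tri[2], tri[0])):
            facing_by_edge.setdefault((min(a, b), max(a, b)), []).append(facing)
    return [e for e, f in facing_by_edge.items()
            if len(f) == 2 and f[0] != f[1]]

def extrude_sides(vertices, edges, light_dir, distance):
    """One quad per silhouette edge, extruded along -light_dir."""
    offset = tuple(-distance * c for c in light_dir)
    move = lambda v: tuple(x + o for x, o in zip(v, offset))
    return [(vertices[a], vertices[b], move(vertices[b]), move(vertices[a]))
            for a, b in edges]

# A tetrahedron lit from directly above: only the slanted face is lit,
# so its three edges form the silhouette.
verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
tris = [(0, 2, 1), (0, 1, 3), (0, 3, 2), (1, 2, 3)]  # outward winding
light_dir = (0.0, 0.0, 1.0)                          # towards the sun

edges = silhouette_edges(verts, tris, light_dir)
quads = extrude_sides(verts, edges, light_dir, distance=100.0)
```

Because the sun is treated as purely directional, the same extrusion direction can be reused for every occluder in the scene, which is part of what keeps the shadow calculations fast.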
The Shadow Volume algorithm (Crow, 1977) has 
been modified to use two sets of shadows: Virtual 
GRAPP 2006 - COMPUTER GRAPHICS THEORY AND APPLICATIONS