ther away. In this way the kernel function reduces bias
where the change in density is significant.
In his PhD thesis (Jensen, 1996), Jensen presents
the cone filter. This filter is used to reduce bias, such
that edges and structure in the illumination are less
blurred. As a kernel in the general radiance estimate,
the cone filter has the following form:
K(y) =
\begin{cases}
\dfrac{1 - \frac{\sqrt{|y|}}{k}}{\left(1 - \frac{2}{3k}\right)\pi} & \text{if } |y| < 1,\\[4pt]
0 & \text{otherwise,}
\end{cases}
\qquad (3)
where k ≥ 1 is a constant which controls the steepness
of the filter slope.
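To make the normalisation in Eq. (3) concrete, here is a minimal sketch of the cone filter as a function. It assumes, by analogy with Eq. (4), that the argument y is the squared distance to the estimate centre normalised by the support radius; both this reading and the function name are ours, not the paper's:

```python
import math

def cone_kernel(y: float, k: float = 1.0) -> float:
    """Cone filter kernel of Eq. (3).

    y is read as the squared, radius-normalised distance to the
    estimate centre (an assumption from context), and k >= 1
    controls the steepness of the filter slope.  The denominator
    (1 - 2/(3k)) * pi makes the kernel integrate to 1 over the
    unit disc."""
    if abs(y) >= 1.0:
        return 0.0
    return (1.0 - math.sqrt(abs(y)) / k) / ((1.0 - 2.0 / (3.0 * k)) * math.pi)
```

A quick consistency check: for k = 1 the peak value is 1 / ((1/3)π) = 3/π, and numerically integrating the kernel over the unit disc gives 1 for any admissible k.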
Another useful kernel is the Epanechnikov kernel.
The Epanechnikov kernel is known in statistics for
its bias-reducing properties, and it is furthermore popular
because it is computationally inexpensive. In
computer graphics, Walter has employed it with good
results in (Walter, 1998). In 2D the Epanechnikov kernel
is given by
K(y) =
\begin{cases}
\dfrac{2}{\pi}\,(1 - y) & \text{if } y < 1,\\[4pt]
0 & \text{otherwise.}
\end{cases}
\qquad (4)
In this paper we use the Epanechnikov kernel to
examine our proposed method.
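For completeness, a minimal sketch of Eq. (4), again reading the argument y as the squared, radius-normalised distance to the estimate centre (our assumption; the function name is illustrative):

```python
import math

def epanechnikov_kernel(y: float) -> float:
    """2D Epanechnikov kernel of Eq. (4).

    With y read as the squared, radius-normalised distance to the
    estimate centre (an assumption from context), the kernel
    integrates to 1 over the unit disc and vanishes outside it."""
    return (2.0 / math.pi) * (1.0 - y) if y < 1.0 else 0.0
```

The kernel is cheap to evaluate (one multiply and one subtract inside the support), which is the computational advantage the text refers to.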
2.2 Bias Reduction
Bias reduction in density estimation is a well-examined
subject, both within the field of statistics and the
field of computer graphics. Besides filtering, numerous
methods addressing the issue have been presented.
The first method for reducing bias in photon mapping
was suggested by Jensen (Jensen and Christensen,
1995). The method is called differential checking,
and it reduces bias by making sure that the
support radius of the radiance estimate does not cross
boundaries of distinct lighting features. This is done
by expanding the support radius only as long as the
estimate does not increase or decrease when more
photons are included in the estimate.
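The idea can be sketched as follows, under illustrative assumptions (2D photon positions, a plain unweighted density estimate, and a relative-change threshold `eps`; all names and parameters here are ours, not from the original paper):

```python
import math

def differential_checking_radius(photons, x, r_step, r_max, eps=0.05):
    """Hedged sketch of differential checking (Jensen and
    Christensen, 1995): grow the support radius only while the
    density estimate stays roughly constant; a consistent increase
    or decrease signals the boundary of a lighting feature."""

    def density(r):
        count = sum(1 for p in photons
                    if (p[0] - x[0]) ** 2 + (p[1] - x[1]) ** 2 <= r * r)
        return count / (math.pi * r * r)

    r = r_step
    prev = density(r)
    while r + r_step <= r_max:
        cur = density(r + r_step)
        # stop expanding once the estimate drifts by more than eps
        if prev > 0 and abs(cur - prev) / prev > eps:
            break
        r += r_step
        prev = cur
    return r, prev
```

On uniformly distributed photons the radius expands all the way to `r_max`, while near a sharp density edge the drift test stops the expansion early.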
Myszkowski et al. (Myszkowski, 1997) suggested
solving the problem in much the same way as Jensen
did with differential checking; however, they made
the method easier to control and more robust with respect
to noise. Myszkowski et al. increase the support
radius iteratively, estimating the radiance at each
step. If a new estimate differs from the previous one by more
than can be attributed to variance, the iteration stops,
as the difference is then assumed to be caused by bias.
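One possible reading of this stopping rule is a sigma test on the photon count: grow the radius while successive estimates differ by no more than variance alone can explain. The sketch below is ours; the concrete test, the `c`-sigma form, and all names are illustrative assumptions, not Myszkowski et al.'s actual formulation:

```python
import math

def expand_until_biased(density_at, n_at, r0, r_step, r_max, c=2.0):
    """Hedged sketch of iterative radius expansion with a variance
    test.  `density_at(r)` returns the density estimate at radius r
    and `n_at(r)` the photon count it is based on; a difference
    larger than c standard deviations is attributed to bias."""
    r, prev = r0, density_at(r0)
    while r + r_step <= r_max + 1e-9:
        cur = density_at(r + r_step)
        n = max(n_at(r + r_step), 1)
        sigma = prev / math.sqrt(n)        # rough std. dev. of the estimate
        if abs(cur - prev) > c * sigma:    # difference attributed to bias
            break
        r, prev = r + r_step, cur
    return r, prev
```

With a perfectly constant density the test never fires and the radius grows to `r_max`, which matches the intended behaviour in feature-free regions.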
More recently, Schregle (Schregle, 2003) followed
up on this work, using the same strategy but optimizing
speed and usability. Speed is optimized by using a
binary search for the optimal support radius. The
search starts from a range between a user-defined maximum and
minimum support radius. The range is
repeatedly split, and the candidate whose error is most likely
to be caused by variance rather than bias is retained.
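The bisection itself can be sketched in a few lines. Here `looks_unbiased(r)` stands in for the statistical test deciding whether the error at radius r is explainable by variance; the test, the function names, and the iteration count are our illustrative assumptions:

```python
def schregle_radius(looks_unbiased, r_min, r_max, iters=20):
    """Hedged sketch of a binary search for the support radius:
    find the largest radius in [r_min, r_max] whose error still
    looks like variance rather than bias."""
    lo, hi = r_min, r_max
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if looks_unbiased(mid):
            lo = mid   # error explainable by variance: try a larger radius
        else:
            hi = mid   # bias detected: shrink the support
    return lo
```

Because the range halves each step, 20 iterations pin the radius down to about a millionth of the initial range, which is why this is much faster than expanding the radius in fixed increments.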
Shirley et al. (Shirley et al., 1995) introduced an
algorithm for estimating global illumination. Like
photon mapping, this algorithm uses density estimation
to approximate the illumination from particles
generated during a Monte Carlo-based particle tracing
step. However, unlike photon mapping, the algorithm
is geometry-dependent: the illumination is tied
to the geometry. They called the algorithm the density
estimation framework, and they refined it in a series of
papers.
The first edition of their framework did not try to
control bias. In (Walter et al., 1997) they extended
the framework to handle bias near polygonal boundaries.
This was done by converting the density estimation
problem into one of regression. In this way they
could use common regression techniques¹ to eliminate
boundary bias.
Later, in his PhD thesis (Walter, 1998), Walter reduced
bias by controlling the support radius of the
estimate, using statistics to distinguish noise from bias.
Drawing on the field of human perception, he used
a perceptual measure to control the support radius such that
noise in the estimate was imperceptible to the human
eye.
Walter recognized that if bias was to be significantly
reduced using his method, perceptual noise
had to be accepted in the vicinity of prominent edges
and other strong lighting features. This is a common
problem which also affects differential checking and
the methods of both Schregle and Myszkowski. In
the proximity of strong features, such as the edges
of a caustic, the support radius stops expanding, and
the estimate rests on few photons. Consequently, when
estimates are made close to edges, the support is limited
and noise may occur.
In diffusion based photon mapping we employ
the concept of nonlinear anisotropic diffusion in the
radiance estimate of photon mapping. Nonlinear
anisotropic diffusion is a well-examined and popular
technique within the field of image analysis. It is a
filtering technique that adapts its smoothing according
to the image structure; it smooths along edges but
not across them. It is known to be robust
and effective (Weickert, 1998). To our knowledge the
technique has not previously been employed in connection with
photon mapping.
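To illustrate the edge-preserving behaviour, here is one explicit step of Perona-Malik nonlinear diffusion on a small grid. This is a standard image-analysis formulation, shown only to convey the idea of structure-adaptive smoothing; it is not the radiance-estimate formulation of this paper, and the parameters `kappa` and `dt` are illustrative:

```python
import math

def perona_malik_step(img, kappa=0.1, dt=0.2):
    """One explicit Perona-Malik diffusion step on a 2D grid
    (list of lists of floats).  The flux between neighbouring
    cells is damped by an edge-stopping function of the local
    difference, so large jumps (edges) survive while small
    fluctuations (noise) are smoothed away."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    g = lambda d: math.exp(-(d / kappa) ** 2)   # edge-stopping function
    for i in range(h):
        for j in range(w):
            s = 0.0
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < h and 0 <= nj < w:
                    d = img[ni][nj] - img[i][j]
                    s += g(abs(d)) * d          # flux, damped across edges
            out[i][j] = img[i][j] + dt * s
    return out
```

On a unit step edge the flux across the edge is essentially zero, so the edge stays sharp, while small fluctuations in a flat region contract toward their mean; this is precisely the "smooth along edges, not across" behaviour described above.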
In contrast to the approaches of Myszkowski, Schregle
and Walter, our method smooths along edges and
structures; consequently, its support is not limited
in the proximity of these features.
¹ Specifically, they used locally-weighted polynomial
least-squares regression to eliminate boundary bias.
GRAPP 2006 - COMPUTER GRAPHICS THEORY AND APPLICATIONS