twice in the sequence. This is called a loop closing.
A correct topological map results if all loop closing
links are added.
In natural environments, loop closing based on
vision input alone can be very hard, for two rea-
sons. Firstly, the environment at one place can look
different at different times due to illumina-
tion changes, viewpoint differences and occlusions.
A comparison technique that is not robust to these
changes will overlook some loop closings. Secondly,
there can be different places that look alike, i.e. the
environment is self-similar. These would add erro-
neous loop closings and thus yield an incorrect topo-
logical map as well.
In (Wahlgren and Duckett, 2005), the authors de-
tect loops by comparing omnidirectional images with
local feature techniques, a robust approach we also
adopt. However, their method does suffer from self-
similarities, as we experienced in previous work
(Goedemé et al., 2004b).
Probabilistic methods have also been introduced in
loop closing to cope with the uncertainty of link hy-
potheses. (Chen and Wang, 2005), for instance, use
Bayesian inference. (Beevers and Huang, 2005) re-
cently introduced Dempster-Shafer probability the-
ory into loop closing, which has the advantage that
ignorance can be modeled and no prior knowledge
is needed. Their approach is promising, but limited
to simple sensors and environments. In this paper,
we present a new framework for loop closing using
rich visual sensors in natural complex environments,
which is also based on Dempster-Shafer mathematics
but uses it differently.
We continue the paper with a brief summary of
Dempster-Shafer theory in section 2. Then we de-
scribe the details of our algorithm in section 3. Sec-
tion 4 details our real-world experiments. The paper
ends with a conclusion in section 5.
2 DEMPSTER-SHAFER
The proposed visual loop closing algorithm relies
on Dempster-Shafer theory (Dempster, 1967; Shafer,
1976) to collect evidence for each loop closing hy-
pothesis. Therefore, a brief overview of the central
concepts of Dempster-Shafer theory is presented in
this section.
Dempster-Shafer theory offers an alternative to tra-
ditional probability theory for the mathematical rep-
resentation of uncertainty. The significant innova-
tion of this framework is that it makes a distinction
between multiple types of uncertainty. Unlike tra-
ditional probability theory, Dempster-Shafer defines
two types of uncertainty:
- Aleatory Uncertainty – the type of uncertainty
which results from the fact that a system can be-
have in random ways (a.k.a. stochastic or objective
uncertainty)
- Epistemic Uncertainty – the type of uncertainty
which results from the lack of knowledge about a
system (a.k.a. subjective uncertainty or ignorance)
This makes it a powerful technique to combine several
sources of evidence to try to prove a certain hypothe-
sis, where each of these sources can have a different
amount of knowledge (ignorance) about the hypothe-
sis. That is why Dempster-Shafer is typically used for
sensor fusion.
For a certain problem, the set of mutually exclu-
sive possibilities, called the frame of discernment, is
denoted by Θ. For instance, for a single hypothe-
sis H about an event this becomes Θ = {H, ¬H}.
For this set, traditional probability theory will define
two probabilities P (H) and P (¬H), with P (H) +
P (¬H) = 1. Dempster-Shafer’s analogous quantities
are called basic probability assignments or masses,
which are defined on the power set of Θ:
2^Θ = {A | A ⊆ Θ}. The mass m : 2^Θ → [0, 1] is a
function meeting the following conditions:

\[
m(\emptyset) = 0, \qquad \sum_{A \in 2^{\Theta}} m(A) = 1. \tag{1}
\]
For the example of the single hypothesis H, the power
set becomes 2^Θ = {∅, {H}, {¬H}, {H, ¬H}}. A
certain sensor or other information source can assign
masses to each of the elements of 2^Θ. Because some
sensors do not have knowledge about the event (e.g.
it is out of the sensor's field of view), they can assign
a certain fraction of their total mass to m({H, ¬H}).
This mass, called the ignorance, can be interpreted as
the probability mass assigned to the outcome 'H OR
¬H', i.e. when the sensor does not know about the
event, or is, to a certain degree, uncertain about the
outcome. This also means that no prior probability
function is needed: having no knowledge can be
expressed as total ignorance.
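As an illustration, the following minimal Python sketch encodes such a mass assignment for the single-hypothesis frame as a dictionary from subsets of Θ to masses; the names (H, NOT_H, THETA, m1) and the numeric values are arbitrary choices for this sketch only, not part of the algorithm described later.

```python
# Frame of discernment for a single hypothesis H: Theta = {H, not H}.
H, NOT_H = "H", "not_H"
THETA = frozenset({H, NOT_H})

# Mass assignment of one (hypothetical) evidence source: some evidence for H,
# a little against it, and the remainder kept as ignorance m({H, not H}).
m1 = {
    frozenset({H}): 0.6,      # m({H})
    frozenset({NOT_H}): 0.1,  # m({not H})
    THETA: 0.3,               # m({H, not H}): the ignorance
}

# Condition (1): the empty set carries no mass and all masses sum to one.
assert abs(sum(m1.values()) - 1.0) < 1e-9
```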
Sets of masses over the same power set, coming
from different information sources, can be combined
using Dempster's rule of combination:
\[
(m_1 \oplus m_2)(C) = \frac{\sum_{A \cap B = C} m_1(A)\, m_2(B)}{1 - \sum_{A \cap B = \emptyset} m_1(A)\, m_2(B)} \tag{2}
\]
This combination rule is useful to combine evidence
coming from different sources into one set of masses.
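A minimal Python sketch of this combination rule, using the same dictionary-of-frozensets representation as in the sketch above, could look as follows; the function name combine is an assumption made only for this illustration and follows equation (2) directly.

```python
def combine(m1, m2):
    """Dempster's rule of combination, equation (2). m1 and m2 are mass
    functions represented as dicts mapping frozensets (non-empty subsets
    of the frame Theta) to basic probability assignments."""
    combined, conflict = {}, 0.0
    for A, mass_a in m1.items():
        for B, mass_b in m2.items():
            C = A & B
            if C:  # the two pieces of evidence agree on the non-empty set C
                combined[C] = combined.get(C, 0.0) + mass_a * mass_b
            else:  # contradictory evidence: this mass would go to the empty set
                conflict += mass_a * mass_b
    # Normalise by 1 - conflict so that the combined masses sum to one again.
    return {C: value / (1.0 - conflict) for C, value in combined.items()}
```

With the example values used above and a second, partly ignorant source m2 = {frozenset({H}): 0.5, THETA: 0.5}, the combined mass on {H} rises to about 0.79 while the ignorance shrinks to about 0.16.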
Because these masses cannot be interpreted as classi-
cal probabilities, no conclusions about the hypothesis
can be drawn from them directly. That is why two ad-
ditional notions are defined: support and plausibility.
They are computed as:
\[
\mathrm{Spt}(A) = \sum_{B \subseteq A} m(B), \qquad \mathrm{Pls}(A) = \sum_{A \cap B \neq \emptyset} m(B) \tag{3}
\]
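In the same illustrative representation, support and plausibility can be computed as below; again the function names are assumptions for this sketch only.

```python
def support(m, A):
    """Spt(A): total mass of all sets B contained in A, cf. equation (3)."""
    return sum(mass for B, mass in m.items() if B <= A)

def plausibility(m, A):
    """Pls(A): total mass of all sets B that intersect A, cf. equation (3)."""
    return sum(mass for B, mass in m.items() if A & B)
```

For the example source above, support(m1, frozenset({H})) gives 0.6 and plausibility(m1, frozenset({H})) gives 0.9; the gap between the two is exactly the ignorance m({H, ¬H}).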