toring routines as well as a mechanism for ensuring
data on what networks are available at a given location
is accurate and easily accessible. Such data must be available in a format that allows it to be processed by hosts with limited resources, and transmitted over potentially low-bandwidth networks without significant impact. A few systems (Laasonen et al., 2004; Soh and Kim, 2004) have utilised data collected from previous journeys.
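To make these constraints concrete, the sketch below shows one way such coverage data could be encoded compactly enough for low-bandwidth links and resource-limited hosts. The record layout, field widths and type codes are our own illustration, not a format taken from the cited systems.

```python
import struct

# Hypothetical 10-byte coverage record: the fields, widths and network
# type codes are illustrative assumptions, not a published format.
COVERAGE_RECORD = struct.Struct("!iiBB")  # lat, lon, net type, signal

def pack_record(lat_deg, lon_deg, net_type, signal):
    """Pack one coverage observation into 10 bytes."""
    return COVERAGE_RECORD.pack(
        int(lat_deg * 1_000_000),   # latitude in microdegrees
        int(lon_deg * 1_000_000),   # longitude in microdegrees
        net_type,                   # e.g. 0 = cellular, 1 = WLAN (our codes)
        signal,                     # signal strength, 0-255
    )

def unpack_record(data):
    lat, lon, net_type, signal = COVERAGE_RECORD.unpack(data)
    return lat / 1_000_000, lon / 1_000_000, net_type, signal
```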
Another approach is proactive modelling, in which a mathematical model is used to determine the TBVH from simple geometric calculations. Though less accurate than the proactive knowledge approach, it is flexible and can be used in simulations as well as in real networks.
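As a minimal example of such a geometric calculation, the sketch below estimates the TBVH for a host moving in a straight line at constant velocity inside a circular coverage cell. The circular-cell and straight-line assumptions are ours, chosen for illustration.

```python
import math

def tbvh_circular_cell(px, py, vx, vy, cx, cy, radius):
    """Time Before Vertical Handover under a simplified geometric model:
    straight-line motion at constant velocity inside a circular cell.

    Solves |p + t*v - c| = radius for the positive root t. Returns None
    if the host is stationary or already outside the coverage area.
    """
    dx, dy = px - cx, py - cy
    a = vx * vx + vy * vy
    b = 2 * (dx * vx + dy * vy)
    c = dx * dx + dy * dy - radius * radius
    if a == 0 or c > 0:          # stationary, or outside coverage
        return None
    disc = b * b - 4 * a * c     # non-negative whenever inside the cell
    return (-b + math.sqrt(disc)) / (2 * a)

# Example: 100 m from the centre of a 500 m cell, moving at 15 m/s
# directly away from the centre: 400 m to the boundary, so ~26.7 s.
print(tbvh_circular_cell(100, 0, 15, 0, 0, 0, 500))
```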
There is also a growing need to combine proactive and reactive policies, with a view to developing an architecture in which the appropriate policy can be chosen for a given situation. Hence, when accurate coverage information is available, a proactive policy may be used; when there is no coverage data, or the data is unreliable, the system can fall back to a reactive policy.
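A minimal sketch of this fall-back rule, assuming a hypothetical confidence score attached to the coverage data (the names and threshold below are ours):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CoverageData:
    confidence: float   # 0.0-1.0: estimated reliability of the records

def choose_handover_policy(data: Optional[CoverageData],
                           threshold: float = 0.9) -> str:
    """Select a handover policy based on coverage data quality."""
    if data is None or data.confidence < threshold:
        return "reactive"    # no data, or data too unreliable to trust
    return "proactive"       # accurate data: predict the handover ahead
```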
3.6 Network Transport Layer
This layer concerns functions that would normally be assigned to the network and transport layers of the OSI model. Hence it examines addressing, routing and transport issues in peripheral networks.
The current opinion is that all networks, whether in the core or on the periphery, should use TCP/IP. This thinking has been reinforced by the End-to-End arguments that ran throughout the architectural discussions when the Internet was designed (Saltzer et al., 1984). The current evolution of the Internet, however, calls some of these End-to-End arguments into question. As indicated previously, today the Internet
is evolving into a very fast core network with mobile
networks on the periphery. This means that charac-
teristics of the core and the periphery are diverging in
terms of latency, throughput and error profiles.
In the light of this, the assumption that TCP/IP
should be used in peripheral networks for heteroge-
neous networking needs to be carefully re-examined.
Firstly, we should question whether IP should be used
in such networks. An assumption of the current IP in-
frastructure is that every machine should have a glob-
ally unique IP address to use on the network. This
has, however, been directly challenged by the success
of Network Address Translation (NAT) techniques. In
NAT, a private IP address space is employed in the pe-
ripheral network while only a few global IP addresses
are used to actually communicate with other machines
on the Internet. The NAT software then provides the
translation between the global IP connection and the
local machine with its private IP address. Because all datagrams must traverse the server performing the NAT translation, it provides a point in the network where incoming packets can be analysed and filtered as necessary. In addition, it increases security by keeping local machines invisible on the Internet, reducing the potential for targeted exploits of security flaws and for DoS attacks against specific machines.
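The translation and filtering behaviour described above can be illustrated with a small address-and-port mapping table. This is a conceptual sketch with hypothetical names; real NAT implementations also handle timeouts, per-protocol state and checksum rewriting, all omitted here.

```python
class NatTable:
    """Conceptual NAT mapping (illustrative sketch, not a real stack)."""

    def __init__(self, public_ip: str, port_base: int = 40000):
        self.public_ip = public_ip
        self.next_port = port_base
        self.outbound_map = {}   # (private_ip, private_port) -> public_port
        self.inbound_map = {}    # public_port -> (private_ip, private_port)

    def translate_outbound(self, private_ip: str, private_port: int):
        """Rewrite an outgoing packet's source to the shared public address."""
        key = (private_ip, private_port)
        if key not in self.outbound_map:
            self.outbound_map[key] = self.next_port
            self.inbound_map[self.next_port] = key
            self.next_port += 1
        return self.public_ip, self.outbound_map[key]

    def translate_inbound(self, public_port: int):
        """Map an incoming packet back to the private host.

        Unsolicited packets have no mapping and return None: this is the
        filtering point noted above, and why local machines stay invisible.
        """
        return self.inbound_map.get(public_port)

# A private host opens a connection; only the reply finds its way back.
nat = NatTable("203.0.113.5")
print(nat.translate_outbound("192.168.1.10", 5000))  # ('203.0.113.5', 40000)
print(nat.translate_inbound(40000))                  # ('192.168.1.10', 5000)
print(nat.translate_inbound(40001))                  # None: unsolicited, dropped
```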
The success of NAT, which network purists (including the authors) consider far from ideal, questions the assumption that all machines should be assigned a globally unique IP address. NAT makes the case that IP addresses should be confined to moving data within the core network. In the peripheral networks some other form of local addressing may be used, with translation between the networks taking place at the local gateway. Such an approach, if deployed, would call into question the (often challenged) assumption that there are insufficient IPv4 addresses. Though the authors support the deployment of IPv6, the key argument for deploying it, namely the provision of an effectively unlimited global address space, needs to be re-examined in the light of these new realities.
3.6.1 TCP – Found Wanting
It is clear that TCP is unsuitable for wireless networks (Meyer, 1999; Xylomenos et al., 2001). This is primarily because TCP interprets packet loss as being exclusively due to congestion, and reacts by substantially decreasing its send rate and then employing its slow-start mechanism. While such an interpretation may be valid for wired systems such as the core network, peripheral wireless systems continuously lose packets due to channel fading, interference, vertical handovers, and other related effects. Most of these losses are transient and unrelated to network congestion.
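This pessimistic reaction can be caricatured in a few lines. The sketch below is a deliberate simplification of the behaviour just described, not an implementation of any RFC-conformant stack: it collapses timeout and fast-retransmit handling into a single rule that treats every loss as congestion.

```python
def on_packet_loss(cwnd: float) -> tuple[float, float]:
    """Simplified congestion reaction: every loss is read as congestion.

    Returns (new_cwnd, ssthresh) in segments. Real TCP distinguishes
    timeouts from fast retransmits; this sketch keeps only the
    pessimistic case (collapse to one segment, then slow start).
    """
    ssthresh = max(cwnd / 2.0, 2.0)  # remember half the current window
    return 1.0, ssthresh             # restart from a single segment

# On a wireless link, one fading-induced loss at cwnd = 64 segments cuts
# the send rate to a single segment even though no congestion occurred:
print(on_packet_loss(64.0))  # (1.0, 32.0)
```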
There have been several attempts to modify TCP in the light of these effects, such as those described in (Balakrishnan et al., 1997) and (Chandra et al., 2003). Recently, there has been a move towards not modifying the TCP protocol engine itself but instead making it more responsive to temporary network outages (Scott and Mapp, 2003). While this is useful, a clear, generally applicable, and elegant solution has not yet been found.
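One way to make an unmodified TCP engine tolerate a known outage, in the spirit of this line of work, is to freeze the connection around a predicted handover, for instance by advertising a zero receive window so the peer stops sending before any loss occurs. The hooks below are hypothetical, and the actual mechanism in (Scott and Mapp, 2003) may differ.

```python
from typing import Callable

def freeze_around_handover(pause_peer: Callable[[], None],
                           do_handover: Callable[[], None],
                           resume_peer: Callable[[], None]) -> None:
    """Suspend transmission across a predicted outage (hypothetical hooks).

    pause_peer might advertise a zero receive window; do_handover performs
    the vertical handover; resume_peer reopens the window. If no packets
    are lost, the sender never halves its window or re-enters slow start.
    """
    pause_peer()          # quiesce the sender before the outage
    try:
        do_handover()     # the connectivity gap happens here
    finally:
        resume_peer()     # reopen the window once the new link is up
```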
3.7 The Case for Network Plurality
and Application Conformity
The idea that a different networking infrastructure runs in peripheral networks brings with it many challenges, most importantly the ability to translate between different naming and addressing schemes as packets are transmitted through different networks. Some of these issues are addressed in Plutarch (Crowcroft et al.,