‘collision detection’, etc. When a different task is required, a custom task can be created. Custom tasks are coded using a scripting language, which is interpreted or compiled at runtime.
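The paper does not prescribe a concrete scripting language for custom tasks, so the following is only an illustrative sketch in Python: a hypothetical task base class with named input and output ports, and one custom task derived from it. The class and port names are our own assumptions, not part of NiMMiT.

```python
# Hypothetical sketch of a NiMMiT-style custom task. The Task API
# (named input/output ports, execute) is an assumed rendering of the
# concepts in the text, not an actual NiMMiT interface.

class Task:
    """Base class for a task with named input/output ports."""
    def __init__(self, name):
        self.name = name

    def execute(self, inputs):
        """Map input-port values to output-port values."""
        raise NotImplementedError


class ScaleObject(Task):
    """A custom task: scale the object received on the 'object' port."""
    def execute(self, inputs):
        obj = inputs["object"]               # required input port
        factor = inputs.get("factor", 1.5)   # optional port with a default
        obj["scale"] = obj.get("scale", 1.0) * factor
        return {"object": obj}               # value offered on the output port


task = ScaleObject("scale")
result = task.execute({"object": {"id": 7}})  # factor defaults to 1.5
```

At runtime, a task chain would call `execute` on each task in turn, wiring output ports to the input ports of the next task.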
3.3 Events and Device Abstraction
Interaction is initiated by the user, who communicates with the environment through devices. Since the number of possible device setups is huge, NiMMiT abstracts these devices through the use of events: events and devices are grouped into ‘families’ according to the kind of device that generates an event, with all members of a family sharing the same properties. We define pointer and navigation devices, speech and gesture recognition, and user interface elements such as menus and dialogs. Switching between devices within the same family does not, in principle, affect the interaction itself, so as a result of this abstraction the diagram does not need to be changed either. On the other hand, when a click event is replaced by, for instance, a speech command, the interaction changes and a small change to the diagram (replacing the event arrow) is required.
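The device-family abstraction can be sketched as follows; the family names mirror the text (pointer, navigation, speech, gesture, UI elements), but the classes themselves are hypothetical and not part of NiMMiT’s actual notation:

```python
# Sketch of the event/device abstraction: an event carries only its
# family and kind, not the concrete device that produced it, so swapping
# devices within a family leaves the interaction diagram untouched.

POINTER, NAVIGATION, SPEECH, GESTURE, UI = range(5)  # device families

class Device:
    def __init__(self, name, family):
        self.name, self.family = name, family

class Event:
    """Events expose the family, not the concrete device."""
    def __init__(self, device, kind):
        self.family, self.kind = device.family, kind

mouse = Device("mouse", POINTER)
tracker = Device("hand tracker", POINTER)

# Both devices produce indistinguishable events at the diagram level:
e1, e2 = Event(mouse, "click"), Event(tracker, "click")
```

A speech command, by contrast, would carry the SPEECH family, so the diagram’s event arrow would have to change, just as described above.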
4 CASE STUDY
In this section, we explain our notation by means of a practical example. First, we describe the principles of the ‘Object-In-Hand metaphor’, the two-handed interaction technique in our example. Next, we elaborate upon the diagrams. We start with the interaction technique for selecting an object, which is then reused hierarchically in the diagram of the non-dominant hand’s interaction. In the fourth section, the relation to the manipulation with the dominant hand is briefly described. Finally, we consider the support for other types of multimodal interaction.
4.1 The Object-In-Hand Metaphor
As a case study, we have chosen to elaborate on the Object-In-Hand metaphor, which we presented and evaluated in (De Boeck et al., 2004). After an object has been selected, the user can ‘grab’ the object by bringing the fist of the non-dominant hand close to the pointing device in the dominant hand. At that instant, the selected object moves to the centre of the screen, where it can be manipulated by the dominant hand (figure 5). In our implementation, we allow the user to select the object’s faces and change their texture. Since the Object-In-Hand metaphor requires the user to utilize both hands, this example also illustrates a synchronization mechanism between different interaction techniques.
Figure 4: Selecting an object.
4.2 Selecting an Object
As a first step, the user is required to select an object. A number of selection metaphors exist, so the designer has several alternatives. We have chosen a virtual hand metaphor: highlight the object by touching it, and confirm the selection by clicking. This interaction component can easily be expressed in the NiMMiT notation, as depicted in figure 4.
The interaction technique starts in the state ‘Select’, which reacts to two events: a movement of the pointer and a button click. Each time the pointer moves (and the button is not clicked), the leftmost task chain is executed. This chain contains two consecutive, predefined tasks: collision detection and highlighting an object. The first task has two optional input ports, indicating which objects should be taken into account when checking for collisions. If optional inputs have no connections, default values are used. By default, the first task checks for collisions between the pointer and all the objects in the virtual environment. If a collision occurs, the colliding object is passed on via the output port.
The second task in the chain, the highlighting of an object, will only be executed when all of its required input ports receive a viable value. If the first task does not detect a collision, this prerequisite is not satisfied and the chain is aborted. Consequently, the system returns to the state ‘Select’ and awaits new events. If the highlighting task does receive an appropriate value, the object is highlighted. Finally, the output is stored in the label ‘highlighted’ and a task transition returns the system to the state ‘Select’.
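The behaviour of this task chain can be sketched in Python; the function names, the default for the optional ‘objects’ port, and the label dictionary are assumptions made for illustration, not NiMMiT’s actual semantics:

```python
# Hypothetical sketch of the leftmost task chain: collision detection
# feeds a highlight task. When the required input of the second task
# receives no viable value, the chain is aborted and control returns
# to the 'Select' state.

class ChainAborted(Exception):
    pass

ALL_OBJECTS = [{"id": 1, "position": (0, 0)},
               {"id": 2, "position": (3, 1)}]

def collision_detection(pointer, objects=None):
    """Optional 'objects' port: defaults to all objects in the scene."""
    objects = objects if objects is not None else ALL_OBJECTS
    for obj in objects:
        if obj["position"] == pointer["position"]:
            return obj          # colliding object on the output port
    return None                 # no collision: no output value

def highlight(obj):
    """Required input port: abort the chain without a viable value."""
    if obj is None:
        raise ChainAborted
    obj["highlighted"] = True
    return obj

labels = {}
try:
    hit = collision_detection({"position": (3, 1)})
    labels["highlighted"] = highlight(hit)   # store output in the label
except ChainAborted:
    pass                                     # back to 'Select', await events
```

With a pointer at (3, 1) the chain completes and the colliding object ends up highlighted in the ‘highlighted’ label; with a non-colliding pointer the chain aborts and the label stays empty.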
If a click event occurs while the system is waiting
in the state ‘Select’, the second task chain is executed.
It contains only one task: the selection of an object. If
GRAPP 2006 - COMPUTER GRAPHICS THEORY AND APPLICATIONS