Definition 6 (Credibility Function) Given a set of unique identifiers I, we say C is a credibility function if it maps all values of I into some totally ordered set.
The notion of credibility provides the agent with a measure of strength of belief per argument source. It also provides a simplistic measure of trust. For simplicity, we have defined it as a function that maps a set of unique identifiers into a totally ordered set. However, one could define an arbitrarily complex measure that takes into account the context in which the argument is situated. For example, if an agent is a stock-broker, then any argument it makes related to the share market will be more credible than one from an agent that is not a stock-broker. One could also define a mapping into an arbitrary set, as long as transitive and asymmetric comparison operators are defined over that set.
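As a purely illustrative sketch (all names and values below are hypothetical, not from the paper), a credibility function can be as simple as a mapping from source identifiers into the reals, which are totally ordered:

```python
# A minimal credibility function: map each unique identifier to a real
# number. The reals are totally ordered, so comparisons such as
# C(a) > C(b) are always well defined. Names and scores are illustrative.
credibility = {
    "stock-broker": 0.9,   # domain expert on share-market arguments
    "journalist": 0.5,
    "anonymous": 0.1,
}

def C(identifier):
    """Credibility of an argument source; unknown sources get a default."""
    return credibility.get(identifier, 0.0)
```

A richer, context-sensitive measure could replace the table lookup with any function, provided its codomain still supports a transitive, asymmetric comparison.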
Definition 7 (Defeat) Given a set of tagged arguments A and the credibility function C, a relation D ⊆ A × A is said to be a defeat relation on A. We will write φDψ iff at least one of the following is true:
• A_φ attacks A_ψ and A_ψ does not attack A_φ
• A_φ attacks A_ψ and C(S_φ) > C(S_ψ)
• A_φ and A_ψ are in conflict, neither A_φ attacks A_ψ nor A_ψ attacks A_φ, and C(S_φ) > C(S_ψ)
The tagging of arguments allows us to uniquely identify the argument source, and so we make use of this in our definition of defeat. Also, note that our notion of defeat is not defined as a global relation but as a per-agent relation, determined by a credibility function C (see Definition 8 below).
Our definition of defeat also encapsulates various types of defeat. For example, (Prakken and Vreeswijk, 2002) states that assumption attack occurs when one argument proves what was assumed unprovable by another (in other words, when a conclusion of one argument attacks an assumption of the other). We will say that assumption attack occurs when the facts of one argument attack the assumptions of another argument. This is captured by attack(φ, ψ) ∧ ¬attack(ψ, φ).
Similarly, (Prakken and Vreeswijk, 2002) states that rebuttal occurs when the conclusion of one argument attacks the premise of another. We deviate slightly and say that rebuttal occurs when the facts of one argument attack the facts of another argument. This is captured by attack(φ, ψ) ∧ C(S_φ) > C(S_ψ).
Finally, (Prakken and Vreeswijk, 2002) states that undercut occurs when an argument attacks an inference rule used in another argument. In our system, undercut occurs when the conclusions of the arguments contradict but neither the facts nor the assumptions of the two arguments contradict, as this implies that the contradiction arises during inference. This notion is captured by conflict(φ, ψ) ∧ ¬(attack(φ, ψ) ∨ attack(ψ, φ)) ∧ C(S_φ) > C(S_ψ).
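Assuming `attack` and `conflict` predicates over tagged arguments are available, the three defeat conditions above can be sketched as follows (all parameter names are illustrative, not from the paper):

```python
def defeats(phi, psi, attack, conflict, C, source):
    """Return True iff phi defeats psi, per Definition 7.

    attack, conflict: binary predicates over tagged arguments;
    C: credibility function; source: maps an argument to its source tag.
    This is a sketch of the three defeat conditions, not the paper's code.
    """
    # Assumption attack: phi attacks psi but psi does not attack phi.
    if attack(phi, psi) and not attack(psi, phi):
        return True
    # Rebuttal: an attack resolved by the credibility of the sources.
    if attack(phi, psi) and C(source(phi)) > C(source(psi)):
        return True
    # Undercut: conflict with no direct attack either way, again
    # resolved by credibility.
    if (conflict(phi, psi)
            and not (attack(phi, psi) or attack(psi, phi))
            and C(source(phi)) > C(source(psi))):
        return True
    return False
```

Note that the first condition is independent of credibility, while the other two only fire when the source of φ is strictly more credible than the source of ψ.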
Definition 8 (Agent) Given a set of unique identifiers I and a set of tagged arguments A, an agent is represented as a tuple of the form ⟨I, A, C⟩ where
• I ∈ I.
• A ⊆ A s.t. for every tagged argument φ, if S_φ = I then φ ∈ A.
• C is a credibility function. This represents the credibility (higher values are better) of other agents in the system as evaluated by the agent.
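A direct reading of Definition 8 can be sketched as below, under the hypothetical encoding of a tagged argument as a (source, content) pair; the class and method names are illustrative only:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Agent:
    """Sketch of the <I, A, C> tuple of Definition 8."""
    identifier: str                      # I, the agent's unique identifier
    arguments: set = field(default_factory=set)   # A, tagged (source, content) pairs
    credibility: Callable[[str], float] = lambda i: 0.0   # C, this agent's view

    def knows_all_own_arguments(self, universe):
        """Check the closure condition of Definition 8: every tagged
        argument in the universe whose source tag equals this agent's
        identifier must belong to the agent's own set A."""
        return all(arg in self.arguments
                   for arg in universe if arg[0] == self.identifier)
```

The check captures the second bullet: an agent may be ignorant of other agents' arguments, but never of its own.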
Note that the credibility function is subject to revision by the agent during the execution of the system, as each agent adjusts its view of fellow agents. Note also that the set of tagged arguments A is subject to change as individual agents discover new arguments during the argumentation process. We do not require the agent to know all arguments, nor do we require that its set of arguments be conflict-free. A set is conflict-free simply when no two arguments in the set defeat each other.
A distinctive feature of this system is that there exists no global consensus on the credibility value of each agent. This measure is recorded from each individual agent's perspective and is stored by that agent. Given that there is no requirement for global consensus on an individual agent's credibility, consensus on the amount by which a credibility value is adjusted is not required either. However, for agents to be productive, we believe there should be a consensus on when, and what kind of, adjustment should be performed. A simple rule would be that if an agent is observed to win an argument, then that agent's credibility should be adjusted upwards, and conversely for a loss. One could also extend this to capture the importance of the situation. For example, if an agent is observed to have won an important argument, then its credibility is revised upwards by a greater amount than for a less important argument. Similar to human debates, this provides a notion of a "career making win". It also provides an incentive for agents to win arguments. We have provided directions in which one could formulate a true-to-life function; we leave the details of the function to designers.
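One minimal way to realise the adjustment rule sketched above is a linear, importance-weighted update; the step size, the linearity, and all names here are illustrative design choices, not prescribed by the paper:

```python
def adjust_credibility(view, winner, loser, importance=1.0, step=0.1):
    """Revise one agent's credibility view after observing an outcome.

    view: dict mapping identifiers to scores (a single agent's view);
    importance scales the adjustment, so winning an important argument
    (a "career making win") moves credibility further than a minor one.
    The update is symmetric: the winner gains what the loser gives up.
    """
    delta = step * importance
    view[winner] = view.get(winner, 0.0) + delta
    view[loser] = view.get(loser, 0.0) - delta
    return view
```

Because each agent applies this to its own `view`, no global consensus on the resulting values ever arises; only the rule itself is shared.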
For convenience we will write D_ρ to denote the defeat relation (from Definition 7) as determined by the A and C held by agent ρ.
Definition 9 (Stable) For an agent ρ, we say a set S ⊆ A is a stable set of arguments for that agent iff it is a maximal (w.r.t. set inclusion) set of arguments such that:
• ∀ψ ∈ A_ρ − S, ∃φ ∈ S that conflicts with ψ.
• ∀φ ∈ A_ρ, ψ ∈ S where ψD_ρφ.
SOURCE SENSITIVE ARGUMENTATION SYSTEM