The message system determines whether a message will be retrieved from the system periodically in one of the following two ways:
– Consumers declare the behaviour explicitly in their subscriptions, by adding extra information to the parameters of the createConsumer() or createDurableSubscriber() operation (as sketched below).
– The system uses a heuristic method to predict consumers' periodic retrieval patterns. This is useful when many retrievals are periodic but the consumers have not declared them as such.
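As a rough illustration of the first option, the sketch below (in Java) encodes the retrieval period in the durable subscription name. The naming convention, the 60-second period and the topic name are our assumptions and not part of the JMS API; only the createDurableSubscriber() call itself is standard JMS.

import javax.jms.JMSException;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import javax.jms.Topic;

public class PeriodicSubscriptionExample {

    // Hypothetical convention: append the polling period to the durable
    // subscription name so the broker can recognise the periodic consumer.
    public static MessageConsumer subscribePeriodically(Session session, Topic topic)
            throws JMSException {
        long periodMillis = 60_000;  // this consumer intends to poll once a minute
        String subscriptionName = "stockFeed?period=" + periodMillis;
        // A broker implementing the scheme described above would parse this hint
        // and schedule prefetching; consumers that give no hint would instead be
        // handled by the heuristic prediction.
        return session.createDurableSubscriber(topic, subscriptionName);
    }
}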
Our approach is to prefetch periodic messages into the message cache just before the time consumers retrieve them. We must decide which messages to prefetch and when to perform the prefetching tasks, as discussed below.
3.1 Message Prefetching
Due to the limited size of the cache, we should only cache messages that are most likely to be retrieved soon. Our approach is based on the LRU algorithm, extended to take consumers' periodic retrieval patterns and message relations into account.
When a message producer sends a message to the message system, the message is logged directly into the persistence store if, according to the subscription information, it will not be retrieved soon. Otherwise the message is put into the cache. If the cache is full, the least important message in the cache is evicted so that the new message can be admitted. The importance of a message is determined by the following factors: message size, message priority, residence time in the cache, and user-defined factors. An administration tool can be used to configure these factors.
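A minimal sketch of such an eviction policy follows; the field names and the linear weighting are our assumptions, since the paper only lists the factors and states that their weights are configured through the administration tool.

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Illustrative only: the fields and the linear score are assumptions; the
// factors (size, priority, residence time, user-defined) come from the text.
class CachedMessage {
    long sizeBytes;
    int priority;            // e.g. JMSPriority 0..9
    long cachedAtMillis;     // when the message entered the cache
    double userFactor;       // administrator-defined weight for this message
}

class MessageCache {
    private final int capacity;
    private final List<CachedMessage> entries = new ArrayList<>();
    private final double wSize, wPriority, wAge, wUser;   // configurable weights

    MessageCache(int capacity, double wSize, double wPriority, double wAge, double wUser) {
        this.capacity = capacity;
        this.wSize = wSize; this.wPriority = wPriority;
        this.wAge = wAge;   this.wUser = wUser;
    }

    // Higher score = more important = kept longer in the cache.
    private double importance(CachedMessage m, long now) {
        long residenceMillis = now - m.cachedAtMillis;
        return wPriority * m.priority - wSize * m.sizeBytes
             - wAge * residenceMillis + wUser * m.userFactor;
    }

    // Admit a new message, evicting the least important entry if the cache is full.
    void put(CachedMessage m) {
        if (entries.size() >= capacity) {
            long now = System.currentTimeMillis();
            entries.stream()
                   .min(Comparator.comparingDouble(e -> importance(e, now)))
                   .ifPresent(entries::remove);
        }
        m.cachedAtMillis = System.currentTimeMillis();
        entries.add(m);
    }
}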
Suppose that, according to the subscription information, a message will be retrieved at time T; we must then prefetch the message into the cache before time T. Related messages are also prefetched into the cache, because they may be retrieved as well.
We must decide when to begin prefetching. If we prefetch messages too early, the cache is occupied unnecessarily. We therefore estimate Tprefetch, the time needed to move the messages from the persistence store into the cache, and begin the prefetching task at time T - Tprefetch, so that messages do not reside in the cache longer than necessary.
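A minimal sketch of this timing rule, assuming the retrieval time T comes from the subscription information and Tprefetch has already been estimated (the class and parameter names are illustrative, not the ONCEAS MQ implementation):

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Sketch: start prefetching at T - Tprefetch so the messages reach the cache
// just before the consumer's predicted retrieval time T.
class PrefetchScheduler {
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    void schedule(long retrievalTimeMillis /* T */,
                  long estimatedPrefetchMillis /* Tprefetch */,
                  Runnable prefetchTask /* loads messages from the persistence store */) {
        long startAt = retrievalTimeMillis - estimatedPrefetchMillis;     // T - Tprefetch
        long delayMillis = Math.max(0, startAt - System.currentTimeMillis());
        scheduler.schedule(prefetchTask, delayMillis, TimeUnit.MILLISECONDS);
    }
}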
Doing this, however, leads to another problem: at time T - Tprefetch there may be many messages that need to be prefetched at once, which makes the system too busy and consequently decreases performance. We should therefore perform the prefetching tasks without overloading the system; for example, prefetching can start at some time earlier than T - Tprefetch.
Performance also suffers when all the prefetching work is done as one unit, as shown in Fig. 3. To reduce the impact of prefetching on other system activities, our approach divides the prefetching work into many pieces, as shown in Fig. 3.
We also use daemon threads for the prefetching tasks, so that prefetching can take place when the system is not busy. However, we cannot predict when a daemon thread will run, so we cannot guarantee that the prefetching tasks will be finished before the consumer retrieves the messages.
In ONCEAS MQ, we propose a new approach that performs the prefetching tasks largely during system idle time using daemon threads, while still guaranteeing that the tasks finish before the consumer retrieves the messages from the system.
We divide the work into n parts, all of which must be completed within the interval Tprefetch before time T, the time at which the consumer retrieves the messages. We delegate the work to a daemon thread, but if the daemon thread has not finished x% of the work when x% of the interval has elapsed, the system continues the prefetching with an additional non-daemon thread until x% of the work is finished.
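The sketch below illustrates this fallback rule under our own assumptions about the data structures: the n parts are plain Runnables, a shared counter records how many parts have been claimed, and a watchdog loop compares that progress with the elapsed fraction of the Tprefetch window.

import java.util.concurrent.atomic.AtomicInteger;

// Sketch: a daemon thread prefetches the n parts during the window of length
// Tprefetch; a watchdog falls back to a non-daemon thread whenever the number
// of claimed parts lags behind the elapsed fraction of the window.
class GuaranteedPrefetch {
    private final Runnable[] parts;                     // the n prefetch pieces
    private final AtomicInteger claimed = new AtomicInteger(0);

    GuaranteedPrefetch(Runnable[] parts) { this.parts = parts; }

    void run(long windowMillis /* Tprefetch */) throws InterruptedException {
        long start = System.currentTimeMillis();

        // Daemon thread: runs whenever the JVM scheduler gives it idle time.
        Thread daemon = new Thread(() -> runParts(parts.length));
        daemon.setDaemon(true);
        daemon.start();

        // Watchdog: if x% of the window has elapsed but fewer than x% of the
        // parts have been claimed, catch up on a regular (non-daemon) thread.
        while (claimed.get() < parts.length) {
            double elapsed = (System.currentTimeMillis() - start) / (double) windowMillis;
            int required = Math.min(parts.length, (int) Math.ceil(elapsed * parts.length));
            if (claimed.get() < required) {
                Thread worker = new Thread(() -> runParts(required));
                worker.start();
                worker.join();                          // block until the backlog is cleared
            }
            Thread.sleep(50);                           // polling interval, illustrative
        }
    }

    // Claim parts atomically and run them until 'target' parts have been claimed,
    // so the daemon and the fallback worker never execute the same piece twice.
    private void runParts(int target) {
        int i;
        while ((i = claimed.get()) < target) {
            if (claimed.compareAndSet(i, i + 1)) {
                parts[i].run();
            }
        }
    }
}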
3.2 Average Occupying Time in the Cache
When a message is sent to the system, the decision whether to put it into the cache or to log it directly into the persistence store depends on the interval between the time the message is sent and the time the consumer retrieves it from the system. We must choose a threshold for this interval; in our approach we define it as the average occupying time of a message in the cache. Suppose the cache size is S messages and, in a time interval T, N messages are retrieved: the cache holds S messages and drains at a rate of N/T, so the average occupying time of a message is T*S/N. If a message will
Figure 3: Different prefetching methods.
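As a hedged numeric illustration of the T*S/N threshold (the numbers and the helper class are invented): with a cache of S = 1,000 messages and N = 30,000 retrievals over T = 60 s, the average occupying time is 60 s * 1000 / 30000 = 2 s, so a message expected to be retrieved within about 2 s of arriving would be cached, while one expected later would be logged to the persistence store.

// Illustrative admission check based on the average occupying time T*S/N;
// all names and numbers here are assumptions for the sake of the example.
class CacheAdmission {

    // cacheSize = S (messages), windowMillis = T, retrievedCount = N over that window.
    static boolean shouldCache(long expectedWaitMillis, long cacheSize,
                               long windowMillis, long retrievedCount) {
        double avgOccupyingMillis = (double) windowMillis * cacheSize / retrievedCount; // T*S/N
        return expectedWaitMillis <= avgOccupyingMillis;
    }

    public static void main(String[] args) {
        // S = 1000 messages, T = 60 s, N = 30000 retrievals -> threshold = 2 s.
        System.out.println(shouldCache(1_500, 1_000, 60_000, 30_000));   // true: put into the cache
        System.out.println(shouldCache(10_000, 1_000, 60_000, 30_000));  // false: log to the store
    }
}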