the Software Testing class, which consequently has the majority of the younger and less experienced students in terms of years of experience in Software Testing. Most of the tours that were rarely or never chosen required more testing time. Given the fixed duration of the dynamics, the students also chose tours based on which ones they believed would have the shortest execution time.
As shown in Figure 1, after collecting and analyzing profile data from the testers, it was possible to apply the approach and, relying on the future use of historical data from test cases already registered, propose an assignment of test tasks using the tours ranked by the subjects.
Although the preferred tours differ across levels of education, the ranking presents options that could be proposed to testers with different levels of testing knowledge, as shown by (Blinded Author(s), 0000), who carried out their experiment with a team composed mostly of undergraduate students.
After assigning the best-ranked tours, it is possible to execute and record the test cases in order to generate inputs for future tour recommendations, concluding the first cycle of the continuous process proposed in this work and presented in Figure 1.
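To make this cycle concrete, the following minimal sketch (in Python, with hypothetical names and data that are not artifacts of this study) illustrates one way the first cycle could be operationalized: each tester receives the highest-ranked tour still available, and the outcome of the session is stored as history for future recommendations.

# Minimal sketch of the first cycle of Figure 1. All identifiers and data
# below are illustrative assumptions, not part of the study's artifacts.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class TesterProfile:
    name: str
    experience_years: float
    tour_ranking: List[str]          # tours ordered by the tester's preference
    history: List[dict] = field(default_factory=list)

def assign_tours(testers: List[TesterProfile],
                 available_tours: List[str]) -> Dict[str, str]:
    """Give each tester the highest-ranked tour that is still available."""
    assignment: Dict[str, str] = {}
    remaining = list(available_tours)
    for tester in testers:
        for tour in tester.tour_ranking:
            if tour in remaining:
                assignment[tester.name] = tour
                remaining.remove(tour)
                break
    return assignment

def record_cycle(tester: TesterProfile, tour: str,
                 executed_cases: int, defects_found: int) -> None:
    """Store the outcome of one test cycle as input for future recommendations."""
    tester.history.append({
        "tour": tour,
        "executed_cases": executed_cases,
        "defects_found": defects_found,
    })

For example, a tester whose ranking starts with the Landmark Tour would receive that tour if it is still available, and the number of executed test cases and defects found in the session would then be appended to their history for use in the next cycle.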
6 CONCLUSIONS AND FUTURE WORK
This study aimed to identify profiles of testers to sup-
port the creation of a test task recommendation sys-
tem based on the Exploratory Testing approach with
the Tourist Metaphor. For this, we sought to gather as
much relevant information as possible to assign test
tasks based on the profile of the testers.
The information comes from both a literature review and an empirical analysis, with a sample from three groups with different levels of education, all related to IT and linked to academia. This enabled the collection of information about profiles and the execution of testing dynamics based on the Exploratory Testing approach.
The personal characteristics of each tester influence their work with software testing and define a basic strategy for structuring a test process based on human characteristics in order to guide the assignment of test tasks. This strategy should consider both the test history and the profile of each tester, which are updated with each test cycle.
This study raises a valuable discussion about a humanized process for assigning test tasks, generating data for the definition of a recommendation system that automatically assigns test tasks based on the profile of each tester. Beyond testing tasks, this strategy can be extended to development contexts, given that the profile of each developer, as well as each tester, can also influence the effectiveness of the activity and the practitioner's degree of satisfaction.
Two main lines of future work derive from this research. The first is to extract profiles from a larger sample of testers, following the exploratory testing process carried out here, in order to build a consistent database of profiles. Building on a more solid database, the second is to apply artificial intelligence algorithms for the automatic assignment of test tasks based on testers' profiles.
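As a purely illustrative sketch of what such an automatic assignment could look like (the feature encoding, data, and algorithm below are assumptions for illustration, not outcomes of this study), a simple nearest-neighbour recommender over numeric profile vectors could suggest, for a new tester, the tour that was most effective for the most similar testers in the historical database.

# Illustrative sketch only: recommend to a new tester the tour that was most
# effective for the k most similar profiles in a hypothetical history.
import numpy as np

def recommend_tour(new_profile: np.ndarray,
                   known_profiles: np.ndarray,
                   tours_used: list,
                   defects_found: np.ndarray,
                   k: int = 3) -> str:
    """k-nearest-neighbour recommendation over numeric profile vectors."""
    distances = np.linalg.norm(known_profiles - new_profile, axis=1)
    nearest = np.argsort(distances)[:k]
    # among the k most similar testers, pick the tour with the best outcome
    best = max(nearest, key=lambda i: defects_found[i])
    return tours_used[best]

# Hypothetical data: profiles encoded as (years of experience, education level)
profiles = np.array([[1.0, 1.0], [4.0, 2.0], [7.0, 3.0]])
tours = ["Guidebook Tour", "Landmark Tour", "Intellectual Tour"]
defects = np.array([3, 5, 8])
print(recommend_tour(np.array([5.0, 2.0]), profiles, tours, defects, k=2))

More sophisticated learning algorithms could replace this baseline once the profile database is consolidated.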
Finally, these two efforts converge on consolidating the implementation of a recommendation system for assigning test tasks based on the testers' profiles.
REFERENCES
Anvik, J., Hiew, L., and Murphy, G. C. (2006). Who should fix this bug? In Proceedings of the 28th International Conference on Software Engineering, pages 361–370. ACM.
Bach, J. (2003). Exploratory testing explained.
Berner, S., Weber, R., and Keller, R. K. (2005). Observations and lessons learned from automated testing. In Proceedings of the 27th International Conference on Software Engineering, pages 571–579. ACM.
Bertolino, A. (2007). Software testing research: Achieve-
ments, challenges, dreams. In 2007 Future of Software
Engineering, pages 85–103. IEEE Computer Society.
Blinded Author(s) (0000). Blinded title. In Blinded Confer-
ence, pages 00–00.
Cruz, S. S., da Silva, F. Q., Monteiro, C. V., Santos, P., and Rossilei, I. (2011). Personality in software engineering: Preliminary findings from a systematic literature review. In 15th Annual Conference on Evaluation & Assessment in Software Engineering (EASE 2011), pages 1–10. IET.
Deak, A., Stålhane, T., and Sindre, G. (2016). Challenges and strategies for motivating software testing personnel. Information and Software Technology, 73:1–15.
Dubey, A., Singi, K., and Kaulgud, V. (2017). Personas
and redundancies in crowdsourced testing. In 2017
IEEE 12th International Conference on Global Soft-
ware Engineering (ICGSE), pages 76–80. IEEE.
Geras, A. M., Smith, M. R., and Miller, J. (2004). A survey of software testing practices in Alberta. Canadian Journal of Electrical and Computer Engineering, 29(3):183–191.
Itkonen, J., Mäntylä, M. V., and Lassenius, C. (2012). The role of the tester's knowledge in exploratory software testing. IEEE Transactions on Software Engineering, 39(5):707–724.
Itkonen, J., Mäntylä, M. V., and Lassenius, C. (2015). Test better by exploring: Harnessing human skills and knowledge. IEEE Software, 33(4):90–96.