be interesting. We could also extend the approach by using a portfolio of agents as opponents, dynamically adding and removing agents from the portfolio based on their win rates.
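As a rough illustration of this idea, the following sketch maintains a pool of opponent agents and drops those whose win rate against the evolving player leaves an informative range (opponents that always win or always lose provide little training signal). The class, thresholds, and method names are hypothetical, not part of the system described in this paper.

```python
import random

class OpponentPortfolio:
    """Hypothetical sketch of a dynamic opponent portfolio.

    Tracks per-opponent win rates against the evolving player and
    prunes opponents that are too weak or too strong to be useful.
    """

    def __init__(self, min_games=20, low=0.1, high=0.9):
        self.stats = {}            # opponent name -> [wins, games]
        self.min_games = min_games # games required before pruning
        self.low, self.high = low, high

    def add(self, name):
        # New opponents start with no recorded games.
        self.stats[name] = [0, 0]

    def record(self, name, opponent_won):
        # Update the opponent's win/game counters after one match.
        wins, games = self.stats[name]
        self.stats[name] = [wins + int(opponent_won), games + 1]

    def prune(self):
        # Remove opponents whose win rate is outside [low, high]
        # once enough games have been observed.
        for name, (wins, games) in list(self.stats.items()):
            if games >= self.min_games and not (self.low <= wins / games <= self.high):
                del self.stats[name]

    def sample(self):
        # Pick a random remaining opponent for the next match.
        return random.choice(list(self.stats))
```

In this sketch, pruning keeps only opponents near the player's current strength; a fuller design would also re-insert stronger opponents as the player improves.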
ICAART 2022 - 14th International Conference on Agents and Artificial Intelligence