We have proposed the concept of utility-based Q-learning, which supposes that an agent internally has an emotional mechanism that derives subjective utilities from objective rewards, and that the agent uses these utilities as the rewards of Q-learning. We have also proposed an emotional mechanism that facilitates cooperative actions in Prisoner's Dilemma (PD) games. However, that mechanism was designed and implemented manually in order to force the agents to take cooperative actions in PD games. Since this seems somewhat unnatural, this work considers whether such an emotional mechanism can arise on its own and where it comes from. We try to evolve mechanisms that facilitate cooperative actions in PD games through simulation experiments with a genetic algorithm, and we investigate the evolved mechanisms from various points of view.

Categories and Subject Descriptors: I.2.11 [Artificial Intelligence]: Distributed Artificial Intelligence - intelligent agents, multiagent systems

General Terms: Experimentation
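To make the core idea concrete, the following is a minimal sketch of utility-based Q-learning in a repeated PD. The payoff matrix is the standard PD; the `utility` function stands in for the emotional mechanism and is a hypothetical illustration chosen here (boosting the subjective value of mutual cooperation), not the hand-designed or evolved mechanism from the paper. All class and function names are our own for this sketch.

```python
import random

# Objective PD payoffs: (my action, opponent's action) -> my reward.
# C = cooperate, D = defect.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def utility(reward, bonus=4.0):
    """Hypothetical emotional mechanism: map an objective reward to a
    subjective utility. Here the mutual-cooperation payoff (3) is boosted
    above the temptation payoff (5), turning the PD into a coordination
    problem from the agent's subjective point of view."""
    return reward + bonus if reward == 3 else float(reward)

class UtilityQLearner:
    """Stateless epsilon-greedy Q-learner that learns on subjective
    utilities instead of raw rewards."""
    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = {"C": 0.0, "D": 0.0}
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def act(self):
        if random.random() < self.epsilon:
            return random.choice(["C", "D"])
        return max(self.q, key=self.q.get)

    def learn(self, action, reward):
        # Standard Q-learning update, but driven by utility(reward),
        # not by the objective reward itself.
        u = utility(reward)
        best_next = max(self.q.values())
        self.q[action] += self.alpha * (u + self.gamma * best_next - self.q[action])

# Self-play: two utility-based learners in a repeated PD.
random.seed(0)
a, b = UtilityQLearner(), UtilityQLearner()
for _ in range(5000):
    xa, xb = a.act(), b.act()
    a.learn(xa, PAYOFF[(xa, xb)])
    b.learn(xb, PAYOFF[(xb, xa)])
```

In the paper, the shape of the mechanism is not fixed by hand as above but evolved by a genetic algorithm; this sketch only shows where such a mechanism plugs into the Q-learning update.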
Published: 1 January 2011
10th International Conference on Autonomous Agents and Multiagent Systems 2011, AAMAS 2011 - Taipei, Taiwan, Province of China
Duration: 2 May 2011 → 6 May 2011