Over the past few years, the scientific community has been studying the usefulness of evolutionary game theory for solving distributed control problems. In this paper we analyze a simple version of the Best Experienced Payoff (BEP) algorithm, a revision protocol recently proposed in the evolutionary game theory literature. This revision protocol is simple, completely decentralized, and has minimal information requirements. Here we prove that adding some noise to this protocol can quickly lead to efficient outcomes in single-optimum coordination problems, even in large populations of agents.
We also test the algorithm under a wide range of conditions using computer simulations. In particular, we consider different numbers of agents and strategies, and we analyze the robustness of the algorithm to different updating schemes (e.g. synchronous vs. asynchronous) and to different types of interaction networks (e.g. ring, preferential attachment, small world, and complete). In all cases, using the noisy version of BEP, the agents quickly approach a small neighborhood of the optimal state from every initial condition, and spend most of the time in that neighborhood.
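To make the protocol concrete, here is a minimal sketch of one possible noisy-BEP implementation. It is an illustration only, not the paper's code: it assumes a single trial per tested strategy, asynchronous updating on a complete network, uniform mutation noise with probability `epsilon`, and a single-optimum coordination game in which matching on strategy `s` pays `s + 1`. All function names and parameter values are hypothetical.

```python
import random

def noisy_bep_revision(population, payoff, num_strategies, epsilon, rng):
    """One asynchronous revision under a noisy-BEP sketch (illustrative).

    With probability epsilon the revising agent adopts a uniformly random
    strategy (noise); otherwise it tries each strategy once against a
    randomly drawn opponent and adopts the strategy with the highest
    experienced payoff, breaking ties uniformly at random.
    """
    i = rng.randrange(len(population))                # revising agent
    if rng.random() < epsilon:                        # noise: random strategy
        population[i] = rng.randrange(num_strategies)
        return
    trials = []
    for s in range(num_strategies):                   # test each strategy once
        j = rng.randrange(len(population))            # random opponent (complete network)
        trials.append((payoff[s][population[j]], rng.random(), s))
    population[i] = max(trials)[2]                    # best experienced payoff wins

# Single-optimum coordination game: matching on strategy s pays s + 1,
# so strategy K - 1 is the unique efficient outcome. Values are illustrative.
K = 5
payoff = [[(s + 1) if s == t else 0 for t in range(K)] for s in range(K)]
rng = random.Random(0)
pop = [rng.randrange(K) for _ in range(100)]          # random initial condition
for _ in range(20_000):
    noisy_bep_revision(pop, payoff, K, epsilon=0.05, rng=rng)
share_optimal = pop.count(K - 1) / len(pop)           # mass near the optimum
```

Under these assumptions, most of the population ends up playing the efficient strategy `K - 1`, with the residual fraction sustained by the mutation noise, in line with the behavior the abstract describes.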
Luis is a Professor of Economics, with a background in computer modelling and mathematics. His main areas of expertise are evolutionary game theory and the analysis of complex systems.
Luis received an M.Sc. degree in industrial engineering from the University of Valladolid (Spain) in 2002, a B.Sc. in business and management from the Open University of Catalunya (Spain) in 2003, and a Ph.D. degree in game theory and social simulation from Manchester Metropolitan University (UK) in 2008. He currently teaches courses on machine learning, finance, and complex systems modelling at the University of Burgos (Spain).
Together with Segis Izquierdo and Bill Sandholm, Luis has designed, implemented, and released more than 30 open-source computational programs (see https://luis-r-izquierdo.github.io).