Hurts to Be Too Early: Benefits and Drawbacks of Communication in Multi-Agent Learning.
In a wide variety of applications, decisions for the operation of a network or system are made in a decentralized way by distributed autonomous agents. Examples of this type of distributed decision making arise in wireless and telecommunication networks (e.g., opportunistic spectrum access, dynamic resource allocation), management of the smart grid and electricity markets, and the operation of cyber-physical systems. An instance of practical interest is when the autonomous agents are willing to collaborate to achieve a common goal. In these environments, communication or information sharing can facilitate the coordination needed for agents to reach that shared goal.
However, the rise of self-organizing multi-agent systems operating in fully unknown environments, such as those arising in edge computing applications, has introduced a further challenge. Even in the absence of coordination problems, agents must learn to act in an a priori unknown environment. The problem of learning to act through repeated interactions with an unknown environment, in the presence of other agents, is the subject of the multi-agent reinforcement learning literature.
In this work, we study the problem of multi-agent reinforcement learning in cooperative environments, and analytically evaluate the effects of information sharing on both the coordination and the learning of the agents. We are particularly interested in the role of communication when agents have heterogeneous capabilities in assessing their shared environment. This is motivated by the possible heterogeneity in agents' platforms; for instance, an agent might have a less accurate perception of the environment due to weaker sensors, energy constraints, or limited storage. Such heterogeneity arises in fog computing, for example, where powerful cloud services and resource-limited edge nodes cooperate to assess the environment.
We identify two potential benefits of information sharing when agents' information about the environment is heterogeneous: (1) it can facilitate coordination among agents, and (2) it can enhance the learning of all participants, including the better-informed agents. We show, however, that these benefits depend in general on the timing of communication, so that delayed information sharing may be preferable in certain scenarios.
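The paper's formal model is not reproduced here, but benefit (2) — that sharing observations can improve even the better-informed agent's estimates — can be illustrated with a minimal toy sketch. All names, noise levels, and the precision-weighted fusion rule below are illustrative assumptions, not the paper's actual algorithm: two agents observe the same unknown quantity through sensors of different accuracy, and the fused estimate has lower variance than either agent's solo estimate.

```python
import random

random.seed(0)

TRUE_MEAN = 1.0
SIGMA_A, SIGMA_B = 0.2, 1.0   # hypothetical sensor noise: A is better informed than B
N = 2000                      # observations collected by each agent

obs_a = [random.gauss(TRUE_MEAN, SIGMA_A) for _ in range(N)]
obs_b = [random.gauss(TRUE_MEAN, SIGMA_B) for _ in range(N)]

# Without sharing: each agent averages only its own observations.
solo_a = sum(obs_a) / N
solo_b = sum(obs_b) / N

# With sharing: fuse both estimates by inverse-variance (precision) weighting,
# so the noisier agent's data is down-weighted but still contributes.
w_a, w_b = 1 / SIGMA_A**2, 1 / SIGMA_B**2
fused = (w_a * solo_a + w_b * solo_b) / (w_a + w_b)

err = lambda est: abs(est - TRUE_MEAN)
print(f"weak agent alone:   {err(solo_b):.4f}")
print(f"strong agent alone: {err(solo_a):.4f}")
print(f"fused estimate:     {err(fused):.4f}")
```

In expectation the fused estimator's variance, 1/(N(w_a + w_b)), is strictly below both solo variances, which is the statistical intuition behind benefit (2); the timing effects analyzed in the paper (why sharing *too early* can hurt) are not captured by this static sketch.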
P. Naghizadeh, M. Gorlatova, A. S. Lan, and M. Chiang. "Hurts to Be Too Early: Benefits and Drawbacks of Communication in Multi-Agent Learning." In Proceedings of the 2019 IEEE Conference on Computer Communications (INFOCOM), pp. 622-630. IEEE, 2019.