Abstract: To address the minimum weighted vertex cover (MWVC) problem in agent-based networking systems, this paper recasts it as a potential game and proposes a distributed learning algorithm based on relaxed greedy decision-making and finite memory. Using the concept of a convention, we prove that the algorithm converges with probability 1 to Nash equilibria, which serve as the bridge between the game and the MWVC problem. Moreover, the algorithm offers an additional degree of freedom for equilibrium refinement: increasing the memory length and the mutation probability improves the system-level objective. Comparisons with representative centralized and distributed methods demonstrate the advantage of our algorithm on both the weighted and unweighted versions of the problem. This paper not only provides a useful tool for MWVC in decentralized environments but also offers an effective approach to distributed coordination and optimization problems that can be modeled as potential games.
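To make the high-level description concrete, the following Python sketch illustrates one common way to cast MWVC as a potential game and to run relaxed-greedy, finite-memory play over it. It is only an illustration under assumed modeling choices, not the paper's construction: the binary join/leave action set, the penalty-based potential, and the names `potential`, `relaxed_greedy_step`, `run`, `penalty`, `mutation_p`, and `memory_len` are all assumptions introduced here for exposition.

```python
import random

# Illustrative sketch (not the paper's algorithm): MWVC recast as a potential game.
# Each vertex is an agent with action a_v in {0, 1} (join the cover or not).
# The assumed potential rewards low cover weight and penalizes uncovered edges,
# so greedy individual moves climb the potential; occasional random "mutations"
# relax pure greed, and a finite memory of recent profiles is kept for refinement.

def potential(actions, weights, edges, penalty):
    """Potential = -(cover weight) - penalty * (number of uncovered edges)."""
    cover_cost = sum(w for v, w in weights.items() if actions[v])
    uncovered = sum(1 for u, v in edges if not actions[u] and not actions[v])
    return -cover_cost - penalty * uncovered

def relaxed_greedy_step(v, actions, weights, edges, penalty, mutation_p):
    """One agent's update: greedy choice w.r.t. the potential, relaxed by mutation."""
    if random.random() < mutation_p:      # relaxed greed: occasional random action
        return random.randint(0, 1)
    best_action, best_phi = actions[v], None
    for a in (0, 1):                      # compare the two candidate actions
        trial = dict(actions)
        trial[v] = a
        phi = potential(trial, weights, edges, penalty)
        if best_phi is None or phi > best_phi:
            best_action, best_phi = a, phi
    return best_action

def run(weights, edges, penalty=10.0, mutation_p=0.05, memory_len=20, rounds=500):
    """Asynchronous play: one agent revises per round; keep a finite memory of profiles."""
    actions = {v: 1 for v in weights}     # start from the trivial all-in cover
    memory = []
    for _ in range(rounds):
        v = random.choice(list(weights))
        actions[v] = relaxed_greedy_step(v, actions, weights, edges, penalty, mutation_p)
        memory.append(dict(actions))
        if len(memory) > memory_len:
            memory.pop(0)
    # Return the best profile observed in memory (a simple refinement heuristic).
    return max(memory, key=lambda a: potential(a, weights, edges, penalty))

if __name__ == "__main__":
    # Small 4-cycle example with vertex weights; penalty exceeds every weight,
    # so uncovering an edge is never worthwhile at a greedy fixed point.
    weights = {0: 2.0, 1: 1.0, 2: 3.0, 3: 1.5}
    edges = [(0, 1), (1, 2), (2, 3), (0, 3)]
    cover = run(weights, edges)
    print("cover vertices:", sorted(v for v, a in cover.items() if a))
```

In this sketch the penalty is chosen larger than any vertex weight so that, at a greedy fixed point, leaving an edge uncovered is never beneficial; the memory length and mutation probability play the role of the tunable parameters the abstract refers to, though their precise use in the paper may differ.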