Theoretically it means that two AIs should be able to reach a consensus pretty quickly without needing to share a lot of information.
Unfortunately you need "honest, rational Bayesian agents with common priors" for this to work. Given how rarely humans agree with each other, it's interesting to think about where we fall short of those criteria.
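To make the "quick consensus without sharing a lot of information" idea concrete, here's a minimal toy sketch (my own illustration, not from the original claim): two Bayesian agents with a common prior over a coin's bias each see private flips, then exchange only their posterior probabilities. In this deliberately simple two-hypothesis setup, an announced posterior pins down the likelihood ratio of the private evidence, so a single exchange is enough for both to land on the same number without ever revealing the raw flips.

```python
# Toy Aumann-style agreement: common prior, private evidence, agreement
# after exchanging one posterior each. (Illustrative sketch only; the
# two-hypothesis coin and independent flips are assumptions of the demo.)
import random

BIASES = (0.25, 0.75)   # the two candidate coin biases
PRIOR = 0.5             # common prior P(bias = 0.75)

def posterior(heads, flips, prior=PRIOR):
    """P(bias = 0.75 | private data) via Bayes' rule over the two hypotheses."""
    like_hi = (0.75 ** heads) * (0.25 ** (flips - heads))
    like_lo = (0.25 ** heads) * (0.75 ** (flips - heads))
    return prior * like_hi / (prior * like_hi + (1 - prior) * like_lo)

def likelihood_ratio(post, prior=PRIOR):
    """Recover LR = P(data | hi) / P(data | lo) from an announced posterior."""
    return (post / (1 - post)) / (prior / (1 - prior))

def combine(post_a, post_b, prior=PRIOR):
    """Posterior given both agents' evidence (flips independent given the bias)."""
    odds = (prior / (1 - prior)) * likelihood_ratio(post_a) * likelihood_ratio(post_b)
    return odds / (1 + odds)

random.seed(0)
true_bias = random.choice(BIASES)

# Each agent privately flips the coin; the raw flips are never shared.
flips_a = [random.random() < true_bias for _ in range(20)]
flips_b = [random.random() < true_bias for _ in range(20)]

post_a = posterior(sum(flips_a), len(flips_a))
post_b = posterior(sum(flips_b), len(flips_b))
print(f"A announces {post_a:.3f}, B announces {post_b:.3f}")

# After hearing each other's single number, both compute the same belief.
agreed = combine(post_a, post_b)
print(f"Both now believe P(bias = 0.75) = {agreed:.3f}")
```

Of course, this only works because the agents are honest, perfectly Bayesian, and start from the same prior; drop any of those and the exchanged numbers stop being enough.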