The ML estimate is a posterior mode, assuming a flat prior. It's not immediately clear that a corresponding posterior mean will always exist — a flat prior can yield an improper or heavy-tailed posterior whose mean diverges. (From a Bayesian point of view, this is a difference in loss functions as opposed to priors over the parameters. With a posterior mean, you're making the optimal inference assuming a quadratic loss; a posterior mode is appropriate for a 0-or-1 loss.)
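A minimal sketch of the mode/mean distinction, using a binomial likelihood with a flat Beta(1, 1) prior (the specific numbers are just illustrative): after k successes in n trials the posterior is Beta(k + 1, n - k + 1), whose mode recovers the ML estimate k/n while its mean, (k + 1)/(n + 2), is pulled toward 1/2.

```python
# Flat Beta(1, 1) prior on a binomial success probability:
# posterior after k successes in n trials is Beta(k + 1, n - k + 1).
k, n = 3, 10  # illustrative data

alpha, beta = k + 1, n - k + 1

# Posterior mode = (alpha - 1)/(alpha + beta - 2) = k/n: the ML estimate,
# i.e. the optimal point estimate under a 0-or-1 loss.
posterior_mode = (alpha - 1) / (alpha + beta - 2)

# Posterior mean = alpha/(alpha + beta) = (k + 1)/(n + 2): the optimal
# point estimate under quadratic loss.
posterior_mean = alpha / (alpha + beta)

print(posterior_mode)  # 0.3
print(posterior_mean)  # 0.333...
```

The two estimates disagree whenever the posterior is asymmetric, which makes concrete that the choice between them is a choice of loss function, not of prior.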