The container seems to have exited gracefully on the host restart, but I still have the errors that I mention in my first post: I cannot start the container because I get "network already has endpoint with name cont1". I then try deleting the container and disconnecting it from the network, but then I get an error saying there is no such container.
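For reference, this is roughly the sequence of commands involved; the network name `my-overlay` is a placeholder, and the error texts in the comments are approximate:

```
# After the host restart the container shows as exited
docker ps -a

# Starting it again fails because of a stale endpoint on the network
docker start cont1
# Error response from daemon: ... network my-overlay already has an endpoint with name cont1

# Cleaning up: remove the container, then try to detach it from the network
docker rm cont1
docker network disconnect my-overlay cont1
# Error response from daemon: No such container: cont1
```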
I keep getting this error, and it has become a real problem: one of our production servers was shut down and restarted, and all the (Docker) services on it were unable to restart because of this error. I have dug deeper into the problem and I think I have found a way to reproduce it (at least I have done it three or four times and it failed every time, so I suppose this counts as reproducible!). NOTE: I previously thought this was a problem caused by running a single-node Consul cluster, where that node was itself a Docker container on the host I was crashing. I have now ruled that out: I created the Consul cluster on a remote server, and the Consul node running on my Docker host is only a Consul client that connects to the remote Consul server. So I'm going to show the steps to reproduce the error with a remote server, but I'm confident it will be the same if you don't have one: you just have to create the Consul server on the same host (this is the way I was running it before).
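For context, the setup on the Docker host looks roughly like this; the IPs, interface name, and data directory are placeholders, and I'm assuming the legacy `--cluster-store`/`--cluster-advertise` mechanism for overlay networking backed by Consul:

```
# Consul agent in client mode on the Docker host, joining the remote Consul server
consul agent -data-dir=/tmp/consul \
    -bind=10.0.0.5 \
    -retry-join=10.0.0.100   # 10.0.0.100 = remote Consul server (placeholder)

# Docker daemon pointed at the local Consul client as its KV store
dockerd \
    --cluster-store=consul://127.0.0.1:8500 \
    --cluster-advertise=eth0:2376
```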