Fix Zookeeper Cluster Readiness Check in init container #102
base: develop
Conversation
while :
do
  echo "Checking for ZK readiness"
  ZK_OK=$(echo ruok | nc {{ .Release.Name }}-zookeeper-headless.{{ .Release.Namespace }} 2181)
ruok just responds imok if the ZooKeeper server is up and bound to the port [1]. That is fine for a healthcheck on a ZK server alone, but it is not sufficient for Stardog's purposes. Stardog needs to know that ZooKeeper is up and ready (i.e. all ZK instances are participating in the quorum).
My understanding of the k8s service is that it will cycle through the servers, so while it may hit a follower which doesn't have the information Stardog needs, it would loop until it finds a server that does.
Are you finding that it never tries new servers and this loops endlessly? Or just that it tries and fails a few times before eventually succeeding?
It never tries a new server, and this happens very often. The result is that perhaps one of three Stardog pods comes up while the others never do. Even hours later, the same follower is used. After helm uninstall followed by another helm install, you may get a different result; the selection is entirely random. Some other method needs to be implemented. This is so common that we must be missing something in our tests.
BTW, are you saying that the liveness check is inadequate because a Zookeeper cluster can report it's in an ok state even though there is no leader? I wouldn't think that would be possible.
No pods will come up if ZooKeeper hasn't reported ready. Stardog won't move on to deploying until ZK reports ready as determined by this check, iirc (though it's been a while since I looked closely). I suspect something else is wrong with your deploys unless you've modified the charts to deploy Stardog even if this check hasn't passed.
Yes, a liveness check just determines whether the server is alive, which isn't the same as whether it's ready. For ZK, that is this ruok command, similar to how Stardog has /admin/alive for an alive check on the individual server and /admin/healthcheck for a ready check for the cluster. The alive check just says the server is up. The ready check says it's up and actually ready (i.e. a member of the cluster). I don't think ZK has something akin to a ready check for an ensemble, hence the need to check something else.
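To illustrate the distinction, here is a minimal sketch of the two kinds of checks mentioned above. The hostnames are placeholders, and 5820 is assumed to be Stardog's default HTTP port; adjust both to your actual release and namespace.

# Liveness-style check: any running ZK server bound to 2181 answers "imok".
echo ruok | nc zk-release-zookeeper-headless.my-namespace 2181

# Stardog's own split between alive and ready (hypothetical pod/service names):
curl -fs http://stardog-0.stardog-service.my-namespace:5820/admin/alive        # process is up
curl -fs http://stardog-0.stardog-service.my-namespace:5820/admin/healthcheck  # up and joined the cluster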
Problem
We've been facing intermittent failures with the existing method of checking Zookeeper cluster readiness within our init container in the StatefulSet. The init container was echoing the mntr command to port 2181 of the Zookeeper service and grepping for zk_synced_followers. However, this check only works if the service directs the request to the leader of the Zookeeper pods. If a follower pod responds, the zk_synced_followers parameter is not present, causing the check to fail erroneously.
Solution
To address this issue, we're modifying the readiness check to echo the ruok command instead and look for the imok response. This is the same method used in the Liveness and Readiness probes of the Zookeeper deployment. This method of checking is more reliable and robust than the previous one, as it does not rely on the leader pod answering the readiness check and will work regardless of whether the service directs the request to the leader or a follower pod.
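The modified loop would look roughly like the snippet below, based on the diff above; the exit condition and sleep interval are assumptions for illustration.

# New approach (approximate): any serving ZK node answers "imok" to "ruok",
# so the check succeeds no matter which pod the Service selects.
while :
do
  echo "Checking for ZK readiness"
  ZK_OK=$(echo ruok | nc {{ .Release.Name }}-zookeeper-headless.{{ .Release.Namespace }} 2181)
  if [ "$ZK_OK" = "imok" ]; then
    break
  fi
  sleep 2
done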
Testing
The problem with Stardog sporadically not reaching a ready state in Kubernetes has been reported by various customers, most recently IDA. I was easily able to reproduce the problem in my own k3s cluster as well as an Azure AKS cluster. This fix has been tested successfully in both k8s distributions with both 'Parallel' and 'OrderedReady' pod management policies.