[Bug] New partitions created will cause Flink job failover when partition discovery is enabled while scanning #288
Comments
It seems the main branch has introduced a retry mechanism with a maximum of 5 attempts. That may still be too short to wait for the leader to become ready if the update-metadata response returns quickly. Let me check whether the problem can still happen on the latest branch.
It turns out the exception can still happen.
By the way, I think we should back off before retrying, along the lines of the sketch below.
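A minimal sketch of what retrying with exponential backoff could look like, assuming a simple helper around the metadata/offsets call; the helper name and structure are illustrative, not Fluss's actual API:

```java
import java.time.Duration;
import java.util.concurrent.Callable;

public final class BackoffRetry {

    /**
     * Retries the supplied action up to maxAttempts times, doubling the wait
     * between attempts instead of retrying immediately. A hypothetical helper,
     * not Fluss's actual retry code.
     */
    public static <T> T retryWithBackoff(
            Callable<T> action, int maxAttempts, Duration initialBackoff) throws Exception {
        Duration backoff = initialBackoff;
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return action.call();
            } catch (Exception e) {
                last = e;
                if (attempt < maxAttempts) {
                    // Wait before the next attempt, then double the wait time.
                    Thread.sleep(backoff.toMillis());
                    backoff = backoff.multipliedBy(2);
                }
            }
        }
        throw last; // all attempts exhausted
    }
}
```

With 5 attempts and a 100 ms initial backoff this waits roughly 1.5 s in total, which gives a slow leader election noticeably more time than 5 immediate retries.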
I wonder whether it also happens when a leader is lost and the new leader has not yet been elected. It seems Kafka made ListOffsets retriable in https://issues.apache.org/jira/browse/KAFKA-14821, in Kafka 3.5.0.
Search before asking
Fluss version
0.5.0
Minimal reproduce step
Create a partitioned table and a Flink job that subscribes to it. After new partitions are created, the job may throw an exception, which causes a job failover. A hypothetical reproduction is sketched below.
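A sketch of the reproduction using the Flink Table API. The DDL and connector options here are illustrative assumptions (in practice the table would be created through the Fluss catalog; consult the Fluss docs for the exact option keys and the partition-discovery setting):

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class ReproduceFailover {
    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.inStreamingMode());

        // A partitioned table; new partitions are added while the job is running.
        tEnv.executeSql(
                "CREATE TABLE orders ("
                        + "  order_id BIGINT,"
                        + "  dt STRING"
                        + ") PARTITIONED BY (dt) WITH ("
                        + "  'connector' = 'fluss',"
                        + "  'bootstrap.servers' = 'localhost:9123'"
                        + ")");

        // Continuously scan the table. When partition discovery picks up a
        // newly created partition whose leader is not yet elected, listing
        // offsets fails and the job fails over.
        tEnv.executeSql("SELECT * FROM orders").print();
    }
}
```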
What doesn't meet your expectations?
It is expected that an exception is thrown, since the leader may not yet be elected even though the partition has been created. Although the job runs normally again after the failover, we had better handle this cause so that we do not fail over every time a new partition is created (see the sketch below).
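One way to handle it would be to treat the leader-not-available error as retriable inside the scanner instead of surfacing it to Flink. A minimal sketch, assuming hypothetical placeholder names (LeaderNotAvailableException, OffsetsClient) rather than Fluss's actual classes:

```java
import java.time.Duration;
import java.util.List;

public class PartitionSubscriber {

    /** Illustrative placeholder for the error thrown while the leader is missing. */
    static class LeaderNotAvailableException extends RuntimeException {}

    /** Illustrative placeholder for the client that lists offsets for a partition. */
    interface OffsetsClient {
        List<Long> listOffsets(String partition) throws LeaderNotAvailableException;
    }

    private final OffsetsClient client;

    PartitionSubscriber(OffsetsClient client) {
        this.client = client;
    }

    /** Subscribes to a newly discovered partition, retrying until its leader is elected. */
    List<Long> subscribe(String partition) throws InterruptedException {
        Duration backoff = Duration.ofMillis(100);
        while (true) {
            try {
                return client.listOffsets(partition);
            } catch (LeaderNotAvailableException e) {
                // Leader election for the fresh partition has not finished yet;
                // wait and retry instead of letting the exception fail the job.
                Thread.sleep(backoff.toMillis());
                backoff = backoff.multipliedBy(2);
                if (backoff.compareTo(Duration.ofSeconds(10)) > 0) {
                    backoff = Duration.ofSeconds(10); // cap the backoff
                }
            }
        }
    }
}
```

In a real implementation the loop would also need an overall deadline or cancellation check so a partition whose leader never appears does not block the scanner forever.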
Anything else?
No response
Are you willing to submit a PR?