feat: customize pod informer to reduce memory usage #46
Conversation
```go
	}
	defer semaphore.Release(1)
}
pods, err := client.CoreV1().Pods(corev1.NamespaceAll).List(ctx, options)
```
The client-go informers use `context.TODO()`. Intuitively I think it does make sense to shut down the ListWatch when the federated client is shutting down, but I can't be sure. @JackZxj @limhawjia @SOF3 @mrlihanbo WDYT?
I think we can reorganize all context usages in a separate PR; the scope is too big and doesn't need to be fixed here immediately.
Agree that context usage should be fixed in a separate PR. I'm just wondering whether using `ctx` instead of `context.TODO` here is correct. Using `ctx` would mean the `ListWatch` would shut down when `ctx` is cancelled.
@JackZxj Great enhancement. Thanks for the contribution!
Use a custom pod informer to limit the number of concurrent pod list requests to member clusters, and simplify pod metadata before the pods are stored in the cache.
Related conversation: #14
As a result (memory usage comparison graph omitted):
Test setup: 4 clusters, 100 kwok nodes/cluster, 10 deployments/cluster, 1000 pods/deployment (10k pods/cluster total)
Summary: the custom informer saves at least half of the memory and reduces the growth rate of memory usage.