perf: clear instance type cache after ICE #7517
base: main
Conversation
Pull Request Test Coverage Report for Build 12281157741

Warning: This coverage report may be inaccurate. This pull request's base commit is no longer the HEAD commit of its target branch, so it includes changes from outside the original pull request, potentially including unrelated coverage changes.

💛 - Coveralls
@jesseanttila-cai Can you help me understand why the automatic cache clean-up does not cover the case you are describing? The cache has a defined TTL:

The automatic cache clean-up does indeed limit the effect of the issue; however, in practice the number of ICE occurrences within the TTL period can be large enough to have a very significant effect on memory usage. The graph shared in #7443 displays this behavior, including the effect of the cache TTL. The heap profile for the flamegraph in the comment is available here.
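To illustrate why TTL-based eviction alone does not bound memory here, the following is a minimal sketch (all names are illustrative, not Karpenter's actual API): every ICE bumps a sequence number that is embedded in the cache key, so the next lookup stores a fresh copy of the full instance-type list under a brand-new key. TTL eviction only removes an entry once its TTL elapses, so every copy created within one TTL window is resident at the same time.

```go
package main

import "fmt"

// cache is a toy stand-in for a TTL-evicted cache keyed by a composite key.
type cache map[string][]string // key -> cached instance-type list

// keyFor mimics a cache key that embeds the unavailableOfferings sequence
// number: bumping seqNum invalidates (but does not free) all prior entries.
func keyFor(seqNum int) string { return fmt.Sprintf("instance-types/seq-%d", seqNum) }

func main() {
	c := cache{}
	seqNum := 0
	allTypes := make([]string, 700) // roughly the number of EC2 instance types per region

	// Ten ICEs arrive within one TTL window:
	for i := 0; i < 10; i++ {
		seqNum++                     // unavailableOfferings modified on each ICE
		c[keyFor(seqNum)] = allTypes // next lookup re-caches the full list under a new key
	}
	fmt.Println(len(c)) // 10 live copies coexist until TTL eviction runs
}
```

Peak memory therefore scales with the number of ICEs per TTL window, which matches the growth visible in the #7443 graph.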
Fixes #7443

Description

The instanceTypesCache in the InstanceType provider uses a complex cache key that includes the SeqNum of the unavailableOfferings cache. Since all entries of instanceTypesCache become invalid whenever unavailableOfferings is modified, the cache can be flushed in that case to reduce memory usage.

Changes to other parts of the instanceTypesCache key are not considered in this patch. It is likely that a similar issue could be triggered, for example, by dynamically updating the blockDeviceMappings of a nodeclass based on pod requirements. Since our setup most commonly modifies nodepool/nodeclass configurations in response to ICEs, this patch was sufficient to solve our memory usage issues.

How was this change tested?
A patched version of the Karpenter controller was deployed in a development environment, and memory usage during cluster scale-up was tracked for a period of two weeks. Peak memory usage appears to be reduced by as much as 80% in situations where several ICEs occur in quick succession, completely eliminating previously seen OOM issues.
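The flush-on-invalidation idea behind this patch can be sketched as follows (a simplified model with illustrative names, not Karpenter's actual types): when the unavailableOfferings cache is modified, its sequence number is bumped, which invalidates every instanceTypesCache key that embeds it; flushing the cache at that moment frees the stale entries immediately instead of leaving them resident until their TTL expires.

```go
package main

import "fmt"

// store is a toy stand-in for the instance-types cache.
type store struct {
	items map[string][]string
}

func newStore() *store                     { return &store{items: map[string][]string{}} }
func (s *store) Set(k string, v []string)  { s.items[k] = v }
func (s *store) Flush()                    { s.items = map[string][]string{} }
func (s *store) Len() int                  { return len(s.items) }

// unavailableOfferings models the ICE cache whose SeqNum is part of the
// instance-types cache key; onChange is the proposed flush hook.
type unavailableOfferings struct {
	seqNum   int
	onChange func()
}

func (u *unavailableOfferings) MarkUnavailable(offering string) {
	u.seqNum++ // invalidates every instanceTypesCache key embedding seqNum
	if u.onChange != nil {
		u.onChange()
	}
}

func main() {
	instanceTypesCache := newStore()
	ice := &unavailableOfferings{}
	ice.onChange = instanceTypesCache.Flush // the change proposed in this PR

	instanceTypesCache.Set("seq-0/pool-a", []string{"m5.large", "m5.xlarge"})
	ice.MarkUnavailable("m5.large/us-east-1a/spot") // an ICE arrives
	fmt.Println(instanceTypesCache.Len())           // stale entries freed immediately
}
```

Since the sequence-number bump already makes every existing entry unreachable, flushing on the same event trades a cheap re-population for an immediate drop in resident memory, which is consistent with the ~80% peak-memory reduction reported above.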
Does this change impact docs?
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.