Konnectivity Proxy: move proxy-agent cpu limit to request. #103626
Conversation
@jkh52: This issue is currently awaiting triage. If a SIG or subproject determines this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Hi @jkh52. Thanks for your PR. I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Once the patch is verified, the new status will be reflected by the ok-to-test label. I understand the commands that are listed here.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
cpu: 50m
limits:
Was it intentional to change the cpu to requests but keep the memory as limits?
Yes, intentional. I wanted to avoid artificial restrictions from a CPU limit, and to incrementally stay close to the GKE configuration.
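For reference, a minimal sketch of what the proxy-agent resources stanza looks like after this change, assuming only the cpu: 50m value visible in the hunk above; the memory figure is a placeholder, not taken from this diff:

```yaml
# Sketch of the proxy-agent container resources after this change:
# CPU is only requested (no limit), while memory keeps a limit.
resources:
  requests:
    cpu: 50m       # value from the reviewed hunk
  limits:
    memory: 30Mi   # placeholder; the actual memory limit is not shown in this hunk
```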
Are we experiencing bursty CPU behavior on the agent?
> Are we experiencing bursty CPU behavior on the agent?
No, this is just a best practice / cleanup (this PR shrank after a rebase but this seems worth keeping).
+1 for this change.
Some background on the best practice, just in case: due to bugs in several kernel versions, CPU limits can cause throttling even when it is not needed. Also, given the way CFS works, when no CPU limit is set and there is contention, CPU is allocated proportionally to the requests. Having no CPU limits is quite fine :)
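A hypothetical illustration of that proportional behavior (not from this PR; the pod name and images are made up for the example): with no CPU limits set, two busy containers whose requests are 50m and 100m end up with CPU time in roughly a 1:2 ratio when the node is under contention, because the kubelet maps CPU requests onto cgroup CPU shares/weight.

```yaml
# Hypothetical demo pod, not part of this PR. Both containers busy-loop to
# create CPU contention. With no CPU limits, CFS divides CPU time roughly
# in proportion to the requests below (about 1:2).
apiVersion: v1
kind: Pod
metadata:
  name: cpu-shares-demo          # made-up name for illustration
spec:
  containers:
  - name: small
    image: busybox
    command: ["sh", "-c", "while true; do :; done"]
    resources:
      requests:
        cpu: 50m                 # lower CFS weight
  - name: large
    image: busybox
    command: ["sh", "-c", "while true; do :; done"]
    resources:
      requests:
        cpu: 100m                # roughly twice the weight of "small"
```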
And memory limits work fine, and you do want them. So changing only the CPU limit to a request LGTM.
Force-pushed from 0682316 to 8b9c439
/lgtm
/retest
Force-pushed from 8b9c439 to 4bbe11e
/lgtm
/ok-to-test
/retest
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
We still want this, and it even came up at kubernetes-sigs/apiserver-network-proxy#261
Force-pushed from 4bbe11e to d13ee80
/retest
Kubernetes e2e suite: [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path (34s)
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: caesarxuchao, cheftako, jkh52
The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
What type of PR is this?
/kind bug
/kind failing-test
/kind flake
What this PR does / why we need it:
While debugging the e2e flakes at #102904 (dial timeout cases), we see agents reconnecting. We don't yet collect agent logs (kubernetes/test-infra#22811); in the meantime, these tuning tweaks may increase agent availability and help bring down flakiness.
Which issue(s) this PR fixes:
Special notes for your reviewer:
Does this PR introduce a user-facing change?
Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.: