Dynamically set Disk Volume size. #1988

Open
gurpalw opened this issue Feb 13, 2025 · 3 comments
Labels
kind/feature: Categorizes issue or PR as related to a new feature.
priority/backlog: Higher priority than priority/awaiting-more-evidence.
triage/accepted: Indicates an issue or PR is ready to be actively worked on.

Comments


gurpalw commented Feb 13, 2025

Description

What problem are you trying to solve?

Karpenter provisions nodes of all sizes, which is great. But this means that a 50GB root volume, which may be sufficient for most nodes, will not be sufficient for larger nodes, since more pods can run on them and therefore more images are downloaded.

Karpenter should be able to dynamically set the root volume size based on the size of the node, using rules set by the Karpenter user.

How important is this feature to you?
This will prevent excessive spend, as the alternative is to simply increase the node class's volume size. That is inefficient, since we share an EC2NodeClass across many different node types/sizes. It will also prevent DiskPressure evictions.
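
To make the kind of rule being asked for concrete, here is a minimal, purely illustrative Go sketch. The rule shape and names (`volumeSizeRule`, `rootVolumeFor`) are hypothetical, not an existing Karpenter or EC2NodeClass API; it just maps an instance type's memory to a root volume size, with a fallback default.

```go
// Hypothetical, illustrative only: not an existing Karpenter or EC2NodeClass API.
package main

import "fmt"

// volumeSizeRule is a made-up shape for a user-defined sizing rule:
// instance types with up to MaxMemoryGiB of memory get VolumeSizeGiB of root volume.
type volumeSizeRule struct {
	MaxMemoryGiB  int
	VolumeSizeGiB int
}

// rootVolumeFor returns the volume size of the first matching rule,
// or the fallback if no rule matches.
func rootVolumeFor(instanceMemGiB int, rules []volumeSizeRule, fallbackGiB int) int {
	for _, r := range rules {
		if instanceMemGiB <= r.MaxMemoryGiB {
			return r.VolumeSizeGiB
		}
	}
	return fallbackGiB
}

func main() {
	rules := []volumeSizeRule{
		{MaxMemoryGiB: 32, VolumeSizeGiB: 50},
		{MaxMemoryGiB: 128, VolumeSizeGiB: 100},
	}
	// e.g. an instance type with 128 GiB of memory matches the second rule
	fmt.Println(rootVolumeFor(128, rules, 200)) // prints 100
}
```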

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment
@gurpalw added the kind/feature label on Feb 13, 2025
@k8s-ci-robot added the needs-triage label on Feb 13, 2025
@jonathan-innis (Member)

+1, this idea looks quite similar to aws/karpenter-provider-aws#2394 in the AWS provider repo. One thought here: if we just capture the total pod requests for ephemeral storage, we could use that to scale the volume size. The only issue you might run into there is that you might end up with CPU/memory left over on the instance that will go unused. We could potentially think about assigning some "default" ephemeral storage for every GiB or CPU that is going unused, to create a little headroom on the instance.
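
A minimal Go sketch of that heuristic, assuming the pods scheduled to the candidate instance are already known. The headroom constants are illustrative placeholders, not values Karpenter defines today.

```go
// Sketch of the idea above: size the root volume from the scheduled pods'
// ephemeral-storage requests, plus headroom for unused CPU/memory.
package main

import "fmt"

type podRequest struct {
	ephemeralStorageGiB float64
	cpu                 float64
	memoryGiB           float64
}

// rootVolumeGiB returns the total ephemeral-storage requested by the pods plus
// a "default" allowance for each vCPU and GiB of memory left unused on the node.
func rootVolumeGiB(pods []podRequest, instanceCPU, instanceMemGiB float64) float64 {
	var storage, cpuUsed, memUsed float64
	for _, p := range pods {
		storage += p.ephemeralStorageGiB
		cpuUsed += p.cpu
		memUsed += p.memoryGiB
	}
	const headroomPerCPU = 2.0    // GiB of storage per unused vCPU (placeholder)
	const headroomPerMemGiB = 0.5 // GiB of storage per unused GiB of memory (placeholder)
	headroom := (instanceCPU-cpuUsed)*headroomPerCPU + (instanceMemGiB-memUsed)*headroomPerMemGiB
	if headroom < 0 {
		headroom = 0
	}
	return storage + headroom
}

func main() {
	pods := []podRequest{
		{ephemeralStorageGiB: 4, cpu: 1, memoryGiB: 2},
		{ephemeralStorageGiB: 8, cpu: 2, memoryGiB: 4},
	}
	// 16 vCPU / 64 GiB instance with the two pods above
	fmt.Printf("root volume: %.0f GiB\n", rootVolumeGiB(pods, 16, 64))
}
```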

@jonathan-innis (Member)

/triage accepted

@k8s-ci-robot added the triage/accepted label and removed the needs-triage label on Feb 13, 2025
@jonathan-innis (Member)

/priority backlog

@k8s-ci-robot added the priority/backlog label on Feb 13, 2025