Devpod Version: 0.6.7
Provider: Kubernetes
Flavour: Azure Kubernetes Service
Is your feature request related to a problem?
Thank you for the amazing product and all the hard work that goes into it! I’ve encountered an issue related to resource scheduling when using devpod up. Currently, the workflow starts with an architecture detection pod that determines the node’s platform/architecture (e.g., x86, arm64). Based on the detected architecture, the actual devpod is then created, and it must be scheduled on the same node to avoid an architecture mismatch.
The issue arises because the architecture detection pod requires minimal resources and can be scheduled on smaller nodes (e.g., 2 cores, 8GB RAM). However, if the devpod requests larger resources (e.g., 16GB RAM), it cannot be scheduled on the same node, leading to scheduling failures.
Which solution do you suggest?
I propose adding an option to specify the desired platform/architecture directly via DevPod settings or the CLI. This would eliminate the need to run the architecture detection pod entirely, simplifying the process and avoiding scheduling conflicts.
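For illustration, here is a rough sketch of what this could look like, assuming the existing `devpod provider set-options` mechanism is reused; the ARCHITECTURE option name is hypothetical and does not exist today:

```sh
# Hypothetical: declare the target architecture up front so the Kubernetes
# provider can skip the detection pod entirely. The option name
# "ARCHITECTURE" is an assumption, not an existing provider option.
devpod provider set-options kubernetes -o ARCHITECTURE=arm64

# devpod up would then create the workspace pod directly, without first
# scheduling a small detection pod. The repository URL is a placeholder.
devpod up github.com/example/my-repo
```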
Which alternative solutions exist?
An alternative could involve aligning the resource requests/limits of the architecture detection pod with those of the devpod, ensuring they are scheduled on compatible nodes. However, this approach increases resource requirements temporarily and might not be ideal.
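A related workaround under the current behavior (a different technique than aligning resources: constraining scheduling instead) would be to pin both pods to nodes of a single architecture via the standard kubernetes.io/arch node label, assuming the Kubernetes provider exposes a node selector option; the NODE_SELECTOR name below should be verified against the provider's documented options:

```sh
# Sketch of a possible workaround: restrict both the detection pod and the
# devpod to nodes of one architecture using the standard kubernetes.io/arch
# label. "NODE_SELECTOR" is assumed to be the provider option name; verify
# it before relying on this.
devpod provider set-options kubernetes -o "NODE_SELECTOR=kubernetes.io/arch=arm64"
```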
Additional context
Allowing users to pass the platform/architecture directly would streamline workflows and provide more control over the scheduling process, reducing unnecessary pod creation and resource usage.
Thank you again for the fantastic product and for considering this request! Looking forward to your thoughts on implementing this feature.
Hey @anshuman852, thanks for this suggestion. It makes a lot of sense to me, especially for smaller node pools.
Specifying the architecture in the kubernetes provider options seems to be the most straightforward way to implement this, and the easiest to use.