In-cluster deployment looks for a kubeconfig #1826

Trying to deploy Headlamp using the kubernetes-headlamp.yaml into a k8s cluster.
Is the in-cluster deployment supposed to look for a kubeconfig?

Comments
+1, saw the same error

Same issue here... Is in-cluster deployment just straight up broken? How did you resolve this @rosh-cha or @thomaspetit?

No, I haven't had the time to look into it, as I've been quite busy fixing my bootstrap script for my local clusters. I might revisit this soon with the latest multi-cluster features that I read about (for in-cluster deployments).

I'm facing the same issue, no luck deploying according to https://headlamp.dev/docs/latest/installation/in-cluster/ :(

Same issue here, not sure how to get started...

Same issue with Keycloak OIDC configuration.

Same issue. I really just want to bypass the OIDC authentication, as I already have auth happening via a reverse proxy. Is there any way to pass a kubeconfig in via values.yaml? Or, if not, does anyone know how I might remove the need for auth? Apologies if that makes no sense, I am far from an expert :) It may be that a kubeconfig doesn't need passing and I just need to somehow disable the need for an OIDC token.
FYI, we're currently loading a static kubeconfig file in-cluster by mounting it via `volumeMounts`:

```yaml
volumeMounts:
  - name: "headlamp-kubeconfig"
    mountPath: "/home/headlamp/.config/Headlamp/kubeconfigs/"
volumes:
  - name: "headlamp-kubeconfig"
    secret:
      secretName: "kubeconfig-vc-crossplane-consumer"
```

We still need to test support for multiple kubeconfig files, by passing the /CC @Guilamb
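For anyone reproducing the mount above, the Secret can hold the kubeconfig under a `config` key so it appears as a file in the mounted directory. A minimal sketch, where the namespace, cluster endpoint, and token are placeholders rather than values from this thread:

```yaml
# Sketch: a Secret whose "config" key becomes a kubeconfig file under
# /home/headlamp/.config/Headlamp/kubeconfigs/ via the mount shown above.
apiVersion: v1
kind: Secret
metadata:
  name: kubeconfig-vc-crossplane-consumer
  namespace: headlamp                          # assumed namespace
type: Opaque
stringData:
  config: |
    apiVersion: v1
    kind: Config
    clusters:
      - name: demo
        cluster:
          server: https://example-cluster:6443 # placeholder endpoint
    users:
      - name: demo
        user:
          token: REPLACE_WITH_TOKEN            # placeholder credential
    contexts:
      - name: demo
        context:
          cluster: demo
          user: demo
    current-context: demo
```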
Fixes #1826 Signed-off-by: René Dudfield <[email protected]>
Ah, I didn't think anyone would load kubeconfigs in-cluster. As far as I know, the errors are just misleading for most people. When Headlamp is running in-cluster (and not as an app), these should be warning logs, not error logs. @gberche-orange I wonder if there's another way to support what you're doing there with the kubeconfig file? It's unclear to me whether we should support kubeconfigs in-cluster at all, or whether that should be enabled explicitly when used in-cluster (because most people won't use it and the logs would be misleading).
I agree that many won't be using kubeconfigs in the cluster, but removing it completely could affect existing workflows, so making it available behind a config option may be the way to go.
Our use case is to use Headlamp to demo the RBAC user experience of various actors (app developers, app operators, platform operators). The ability to load multiple distinct kubeconfig files is useful to stay as close as possible to real conditions (i.e. where each actor is given a kubeconfig to use off-cluster). See the sketch below for one way to wire this up.
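One way to expose several actors at once is to mount one kubeconfig file per actor into the directory Headlamp reads, assuming Headlamp picks up every file in that directory (which, per the earlier comment, is still untested). All secret and file names here are illustrative:

```yaml
# Sketch: one kubeconfig file per actor, each sourced from its own Secret
# carrying a "config" key, mounted into the directory Headlamp scans.
volumeMounts:
  - name: kubeconfig-app-dev
    mountPath: /home/headlamp/.config/Headlamp/kubeconfigs/app-developer
    subPath: config
    readOnly: true
  - name: kubeconfig-platform-op
    mountPath: /home/headlamp/.config/Headlamp/kubeconfigs/platform-operator
    subPath: config
    readOnly: true
volumes:
  - name: kubeconfig-app-dev
    secret:
      secretName: kubeconfig-app-developer
  - name: kubeconfig-platform-op
    secret:
      secretName: kubeconfig-platform-operator
```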
So I've tried using the …
I'm on the latest stable version of Headlamp. I fixed it by adding the following values to the Helm chart:

```yaml
initContainers:
  - command:
      - /bin/sh
      - "-c"
      - |
        kubectl config set-cluster main --server=https://kubernetes.default.svc --certificate-authority=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        kubectl config set-credentials main --token=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
        kubectl config set-context main --cluster=main --user=main
        kubectl config use-context main
    env:
      - name: KUBERNETES_SERVICE_HOST
        valueFrom:
          fieldRef:
            fieldPath: status.hostIP
      - name: KUBERNETES_SERVICE_PORT
        value: "6443"
      - name: KUBECONFIG
        value: /home/headlamp/.config/Headlamp/kubeconfigs/config
    image: bitnami/kubectl:1.32.0
    name: create-kubeconfig
    securityContext:
      capabilities:
        drop:
          - ALL
      privileged: false
      readOnlyRootFilesystem: true
      runAsGroup: 101
      runAsNonRoot: true
      runAsUser: 100
    volumeMounts:
      - mountPath: /home/headlamp/.config/Headlamp/kubeconfigs
        name: kubeconfig

volumeMounts:
  - mountPath: /home/headlamp/.config/Headlamp/kubeconfigs/config
    name: kubeconfig
    readOnly: true
    subPath: config

volumes:
  - name: kubeconfig
    emptyDir: {}
```
The values.yml works, but for some reason it doesn't ask for a token; it immediately grants full rights. When is a new release expected in which access to the Kubernetes cluster will work out of the box with the Helm installation?
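A note on that behaviour: with the initContainer approach above, the generated kubeconfig embeds the pod's ServiceAccount token, so Headlamp gets whatever permissions that ServiceAccount has without prompting for a token. If that is too broad, the access can be narrowed with RBAC. A minimal sketch, assuming the chart's ServiceAccount is named `headlamp` in the `headlamp` namespace (both names are assumptions, not confirmed by this thread):

```yaml
# Sketch: bind a read-only ClusterRole to the ServiceAccount the Headlamp
# pod runs as, so the token-based kubeconfig above only grants view access.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: headlamp-view              # hypothetical binding name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view                       # built-in read-only aggregate ClusterRole
subjects:
  - kind: ServiceAccount
    name: headlamp                 # assumed ServiceAccount name
    namespace: headlamp            # assumed namespace
```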