KEP-4969: Cluster Domain Downward API #4972
base: master
Conversation
nightkr
commented
Nov 21, 2024
- One-line PR description: Initial KEP draft
- Issue link: Cluster Domain Downward API #4969
- Other comments:
Welcome @nightkr!
Hi @nightkr. Thanks for your PR. I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with `/ok-to-test` on its own line. Once the patch is verified, the new status will be reflected by the `ok-to-test` label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
just two minor nitpicks/typos
Currently there is no way for cluster workloads to query for this domain name,
leaving them either use relative domain names or take it as manual configuration.
Suggested change:

Currently, there is no way for cluster workloads to query this domain name,
leaving them to either use relative domain names or configure it manually.
I left "query for this" as-is, because I read the revised variant as "ask what the domain name (that I already know) means" rather than "ask what the domain name is".
Currently there is no way for cluster workloads to query for this domain name,
leaving them either use relative domain names or take it as manual configuration.

This KEP proposes adding a new Downward API for that workloads can use to request it.
Suggested change:

This KEP proposes adding a new Downward API that workloads can use to request it.
- `nodePropertyRef` (@aojea)
- `runtimeConfigs` (@thockin)
I guess this is out of scope for this KEP, but this change sets a precedent for other configs that could be passed down into the Pod.
I wonder if someone with more experience than me has an idea/vision of what that could look like, which may then determine which name to choose?
Looking forward, I suspect that both cases will become relevant eventually. This property just occupies an awkward spot since it doesn't really have one clean owner.
/ok-to-test
/retest
<!--
This section is incredibly important for producing high-quality, user-focused
documentation such as release notes or a development roadmap. It should be
possible to collect this information before implementation begins, in order to
avoid requiring implementors to split their attention between writing release
notes and implementing the feature itself. KEP editors and SIG Docs
should help to ensure that the tone and content of the `Summary` section is
useful for a wide audience.

A good summary is probably at least a paragraph in length.

Both in this section and below, follow the guidelines of the [documentation
style guide]. In particular, wrap lines to a reasonable length, to make it
easier for reviewers to cite specific portions, and to minimize diff churn on
updates.

[documentation style guide]: https://github.com/kubernetes/community/blob/master/contributors/guide/style-guide.md
-->
clusterPropertyRef: clusterDomain
```

`foo` can now perform the query by running `curl http://bar.$NAMESPACE.svc.$CLUSTER_DOMAIN/`.
Extra credit: define a command line argument that relies on interpolating $(CLUSTER_DOMAIN)
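A sketch of what that could look like, assuming the proposed `clusterPropertyRef` field existed (it is not in today's API); Kubernetes already expands `$(VAR)` references to container environment variables inside `command` and `args`:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: foo
spec:
  containers:
    - name: app
      image: registry.example/app:latest  # illustrative image
      env:
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: CLUSTER_DOMAIN
          valueFrom:
            clusterPropertyRef: clusterDomain  # proposed field, not in today's API
      args:
        # $(VAR) is expanded by the kubelet before the container starts
        - --peer-url=http://bar.$(NAMESPACE).svc.$(CLUSTER_DOMAIN)/
```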
environments, since `node-b` might not be able to resolve `cluster.local`
FQDNs correctly.

For this KEP to make sense, this would have to be explicitly prohibited.
I don't know if that's true.
For Pod consumers, the downward API is implemented by the kubelet, so each kubelet can expose its local view of the cluster domain.
We would still strongly recommend against having multiple cluster domains defined across your cluster (anything else sounds really unwise), but technically it can be made to work.
Yeah, it's one of those.. it would be implementable without making such a declaration, but it would likely be another thing leading to pretty confusing behaviour. Maybe the language can be softened somewhat.
We can mention the need to emphasize that the existing thing, already a bad idea, is even more of a bad idea.
<!--
What are the caveats to the proposal?
What are some important details that didn't come across above?
Go in to as much detail as necessary here.
This might be a good place to talk about core concepts and how they relate.
-->
A related detail.

- A kubelet that is formally aware of its cluster domain could report back either the cluster domain value, or a hash of it, via `.status`.
- A kubelet that is formally aware of its cluster domain could report back either the cluster domain value, or a hash of it, via a somethingz HTTP endpoint.
- A kubelet that is formally aware of its cluster domain could report back either the cluster domain value, or a hash of it, via a Prometheus static-series label, e.g. `kubelet_cluster_domain{domain_name="cluster.example"} 1`.

If we decide to make the API server aware of cluster domain, adding that info could help with troubleshooting and general observability.
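As an illustration of the `.status` option, the kubelet might report something like the following; this field is purely hypothetical and does not exist in the Node API today:

```yaml
apiVersion: v1
kind: Node
metadata:
  name: node-a
status:
  # hypothetical status field reported by the kubelet
  clusterDomain: cluster.example
```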
It's already available via the kubelet's /configz (mentioned in Alternatives). I don't have a strong opinion either way on adding it to the Node status.
Co-authored-by: Tim Bannister <[email protected]>
Co-authored-by: Tim Bannister <[email protected]>
The ConfigMap written by k3s[^prior-art-k3s] could be blessed, requiring that
all other distributions also provide it. However, this would require additional
migration effort from each distribution.

Additionally, this would be problematic to query for: users would have to query
it manually using the Kubernetes API (since ConfigMaps cannot be mounted across
Namespaces), and users would require RBAC permission to query wherever it is stored.
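To illustrate the RBAC burden: a workload reading a hypothetical blessed ConfigMap `cluster-domain` in `kube-system` (all names here are illustrative; k3s's actual object may differ) would need something like:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: read-cluster-domain
  namespace: kube-system  # wherever the blessed ConfigMap lives
rules:
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["cluster-domain"]  # hypothetical ConfigMap name
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-cluster-domain
  namespace: kube-system
subjects:
  - kind: ServiceAccount
    name: foo  # the workload's ServiceAccount
    namespace: my-namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: read-cluster-domain
```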
I thought centralized management was a non-goal though? I recommend highlighting that this alternative isn't aligned with this KEP's goals.
In terms of the non-goal I was mostly referring to configuring the kubelet itself (which would have been a retread of #281). Maybe that could be clarified.
I'd clarify that. I took the non-goal to mean that the value might be aligned across the cluster, but that this KEP explicitly avoids having Kubernetes help with that.
This roughly shares the arguments for/against as [the ConfigMap alternative](#alternative-configmap),
although it would allow more precise RBAC policy targeting.
I thought centralized management was a non-goal though? I recommend highlighting that this alternative isn't aligned with this KEP's goals.
# The following PRR answers are required at alpha release
# List the feature gate name and the components for which it must be enabled
feature-gates:
  - name: MyFeature
We could pick the feature gate name even at provisional stage.
Sure. `PodClusterDomain` would align with `PodHostIPs` (the only other downward API feature gate listed on https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/), but happy to hear other takes.
SGTM, especially for provisional.
## Design Details

A new Downward API `clusterPropertyRef: clusterDomain` would be introduced, which can be projected into an environment variable or a volume file.
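A minimal sketch of both projections under the proposed API; the `downwardAPI` volume item shape below is an assumed analogue of today's `fieldRef` items, not something the KEP has specified:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: foo
spec:
  containers:
    - name: app
      image: registry.example/app:latest  # illustrative image
      env:
        - name: CLUSTER_DOMAIN
          valueFrom:
            clusterPropertyRef: clusterDomain  # proposed field
      volumeMounts:
        - name: cluster-info
          mountPath: /etc/cluster-info
  volumes:
    - name: cluster-info
      downwardAPI:
        items:
          # file /etc/cluster-info/clusterDomain would contain e.g. "cluster.local"
          - path: clusterDomain
            clusterPropertyRef: clusterDomain  # assumed volume-item shape
```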
What's this a reference to?
I was mostly trying to be consistent with `fieldRef` and `resourceFieldRef`, which also don't quite correspond to the pod/container objects (since they're not quite 1:1), but I'm certainly not married to it.
It's actually a node property, and making it a cluster-wide property is an explicit non-goal of the KEP.
Tried to clarify the non-goal in c3f33dd
Co-authored-by: Tim Bannister <[email protected]>
Co-authored-by: Tim Bannister <[email protected]>
Currently, there is no way for cluster workloads to query for this domain name,
leaving them to either use relative domain names or configure it manually.
Could mention that the API server doesn't know this domain name.
fieldRef: metadata.namespace
- name: CLUSTER_DOMAIN
  valueFrom:
    clusterPropertyRef: clusterDomain
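Spelled out with today's full `fieldRef` syntax (which requires a `fieldPath`), plus the proposed field:

```yaml
env:
  - name: NAMESPACE
    valueFrom:
      fieldRef:
        fieldPath: metadata.namespace
  - name: CLUSTER_DOMAIN
    valueFrom:
      clusterPropertyRef: clusterDomain  # proposed, not in today's API
```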
It is really a node/kubelet property, not a cluster property; it is a kubelet config option (`--cluster-domain string`), see https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/
That's the question: is it a property of the kubelet, or is it a property of the cluster that just-so-happens to currently be configured at the kubelet level?
As far as I can tell, I'd argue it seems to be the latter. Back when we discussed it in SIG-Network, @thockin mentioned that support for different clusterDomains across kubelets was always theoretical.
My point is that realistically you cannot assume consistency... we very much want it to be a cluster-level property, but that is when you enter into chicken-and-egg problems: the kubelet must be able to work disconnected, and it is common for some distros to use this property to bootstrap a cluster (kubeadm, openshift, ...). So how can you configure a global DNS domain if the cluster has not been created yet, and what happens if the domain changes once the kubelet connects? See https://docs.google.com/document/d/1Dx7Qu5rHGaqoWue-JmlwYO9g_kgOaQzwaeggUsLooKo/edit?tab=t.0#heading=h.rkh0f6t1c3vc
My main concern is: what happens if tomorrow people start to have clusters with split domains? They can perfectly well do that.
Maybe I'm missing something, but as I understand it kubeadm will always initialize it to the value it gets from its ClusterConfiguration before it even launches the kubelet?
Some properties will always need to be consistent for the kubelet to be in a valid state (apiserver URL, certificates, etc).
> My main concern is that what happens if tomorrow people start to have clusters with split domains? they can perfectly do it
They could also have clusters with overlapping pod CIDRs. At some point we have to delineate what's a valid state for the cluster to be in and what isn't.
I'm fine with declaring that split domain configurations are valid and supported (or that there is an intention to work towards that at some point). But that's not the impression that I got from either you or @thockin so far.
Why should this KEP _not_ be implemented?
-->

## Alternatives
@nightkr I just realized that there is already a way of exposing the FQDN to the pods https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-hostname-and-subdomain-fields, it may be a bit hacky and help to shape this KEP too, or maybe it is enough to satisfy this demand.
If you set the subdomain field in the pod spec, you can get the FQDN
The Pod spec also has an optional subdomain field which can be used to indicate that the pod is part of sub-group of the namespace. For example, a Pod with spec.hostname set to "foo", and spec.subdomain set to "bar", in namespace "my-namespace", will have its hostname set to "foo" and its fully qualified domain name (FQDN) set to "foo.bar.my-namespace.svc.cluster.local" (once more, as observed from within the Pod).
Deploy this pod
apiVersion: v1
kind: Pod
metadata:
name: fqdn-pod
spec:
subdomain: x
containers:
- name: my-container
image: busybox:stable
command: ["sleep", "infinity"]
The FQDN is available internally to the Pod
$ kubectl exec -it fqdn-pod -- hostname -f
fqdn-pod.x.default.svc.cluster.local
I was worried about running into length limitations, but that doesn't seem to be an issue in practice. It was perfectly happy to regurgitate `dummy-deploy-566c59b7dd-dlvv4.qwerasdfzxcvqwerasdfzxcvqwerasdfzxcvqwerasdfzxcvqwerasdfzxcvqwe.ohgodwhydidicreatethissuperlongnamespacenamethisissocursed.svc.cluster.local` without any issues.
It can also be retrieved from the kubelet's `/configz` endpoint, however this is
[considered unstable](https://github.com/kubernetes/kubernetes/blob/9d967ff97332a024b8ae5ba89c83c239474f42fd/staging/src/k8s.io/component-base/configz/OWNERS#L3-L5).
IMHO if you depend on that to get the cluster domain field and it changes, breaking consumers, I would consider that a regression and we'll fix it.
I don't think that comment means that this endpoint or functionality is going to disappear; it is more about the schema of the config. We have a similar thing with the kube-proxy config that is v1alpha1, but at this point, after more than X years, it seems to me those are now stable APIs we cannot change.
cc: @thockin
#### Story 1

The Pod `foo` needs to access its sibling Service `bar` in the same namespace.
It adds two `env` bindings:
Would it be simpler to have a downward API binding to request the FQDN of a particular service (which would include both the namespace and the cluster domain name in a single value)?
env:
- name: SERVICE1
valueFrom:
serviceReference:
name: bar
- name: SERVICE2
valueFrom:
serviceReference:
namespace: otherns
name: blah
→
SERVICE1=bar.myns.svc.cluster.local
SERVICE2=blah.otherns.svc.cluster.local
IMO tricky without ReferenceGrant. Users might expect that the service reference actually honors the existence of the other Service, or might make API changes to build that expectation.
That also wouldn't really work for our (@stackabletech) use case/current API contract.
Our operators generate configmaps with all the details you'll need to connect to the service managed by the operators (including URLs). Our operators' pod manifests don't know what specific CR objects they'll end up managing.
In that case, it would be good to update the user story (or add a second user story) explaining that. The user stories are there to help explain why the particular solution you chose is the best solution.
This also becomes problematic for TLS, since there is no way to distinguish
which of these two cases a certificate applies to.
I'm not sure what you mean by this. A TLS certificate always has the FQDN in it, doesn't it?
Technically no (subjectAltName can be an email address, an IP address, a URI, maybe a few other things).
But if it's a DNS name then the client validates against the FQDN and not any other domain. Clients that do not are asking for trouble.
I just tested this. At least `curl` tests the certificate's names (CN and SANs) against the verbatim hostname in the URL, before expanding into the FQDN.

So if your FQDN is `foo.bar`, the search domain is `bar`, and you run `curl https://foo`, then the certificate will be validated against `foo`, not `foo.bar`.
To `curl`, the FQDN there is `foo.`
In the given context, `foo.` does not resolve. `nslookup foo` returns `foo.bar.`. If we consider `foo.` a valid FQDN for the query then we have watered down the term so far that it has become meaningless.
For the KEP text, let's try to clarify what we're saying here.
Co-authored-by: Tim Bannister <[email protected]>
[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: nightkr. The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing `/approve` in a comment.
/cc @bowei