
Targetconfig crd #716

Draft: wants to merge 48 commits into base: main

Conversation

@aerosouund (Contributor) commented Jan 24, 2025

  • CRD definition
  • CRD storage
  • Informer initialization
  • Testing
  • Makefile cleanup

@@ -203,6 +206,27 @@ func (r *Resolver) CustomIDGenerators() map[string]result.IDGenerator {
	return generators
}

func (r *Resolver) AddTargetConfigEventHandlers() {
Member:
You should move this into a controller/client struct. The resolver is only meant to manage dependencies, not this kind of business logic.
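A rough sketch of the direction suggested here, assuming a dedicated client type that owns the informer wiring; all names below are illustrative and the handler bodies are placeholders, not the project's existing API:

// TargetConfigClient owns the TargetConfig informer and its event handlers,
// keeping this business logic out of the dependency Resolver.
type TargetConfigClient struct {
	informer cache.SharedIndexInformer // k8s.io/client-go/tools/cache
}

// RegisterEventHandlers wires add/delete callbacks for TargetConfig objects.
func (c *TargetConfigClient) RegisterEventHandlers() {
	c.informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			// create and register a target client for the new TargetConfig
		},
		DeleteFunc: func(obj interface{}) {
			// remove the target client that belongs to the deleted TargetConfig
		},
	})
}

The Resolver would then only construct this client and hand it its dependencies, as it does for the other components.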

Contributor Author:
You are right, this is me still testing things out. The "Informer initialization" task in the description is there to track this exact problem.

@@ -40,6 +40,9 @@ func (k *k8sPolicyReportClient) Stop() {
func (k *k8sPolicyReportClient) Sync(stopper chan struct{}) error {
	factory := metadatainformer.NewSharedInformerFactory(k.metaClient, 15*time.Minute)

	// tcInformer := tcinformer.NewSharedInformerFactory(tcv1alpha1.New(&rest.RESTClient{}), time.Second)
Member:
I assume you wanted to try things out; don't forget to clean everything up again.

	mx         *sync.Mutex
	clients    []Client
	targets    map[string]*Target
	crdTargets map[string]*Target
Member:
Is there a reason to separate CRD targets from config targets?

Contributor Author:
I wanted to first write out all the flows for storing CRD targets separately from the existing config, and merge them once everything is figured out. But no, they can both be stored in the same map, since the map key is just a random UUID.

@fjogeleit (Member):
The test make target is missing. Please add it again and make sure all tests pass.

@fjogeleit linked an issue on Jan 26, 2025 that may be closed by this pull request.
cmd/run.go (Outdated)
	} else {
		k8sConfig, err = rest.InClusterConfig()
	}
	k8sConfig, err := clientcmd.BuildConfigFromFlags("", "/Users/ammaryasser/Downloads/ips")
Member:
cleanup

	// +optional
	GCS GCSOptions `json:"gcs,omitempty"`

	TargetType string `json:"targetType,omitempty"`
Member:
I don't think this is needed. You could make S3 etc. pointers and check which property is set. You can add a schema validation so that only one of these fields can be set, not multiple.
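A minimal sketch of that spec shape, assuming kubebuilder-style markers and option type names similar to the existing ones (S3Options and SlackOptions are placeholders here):

// TargetConfigSpec with one optional pointer per target type; exactly one
// of them is expected to be set instead of a separate TargetType string.
type TargetConfigSpec struct {
	// +optional
	S3 *S3Options `json:"s3,omitempty"`

	// +optional
	GCS *GCSOptions `json:"gcs,omitempty"`

	// +optional
	Slack *SlackOptions `json:"slack,omitempty"`

	// +optional
	SkipExistingOnStartup bool `json:"skipExistingOnStartup,omitempty"`
}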

func (f *TargetFactory) CreateSingleClient(tc *v1alpha1.TargetConfig) (*target.Target, error) {
	var t *target.Target
	switch tc.Spec.TargetType {
	case "s3":
Member:
As mentioned, I would make S3 etc. pointers and check whether they are nil.
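A sketch of that dispatch, assuming the pointer fields above; createS3Client and createSlackClient are hypothetical helpers inside the factory:

// CreateSingleClient selects the target type by checking which pointer field
// is non-nil instead of switching on a TargetType string.
func (f *TargetFactory) CreateSingleClient(tc *v1alpha1.TargetConfig) (*target.Target, error) {
	switch {
	case tc.Spec.S3 != nil:
		return f.createS3Client(tc), nil
	case tc.Spec.Slack != nil:
		return f.createSlackClient(tc), nil
	default:
		return nil, fmt.Errorf("targetconfig %s does not configure a supported target", tc.Name)
	}
}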

@aerosouund (Contributor Author) commented Jan 30, 2025:
Wouldn't this enforce priority on some level? If the code goes "if s3 isn't nil... else if slack isn't nil..." and so on, then for a config that sets both (even though this is invalid), s3 would take precedence over the others. With this field I can treat them all the same and only look at the field named in TargetType. It also simplifies the validation logic: if a TargetConfig is of type s3 and the s3 field is nil, it's invalid, rather than "if none of the fields are set, the config is invalid".

Member:
You can enforce that only one field is set in the schema, so the API server will not allow multiple types to be set.
(https://github.com/kyverno/kyverno/blob/main/api/kyverno/v1/common_types.go#L99)

Maybe @JimBugwadia or @eddycharly has an opinion on the TargetType. I think it should not be more cumbersome than needed. If the user configures s3 and also has to set targetType, that might confuse people.
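One possible way to express that constraint, assuming the CRD is generated with a controller-gen version that supports CEL XValidation markers (field names follow the sketch above; the linked kyverno code may use a different mechanism):

// Rejects objects that set none or more than one of the target type fields,
// so the API server enforces mutual exclusivity without a TargetType string.
// +kubebuilder:validation:XValidation:rule="[has(self.s3), has(self.gcs), has(self.slack)].filter(set, set).size() == 1",message="exactly one target type must be configured"
type TargetConfigSpec struct {
	// ... optional pointer fields as sketched above ...
}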

@fjogeleit (Member):
The TargetConfig "slack-notifier" is invalid: 
* spec.slack.certificate: Required value
* spec.slack.channel: Required value
* spec.slack.headers: Required value
* spec.slack.skipTLS: Required value

These are optional and should not be required; please also check the other targets to make sure all optional values are configured correctly.
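A sketch of how the Slack options could be marked optional, assuming kubebuilder-style markers; the field names are taken from the error output above and the exact layout is illustrative:

// SlackOptions with every non-essential field optional, so the API server
// stops reporting them as required values.
type SlackOptions struct {
	Webhook string `json:"webhook"`

	// +optional
	Channel string `json:"channel,omitempty"`

	// +optional
	Certificate string `json:"certificate,omitempty"`

	// +optional
	SkipTLS bool `json:"skipTLS,omitempty"`

	// +optional
	Headers map[string]string `json:"headers,omitempty"`
}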

@fjogeleit (Member):
Slack notifications for a new resource are published twice.

@fjogeleit (Member):
The following scenario did not work:

  • Start Policy Reporter
  • Apply a new Slack TargetConfig
  • Apply a new Policy
  • (x) Get pushes for new violations of existing resources (got no pushes at all)

Expected:

  • Get pushes with new violations only

TargetConfig:

kind: TargetConfig
apiVersion: wgpolicyk8s.io/v1alpha1
metadata:
  name: slack-notifier
spec:
  targetType: slack
  slack:
    webhook: https://hooks.slack.com/...
    certificate: ""
    channel: "kyverno"
    headers: {}
    skipTLS: false
  skipExistingOnStartup: true

// if the client has skip existing, preload its cache with the existing results
for _, client := range targets.Targets() {
	if client.Client.SkipExistingOnStartup() {
		client.Client.SetCache(r.resultCache.Clone())
Member:
Please clean this up, as it has no effect right now.

if len(event.Targets.Clients()) < r.targetConfigClient.TargetConfigCount() && !r.targetConfigClient.HasSynced() {
	continue
}
r.polrRestartCh <- struct{}{}
Member:
This also needs cleanup. I would appreciate it if we could just remove everything we do not need right now. You can keep it in a different branch for later, but I do not want to roll out "dead code" which could have a negative impact.

@@ -88,34 +91,8 @@ func (l *ResultListener) Listen(event report.LifecycleEvent) {
	return
Member:
Why does this listener have so many changes when we agreed not to touch this logic for now?

Successfully merging this pull request may close these issues.

Feature: Allow Reconfiguration when config file changes