🤔 What's the problem you're trying to solve?

I want to run an indeterminate number of tests in parallel across multiple test runners without tagging individual tests.

✨ What's your proposed solution?

Add modulo and target options that deterministically select every Nth scenario to run. The selection should be the same across all instances that use the same seed. I would pass these in as environment variables to the configuration options, for example:

```
r1$ SEED=1234 MOD=3 INT=1 make test    (scenarios 1, 4, 7, 10, ...)
r2$ SEED=1234 MOD=3 INT=2 make test    (scenarios 2, 5, 8, 11, ...)
r3$ SEED=1234 MOD=3 INT=3 make test    (scenarios 3, 6, 9, 12, ...)
```

⛏ Have you considered any alternatives or workarounds?

I can do this with tags, but forgetting a tag or making a typo in one silently excludes that test from every runner.

📚 Any additional context?

No response
I have made a PR for this feature: #678
Hi, thank you for this contribution!
I'm not sure a godog API extension is needed for such behavior; the problem seems a bit too specialized and uncommon to me.
I think the desired behavior can be achieved with more flexibility using the existing "before scenario" hooks and godog.ErrSkip.
For example (based on https://github.com/cucumber/godog/blob/main/_examples/godogs/godogs_test.go):
```go
var (
	scenarioIdx = 0 // counts scenarios as they start
	mod         = 3 // total number of runner instances
	target      = 2 // index of the slice this instance should run
)

func InitializeScenario(ctx *godog.ScenarioContext) {
	ctx.Before(func(ctx context.Context, sc *godog.Scenario) (context.Context, error) {
		scenarioIdx++
		// Skip every scenario that does not belong to this instance's slice.
		if scenarioIdx%mod != target {
			return nil, godog.ErrSkip
		}
		return ctx, nil
	})
	// ...
}
```
Would such an approach work in your case?
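For reference, here is a minimal sketch of how the hook-based workaround above could be combined with the MOD and INT environment variables from the original request. The variable names, the envInt helper, and the package layout are assumptions for illustration, not part of godog; the counter also assumes every instance sees scenarios in the same order (same seed, no per-instance filtering) and runs them sequentially within one process.

```go
// Assumed to sit alongside the linked godogs example (package name is an assumption).
package main

import (
	"context"
	"os"
	"strconv"

	"github.com/cucumber/godog"
)

var (
	scenarioIdx = 0
	mod         = envInt("MOD", 1) // total number of runner instances
	target      = envInt("INT", 1) // 1-based index of this runner instance
)

// envInt reads an integer from the environment, falling back to def when the
// variable is unset or not a number. Hypothetical helper for this sketch.
func envInt(name string, def int) int {
	if v, err := strconv.Atoi(os.Getenv(name)); err == nil {
		return v
	}
	return def
}

func InitializeScenario(ctx *godog.ScenarioContext) {
	ctx.Before(func(ctx context.Context, sc *godog.Scenario) (context.Context, error) {
		scenarioIdx++
		// Keep only every mod-th scenario, offset by this instance's index:
		// with MOD=3, INT=1 runs scenarios 1, 4, 7, ... and INT=3 runs 3, 6, 9, ...
		if scenarioIdx%mod != target%mod {
			return nil, godog.ErrSkip
		}
		return ctx, nil
	})
	// ... register step definitions as in the linked example
}
```

Each runner would then be started as in the proposal, e.g. `SEED=1234 MOD=3 INT=2 make test`, and would execute only its own slice of scenarios while skipping the rest.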