
Document test results conflict logic #435

Open
s3ththompson opened this issue Jun 9, 2022 · 5 comments
Labels
question Further information is requested

Comments

@s3ththompson
Member

What are the conditions which create a conflict between results?

Which fields are directly compared, and which fields are silently ignored when determining conflicts?

What is the difference between what constitutes a pass and a fail, versus an unexpected behavior?

cc @mzgoddard @howard-e

@mzgoddard

What are the conditions which create a conflict between results?

@s3ththompson what do you mean by conflict? Fields in the test renderer are validated to make sure they have a value, so I'm not sure what would constitute a conflict in the test renderer.

The form submission logic in aria-at-test-run.mjs reuses the validation logic in the same file. If validation produces a TestRunState that contains validation issues, submission stops at that point.
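
As a rough illustration, here is a minimal sketch of that gating, assuming hypothetical names (`validateState`, `validationIssues`) rather than the actual aria-at-test-run.mjs API:

```js
// A rough sketch of the submission gating described above, using
// hypothetical names (validateState, validationIssues); this is not
// the actual aria-at-test-run.mjs code.

// Hypothetical validator: every field must have a value.
function validateState(state) {
  const validationIssues = Object.entries(state.fields)
    .filter(([, value]) => value === undefined || value === '')
    .map(([name]) => `Field "${name}" requires a value`);
  return { ...state, validationIssues };
}

// Submission stops as soon as the validated TestRunState has issues.
function submitResult(state) {
  const validated = validateState(state);
  if (validated.validationIssues.length > 0) {
    return { submitted: false, issues: validated.validationIssues };
  }
  return { submitted: true, state: validated };
}
```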

Which fields are directly compared, and which fields are silently ignored when determining conflicts?

Maybe I am not grasping part of the context for this issue. Fields are not compared by the test renderer.

What is the difference between what constitutes a pass and a fail, versus an unexpected behavior?

A pass or fail depends on the wording of the assertion.

The default list of unexpected behaviors in the test renderer is a collection of conclusions and events.

The test harness in aria-at uses the test result created by testResultJSON.
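
For concreteness, a hypothetical example of the kind of object testResultJSON might produce, assembled only from terms used in this thread; the real shape in the test renderer may differ:

```js
// A hypothetical test result shape -- illustrative only, not the
// actual output of testResultJSON.
const exampleTestResult = {
  test: 'Navigate to a checkbox',
  scenarioResults: [
    {
      command: 'Tab',
      output: 'Checkbox, not checked', // captured AT output
      assertionResults: [
        { assertion: 'Role "checkbox" is conveyed', passed: true },
        {
          assertion: 'State "not checked" is conveyed',
          passed: false,
          failureReason: 'no output',
        },
      ],
      unexpectedBehaviors: [{ type: 'AT became excessively sluggish' }],
    },
  ],
};
```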

I wish the test renderer had better definitions for failing assertions and unexpected behaviors, but these are elements the project has intentionally left undefined up to now. The current test renderer logic was an iteration focused on removing tech debt in how human testers record test results; at that time it did not attempt to revisit what we record or how we record it.

@s3ththompson
Member Author

@mzgoddard thanks for the extra context! A conflict is displayed in the App when two or more different testers submit results that differ in some way. Sounds like I mistook the test renderer for the place where those conflicts are determined.

I think @howard-e might be able to help us track down that logic. I believe that some fields from the object produced by testResultJSON must match exactly, while others are not directly compared when testing for “equality” between results.

@howard-e
Contributor

Sure, I'll have a look into how that logic works. But perhaps @alflennik has more context on what I believe is the GraphQL-based conflicts resolver being discussed.

@s3ththompson
Member Author

Aha! The trail continues—I didn’t realize it was happening in the resolvers. Sounds like @alflennik may have worked on that last.

s3ththompson changed the title from "Document test renderer conflict logic" to "Document test results conflict logic" on Jun 10, 2022
@alflennik
Contributor

Hopefully this is clear. This summarizes the logic in server/resolvers/TestPlanReport/conflictsResolver.js; a sketch of the comparison follows the list.

  • For each test result, each scenario result, i.e. each command, is compared.
  • For each scenario result, the unexpected behaviors and assertion results are compared. The actual captured output is not compared.
  • For the unexpected behaviors, the type of unexpected behavior is compared, i.e. "AT became excessively sluggish." If multiple testers chose "Other" and provided a textual explanation, the text must be identical or the result will be considered a conflict.
  • For each assertion result, we compare whether the test passed or failed, as well as the failure reason (incorrect output or no output).
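
Here is a minimal sketch of that comparison, assuming hypothetical data shapes and field names (`scenarioResults`, `unexpectedBehaviors`, `assertionResults`, `failureReason`); it is not the actual conflictsResolver.js implementation:

```js
// A minimal sketch of the comparison above, under assumed data shapes;
// not the actual conflictsResolver.js implementation.

function assertionResultsConflict(a, b) {
  // Pass/fail must match; for failures, so must the reason
  // ("incorrect output" vs. "no output").
  if (a.passed !== b.passed) return true;
  return !a.passed && a.failureReason !== b.failureReason;
}

function unexpectedBehaviorsConflict(aList, bList) {
  // Compare by type; for "Other", the free-text explanation must be
  // identical or the results conflict.
  const key = ub => (ub.type === 'Other' ? `Other:${ub.otherText}` : ub.type);
  const aKeys = aList.map(key).sort();
  const bKeys = bList.map(key).sort();
  if (aKeys.length !== bKeys.length) return true;
  return aKeys.some((k, i) => k !== bKeys[i]);
}

function scenarioResultsConflict(a, b) {
  // The actual captured output (a.output / b.output) is never compared.
  if (unexpectedBehaviorsConflict(a.unexpectedBehaviors, b.unexpectedBehaviors)) {
    return true;
  }
  return a.assertionResults.some((ar, i) =>
    assertionResultsConflict(ar, b.assertionResults[i])
  );
}

function testResultsConflict(resultA, resultB) {
  // Each scenario result corresponds to one command; compare pairwise.
  return resultA.scenarioResults.some((sr, i) =>
    scenarioResultsConflict(sr, resultB.scenarioResults[i])
  );
}
```

The pairwise index comparison assumes both results come from the same test and therefore share the same ordered list of commands and assertions.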
