Requirements for Gathering Screen Reader and Browser Version Info #403

Closed
jscholes opened this issue Feb 17, 2022 · 4 comments
Labels: enhancement, UI

@jscholes (Contributor) commented Feb 17, 2022

The following requirements were agreed during the February 17, 2022 Community Group meeting.

Screen Reader

  • Dropdown for selecting screen reader (JAWS, NVDA, etc.).
  • After selecting the screen reader, a dropdown of versions/builds, to keep the data comparable rather than having people copy/paste/type it in a freeform fashion.
  • Only include stable/production builds, not beta/preview/snapshots.
  • Stretch goal: have "Other" item at end of version dropdown, so user can provide preview/beta build info.
  • Adding new screen reader builds should be the job of the test admin.
  • Stretch goal: have the detection and addition of new screen reader versions/builds be automated.
  • Question: how many previous builds of each screen reader should be tracked in the database? Answer: not very many; we should canvass the people who are actively testing to determine the default dataset.

Browser

  • Gather it from the user agent string, and mark the browser dropdown (e.g. Chrome) and version/build number field as "auto-detected" (see the sketch after this list).
  • Assumes that users are recording results on the same setup where they ran the tests. If the machines differ, users should carefully copy the details from one to the other.
  • Should be read-only by default, to avoid accidental edits, with a button to alter it only if necessary.
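
As a rough illustration of the auto-detection described above, here is a minimal sketch of client-side detection, assuming a browser context where navigator.userAgent is available. The DetectedBrowser shape and the regexes are illustrative simplifications, not the app's actual logic; a real implementation would likely use a vetted UA-parsing library.

```ts
// Minimal sketch: guess browser name/version from the user agent string.
// Patterns are simplified; order matters because Chromium-based UAs also
// contain "Safari", and Edge's UA also contains "Chrome".
interface DetectedBrowser {
  name: string;
  version: string;
  autoDetected: boolean; // drives the "auto-detected" marker in the UI
}

function detectBrowser(userAgent: string): DetectedBrowser | null {
  const patterns: Array<[string, RegExp]> = [
    ['Edge', /Edg\/([\d.]+)/],
    ['Chrome', /Chrome\/([\d.]+)/],
    ['Firefox', /Firefox\/([\d.]+)/],
    ['Safari', /Version\/([\d.]+).*Safari/],
  ];
  for (const [name, pattern] of patterns) {
    const match = userAgent.match(pattern);
    if (match) {
      return { name, version: match[1], autoDetected: true };
    }
  }
  return null; // unrecognized UA: fall back to manual, editable fields
}

// e.g. const detected = detectBrowser(navigator.userAgent);
```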

Operating System (OS)

  • Don't need specific OS build number right now.
  • Overall OS, e.g. Windows vs macOS, is implied by the screen reader at present, at least until VO is tested on iOS and macOS, so we don't need to track that either.
  • Can manually dig into specific OS versioning if there are significant conflicts in results not explained by screen reader/browser version differences.

Other Stuff

This information should be captured at the start of a test run, not repeatedly per test. That is, when a user starts executing a test plan, they must provide this info before continuing to the first test. We do not want people to change this info partway through a plan.

CC @isaacdurazo

jscholes added the enhancement and UI labels on Feb 17, 2022
@s3ththompson (Member) commented

During the March 3, 2022 Community Group meeting, we discussed some open questions raised by @isaacdurazo to help clarify the requirements for this feature. IRC log here: ARIA-AT 3/3 Minutes

Managing list of version options

  • Matt suggests an admin page to maintain a list of versions would be helpful now and for the foreseeable future
  • Gives admins editorial control over which types of releases are supported as options
  • Version options would likely include a human-readable string (to be displayed in the UI) and machine-readable metadata (e.g. date of release, or numeric version info) to allow for sorting and to help the AT driver select the appropriate option in the future (see the sketch after this list)
  • Version options are only managed manually for AT versions
  • Browser version handling should be smarter, since we can use the user agent to make a guess about the browser and since browser release conventions are better established
    • Browser version should be automatically suggested by parsing user agent, but editable in case someone is recording results in a different browser than the one they tested
    • Browser version options should be programmatically enumerated based on known convention (e.g. Chrome version is Major.Minor, major version increments every release) or based on existing APIs that enumerate browser versions
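
To make the "human-readable string plus machine-readable metadata" idea concrete, here is a hypothetical shape for an admin-managed version option, with a sorting helper. Field names are illustrative, not the actual ARIA-AT schema.

```ts
// Hypothetical version option: a display label plus metadata for sorting.
interface VersionOption {
  label: string;        // shown in the UI, e.g. "NVDA 2021.4"
  releaseDate: string;  // ISO date of release
  sortKey: number[];    // numeric version parts, e.g. [2021, 4]
}

// Sort newest-first for display in the version dropdown.
function sortVersions(options: VersionOption[]): VersionOption[] {
  return [...options].sort((a, b) => {
    const len = Math.max(a.sortKey.length, b.sortKey.length);
    for (let i = 0; i < len; i++) {
      const diff = (b.sortKey[i] ?? 0) - (a.sortKey[i] ?? 0);
      if (diff !== 0) return diff;
    }
    return 0;
  });
}
```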

When / where does specifying browser / AT version fit into the tester user journey?

  • Needs to prompt at the beginning of every testing session (e.g. when user starts testing a given test plan or returns to continue testing a test plan in the future)
    • e.g. "Please select your browser / AT version"
  • Need to prompt when a user re-edits a test that has already been submitted, if the session browser / AT version is different from the recorded browser / AT version (see the sketch after this list)
    • Use case
      • Alyssa submitted results for NVDA disclosure, 2021.4
      • If Matt, running NVDA 2022.2, reruns a test and changes a result, the page would prompt him to confirm whether he wants to update the recorded version
    • e.g. "You are currently running NVDA 2022.2, but are editing a test result that was submitted with NVDA 2021.4. Do you want to update the version information saved with these results?"

What happens if a tester resumes testing a test plan in a new session with a different browser / AT version?

  • Environment data should be recorded per test (see the sketch after this list)
  • Test plan results may be a collection of results with different version info
  • Need to work out how to display this version info on the reports page
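
A hypothetical per-test record illustrating this, with a helper that summarizes the version combinations present in a plan's results (e.g. for the reports page); field names are illustrative, not the actual schema.

```ts
// Hypothetical per-test result record carrying its own environment info,
// so one test plan's results can mix AT / browser versions.
interface TestResultRecord {
  testId: string;
  atName: string;          // e.g. "NVDA"
  atVersion: string;       // e.g. "2021.4"
  browserName: string;     // e.g. "Chrome"
  browserVersion: string;
  // ...assertion outcomes, captured AT output, etc.
}

// Summarize the distinct version combinations used across a plan's results.
function versionsUsed(results: TestResultRecord[]): string[] {
  const combos = new Set(
    results.map(
      r => `${r.atName} ${r.atVersion} / ${r.browserName} ${r.browserVersion}`
    )
  );
  return [...combos].sort();
}
```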

How does someone edit their previously submitted browser / AT version?

  • Previously recorded Browser / AT version should be displayed at the top of the test page which shows submitted results
  • Editing test results should also make Browser / AT version editable at top of page

What happens to the way the test queue is organized?

  • No change in the present
  • Headings on test queue just group entries by screen reader and browser. No need for "Latest"
  • If you start running a test plan under "NVDA and Chrome", the AT type and browser type are prefilled and disabled

How will version information be displayed on the test queue page?

  • TBD, based on March 10, 2022 Community Group meeting

@jscholes (Contributor, Author) commented Mar 9, 2022

Thanks for these notes, @s3ththompson! One question:

e.g. "You are currently running NVDA 2022.2, but are editing a test result that was submitted with NVDA 2021.4. Do you want to update the version information saved with these results?"

In this scenario, how does the app know that Matt isn't running NVDA 2021.4 anymore? Without the automation API in place for manual testing, it will have no access to programmatically query the running screen reader and version. So, for this message to appear, does it suggest that Matt has already updated the version info at the start of the session?

At a higher level, how do we feel about people changing their AT and/or browser version during a single test run? Personally, it seems undesirable, e.g. to do 10 tests with NVDA 2021.4, then 10 with NVDA 2022.1, then 5 with NVDA 2022.2, and so on. I feel like we should be discouraging that behaviour, which implies that once you start a test run, the information should be fixed in some way, and not prompted for again the next time the tester dips into running that test plan.

@s3ththompson (Member) commented

In this scenario, how does the app know that Matt isn't running NVDA 2021.4 anymore? [...] So, for this message to appear, does it suggest that Matt has already updated the version info at the start of the session?

Yes, as described this scenario assumes that Matt entered his latest AT / browser information at the start of his session, e.g. after initially clicking on the test plan from the test queue page.

At a higher level, how do we feel about people changing their AT and/or browser version during a single test run?

On the call we discussed that this behavior is undesirable from a consistency perspective, but probably inevitable, and therefore should have an explicit prompt. One reason it seems inevitable (even if testers are discouraged from changing versions while testing a single test plan) is that browser update cycles are relatively frequent, and browsers may even be updated automatically on the tester's behalf between sessions. Additionally, per the example above, the app needs to account for the fact that admins may edit results submitted by other testers; the likelihood that the admin is running exactly the same version the tester used when submitting seems even lower. If the admin is only editing an assertion, they could dismiss the prompt to update the recorded AT / browser version. But if they are editing output, then the results really need to reflect that the new output came from a more recent AT / browser version.

In a larger sense (and this is relevant to tomorrow's discussion about how to display the info on the reports page), I think we will need to discuss how a published test plan report might reflect the fact that results may be composed of data from multiple testers (running slightly different AT / browser versions) and different versions across individual tests.

As a personal aside, in a more abstract sense, I wonder whether results data needs to be "unbundled" a bit. The question about whether browser / AT versions can differ across individual tests in a test plan reminds me of the other issue that it's difficult to update one test without replacing the entire test plan. Perhaps test plans need to be less of a monolithic unit and more just a way to label a group or collection of individual tests? cc @jesdaigle since this came up previously when discussing the database schema

@s3ththompson (Member) commented

Let's continue the discussion in #406 now that @isaacdurazo has worked through the different mockups here!
