
Attach diagnostics to tests #11

Open
Leont opened this issue Jan 21, 2015 · 9 comments

Comments

@Leont

Leont commented Jan 21, 2015

Currently, tests and diagnostics are entirely decoupled from a parsing point of view. The obvious approach of trying to parse a diagnostic after a failing test line is not streaming-friendly, which is rather undesirable.

We should solve this somehow. Possible solutions are:

  1. Always require a diagnostic
  2. Add a marker after the description (much like directives)
  3. Always require diagnostic after a failure (and maybe on success too if given a pragma)

Probably there are other viable approaches too. I think I'd prefer option 3.

@jonathanKingston
Member

How does forcing the producer to always spit out diagnostics fix the issue?

Sorry if I am not getting it; perhaps further explanation or examples might help.

@Leont
Author

Leont commented Jan 21, 2015

How does forcing the producer to always spit out diagnostics fix the issue?

Because that way, you always know when to expect diagnostics: after a failure.
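
For example, under option 3 it could look something like this:

1..2
ok 1 - this worked
not ok 2 - this broke
  ---
  message: something went wrong
  ...

A consumer can process ok 1 immediately, and after not ok 2 it knows a diagnostic block must follow.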

@jonathanKingston
Member

I have always just checked each line for a directive, held onto the line, and then on the next pass, if the line doesn't match the directive, added the diagnostics to the previous line as a document (without parsing that document).
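
Roughly, a sketch of that approach in Python (not real parser code, names made up, just to illustrate):

import re

TEST_LINE = re.compile(r'^(not )?ok\b')

def attach_diagnostics(lines):
    """Yield (test_line, diagnostic_lines) pairs, attaching any block of
    non-test lines to the test line held from the previous pass."""
    held = None          # the test line we are holding on to
    diagnostics = []     # raw lines kept as an unparsed document
    for line in lines:
        if TEST_LINE.match(line):
            if held is not None:
                # the held test can only be flushed once the next one shows up
                yield held, diagnostics
            held, diagnostics = line, []
        elif held is not None:
            diagnostics.append(line)
    if held is not None:
        yield held, diagnostics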

Again, I feel I am missing your point though.

@Leont
Author

Leont commented Jan 21, 2015

Again, I feel I am missing your point though.

You have to wait until the next result before you can process it, which is IMHO against the streaming philosophy of TAP, as well as being an annoying processing complication.

@jonathanKingston
Member

@Leont I'm not really sure how you could solve this 100% without putting it all on one line.
Obviously that wouldn't be suitable, but since the YAML spans a variable number of lines, I can't see how always making it present would really help.
I also think we shouldn't prevent a successful test from outputting YAML, in which case it would always have to be there.

Much like the seconds of buffer for digital TV, I think it is legitimate for something consuming the TAP to always be delayed until the end of the test description, which means either:

  • The ... at the end of the YAMLish
  • Another test

However, to potentially simplify this, the ... (or something similar) could end the test line:

ok 1 - thing happened...
ok 2 - thing happened
   ---
      thing: test x did a broken
   ...

However I'm not really a fan of that output.

@exodist

exodist commented Feb 3, 2015

Just wanted to point out that the dev release of Test-Simple will make it easy to implement whatever is decided here. Internally, diagnostics are now attached to the ok's that produced them (when they should be; some diags are independent). However, old tools will need to be updated in order to attach their diags, so it could be a long process for tools to join the program.

@jonathanKingston
Member

I'm still back to not understanding the advantage of this. Yes, stricter output is something a producer might always want to emit, but since the YAML is of variable length, it doesn't help predict the number of tests or help the parser assume text positions.

@kinow
Member

kinow commented Feb 8, 2015

@Leont it took me some time to understand the issue here. IIUC, when streaming tests, there's no way to guarantee that, when the parser finds a diagnostic, it should be attached to the last test result found.

So if we have the following TAP stream:

1..2
ok 1
  ---
  name : yadda yadda
  ...
not ok 2

The following scenario with two threads could happen:

  • the producer emits the plan 1..2
  • the producer thread#1 emits the first test result ok 1
  • the producer thread#2 emits the second test result not ok 2
  • the producer thread#1 emits the diagnostic of the first test result

Here, there's no way to know whether the diagnostic should be attached to test 1 or to test 2. Correct me if I'm wrong @Leont.

How about we define in the schema of diagnostics (I think we mentioned that somewhere here in GitHub) an entry to link back to the test result? Something like:

1..2
ok 1
  ---
  name : yadda yadda
  testresult: 
    - plan 1
    - number 1
  ...
not ok 2
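
Then a consumer wouldn't have to rely on ordering at all; it could attach each diagnostic via its back-link instead, e.g. (rough sketch in Python, assuming the YAMLish has already been parsed into a dict and each test result into a dict with a number key):

def attach_by_backlink(results, diagnostics):
    # index the test results by their test number
    by_number = {result['number']: result for result in results}
    for diag in diagnostics:
        # e.g. testresult: ['plan 1', 'number 1'] -> test number 1
        number = next(int(entry.split()[1])
                      for entry in diag['testresult']
                      if entry.startswith('number'))
        by_number[number].setdefault('diagnostics', []).append(diag)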

Just food for thought

@kinow
Member

kinow commented Feb 8, 2015

(I think we mentioned that somewhere here in GitHub)

See #9
