Cache #4
Comments
How about going one step further and using the cached output to test your documentation? You could maybe write two files to disk: one with the cached output and one with the newly generated output. The test could check if there are any differences between both files and provide a way to approve the changes (copy the new file over the cached one). It might be difficult to decide in which cases you want to regenerate the cache.
That would be useful to prevent involuntary changes, yes. Reproducibility is an important aspect, even though I was more interested in the performance one. I suppose we could hit two (virtual) birds with one stone and provide options that would allow users to do that themselves, easily? For example by making the cache folder configurable.
But you probably don't want to regenerate your docs just to test the docs. A dedicated tool/pytest plugin would be useful to execute just the tests and compare the output with the last output on disk.
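A minimal sketch of the kind of pytest-based comparison described above — the `render_block` helper, the `docs/.exec_cache` layout, and the `UPDATE_SNAPSHOTS` environment variable are hypothetical names used only to illustrate the approve-the-diff workflow:

```python
# Minimal sketch of the "compare against cached output" idea discussed above.
# Everything below is hypothetical: render_block(), the docs/.exec_cache layout,
# and the UPDATE_SNAPSHOTS environment variable are for illustration only.
import contextlib
import io
import os
from pathlib import Path

import pytest

CACHE_DIR = Path("docs/.exec_cache")  # approved outputs, committed to git
BLOCKS = {                            # block ID -> code that a docs page would execute
    "hello": "print('hello world')",
}


def render_block(code: str) -> str:
    """Stand-in for whatever actually executes a code block and captures its output."""
    buffer = io.StringIO()
    with contextlib.redirect_stdout(buffer):
        exec(code, {})
    return buffer.getvalue()


@pytest.mark.parametrize("block_id", sorted(BLOCKS))
def test_block_output_matches_cache(block_id: str) -> None:
    new_output = render_block(BLOCKS[block_id])
    cached_file = CACHE_DIR / f"{block_id}.txt"

    if os.environ.get("UPDATE_SNAPSHOTS") or not cached_file.exists():
        # "Approve" the current output by (re)writing the cached file.
        cached_file.parent.mkdir(parents=True, exist_ok=True)
        cached_file.write_text(new_output)
        return

    # Fail if the freshly generated output differs from the approved one.
    assert new_output == cached_file.read_text()
```

Approving a change would then just be a matter of rerunning with `UPDATE_SNAPSHOTS=1` and committing the updated files.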
Not currently. Markdown-Exec is based on PyMDown-Extensions' SuperFences, which is a Python-Markdown extension, and so we need to run the whole Markdown conversion to execute the code blocks.
I wonder if testing the code blocks shouldn't be done separately. If you have a lot of them, it's probably a good idea anyway to write them in their own Python/other files, and inject them with pymdownx-snippets, also allowing easier testing since the Python/other files can now be imported/used anywhere.
Some context: the HTML comments are evaluated here: https://github.com/15r10nk/inline-snapshot/blob/main/tests/test_docs.py

I started to use markdown-exec in my documentation, which looks great 👍, but it caused two issues so far: 15r10nk/inline-snapshot#98

The problem is that the code is always evaluated again and that it is difficult to check (manually) if the output is correct. I would like to have the same safety with the markdown-exec documentation that I have with my own tests. Saving the output in git and using exactly this output when rerunning the tests in CI would also be nice.
I don't like indirection in my documentation (and in general). I want the tests in the code blocks to be part of my documentation. Another idea is that you not only save the output to disk but the input too.
This would make it easier to understand, because you have less indirection 😃
Thank you for the context @15r10nk, this is super helpful! I believe the two issues you mention could be solved by making sure your code blocks fail when something goes wrong, in which case Markdown-Exec logs a warning, which will make strict MkDocs builds fail too.

For example, your Bash code block (https://github.com/15r10nk/inline-snapshot/blob/69aa4e6daff81c57cd07ccc6379649b536125693/docs/pytest.md?plain=1#L42-L65) could use `set -e` so that any failing command aborts the block. Same thing for the issue where pytest is found but dirty-equals is missing: pytest would fail with code 1, and Bash would propagate that non-zero exit status, so Markdown-Exec would report the failure.

In short, make sure that errors in code blocks are not silent (especially in Bash code blocks) 🙂
Fair concern! No IDE will understand the indirection or let you Ctrl-click the filepath, I believe, so that makes it harder indeed. However, with inlined code blocks you don't enjoy the same level of editor support (auto-completion, linting, etc.), and you need specialized tools to format code in Markdown (I think Ruff still doesn't do that?). So yeah, that's a tradeoff I suppose.
Interesting! So, IIUC:
I can see the value of such a feature. I'd just like to note that in most cases, you just want the code block to "succeed" without really checking the output, since this output could change (yet still be valid) for many reasons (for example, a new pytest version slightly updating the output format), and validating it manually every time could be a lot of maintenance work. I'd love to see a few examples where we would actually want to assert the exact value of the output, if you can think of some!
Is your feature request related to a problem? Please describe.
Generating text/images/svg can slow down rendering a lot.
Describe the solution you'd like
Allow caching things either during serve (memory), or across multiple builds (filesystem).
A `cache` option could be added. If it's a boolean value, use a hash of the code block's contents as the ID. Otherwise, use the `cache` option value as the ID. Only useful for the cross-builds cache. Items can then be deleted from the cache by deleting the files in /tmp with the ID as name.