# Added Ollama support
## Description
I added support for Ollama, which can now be used in conjunction with `spacy-llm`. I added all the models Ollama currently supports as well, but perhaps it's better to let users define those themselves? I thought it nicer to just have a set of pre-baked models available...

## Corresponding documentation PR
Docs updates are at explosion/spaCy#13465
## Types of change

feature
## Quickstart / Example of usage
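Here is a minimal sketch of direct usage in Python. The registered model name `spacy.Ollama.v1` and its `name` argument are assumptions based on spacy-llm's registry conventions, so check the diff for the exact names:

```python
import spacy

nlp = spacy.blank("en")
nlp.add_pipe(
    "llm",
    config={
        "task": {
            # Built-in spacy-llm NER task
            "@llm_tasks": "spacy.NER.v3",
            "labels": ["PERSON", "LOCATION"],
        },
        "model": {
            # Assumed registry entry for the new Ollama wrapper
            "@llm_models": "spacy.Ollama.v1",
            # Assumed argument: the model tag served by the local Ollama instance
            "name": "llama3",
        },
    },
)

doc = nlp("Jack and Jill went up the hill in Hollywood.")
print([(ent.text, ent.label_) for ent in doc.ents])
```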
The same pipeline can also be defined using a `config.cfg` file (sketched below, after the setup steps). Then make sure you have everything set up to run Ollama locally:
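A sketch of the local setup using the Ollama CLI; the `llama3` tag is only an example, and any model Ollama serves should work:

```bash
# Start the local Ollama server (install from https://ollama.com first)
ollama serve

# In another shell, pull the model your pipeline refers to
ollama pull llama3
```

And the `config.cfg` itself, with the same hedged `spacy.Ollama.v1` placeholder as above:

```ini
[nlp]
lang = "en"
pipeline = ["llm"]

[components]

[components.llm]
factory = "llm"

[components.llm.task]
@llm_tasks = "spacy.NER.v3"
labels = ["PERSON", "LOCATION"]

[components.llm.model]
@llm_models = "spacy.Ollama.v1"
name = "llama3"
```

which assembles into a pipeline with:

```python
from spacy_llm.util import assemble

nlp = assemble("config.cfg")
```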
Perhaps needless to say, you'll need a machine with a GPU to run this / use the Ollama models.
Then run the Python file and you should get something like:
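Output will of course vary with the model, but with the NER sketch above it would be along the lines of:

```
[('Jack', 'PERSON'), ('Jill', 'PERSON'), ('Hollywood', 'LOCATION')]
```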
## Checklist

- I ran the tests in `tests` and `usage_examples/tests`, and all new and existing tests passed. This includes
  - external tests (`pytest` ran with `--external`)
  - GPU tests (`pytest` ran with `--gpu`)
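For reference, a sketch of those test invocations, assuming the `--external` and `--gpu` flags are registered in the repo's `conftest.py` as the checklist implies:

```bash
# Run the suite including tests that talk to live model backends
python -m pytest --external

# Run the suite including tests that need a GPU
python -m pytest --gpu
```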