Python cannot use custom ollama model #188
Comments
Can you provide the content (or some representative sample) of your Modelfile? I suspect this is due to a behavioural difference between the CLI and the API. In the CLI, a model containing …
Also, I am wondering whether calls to ollama.chat are independent. I am repeating the exact same ollama.chat call multiple times in a Python script, but each time it gives a different (wrong) result, and the quality of the answer improves until the correct answer is given on the 4th call. For example, with the exact same function call and message, the outputs follow a sequence (call them a1 through a4, with a4 being the correct answer). If I change the variable name from a to b and rerun the script, the answer returned is not b4 right away, but goes through the cycle of b1, b2, b3, b4. And if I change the variable back to a, I do not get a4 directly either, but need to go through the a1-to-a4 cycle again before the correct answer is returned.

Is each individual ollama.chat call independent, or is knowledge from one call being passed to the next? If not, why does the answer seem to go through the same pattern to converge on the correct result? And if knowledge is retained, why does switching a variable name and switching it back seem to erase all of the system's memory?
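For reference, a minimal sketch of how conversation state would have to be carried across ollama.chat calls. Each call is an independent request with no server-side memory, so any history has to be resent via the messages list. This assumes a model named `custom_model` and dict-style response access; field names may differ between client versions.

```python
import ollama

# Each ollama.chat call is an independent request; the server keeps no
# conversation state between calls. To give the model memory of earlier
# turns, previous messages must be passed back in via the messages list.
# "custom_model" is the model name from this thread; adjust as needed.

history = []

def ask(prompt: str) -> str:
    history.append({"role": "user", "content": prompt})
    response = ollama.chat(model="custom_model", messages=history)
    answer = response["message"]["content"]
    history.append({"role": "assistant", "content": answer})
    return answer

# Two calls with the same prompt and no shared history are independent; any
# variation between them comes from sampling, not from retained knowledge.
# Sampling can be pinned down for repeatable output, e.g.:
#   ollama.chat(model="custom_model", messages=[...],
#               options={"temperature": 0, "seed": 42})
print(ask("What does variable a hold?"))
```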
I have created a custom model using `ollama create custom_model -f modelfile`. The custom model is based on codellama, and some examples and context are provided in the Modelfile. In the CLI, the custom model runs well, giving answers based on those examples. However, when I try to use the custom model in Python with `ollama.chat(model='custom_model', ...)`, the same user prompt gives answers as if I were using the original codellama model with no training examples.

I have tried using a wrong model name (custommodel instead of custom_model), and it is detected that no such model exists. So Python does seem to be picking up the custom model; it just seems like the context is left out.
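For reference, a minimal sketch of the Python call being described, assuming the model was created as `custom_model`. A misspelled name such as `custommodel` would surface in the `ollama.ResponseError` branch; exact exception and field names may vary by client version.

```python
import ollama

# Call the custom model by the same name used with
# `ollama create custom_model -f modelfile`.
try:
    response = ollama.chat(
        model="custom_model",
        messages=[{"role": "user", "content": "Same prompt used in the CLI session."}],
    )
    print(response["message"]["content"])
except ollama.ResponseError as err:
    # A wrong model name (e.g. "custommodel") is rejected by the server
    # and shows up here as a "model not found" style error.
    print("Request failed:", err.error)
```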
Also, when I run `ollama run custom_model`, the context provided in the Modelfile is always printed out. Does that mean the model is being retrained from the original codellama model every time I run it, instead of me getting the same trained custom model each time?

It seems to me that the few-shot examples provided in the Modelfile used to build custom_model are not supplied to the model when using `ollama.chat`; the result looks just like using the base model that the custom model was built on. Is there any way to use the custom model properly in Python, just like running it in the CLI?
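One way to check whether the Modelfile's system prompt, template, and few-shot examples actually made it into the model the API serves is to query the show endpoint. A sketch, assuming the client exposes it as `ollama.show` and returns `modelfile`/`template` fields; names may differ between library versions.

```python
import ollama

# Inspect what the server actually stored for custom_model. If the SYSTEM
# prompt / template / few-shot examples from the Modelfile do not appear
# here, the problem is in the created model rather than in ollama.chat.
info = ollama.show("custom_model")
print(info.get("modelfile"))   # the effective Modelfile as stored
print(info.get("template"))    # the prompt template in use
```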